Sample records for correct classical limit

  1. Continuous quantum measurement and the quantum to classical transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Tanmoy; Habib, Salman; Jacobs, Kurt

    2003-04-01

    While ultimately they are described by quantum mechanics, macroscopic mechanical systems are nevertheless observed to follow the trajectories predicted by classical mechanics. Hence, in the regime defining macroscopic physics, the trajectories of the correct classical motion must emerge from quantum mechanics, a process referred to as the quantum to classical transition. Extending previous work [Bhattacharya, Habib, and Jacobs, Phys. Rev. Lett. 85, 4852 (2000)], here we elucidate this transition in some detail, showing that once the measurement processes that affect all macroscopic systems are taken into account, quantum mechanics indeed predicts the emergence of classical motion. We derive inequalities that describe the parameter regime in which classical motion is obtained, and provide numerical examples. We also demonstrate two further important properties of the classical limit: first, that multiple observers all agree on the motion of an object, and second, that classical statistical inference may be used to correctly track the classical motion.

  2. Intelligent monitoring and control of semiconductor manufacturing equipment

    NASA Technical Reports Server (NTRS)

    Murdock, Janet L.; Hayes-Roth, Barbara

    1991-01-01

    The use of AI methods to monitor and control semiconductor fabrication in a state-of-the-art manufacturing environment called the Rapid Thermal Multiprocessor is described. Semiconductor fabrication involves many complex processing steps with limited opportunities to measure process and product properties. By applying additional process and product knowledge to that limited data, AI methods augment classical control methods by detecting abnormalities and trends, predicting failures, diagnosing, planning corrective action sequences, explaining diagnoses or predictions, and reacting to anomalous conditions that classical control systems typically would not correct. Research methodology and issues are discussed, and two diagnosis scenarios are examined.

  3. Variational treatment of entanglement in the Dicke model

    NASA Astrophysics Data System (ADS)

    Bakemeier, L.; Alvermann, A.; Fehske, H.

    2015-10-01

    We introduce a variational ansatz for the Dicke model that extends mean-field theory through the inclusion of spin-oscillator correlations. The correlated variational state is obtained from the mean-field product state via a unitary transformation. The ansatz becomes correct in the limit of large oscillator frequency and in the limit of a large spin, for which it captures the leading quantum corrections to the classical limit exactly, including the spin-oscillator entanglement entropy. We explain the origin of the unitary transformation before we show that the ansatz improves substantially upon mean-field theory, giving near exact results for the ground state energy and very good results for other observables. We then discuss why the ansatz still encounters problems in the transition regime at moderate spin lengths, where it fails to capture the precursors of the superradiant quantum phase transition faithfully. This observation illustrates the principal limits of semi-classical formulations, even after they are extended with correlations and entanglement.

  4. Single-Trial Normalization for Event-Related Spectral Decomposition Reduces Sensitivity to Noisy Trials

    PubMed Central

    Grandchamp, Romain; Delorme, Arnaud

    2011-01-01

    In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing, such as event-related spectral perturbation (ERSP) and its variants event-related synchronization and event-related desynchronization, have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is no strong consensus, however, on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We then present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time-frequency responses and behavior compared to classical ERSP methods. PMID:21994498
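
The contrast the abstract draws can be sketched numerically: classical ERSP divides the trial-averaged spectral estimate by its pre-stimulus baseline, while the proposed single-trial correction first normalizes each trial by its own full-epoch mean. The gamma-distributed synthetic data, the artifact trial, and the specific normalization below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_freqs, n_times = 50, 20, 100
baseline = slice(0, 30)  # pre-stimulus time samples

# Synthetic single-trial spectral power, with one high-amplitude artifact trial.
power = rng.gamma(shape=2.0, scale=1.0, size=(n_trials, n_freqs, n_times))
power[0] *= 50.0  # noisy outlier trial

# Classical ERSP: average across trials, then divide by the baseline mean (in dB).
avg = power.mean(axis=0)
ersp_classical = 10 * np.log10(avg / avg[:, baseline].mean(axis=1, keepdims=True))

# Single-trial normalization: divide each trial by its own full-epoch mean
# before applying the classical baseline correction. The outlier trial now
# contributes with the same overall weight as every other trial.
norm = power / power.mean(axis=(1, 2), keepdims=True)
avg_n = norm.mean(axis=0)
ersp_single = 10 * np.log10(avg_n / avg_n[:, baseline].mean(axis=1, keepdims=True))
```

Both arrays have shape (frequencies, times); the single-trial-normalized version bounds the leverage any one artifact trial can exert on the average.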

  5. Integrability in AdS/CFT correspondence: quasi-classical analysis

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay

    2009-06-01

    In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure—the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method perfectly reproduces the earlier results obtained by expanding the string action for some simple classical solutions. The construction is explicitly covariant and is not based on a particular parameterization of the fields and as a result is free from ambiguities. On the other hand, the finite size corrections in some particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor, where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum of all physical fluctuations and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of fluctuations with an exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector were derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.

  6. Bouncing cosmologies from quantum gravity condensates

    NASA Astrophysics Data System (ADS)

    Oriti, Daniele; Sindoni, Lorenzo; Wilson-Ewing, Edward

    2017-02-01

    We show how the large-scale cosmological dynamics can be obtained from the hydrodynamics of isotropic group field theory condensate states in the Gross-Pitaevskii approximation. The correct Friedmann equations are recovered in the classical limit for some choices of the parameters in the action for the group field theory, and quantum gravity corrections arise in the high-curvature regime causing a bounce which generically resolves the big-bang and big-crunch singularities.
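
The bounce described above can be illustrated with a holonomy-type corrected Friedmann equation of the form H² = (8πG/3) ρ (1 − ρ/ρ_c), familiar from loop quantum cosmology and of the same qualitative form as the group-field-theory result; the specific expression and the critical density ρ_c are used here as an illustrative assumption, not as the paper's exact equation.

```python
import math

G = 6.674e-11  # Newton's constant, SI units

def hubble_sq(rho, rho_c):
    """Effective Friedmann equation with a quantum-gravity correction term:
    H^2 = (8*pi*G/3) * rho * (1 - rho/rho_c).
    H^2 -> classical Friedmann value for rho << rho_c (classical limit),
    and H = 0 at rho = rho_c, where the bounce replaces the singularity."""
    return (8 * math.pi * G / 3) * rho * (1 - rho / rho_c)
```

For ρ much below ρ_c the correction factor (1 − ρ/ρ_c) is close to one and the classical Friedmann equation is recovered; at ρ = ρ_c the expansion rate vanishes and the contraction turns into expansion.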

  7. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula gives rise to a new protocol in the resource-inequality framework, the private father protocol, which includes private classical communication without assisted secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.

  8. Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-01-01

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10(exp -10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.

  9. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. 
Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
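
The "three-bit repetition error-correction sequences" invoked above are the simplest classical code: each logical bit is sent three times and decoded by majority vote, which corrects any single bit flip. The sketch below shows the code itself, not the ciliates' biological implementation of it.

```python
def encode(bit):
    """Three-bit repetition encoding: 0 -> (0,0,0), 1 -> (1,1,1)."""
    return (bit, bit, bit)

def decode(word):
    """Majority vote over the three received bits.
    Any single bit-flip error leaves the majority intact, so it is corrected;
    two or more flips in one word exceed the code's correction capability."""
    return int(sum(word) >= 2)

# Flip one bit of a codeword and recover the original message bit.
codeword = encode(1)          # (1, 1, 1)
corrupted = (0,) + codeword[1:]  # single bit-flip error on the first bit
recovered = decode(corrupted)
```

This matches the abstract's threshold remark: the observed entropy errors stayed below the level of a single bit flip per three-bit reply, which is exactly the regime in which the repetition code protects the message.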

  10. Stopping power of dense plasmas: The collisional method and limitations of the dielectric formalism.

    PubMed

    Clauser, C F; Arista, N R

    2018-02-01

    We present a study of the stopping power of plasmas using two main approaches: the collisional (scattering theory) and the dielectric formalisms. In the former case, we use a semiclassical method based on quantum scattering theory. In the latter case, we use the full description given by the extension of the Lindhard dielectric function for plasmas of all degeneracies. We compare these two theories and show that the dielectric formalism has limitations when it is used for slow heavy ions or atoms in dense plasmas. We present a study of these limitations and show the regimes where the dielectric formalism can be used, with appropriate corrections to include the usual quantum and classical limits. On the other hand, the semiclassical method shows the correct behavior for all plasma conditions and projectile velocity and charge. We consider different models for the ion charge distributions, including bare and dressed ions as well as neutral atoms.

  11. Computing with a single qubit faster than the computation quantum speed limit

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.

    2018-02-01

    The possibility to save and process information in fundamentally indistinguishable states is the quantum mechanical resource that is not encountered in classical computing. I demonstrate that, if energy constraints are imposed, this resource can be used to accelerate information-processing without relying on entanglement or any other type of quantum correlations. In fact, there are computational problems that can be solved much faster, in comparison to currently used classical schemes, by saving intermediate information in nonorthogonal states of just a single qubit. There are also error correction strategies that protect such computations.

  12. Moments of the Wigner function and Renyi entropies at freeze-out

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialas, A.; Czyz, W.; Zalewski, K.

    The relation between Renyi entropies and moments of the Wigner function, representing the quantum mechanical description of the M-particle semi-inclusive distribution at freeze-out, is investigated. It is shown that in the limit of infinite volume of the system, the classical and quantum descriptions are equivalent. Finite volume corrections are derived and shown to be small for systems encountered in relativistic heavy ion collisions.

  13. Public classical communication in quantum cryptography: Error correction, integrity, and authentication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeev, A. V.; Pomozov, D. I.; Makkaveev, A. P.

    2007-05-15

    Quantum cryptography systems combine two communication channels: a quantum and a classical one. (They can be physically implemented in the same fiber-optic link, which is employed as a quantum channel when one-photon states are transmitted and as a classical one when it carries classical data traffic.) Both channels are supposed to be insecure and accessible to an eavesdropper. Error correction in raw keys, interferometer balancing, and other procedures are performed by using the public classical channel. A discussion of the requirements to be met by the classical channel is presented.

  14. Brownian motion of classical spins: Anomalous dissipation and generalized Langevin equation

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Malay; Jayannavar, A. M.

    2017-10-01

    In this work, we derive the Langevin equation (LE) of a classical spin interacting with a heat bath through momentum variables, starting from the fully dynamical Hamiltonian description. The derived LE with anomalous dissipation is analyzed in detail. The obtained LE is non-Markovian with multiplicative noise terms. The concomitant dissipative terms obey the fluctuation-dissipation theorem. The Markovian limit correctly produces the Kubo and Hashitsume equation. The perturbative treatment of our equations produces the Landau-Lifshitz equation and the Seshadri-Lindenberg equation. Then we derive the Fokker-Planck equation corresponding to LE and the concept of equilibrium probability distribution is analyzed.

  15. Errata report on Herbert Goldstein's Classical Mechanics: Second edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.; Hoffman, F.M.

    This report describes errors in Herbert Goldstein's textbook Classical Mechanics, Second Edition (Copyright 1980, ISBN 0-201-02918-9). Some of the errors in current printings of the text were corrected in the second printing; however, after communicating with Addison Wesley, the publisher for Classical Mechanics, it was discovered that the corrected galley proofs had been lost by the printer and that no one had complained of any errors in the eleven years since the second printing. The errata sheet corrects errors from all printings of the second edition.

  16. Lattice constants of pure methane and carbon dioxide hydrates at low temperatures. Implementing quantum corrections to classical molecular dynamics studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costandy, Joseph; Michalis, Vasileios K.; Economou, Ioannis G., E-mail: i.tsimpanogiannis@qatar.tamu.edu, E-mail: ioannis.economou@qatar.tamu.edu

    2016-03-28

    We introduce a simple correction to the calculation of the lattice constants of fully occupied structure sI methane or carbon dioxide pure hydrates that are obtained from classical molecular dynamics simulations using the TIP4PQ/2005 water force field. The obtained corrected lattice constants are subsequently used in order to obtain isobaric thermal expansion coefficients of the pure gas hydrates that exhibit a trend that is significantly closer to the experimental behavior than previously reported classical molecular dynamics studies.

  17. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no efficient decoding algorithm exists for the general quantum decoding problem (unless P = NP) and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  18. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
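
The core idea of flux-corrected transport that the paper builds on — take a monotone low-order flux, add a limited high-order antidiffusive correction so that no new extrema are created — can be sketched with the classical single-step Zalesak-style scheme below. Upwind low-order and Lax-Wendroff high-order fluxes on a periodic 1D grid are illustrative assumptions here, not the paper's corner-transport upwind / RK4 method-of-lines construction.

```python
import numpy as np

def fct_step(u, nu):
    """One classical FCT step for du/dt + a*du/dx = 0, periodic grid.
    nu = a*dt/dx is the Courant number, 0 < nu <= 1. Flux index i denotes
    the interface i+1/2; fluxes are pre-scaled by dt/dx."""
    up1 = np.roll(u, -1)                               # u[i+1]
    f_low = nu * u                                     # upwind (monotone)
    f_high = 0.5 * nu * (u + up1) - 0.5 * nu**2 * (up1 - u)  # Lax-Wendroff

    # Low-order (transported-diffused) solution and antidiffusive fluxes.
    u_td = u - (f_low - np.roll(f_low, 1))
    a = f_high - f_low
    a_im = np.roll(a, 1)                               # flux at i-1/2

    # Local bounds from the low-order solution and its neighbors.
    u_max = np.maximum(np.maximum(np.roll(u_td, 1), u_td), np.roll(u_td, -1))
    u_min = np.minimum(np.minimum(np.roll(u_td, 1), u_td), np.roll(u_td, -1))

    # Zalesak limiter: bound incoming (+) and outgoing (-) antidiffusion.
    pp = np.maximum(a_im, 0) - np.minimum(a, 0)
    rp = np.where(pp > 0, np.minimum(1.0, (u_max - u_td) / np.maximum(pp, 1e-300)), 0.0)
    pm = np.maximum(a, 0) - np.minimum(a_im, 0)
    rm = np.where(pm > 0, np.minimum(1.0, (u_td - u_min) / np.maximum(pm, 1e-300)), 0.0)
    c = np.where(a >= 0,
                 np.minimum(np.roll(rp, -1), rm),
                 np.minimum(rp, np.roll(rm, -1)))

    ca = c * a                                         # limited correction
    return u_td - (ca - np.roll(ca, 1))

# Advect a square wave: the limited scheme stays within the initial bounds.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
u = u0.copy()
for _ in range(40):
    u = fct_step(u, 0.5)
```

Because the update is in flux-difference form, the scheme conserves the total of u exactly on the periodic grid, while the limiter prevents the over- and undershoots that the unlimited Lax-Wendroff flux would produce at the discontinuities.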

  19. Measurement-only verifiable blind quantum computing with quantum input verification

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki

    2016-10-01

    Verifiable blind quantum computing is a secure delegated quantum computing where a client with a limited quantum technology delegates her quantum computing to a server who has a universal quantum computer. The client's privacy is protected (blindness), and the correctness of the computation is verifiable by the client despite her limited quantum technology (verifiability). There are mainly two types of protocols for verifiable blind quantum computing: the protocol where the client has only to generate single-qubit states and the protocol where the client needs only the ability of single-qubit measurements. The latter is called the measurement-only verifiable blind quantum computing. If the input of the client's quantum computing is a quantum state, whose classical efficient description is not known to the client, there was no way for the measurement-only client to verify the correctness of the input. Here we introduce a protocol of measurement-only verifiable blind quantum computing where the correctness of the quantum input is also verifiable.

  20. The evolving Planck mass in classically scale-invariant theories

    NASA Astrophysics Data System (ADS)

    Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.

    2017-04-01

    We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.

  1. One-loop quantum gravity repulsion in the early Universe.

    PubMed

    Broda, Bogusław

    2011-03-11

    Perturbative quantum gravity formalism is applied to compute the lowest order corrections to the classical spatially flat cosmological Friedmann-Lemaître-Robertson-Walker solution (for the radiation). The presented approach is analogous to the approach applied to compute quantum corrections to the Coulomb potential in electrodynamics, or rather to the approach applied to compute quantum corrections to the Schwarzschild solution in gravity. In the framework of the standard perturbative quantum gravity, it is shown that the corrections to the classical deceleration, coming from the one-loop graviton vacuum polarization (self-energy), are free of UV cutoff dependence, have repulsive properties opposite to the classical behaviour, and are not negligible in the very early Universe. The repulsive "quantum forces" resemble those known from loop quantum cosmology.

  2. Effects of tunnelling and asymmetry for system-bath models of electron transfer

    NASA Astrophysics Data System (ADS)

    Mattiat, Johann; Richardson, Jeremy O.

    2018-03-01

    We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of the Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.
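
The classical limit against which the instanton rates are compared is, in the symmetric case, the standard Marcus golden-rule expression. The sketch below implements that standard textbook formula, not the asymmetric generalization derived in the paper; the parameter values in the test are purely illustrative.

```python
import math

def marcus_rate(coupling, dG, lam, T):
    """Classical (symmetric) Marcus golden-rule electron-transfer rate:
        k = (2*pi/hbar) * |V|^2 * (4*pi*lam*kB*T)^(-1/2)
            * exp(-(dG + lam)^2 / (4*lam*kB*T))
    coupling: electronic coupling |V| (eV), dG: driving force (eV),
    lam: reorganization energy (eV), T: temperature (K). Returns 1/s."""
    kB = 8.617333262e-5     # Boltzmann constant, eV/K
    hbar = 6.582119569e-16  # reduced Planck constant, eV*s
    kT = kB * T
    prefac = (2 * math.pi / hbar) * coupling**2 / math.sqrt(4 * math.pi * lam * kT)
    return prefac * math.exp(-(dG + lam)**2 / (4 * lam * kT))
```

The rate is maximal at the activationless point dG = -lam and falls off on either side, the downhill side being the Marcus inverted region; the paper's asymmetric version and its tunnelling corrections modify this classical-limit behaviour.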

  3. Hybridizing matter-wave and classical accelerometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lautier, J.; Volodimer, L.; Hardin, T.

    2014-10-06

    We demonstrate a hybrid accelerometer that benefits from the advantages of both conventional and atomic sensors in terms of bandwidth (DC to 430 Hz) and long term stability. First, the use of a real time correction of the atom interferometer phase by the signal from the classical accelerometer allows it to run at best performance without any isolation platform. Second, a servo-lock of the DC component of the conventional sensor output signal by the atomic one realizes a hybrid sensor. This method paves the way for applications in geophysics and in inertial navigation as it overcomes the main limitation of atomic accelerometers, namely, the dead times between consecutive measurements.

  4. Moments of the Wigner function and Renyi entropies at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-03-01

    The relation between Renyi entropies and moments of the Wigner function, representing the quantum mechanical description of the M-particle semi-inclusive distribution at freeze-out, is investigated. It is shown that in the limit of infinite volume of the system, the classical and quantum descriptions are equivalent. Finite volume corrections are derived and shown to be small for systems encountered in relativistic heavy ion collisions.

  5. Assessment of noise exposure for basketball sports referees.

    PubMed

    Masullo, Massimiliano; Lenzuni, Paolo; Maffei, Luigi; Nataletti, Pietro; Ciaburro, Giuseppe; Annesi, Diego; Moschetto, Antonio

    2016-01-01

    Dosimetric measurements carried out on basketball referees have shown that whistles not only generate very high peak sound pressure levels, but also play a relevant role in determining the overall exposure to noise of the exposed subjects. Because of the peculiar geometry determined by the mutual positions of the whistle, the microphone, and the ear, experimental data cannot be directly compared with existing occupational noise exposure and/or action limits. In this article, an original methodology, which allows experimental results to be reliably compared with the aforementioned limits, is presented. The methodology is based on the use of two correction factors to compensate for the effects of the position of the dosimeter microphone (fR) and of the sound source (fS). Correction factors were calculated by means of laboratory measurements for two models of whistles (Fox 40 Classic and Fox 40 Sonik) and for two head orientations (frontal and oblique). Results show that for peak sound pressure levels the values of fR and fS are in the range -8.3 to -4.6 dB and -6.0 to -1.7 dB, respectively. If one considers the Sound Exposure Levels (SEL) of whistle events, the same correction factors are in the range of -8.9 to -5.3 dB and -5.4 to -1.5 dB, respectively. The application of these correction factors shows that the corrected weekly noise exposure level for referees is 80.6 dB(A), which is slightly in excess of the lower action limit of the 2003/10/EC directive, and a few dB below the Recommended Exposure Limit (REL) proposed by the National Institute for Occupational Safety and Health (NIOSH). The corrected largest peak sound pressure level is 134.7 dB(C), which is comparable to the lower action limit of the 2003/10/EC directive, but again substantially lower than the ceiling limit of 140 dB(A) set by NIOSH.
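
Both correction factors are simply additive in decibels, so applying them is one line of arithmetic. In the sketch below, the measured peak level of 147.6 dB(C) is a hypothetical value chosen so that the corrected result matches the 134.7 dB(C) quoted in the abstract; the study's actual raw measurements are not reported here.

```python
def corrected_level(measured_db, f_r, f_s):
    """Apply the microphone-position (f_r) and source-position (f_s)
    correction factors; dB corrections are additive."""
    return measured_db + f_r + f_s

# Hypothetical measured peak, corrected with the worst-case (least negative
# magnitude boundary would differ) peak-level factors quoted in the abstract:
# f_r = -8.3 dB, f_s = -4.6 dB.
peak_corrected = corrected_level(147.6, -8.3, -4.6)  # dB(C)

# Compare against the limits cited in the abstract.
lower_action_limit_peak = 135.0   # dB(C), 2003/10/EC lower action value
niosh_ceiling = 140.0             # dB(A), NIOSH ceiling limit
below_ceiling = peak_corrected < niosh_ceiling
```

This reproduces the abstract's comparison: the corrected peak sits near the 2003/10/EC lower action value but well below the NIOSH ceiling.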

  6. Characterization and Operation of Liquid Crystal Adaptive Optics Phoropter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, A; Bauman, B; Gavel, D

    2003-02-05

    Adaptive optics (AO), a mature technology developed for astronomy to compensate for the effects of atmospheric turbulence, can also be used to correct the aberrations of the eye. The classic phoropter is used by ophthalmologists and optometrists to estimate and correct the lower-order aberrations of the eye, defocus and astigmatism, in order to derive a vision correction prescription for their patients. An adaptive optics phoropter measures and corrects the aberrations in the human eye using adaptive optics techniques, which are capable of dealing with both the standard low-order aberrations and higher-order aberrations, including coma and spherical aberration. High-order aberrations have been shown to degrade visual performance for clinical subjects in initial investigations. An adaptive optics phoropter has been designed and constructed based on a Shack-Hartmann sensor to measure the aberrations of the eye, and a liquid crystal spatial light modulator to compensate for them. This system should produce near diffraction-limited optical image quality at the retina, which will enable investigation of the psychophysical limits of human vision. This paper describes the characterization and operation of the AO phoropter with results from human subject testing.

  7. Quantum corrections for spinning particles in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu

    We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.

  8. Computing Wigner distributions and time correlation functions using the quantum thermal bath method: application to proton transfer spectroscopy.

    PubMed

    Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe

    2013-08-14

    Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly too overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.
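The key ingredient of the QTB approach is the replacement of the classical white-noise strength k_BT by a frequency-dependent quantum thermal energy θ(ω,T) = (ħω/2) coth(ħω/2k_BT) in the spectrum of the Langevin random force. A minimal sketch of this function in reduced units (ħ = k_B = 1), showing its two limits:

```python
import math

def theta(omega, T, hbar=1.0, kB=1.0):
    # Per-mode thermal energy used by the quantum thermal bath:
    # theta(w, T) = (hbar*w/2) * coth(hbar*w / (2*kB*T)).
    # The bath's colored-noise power spectrum is proportional to this
    # quantity instead of the classical white-noise value kB*T.
    x = hbar * omega / (2.0 * kB * T)
    return (hbar * omega / 2.0) / math.tanh(x)

print(theta(0.01, 1.0))   # low-frequency limit: ~ kB*T = 1 (classical)
print(theta(100.0, 1.0))  # high-frequency limit: ~ hbar*w/2 = 50 (zero-point energy)
```

The smooth crossover between these limits is what lets the method inject zero-point energy into stiff modes while leaving soft modes effectively classical.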

  9. Classical simulation of quantum error correction in a Fibonacci anyon code

    NASA Astrophysics Data System (ADS)

    Burton, Simon; Brell, Courtney G.; Flammia, Steven T.

    2017-02-01

    Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.

  10. Estimating the marine signal in the near infrared for atmospheric correction of satellite ocean-color imagery over turbid waters

    NASA Astrophysics Data System (ADS)

    Bourdet, Alice; Frouin, Robert J.

    2014-11-01

    The classic atmospheric correction algorithm, routinely applied to second-generation ocean-color sensors such as SeaWiFS, MODIS, and MERIS, consists of (i) estimating the aerosol reflectance in the red and near infrared (NIR), where the ocean is considered black (i.e., totally absorbing), and (ii) extrapolating the estimated aerosol reflectance to shorter wavelengths. The marine reflectance is then retrieved by subtraction. Variants and improvements have been made over the years to deal with non-null reflectance in the red and near infrared, a general situation in estuaries and the coastal zone, but the solutions proposed so far still suffer some limitations, due to uncertainties in modeling the marine reflectance in the near infrared, or to the difficulty of extrapolating the aerosol signal to the blue when using observations in the shortwave infrared (SWIR), a spectral range far from the ocean-color wavelengths. To estimate the marine signal (i.e., the product of marine reflectance and atmospheric transmittance) in the near infrared, the proposed approach is to decompose the aerosol reflectance over the near-infrared to shortwave-infrared range into principal components. Since aerosol scattering is smooth spectrally, a few components are generally sufficient to represent the perturbing signal, i.e., the aerosol reflectance in the near infrared can be determined from measurements in the shortwave infrared where the ocean is black. This gives access to the marine signal in the near infrared, which can then be used in the classic atmospheric correction algorithm. The methodology is evaluated theoretically from simulations of the top-of-atmosphere reflectance for a wide range of geophysical conditions and angular geometries and applied to actual MODIS imagery acquired over the Gulf of Mexico. The number of discarded pixels is reduced by over 80% using the PC modeling to determine the marine signal in the near infrared prior to applying the classic atmospheric correction algorithm.
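The principal-component step can be illustrated with a toy numerical sketch. The band set, the two-parameter aerosol family, and all magnitudes below are invented for illustration (they are not the MODIS processing chain); the premise of the method, that aerosol spectra lie near a low-dimensional linear family, is built in exactly here so the recovery is exact:

```python
import numpy as np

# Hypothetical NIR + SWIR band set (nm), loosely MODIS-like
bands = np.array([748.0, 869.0, 1240.0, 1640.0, 2130.0])
nir, swir = slice(0, 2), slice(2, 5)

# Training set of smooth aerosol reflectance spectra, drawn from a
# two-parameter family so that two principal components represent
# them exactly; real spectra satisfy this only approximately.
rng = np.random.default_rng(0)
a = rng.uniform(0.0, 0.02, 500)
b = rng.uniform(0.0, 0.04, 500)
shape = (bands / 869.0) ** -1.0
train = a[:, None] + b[:, None] * shape

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:2]                       # leading principal components

# "Observed" residual reflectance: aerosol at all bands, plus a
# marine signal in the NIR (the ocean is assumed black in the SWIR)
true_aerosol = 0.005 + 0.015 * shape
marine_nir = np.array([0.004, 0.002])
obs = true_aerosol.copy()
obs[nir] += marine_nir

# Fit the PC coefficients using SWIR bands only, extrapolate the
# aerosol reflectance to the NIR, and subtract it out
coef, *_ = np.linalg.lstsq(pcs[:, swir].T, obs[swir] - mean[swir], rcond=None)
aerosol_est = mean + coef @ pcs
marine_est = obs[nir] - aerosol_est[nir]
print(marine_est)                  # ~ [0.004, 0.002]
```

The marine signal in the NIR recovered this way can then feed the classic black-ocean atmospheric correction.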

  11. Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Lemoine, Grady

    Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.

  12. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  13. Quantum and classical properties of soliton propagation in optical fibers

    NASA Astrophysics Data System (ADS)

    Krylov, Dmitriy

    2001-05-01

    Quantum and classical aspects of nonlinear optical pulse propagation in optical fibers are studied with the emphasis on temporal solitons. The theoretical and experimental investigation focuses on phenomena that can fundamentally limit transmission and detection of optical signals in fiber-optic communication systems that employ solitons. In transmission experiments the first evidence is presented that a pre-chirped high-order soliton pulse propagating in a low anomalous dispersion optical fiber will irreversibly break up into an ordered train of fundamental (N = 1) solitons. The experimental results confirm previous analytical predictions and show excellent agreement with numerical simulations. This phenomenon presents a fundamental limitation on systems that utilize dispersion-management or pre-chirping of optical pulses, and has to be taken into consideration when designing such systems. The experiments also show that the breakup process can be repeated by cascading two independent breakup stages. Each stage accepts a single input pulse and produces two independent pulses. The stages are cascaded to produce a one-to-four breakup. Solitons are also shown to be ideally suited for investigating non-classical properties of light. Based on the general quantum theory of optical pulse propagation, a new scheme for generating amplitude-squeezed solitons is designed and implemented in a highly asymmetric fiber Sagnac interferometer. A record reduction of 5.7dB (73%) and, with correction for linear losses, 7.0dB (81%) in photon-number fluctuations below the shot-noise level is measured by direct detection. The same scheme is also shown to generate significant classical noise reduction and is limited by Raman effects in fiber. Such large squeezing levels can be employed in practical fiber optic communication systems to achieve noiseless amplification and better signal to noise ratios in direct detection. 
The photon number states can also be used in quantum non-demolition measurements and quantum communications. Amplitude squeezing is shown to be present in the normal-dispersion regime where no soliton formation is possible. In this case, a noise reduction of 1.7dB (33%) and, with correction for linear losses, 2.5dB (47%) below the shot-noise level is measured. The dependence of noise behavior on dispersion is investigated both experimentally and theoretically.

  14. Classical Molecular Dynamics with Mobile Protons.

    PubMed

    Lazaridis, Themis; Hummer, Gerhard

    2017-11-27

    An important limitation of standard classical molecular dynamics simulations is the inability to make or break chemical bonds. This severely restricts our ability to study processes that involve even the simplest of chemical reactions, the transfer of a proton. Existing approaches for allowing proton transfer in the context of classical mechanics are rather cumbersome and have not achieved widespread use and routine status. Here we reconsider the combination of molecular dynamics with periodic stochastic proton hops. To ensure computational efficiency, we propose a non-Boltzmann acceptance criterion that is heuristically adjusted to maintain the correct or desirable thermodynamic equilibria between different protonation states and proton transfer rates. Parameters are proposed for hydronium, Asp, Glu, and His. The algorithm is implemented in the program CHARMM and tested on proton diffusion in bulk water and carbon nanotubes and on proton conductance in the gramicidin A channel. Using hopping parameters determined from proton diffusion in bulk water, the model reproduces the enhanced proton diffusivity in carbon nanotubes and gives a reasonable estimate of the proton conductance in gramicidin A.
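The hop step can be caricatured with a two-state toy model. Thinning the hop attempts with a fixed attempt probability tunes the proton transfer rate, while a standard Metropolis test preserves the equilibrium between protonation states; this is a minimal stand-in for the paper's heuristic criterion, not the CHARMM implementation, and all numbers are invented:

```python
import math
import random

def attempt_hop(dE, beta=1.0, attempt_prob=0.3):
    """Rate-scaled Metropolis hop: thin the attempts to tune the proton
    transfer rate, then accept with the Boltzmann factor so that the
    equilibrium between protonation states is left intact."""
    if random.random() >= attempt_prob:
        return False                       # no hop attempted this step
    return random.random() < min(1.0, math.exp(-beta * dE))

random.seed(1)
energy = [0.0, 1.0]            # two protonation states, dE = 1 kT
state, visits = 0, [0, 0]
for _ in range(200_000):
    new = 1 - state
    if attempt_hop(energy[new] - energy[state]):
        state = new
    visits[state] += 1

print(visits[1] / visits[0])   # ~ exp(-1) ~ 0.37
```

Note that scaling the attempt frequency, rather than the acceptance probability itself, is what keeps detailed balance intact while the hop rate is adjusted.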

  15. How Incorrect Is the Classical Partition Function for the Ideal Gas?

    ERIC Educational Resources Information Center

    Kroemer, Herbert

    1980-01-01

    Discussed is the classical partition function for the ideal gas and how it differs from the exact value for bosons or fermions in the classical regime. The differences between the two values are negligible; hence the classical treatment ultimately leads to correct answers for all observables. (Author/DS)
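The comparison at issue can be written compactly. For N identical particles in volume V with thermal wavelength λ_T, the Gibbs-corrected classical partition function and the leading quantum-statistics correction to the equation of state are (a standard textbook result, quoted here for context):

```latex
Z_{\mathrm{cl}} = \frac{1}{N!}\left(\frac{V}{\lambda_T^{3}}\right)^{N},
\qquad
\lambda_T = \frac{h}{\sqrt{2\pi m k_B T}},
\qquad
\frac{PV}{N k_B T} = 1 \mp \frac{n\,\lambda_T^{3}}{2^{5/2}} + \dots
```

with the upper (−) sign for bosons and the lower (+) sign for fermions. In the classical regime nλ_T³ ≪ 1, so the quantum-statistics correction is negligible, which is precisely the abstract's point.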

  16. Spinfoam cosmology with the proper vertex amplitude

    NASA Astrophysics Data System (ADS)

    Vilensky, Ilya

    2017-11-01

    The proper vertex amplitude is derived from the Engle-Pereira-Rovelli-Livine vertex by restricting to a single gravitational sector in order to achieve the correct semi-classical behaviour. We apply the proper vertex to calculate a cosmological transition amplitude that can be viewed as the Hartle-Hawking wavefunction. To perform this calculation we deduce the integral form of the proper vertex and use extended stationary phase methods to estimate the large-volume limit. We show that the resulting amplitude satisfies an operator constraint whose classical analogue is the Hamiltonian constraint of the Friedmann-Robertson-Walker cosmology. We find that the constraint dynamically selects the relevant family of coherent states and demonstrate a similar dynamic selection in standard quantum mechanics. We investigate the effects of dynamical selection on long-range correlations.

  17. Experimental rheological procedure adapted to pasty dewatered sludge up to 45 % dry matter.

    PubMed

    Mouzaoui, M; Baudez, J C; Sauceau, M; Arlabosse, P

    2018-04-15

    Wastewater sludge is characterized by complex rheological properties, strongly dependent on solids concentration and temperature. These properties are required for hydrodynamic process modelling, but their correct measurement is often challenging at high solids concentrations. This is especially true when modelling the hydrodynamics of dewatered sludge during the drying process, where the solids content (TS) increases with residence time. Indeed, until now, the literature has mostly focused on the rheological characterization of sludge at low and moderate TS (between 4 and 8%). Limited attention has been paid to pasty and highly concentrated sludge, mainly because of the difficulty of carrying out the measurements. The reproducibility of results appeared to be poor, and thus measurements may not always be fully representative of the effective material properties. This work demonstrates that reproducible results can be obtained by controlling the cracks and fractures which always take place in classical rotational rheometry. For that purpose, a well-controlled experimental procedure has been developed, allowing the exact determination of the surface effectively sheared. This surface is calculated by scattering a classical stress sweep with measurements at a reference strain value. The implementation of this procedure allows the correct determination of solid-like characteristics from 20 to 45% TS, but also shows that pasty and highly concentrated sludge exhibit normal forces caused by dilatancy. Moreover, the surface correction appears to be independent of TS in the studied range. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Surface hopping with a manifold of electronic states. II. Application to the many-body Anderson-Holstein model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Wenjie; Subotnik, Joseph E.; Nitzan, Abraham

    We investigate a simple surface hopping (SH) approach for modeling a single impurity level coupled to a single phonon and an electronic (metal) bath (i.e., the Anderson-Holstein model). The phonon degree of freedom is treated classically, with motion along, and hops between, diabatic potential energy surfaces. The hopping rate is determined by the dynamics of the electronic bath (which are treated implicitly). For the case of one electronic bath, in the limit of small coupling to the bath, SH recovers phonon relaxation to thermal equilibrium and yields the correct impurity electron population (as compared with the numerical renormalization group). For the case of out-of-equilibrium dynamics, the SH current-voltage (I-V) curve is compared with the quantum master equation (QME) over a range of parameters, spanning the quantum region to the classical region. In the limit of large temperature, SH and QME agree. Furthermore, we can show that, in the limit of low temperature, the QME agrees with real-time path integral calculations. As such, the simple procedure described here should be useful in many other contexts.

  19. Photon and graviton mass limits

    NASA Astrophysics Data System (ADS)

    Goldhaber, Alfred Scharff; Nieto, Michael Martin

    2010-01-01

    Efforts to place limits on deviations from canonical formulations of electromagnetism and gravity have probed length scales increasing dramatically over time. Historically, these studies have passed through three stages: (1) testing the power in the inverse-square laws of Newton and Coulomb, (2) seeking a nonzero value for the rest mass of photon or graviton, and (3) considering more degrees of freedom, allowing mass while preserving explicit gauge or general-coordinate invariance. Since the previous review the lower limit on the photon Compton wavelength has improved by four orders of magnitude, to about one astronomical unit, and rapid current progress in astronomy makes further advance likely. For gravity there have been vigorous debates about even the concept of graviton rest mass. Meanwhile there are striking observations of astronomical motions that do not fit Einstein gravity with visible sources. “Cold dark matter” (slow, invisible classical particles) fits well at large scales. “Modified Newtonian dynamics” provides the best phenomenology at galactic scales. Satisfying this phenomenology is a requirement if dark matter, perhaps as invisible classical fields, could be correct here too. “Dark energy” might be explained by a graviton-mass-like effect, with associated Compton wavelength comparable to the radius of the visible universe. Significant mass limits are summarized in a table.

  1. Theoretical studies of the potential surface for the F + H2 → HF + H reaction

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Walch, Stephen P.; Langhoff, Stephen R.; Taylor, Peter R.; Jaffe, Richard L.

    1987-01-01

    The F + H2 → HF + H potential energy hypersurface was studied in the saddle point and entrance channel regions. Using a large (5s 5p 3d 2f 1g/4s 3p 2d) atomic natural orbital basis set, a classical barrier height of 1.86 kcal/mole was obtained at the CASSCF/multireference CI level (MRCI) after correcting for basis set superposition error and including a Davidson correction (+Q) for higher excitations. Based upon an analysis of the computed results, the true classical barrier is estimated to be about 1.4 kcal/mole. The location of the bottleneck on the lowest vibrationally adiabatic potential curve was also computed and the translational energy threshold determined from a one-dimensional tunneling calculation. Using the difference between the calculated and experimental threshold to adjust the classical barrier height on the computed surface yields a classical barrier in the range of 1.0 to 1.5 kcal/mole. Combining the results of the direct estimates of the classical barrier height with the empirical values obtained from the approximate calculations of the dynamical threshold, it is predicted that the true classical barrier height is 1.4 ± 0.4 kcal/mole. Arguments are presented in favor of including the relatively large +Q correction obtained when nine electrons are correlated at the CASSCF/MRCI level.

  2. Restoring the consistency with the contact density theorem of a classical density functional theory of ions at a planar electrical double layer.

    PubMed

    Gillespie, Dirk

    2014-11-01

    Classical density functional theory (DFT) of fluids is a fast and efficient theory to compute the structure of the electrical double layer in the primitive model of ions, where ions are modeled as charged, hard spheres in a background dielectric. While the hard-core repulsive component of this ion-ion interaction can be accurately computed using well-established DFTs, the electrostatic component is less accurate. Moreover, many electrostatic functionals fail to satisfy a basic theorem, the contact density theorem, that relates the bulk pressure, surface charge, and ion densities at their distances of closest approach for ions in equilibrium at a smooth, hard, planar wall. One popular electrostatic functional that fails to satisfy the contact density theorem is a perturbation approach developed by Kierlik and Rosinberg [Phys. Rev. A 44, 5025 (1991)] and Rosenfeld [J. Chem. Phys. 98, 8126 (1993)], where the full free-energy functional is Taylor-expanded around a bulk (homogeneous) reference fluid. Here, it is shown that this functional fails to satisfy the contact density theorem because it also fails to satisfy the known low-density limit. When the functional is corrected to satisfy this limit, a corrected bulk pressure is derived and it is shown that with this pressure both the contact density theorem and the Gibbs adsorption theorem are satisfied.

  3. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  4. On proton excitation of forbidden lines in positive ions

    NASA Astrophysics Data System (ADS)

    Burgess, Alan; Tully, John A.

    2005-08-01

    The semi-classical impact parameter approximations used by Bahcall and Wolf and by Bely and Faucher, for proton excitation of electric quadrupole transitions in positive ions, both fail at high energies, giving cross sections which do not fall off correctly as constant/E. This is in contrast with the pioneering example of Seaton for Fe+13 and of Reid and Schwarz for S+3, both of whom achieve the correct functional form, but do not ensure the correct constant of proportionality. By combining the Born and semi-classical approximations one can obtain cross sections which have the full correct behaviour as E → ∞, and hence, rate coefficients which have the correct high temperature behaviour (~C/T^(1/2) with the correct value of C). We provide a computer program for calculating these. An error in Faucher's derivation of the Born formula is also discussed.

  5. On the Ising character of the quantum-phase transition in LiHoF4

    NASA Astrophysics Data System (ADS)

    Skomski, R.

    2016-05-01

    It is investigated how a transverse magnetic field affects the quantum-mechanical character of LiHoF4, a system generally considered as a textbook example for an Ising-like quantum-phase transition. In small magnetic fields, the low-temperature behavior of the ions is Ising-like, involving the nearly degenerate low-lying Jz = ± 8 doublet. However, as the transverse field increases, there is a substantial admixture of states having |Jz| < 8. Near the quantum-phase-transition field, the system is distinctively non-Ising like, and all Jz eigenstates yield ground-state contributions of comparable magnitude. A classical analog to this mechanism is the micromagnetic single point in magnets with uniaxial anisotropy. Since Ho3+ has J = 8, the ion's behavior is reminiscent of the classical limit (J = ∞), but quantum corrections remain clearly visible.

  6. Universality in quantum chaos and the one-parameter scaling theory.

    PubMed

    García-García, Antonio M; Wang, Jiao

    2008-02-22

    The one-parameter scaling theory is adapted to the context of quantum chaos. We define a generalized dimensionless conductance, g, semiclassically and then study Anderson localization corrections by renormalization group techniques. This analysis permits a characterization of the universality classes associated to a metal (g → ∞), an insulator (g → 0), and the metal-insulator transition (g → g_c) in quantum chaos, provided that the classical phase space is not mixed. According to our results the universality class related to the metallic limit includes all the systems in which the Bohigas-Giannoni-Schmit conjecture holds but automatically excludes those in which dynamical localization effects are important. The universality class related to the metal-insulator transition is characterized by classical superdiffusion or a fractal spectrum in low dimensions (d ≤ 2). Several examples are discussed in detail.

  7. Non-classical nuclei and growth kinetics of Cr precipitates in FeCr alloys during ageing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yulan; Hu, Shenyang Y.; Zhang, Lei

    2014-01-10

    In this manuscript, we quantitatively calculated the thermodynamic properties of critical nuclei of Cr precipitates in FeCr alloys. The concentration profiles of the critical nuclei and the nucleation energy barriers were predicted by the constrained shrinking dimer dynamics (CSDD) method. It is found that the Cr concentration distribution in the critical nuclei strongly depends on the overall Cr concentration as well as temperature. The critical nuclei are non-classical because the concentration in the nuclei is smaller than the thermodynamic equilibrium value. These results are in agreement with atom probe observations. The growth kinetics of both classical and non-classical nuclei was investigated by the phase field approach. The simulations of critical nucleus evolution showed a number of interesting phenomena: (1) a critical classical nucleus first shrinks toward its non-classical nucleus and then grows; (2) a non-classical nucleus has much slower growth kinetics at its earlier growth stage compared to the diffusion-controlled growth kinetics; and (3) a critical classical nucleus grows faster at the earlier growth stage than the non-classical nucleus. All of these results demonstrate that it is critical to introduce the correct critical nuclei in order to correctly capture the kinetics of precipitation.

  8. Derivation of Einstein-Cartan theory from general relativity

    NASA Astrophysics Data System (ADS)

    Petti, Richard

    2015-04-01

    General relativity cannot describe exchange of classical intrinsic angular momentum and orbital angular momentum. Einstein-Cartan theory fixes this problem in the least invasive way. In the late 20th century, the consensus view was that Einstein-Cartan theory requires inclusion of torsion without adequate justification, it has no empirical support (though it doesn't conflict with any known evidence), it solves no important problem, and it complicates gravitational theory with no compensating benefit. In 1986 the author published a derivation of Einstein-Cartan theory from general relativity, with no additional assumptions or parameters. Starting without torsion, Poincaré symmetry, classical or quantum spin, or spinors, it derives torsion and its relation to spin from a continuum limit of general relativistic solutions. The present work makes the case that this computation, combined with supporting arguments, constitutes a derivation of Einstein-Cartan theory from general relativity, not just a plausibility argument. This paper adds more and simpler explanations, more computational details, correction of a factor of 2, discussion of limitations of the derivation, and discussion of some areas of gravitational research where Einstein-Cartan theory is relevant.

  9. Security of coherent-state quantum cryptography in the presence of Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2007-08-15

    We investigate the security against collective attacks of a continuous variable quantum key distribution scheme in the asymptotic key limit for a realistic setting. The quantum channel connecting the two honest parties is assumed to be lossy and imposes Gaussian noise on the observed quadrature distributions. Secret key rates are given for direct and reverse reconciliation schemes including post-selection in the collective attack scenario. The effect of a nonideal error correction and two-way communication in the classical post-processing step is also taken into account.

  10. Nonextensive Thomas-Fermi model

    NASA Astrophysics Data System (ADS)

    Shivamoggi, Bhimsen; Martinenko, Evgeny

    2007-11-01

    The nonextensive Thomas-Fermi model was further investigated in the following directions. Heavy atom in a strong magnetic field: following Shivamoggi's work on the extension of the Kadomtsev equation, we applied the nonextensive formalism to further generalize the TF model for very strong magnetic fields (of order 10^12 G). The generalized TF equation and the binding energy of the atom were calculated, which contain a new nonextensive term dominating the classical one. The binding energy of a heavy atom was also evaluated. Thomas-Fermi equations in N dimensions: this case is technically the same as in Shivamoggi (1998), but the behavior is different, and in the interesting 2D case nonextensivity prevents the equation from becoming a linear ODE as in the classical case. The effect of nonextensivity on dielectric screening reveals itself in the reduction of the envelope radius. It was shown that nonextensivity in each case is responsible for a new term dominating the classical thermal correction term by an order of magnitude, a term which vanishes in the limit q → 1. Therefore it appears that the nonextensive term is ubiquitous for a wide range of systems, and further work is needed to understand its origin.

  11. A Nonlinear Calibration Algorithm Based on Harmonic Decomposition for Two-Axis Fluxgate Sensors

    PubMed Central

    Liu, Shibin

    2018-01-01

    Nonlinearity is a prominent limitation to the calibration performance of two-axis fluxgate sensors. In this paper, a novel nonlinear calibration algorithm taking into account the nonlinearity of errors is proposed. In order to establish the nonlinear calibration model, the combined effect of all time-invariant errors is analyzed in detail, and then the harmonic decomposition method is utilized to estimate the compensation coefficients. The proposed nonlinear calibration algorithm is validated and compared with a classical calibration algorithm by experiments. The experimental results show that, after the nonlinear calibration, the maximum deviation of magnetic field magnitude is decreased from 1302 nT to 30 nT, which is smaller than the 81 nT obtained after the classical calibration. Furthermore, for the two-axis fluxgate sensor used as a magnetic compass, the maximum heading error is corrected from 1.86° to 0.07°, which is approximately 11% of the 0.62° remaining after the classical calibration. The results suggest an effective way to improve the calibration performance of two-axis fluxgate sensors. PMID:29789448
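
    The record does not give the algorithm's details; as a rough illustration of the general idea behind harmonic-decomposition compensation, the sketch below fits low-order heading harmonics to a synthetic two-axis compass error by least squares. All names and numbers are invented for the example, not taken from the paper.

    ```python
    # Illustrative sketch (not the paper's algorithm): compensating a two-axis
    # compass heading error by fitting low-order heading harmonics.
    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)  # true headings (rad)

    # Hypothetical distortion: hard-iron-like (1st harmonic) plus a 2nd harmonic.
    error = 0.03 * np.sin(theta) - 0.02 * np.cos(theta) + 0.01 * np.sin(2 * theta)
    measured = theta + error

    # Design matrix of heading harmonics, evaluated at the *measured* heading,
    # since the true heading is unknown in the field.
    A = np.column_stack([
        np.ones_like(measured),
        np.sin(measured), np.cos(measured),
        np.sin(2 * measured), np.cos(2 * measured),
    ])
    coef, *_ = np.linalg.lstsq(A, measured - theta, rcond=None)

    corrected = measured - A @ coef
    print(np.max(np.abs(corrected - theta)))  # residual heading error (rad)
    ```

    The residual is second order in the distortion amplitude, which mirrors the qualitative gain the abstract reports over a linear (non-harmonic) calibration.
    
    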

  12. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
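
    As a toy illustration of the PDF-weighted idea described in this abstract (not the paper's actual model), one can average a hypothetical size-dependent capture efficiency over an assumed pore-size density instead of evaluating it at a single mean pore size; the per-pore efficiency function and all numbers below are invented.

    ```python
    # Toy sketch of a PDF-weighted filtration efficiency versus the classic
    # mean-collector-size estimate. Distribution and efficiency are hypothetical.
    import numpy as np

    d = np.linspace(5.0, 40.0, 400)   # pore diameters (microns, assumed range)
    w = d[1] - d[0]
    pdf = np.exp(-(np.log(d) - np.log(15.0)) ** 2 / 0.18)  # lognormal-like weights
    pdf /= pdf.sum() * w              # normalize to a probability density

    def eta(d):
        """Hypothetical per-collector capture efficiency, decreasing with pore size."""
        return 1.0 - np.exp(-120.0 / d ** 1.5)

    eta_hmf = (eta(d) * pdf).sum() * w    # heterogeneous (PDF-weighted) efficiency
    eta_mean = eta((d * pdf).sum() * w)   # classic efficiency at the mean pore size
    print(eta_hmf, eta_mean)
    ```

    Because the efficiency is nonlinear in pore size, the two estimates differ whenever the pore-size variance is appreciable, and coincide as the variance shrinks, consistent with the sensitivity analysis the abstract describes.
    
    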

  13. Quantum Speed Limits across the Quantum-to-Classical Transition

    NASA Astrophysics Data System (ADS)

    Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.

    2018-02-01

    Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.

  14. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
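
    The burst-error mitigation described here rests on standard block interleaving; a minimal sketch (illustrative, not the authors' code) shows how transmitting codewords column-by-column spreads a channel burst across many codewords, leaving each with only isolated errors.

    ```python
    # Block interleaving sketch: rows are FEC codewords, transmission is by columns.

    def interleave(codewords):
        """Transmit column-by-column across a list of equal-length codewords."""
        return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

    def deinterleave(stream, n_codewords):
        """Recover the original codewords from the column-major stream."""
        return [stream[i::n_codewords] for i in range(n_codewords)]

    words = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]
    tx = interleave(words)       # [1, 2, 3, 1, 2, 3, ...]
    tx[3:6] = ['X', 'X', 'X']    # a 3-symbol burst on the channel
    rx = deinterleave(tx, len(words))
    print(rx)  # each codeword now holds at most one corrupted symbol
    ```

    Without interleaving the same burst would wipe out three consecutive symbols of a single codeword, which is exactly the failure mode the paper attributes to burst errors in superdense coding.
    
    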

  15. Quantum error correction in crossbar architectures

    NASA Astrophysics Data System (ADS)

    Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie

    2018-07-01

    A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

  16. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  17. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  18. Asymptotics of quantum weighted Hurwitz numbers

    NASA Astrophysics Data System (ADS)

    Harnad, J.; Ortmann, Janosch

    2018-06-01

    This work concerns both the semiclassical and zero temperature asymptotics of quantum weighted double Hurwitz numbers. The partition function for quantum weighted double Hurwitz numbers can be interpreted in terms of the energy distribution of a quantum Bose gas with vanishing fugacity. We compute the leading semiclassical term of the partition function for three versions of the quantum weighted Hurwitz numbers, as well as lower order semiclassical corrections. The classical limit is shown to reproduce the simple single and double Hurwitz numbers studied by Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74). The KP-Toda τ-function that serves as generating function for the quantum Hurwitz numbers is shown to have the τ-function of Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74) as its leading term in the classical limit, and, with suitable scaling, the same holds for the partition function, the weights and expectations of Hurwitz numbers. We also compute the zero temperature limit of the partition function and quantum weighted Hurwitz numbers. The KP or Toda τ-function serving as generating function for the quantum Hurwitz numbers are shown to give the one for Belyi curves in the zero temperature limit and, with suitable scaling, the same holds true for the partition function, the weights and the expectations of Hurwitz numbers.

  19. Improved wavefront correction for coherent image restoration.

    PubMed

    Zelenka, Claudius; Koch, Reinhard

    2017-08-07

    Coherent imaging has a wide range of applications in, for example, microscopy, astronomy, and radar imaging. Particularly interesting is the field of microscopy, where the optical quality of the lens is the main limiting factor. In this article, novel algorithms for the restoration of blurred images in a system with known optical aberrations are presented. Physically motivated by scalar diffraction theory, the new algorithms are based on Haugazeau POCS and FISTA, and are faster and more robust than methods presented earlier. With the new approach the level of restoration quality on real images is very high, and blurring and ringing caused by defocus can be effectively removed. In classical microscopy, lenses with very low aberration must be used, which puts a practical limit on their size and numerical aperture. A coherent microscope using the novel restoration method overcomes this limitation. In contrast to incoherent microscopy, severe optical aberrations including defocus can be removed, hence the requirements on the quality of the optics are lower. This can be exploited for a substantial price reduction of the optical system. It can also be used to achieve higher resolution than in classical microscopy, by using lenses with high numerical aperture and high aberration. All this makes coherent microscopy superior to traditional incoherent microscopy in suitable applications.

  20. Effect of non-classical current paths in networks of 1-dimensional wires

    NASA Astrophysics Data System (ADS)

    Echternach, P. M.; Mikhalchuk, A. G.; Bozler, H. M.; Gershenson, M. E.; Bogdanov, A. L.; Nilsson, B.

    1996-04-01

    At low temperatures, the quantum corrections to the resistance due to weak localization and electron-electron interaction are affected by the shape and topology of samples. We observed these effects in the resistance of 2D percolation networks made from 1D wires and in a series of long 1D wires with regularly spaced side branches. Branches outside the classical current path strongly reduce the quantum corrections to the resistance and these reductions become a measure of the quantum lengths.

  1. Cell-model prediction of the melting of a Lennard-Jones solid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holian, B.L.

    The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.

  2. On the Small Mass Limit of Quantum Brownian Motion with Inhomogeneous Damping and Diffusion

    NASA Astrophysics Data System (ADS)

    Lim, Soon Hoe; Wehr, Jan; Lampo, Aniello; García-March, Miguel Ángel; Lewenstein, Maciej

    2018-01-01

    We study the small mass limit (or: the Smoluchowski-Kramers limit) of a class of quantum Brownian motions with inhomogeneous damping and diffusion. For Ohmic bath spectral density with a Lorentz-Drude cutoff, we derive the Heisenberg-Langevin equations for the particle's observables using a quantum stochastic calculus approach. We set the mass of the particle to equal m = m₀ε, the reduced Planck constant to equal ħ = ε, and the cutoff frequency to equal Λ = E_Λ/ε, where m₀ and E_Λ are positive constants, so that the particle's de Broglie wavelength and the largest energy scale of the bath are fixed as ε → 0. We study the limit as ε → 0 of the rescaled model and derive a limiting equation for the (slow) particle's position variable. We find that the limiting equation contains several drift correction terms, the quantum noise-induced drifts, including terms of purely quantum nature, with no classical counterparts.

  3. Speckle Noise in Highly Corrected Coronagraphs

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.

    2004-01-01

    Speckles in a highly corrected adaptive optic imaging system have been studied through numerical simulations and through analytic and algebraic investigations of the Fourier-optical expressions connecting pupil plane and focal plane, which simplify at high Strehl ratio. Significant insights into the behavior of speckles, and the speckle noise caused when they vary over time, have thus been gained. Such speckle noise is expected to set key limits on the sensitivity of searches for companions around other stars, including extrasolar planets. In most cases, it is advantageous to use a coronagraph of some kind to suppress the bright primary star and so enhance the dynamic range of companion searches. In the current paper, I investigate speckle behavior and its impact on speckle noise in some common coronagraphic architectures, including the classical Lyot coronagraph and the new four quadrant phase mask (FQPM) concept.

  4. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    PubMed

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a one-dimensional pure quartic potential and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  5. Quantum implications of a scale invariant regularization

    NASA Astrophysics Data System (ADS)

    Ghilencea, D. M.

    2018-04-01

    We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^{2n+4}/σ^{2n}. These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).

  6. Reply to "Comment on `Simple improvements to classical bubble nucleation models'"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kyoko K.; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2016-08-01

    We reply to the Comment by Schmelzer and Baidakov [Phys. Rev. E 94, 026801 (2016), 10.1103/PhysRevE.94.026801]. They suggest that a more modern approach than the classic description by Tolman is necessary to model the surface tension of curved interfaces. Therefore we now consider the higher-order Helfrich correction, rather than the simpler first-order Tolman correction. Using a recent parametrization of the Helfrich correction provided by Wilhelmsen et al. [J. Chem. Phys. 142, 064706 (2015), 10.1063/1.4907588], we test this description against measurements from our simulations, and find an agreement stronger than what the pure Tolman description offers. Our analyses suggest a necessary correction of order higher than the second for small bubbles with radius ≲1 nm. In addition, we respond to other minor criticism about our results.

  7. Theoretical prediction of crystallization kinetics of a supercooled Lennard-Jones fluid

    NASA Astrophysics Data System (ADS)

    Gunawardana, K. G. S. H.; Song, Xueyu

    2018-05-01

    The first order curvature correction to the crystal-liquid interfacial free energy is calculated using a theoretical model based on the interfacial excess thermodynamic properties. The correction parameter (δ), which is analogous to the Tolman length at a liquid-vapor interface, is found to be 0.48 ± 0.05 for a Lennard-Jones (LJ) fluid. We show that this curvature correction is crucial in predicting the nucleation barrier when the size of the crystal nucleus is small. The thermodynamic driving force (Δμ) corresponding to available simulated nucleation conditions is also calculated by combining the simulated data with a classical density functional theory. In this paper, we show that the classical nucleation theory is capable of predicting the nucleation barrier with excellent agreement to the simulated results when the curvature correction to the interfacial free energy is accounted for.
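
    A hedged numerical sketch of the classical-nucleation-theory picture the abstract invokes, with a first-order curvature (Tolman-like) correction applied to the surface term. Apart from δ = 0.48 quoted in the abstract, every parameter value below is invented for illustration and is not the paper's data.

    ```python
    # CNT barrier with a curvature-corrected surface tension,
    # gamma(r) = gamma0 / (1 + 2*delta/r). Values are illustrative only.
    import numpy as np

    gamma0 = 0.35   # flat-interface free energy (reduced LJ units, assumed)
    dmu    = 0.30   # thermodynamic driving force per particle (assumed)
    rho    = 1.0    # solid number density (assumed)
    delta  = 0.48   # curvature-correction parameter quoted in the abstract

    r = np.linspace(0.1, 10.0, 2000)
    surface = 4.0 * np.pi * r**2 * gamma0 / (1.0 + 2.0 * delta / r)
    bulk = (4.0 / 3.0) * np.pi * r**3 * rho * dmu
    dG = surface - bulk          # free-energy cost of a nucleus of radius r

    barrier = dG.max()           # nucleation barrier
    r_star = r[dG.argmax()]      # critical nucleus radius
    print(barrier, r_star)
    ```

    The correction suppresses the surface term at small radii, lowering both the barrier and the critical radius relative to the uncorrected (δ = 0) classical theory, which is the effect the paper reports for small nuclei.
    
    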

  8. Inferential Procedures for Correlation Coefficients Corrected for Attenuation.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1988-01-01

    A model and computation procedure based on classical test score theory are presented for determination of a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
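
    For reference, the classical disattenuation formula this record (and records 19 and 20 below) refer to, illustrated with made-up numbers:

    ```python
    # Spearman's correction for attenuation: r_true = r_xy / sqrt(r_xx * r_yy),
    # where r_xx and r_yy are the reliabilities of the two measures.
    import math

    def correct_for_attenuation(r_xy, r_xx, r_yy):
        """Estimated correlation between true scores under classical test theory."""
        return r_xy / math.sqrt(r_xx * r_yy)

    r = correct_for_attenuation(r_xy=0.42, r_xx=0.80, r_yy=0.70)
    print(round(r, 3))  # 0.561
    ```

    The corrected coefficient is always at least as large as the observed one, since the reliabilities are at most 1.
    
    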

  9. Quantum decoration transformation for spin models

    NASA Astrophysics Data System (ADS)

    Braz, F. F.; Rodrigues, F. C.; de Souza, S. M.; Rojas, Onofre

    2016-09-01

    The extension of the decoration transformation to quantum spin models is quite relevant, since most real materials are well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties such as the persistence of symmetry and symmetry breaking during this transformation. Although the proposed transformation, in principle, cannot be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative, the mapping is possible in the "classical" limit, establishing an equivalence between both quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how a correction could influence the decoration transformation. This correction, however, is of little use in improving the quantum decoration transformation, because it involves second-nearest-neighbor and further-neighbor couplings, which makes it a cumbersome task to establish the equivalence between both lattice models. The correction nevertheless gives valuable information about its own contribution: for most Heisenberg-type models it is irrelevant at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to a bond-alternating Ising-Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.

  10. Effective monopoles within thick branes

    NASA Astrophysics Data System (ADS)

    Hoff da Silva, J. M.; da Rocha, Roldão

    2012-10-01

    The monopole mass is revealed to be considerably modified in the thick braneworld paradigm, and depends on the position of the monopole in the brane as well. Accordingly, the monopole radius continuously increases, leading to an unacceptable setting that can be circumvented when the brane thickness has an upper limit. Despite such peculiar behavior, the accrued quantum corrections —involving the classical monopole solution— are shown to be still under control. We analyze the monopole's peculiarities also taking into account the localization of the gauge fields. Furthermore, some additional analysis in the thick braneworld context and the similar behavior evinced by the topological string are investigated.

  11. Multi-server blind quantum computation over collective-noise channels

    NASA Astrophysics Data System (ADS)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation task to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also keep clients as classical as possible and tolerate faults arising from a nonideal quantum channel. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely or almost classical client, meet the correctness and blindness requirements of a BQC protocol, and are typically practical BQC protocols.

  12. Quantum correction to classical gravitational interaction between two polarizable objects

    NASA Astrophysics Data System (ADS)

    Wu, Puxun; Hu, Jiawei; Yu, Hongwei

    2016-12-01

    When gravity is quantized, there inevitably exist quantum gravitational vacuum fluctuations which induce quadrupole moments in gravitationally polarizable objects and produce a quantum correction to the classical Newtonian interaction between them. Here, based upon linearized quantum gravity and the leading-order perturbation theory, we study, from a quantum field-theoretic perspective, this quantum correction between a pair of gravitationally polarizable objects treated as two-level harmonic oscillators. We find that the interaction potential behaves like r^-11 in the retarded regime and r^-10 in the near regime. Our result agrees with results recently obtained via different approaches. Our study seems to indicate that linearized quantum gravity is robust in dealing with quantum gravitational effects at low energies.

  13. Loop quantum cosmology of Bianchi IX: effective dynamics

    NASA Astrophysics Data System (ADS)

    Corichi, Alejandro; Montoya, Edison

    2017-03-01

    We study solutions to the effective equations for the Bianchi IX class of spacetimes within loop quantum cosmology (LQC). We consider Bianchi IX models whose matter content is a massless scalar field, by numerically solving the loop quantum cosmology effective equations, with and without inverse triad corrections. The solutions are classified using certain geometrically motivated classical observables. We show that both effective theories—with lapse N  =  V and N  =  1—resolve the big bang singularity and reproduce the classical dynamics far from the bounce. Moreover, due to the positive spatial curvature, there is an infinite number of bounces and recollapses. We study the limit of large field momentum and show that both effective theories reproduce the same dynamics, thus recovering general relativity. We implement a procedure to identify amongst the Bianchi IX solutions, those that behave like k  =  0,1 FLRW as well as Bianchi I, II, and VII0 models. The effective solutions exhibit Bianchi I phases with Bianchi II transitions and also Bianchi VII0 phases, which had not been studied before. We comment on the possible implications of these results for a quantum modification to the classical BKL behaviour.

  14. CCSD(T) potential energy and induced dipole surfaces for N2–H2(D2): retrieval of the collision-induced absorption integrated intensities in the regions of the fundamental and first overtone vibrational transitions.

    PubMed

    Buryak, Ilya; Lokshtanov, Sergei; Vigasin, Andrey

    2012-09-21

    The present work aims at ab initio characterization of the integrated intensity temperature variation of collision-induced absorption (CIA) in N2–H2(D2). Global fits of the potential energy surface (PES) and induced dipole moment surface (IDS) were made on the basis of CCSD(T) (coupled cluster with single and double and perturbative triple excitations) calculations with aug-cc-pV(T,Q)Z basis sets. Basis set superposition error correction and extrapolation to complete basis set (CBS) limit techniques were applied to both energy and dipole moment. Classical second cross virial coefficient calculations accounting for the first quantum correction were employed to prove the quality of the obtained PES. The CIA temperature dependence was found in satisfactory agreement with available experimental data.

  15. Green autofluorescence, a double edged monitoring tool for bacterial growth and activity in micro-plates

    NASA Astrophysics Data System (ADS)

    Mihalcescu, Irina; Van-Melle Gateau, Mathilde; Chelli, Bernard; Pinel, Corinne; Ravanat, Jean-Luc

    2015-12-01

    The intrinsic green autofluorescence of an Escherichia coli culture has long been overlooked and empirically corrected in green fluorescent protein (GFP) reporter experiments. We show here, by using complementary methods of fluorescence analysis and HPLC, that this autofluorescence arises principally from flavins secreted into the external medium. The cells secrete roughly 10 times more flavins than they keep inside. We show next that the secreted flavin fluorescence can be used as a complementary method for measuring the cell concentration, particularly when the classical method, based on optical density measurements, starts to fail. We also demonstrate that the same external flavins limit the dynamical range of GFP quantification and can lead to a false interpretation of a lower global dynamic range of expression than what really occurs. Finally, we evaluate different autofluorescence correction methods to extract the real GFP signal.

  16. The dynamical mass of a classical Cepheid variable star in an eclipsing binary system.

    PubMed

    Pietrzyński, G; Thompson, I B; Gieren, W; Graczyk, D; Bono, G; Udalski, A; Soszyński, I; Minniti, D; Pilecki, B

    2010-11-25

    Stellar pulsation theory provides a means of determining the masses of pulsating classical Cepheid supergiants: it is the pulsation that causes their luminosity to vary. Such pulsational masses are found to be smaller than the masses derived from stellar evolution theory: this is the Cepheid mass discrepancy problem, for which a solution is missing. An independent, accurate dynamical mass determination for a classical Cepheid variable star (as opposed to type-II Cepheids, low-mass stars with a very different evolutionary history) in a binary system is needed in order to determine which is correct. The accuracy of previous efforts to establish a dynamical Cepheid mass from Galactic single-lined non-eclipsing binaries was typically about 15-30% (refs 6, 7), which is not good enough to resolve the mass discrepancy problem. In spite of many observational efforts, no firm detection of a classical Cepheid in an eclipsing double-lined binary has hitherto been reported. Here we report the discovery of a classical Cepheid in a well detached, double-lined eclipsing binary in the Large Magellanic Cloud. We determine the mass to a precision of 1% and show that it agrees with its pulsation mass, providing strong evidence that pulsation theory correctly and precisely predicts the masses of classical Cepheids.
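
    The dynamical-mass determination rests on standard Kepler arithmetic for a double-lined eclipsing binary; the sketch below uses invented numbers (not the paper's measurements), assuming an edge-on orbit (sin i ≈ 1, guaranteed by the eclipses) and negligible eccentricity.

    ```python
    # Component masses from the period P and the two radial-velocity
    # semi-amplitudes K1, K2 of a double-lined eclipsing binary.
    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30         # solar mass, kg

    P = 310.0 * 86400.0      # orbital period, s (assumed value)
    K1, K2 = 25.0e3, 30.0e3  # RV semi-amplitudes, m/s (assumed values)

    # Total mass from Kepler's third law, with sin(i) = 1 and e = 0.
    M_total = P * (K1 + K2) ** 3 / (2.0 * math.pi * G)
    # Mass ratio M1/M2 = K2/K1 splits the total between the components.
    M1 = M_total * K2 / (K1 + K2)
    print(M_total / M_SUN, M1 / M_SUN)
    ```

    With both stars' radial-velocity curves and the eclipse-constrained inclination, every quantity on the right-hand side is directly measured, which is why the double-lined eclipsing configuration yields the quoted 1% mass precision.
    
    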

  17. Comparative assessment of astigmatism-corrected Czerny-Turner imaging spectrometer using off-the-shelf optics

    NASA Astrophysics Data System (ADS)

    Yuan, Qun; Zhu, Dan; Chen, Yueyang; Guo, Zhenyan; Zuo, Chao; Gao, Zhishan

    2017-04-01

    We present the optical design of a Czerny-Turner imaging spectrometer for which astigmatism is corrected using off-the-shelf optics resulting in spectral resolution of 0.1 nm. The classic Czerny-Turner imaging spectrometer, consisting of a plane grating, two spherical mirrors, and a sensor with 10-μm pixels, was used as the benchmark. We comparatively assessed three configurations of the spectrometer that corrected astigmatism with divergent illumination of the grating, by adding a cylindrical lens, or by adding a cylindrical mirror. When configured with the added cylindrical lens, the imaging spectrometer with a point field of view (FOV) and a linear sensor achieved diffraction-limited performance over a broadband width of 400 nm centered at 800 nm, while the maximum allowable bandwidth was only 200 nm for the other two configurations. When configured with the added cylindrical mirror, the imaging spectrometer with a one-dimensional field of view (1D FOV) and an area sensor showed its superiority on imaging quality, spectral nonlinearity, as well as keystone over 100 nm bandwidth and 10 mm spatial extent along the entrance slit.

  18. Application of the N-quantum approximation to the proton radius problem

    NASA Astrophysics Data System (ADS)

    Cowen, Steven

    This thesis is organized into three parts: 1. Introduction and bound state calculations of electronic and muonic hydrogen, 2. Bound states in motion, and 3. Treatment of soft photons. In the first part, we apply the N-Quantum Approximation (NQA) to electronic and muonic hydrogen and search for any new corrections to energy levels that could account for the 0.31 meV discrepancy of the proton radius problem. We derive a bound state equation and compare our numerical solutions and wave functions to those of the Dirac equation. We find NQA Lamb shift diagrams and calculate the associated energy shift contributions. We do not find any new corrections large enough to account for the discrepancy. In part 2, we discuss the effects of motion on bound states using the NQA. We find classical Lorentz contraction of the lowest order NQA wave function. Finally, in part 3, we develop a clothing transformation for interacting fields in order to produce the correct asymptotic limits. We find the clothing eliminates a trilinear interacting Hamiltonian term and produces a quadrilinear soft photon interaction term.

  19. The Proper Sequence for Correcting Correlation Coefficients for Range Restriction and Unreliability.

    ERIC Educational Resources Information Center

    Stauffer, Joseph M.; Mendoza, Jorge L.

    2001-01-01

    Uses classical test theory to show that it is the nature of the range restriction, rather than the nature of the available reliability coefficient, that determines the sequence for applying corrections for range restriction and unreliability. Shows how the common rule of thumb for choosing the sequence is tenable only when the correction does not…

  20. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
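Spearman's correction divides the observed correlation by the square root of the product of the two reliabilities; dividing by only one reliability leaves the other variable's measurement error in place, giving the partial correction discussed above. A minimal sketch (the example values are illustrative, not taken from the article):

```python
import math

def disattenuate(r_obs, rel_x=1.0, rel_y=1.0):
    """Spearman's correction for attenuation: divide the observed
    correlation by the square root of the reliabilities. Passing 1.0
    for a reliability leaves that variable's measurement error
    uncorrected, which yields a partial correction."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Full correction: measurement error removed from both variables.
full = disattenuate(0.30, rel_x=0.80, rel_y=0.90)
# Partial correction: error removed from x only.
partial = disattenuate(0.30, rel_x=0.80)
```

The partial correction is always no larger than the full one, since it divides by a larger denominator.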

  1. Experimental Blind Quantum Computing for a Classical Client.

    PubMed

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-04

    To date, blind quantum computing demonstrations have required clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. By interacting classically with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers that are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests the servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and is thus a key milestone towards secure cloud quantum computing.

  2. Experimental Blind Quantum Computing for a Classical Client

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C.; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-01

    To date, blind quantum computing demonstrations have required clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. By interacting classically with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers that are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests the servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and is thus a key milestone towards secure cloud quantum computing.

  3. Feasibility of a new Indiana Coordinate Reference System (INCRS).

    DOT National Transportation Integrated Search

    2012-10-01

    Engineers, surveyors, and GIS professionals spend an enormous amount of time correcting field surveys to the classical State Plane Coordinate System (SPCS). The current mapping corrections are on the order of 1:33,000, or 30 parts per million (ppm)...

  4. A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less)

    NASA Astrophysics Data System (ADS)

    Brell, Courtney G.

    2016-01-01

    We propose a family of local CSS stabilizer codes as possible candidates for self-correcting quantum memories in 3D. The construction is inspired by the classical Ising model on a Sierpinski carpet fractal, which acts as a classical self-correcting memory. Our models are naturally defined on fractal subsets of a 4D hypercubic lattice with Hausdorff dimension less than 3. Though this does not imply that these models can be realized with local interactions in R^3, we also discuss this possibility. The X and Z sectors of the code are dual to one another, and we show that there exists a finite temperature phase transition associated with each of these sectors, providing evidence that the system may robustly store quantum information at finite temperature.

  5. Fermi orbital derivatives in self-interaction corrected density functional theory: Applications to closed shell atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, Mark R., E-mail: mark.pederson@science.doe.gov

    2015-02-14

    A recent modification of the Perdew-Zunger self-interaction correction to the density-functional formalism has provided a framework for explicitly restoring unitary invariance to the expression for the total energy. The formalism depends upon the construction of Löwdin-orthonormalized Fermi orbitals which depend parametrically on variational quasi-classical electronic positions. Derivatives of these quasi-classical electronic positions, required for efficient minimization of the self-interaction-corrected energy, are derived and tested here on atoms. Total energies and ionization energies in closed-shell singlet atoms, where correlation is less important, using the Perdew-Wang 1992 Local Density Approximation (PW92) functional, are in good agreement with experiment and non-relativistic quantum Monte Carlo results, albeit slightly too low.

  6. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.

  7. The memory loophole

    NASA Astrophysics Data System (ADS)

    Shanahan, Daniel

    2008-05-01

    The memory loophole supposes that the measurement of an entangled pair is influenced by the measurements of earlier pairs in the same run of measurements. To assert the memory loophole is thus to deny that measurement is intrinsically random. It is argued that measurement might instead involve a process of recovery and equilibrium in the measuring apparatus akin to that described in thermodynamics by Le Chatelier's principle. The predictions of quantum mechanics would then arise from conservation of the measured property in the combined system of apparatus and measured ensemble. Measurement would be consistent with classical laws of conservation, not simply in the classical limit of large numbers, but whatever the size of the ensemble. However, variances from quantum mechanical predictions would be self-correcting and centripetal, rather than Markovian and increasing as under the standard theory. Entanglement correlations would persist, not because the entangled particles act in concert (which would entail nonlocality), but because the measurements of the particles were influenced by the one fluctuating state of imbalance in the process of measurement.

  8. Finite-size effects in simulations of electrolyte solutions under periodic boundary conditions

    NASA Astrophysics Data System (ADS)

    Thompson, Jeffrey; Sanchez, Isaac

    The equilibrium properties of charged systems with periodic boundary conditions may exhibit pronounced system-size dependence due to the long range of the Coulomb force. As shown by others, the leading-order finite-size correction to the Coulomb energy of a charged fluid confined to a periodic box of volume V may be derived from sum rules satisfied by the charge-charge correlations in the thermodynamic limit V -> ∞. In classical systems, the relevant sum rule is the Stillinger-Lovett second-moment (or perfect screening) condition. This constraint implies that for large V, periodicity induces a negative bias of -k_B T/(2V) in the total Coulomb energy density of a homogeneous classical charged fluid of given density and temperature. We present a careful study of the impact of such finite-size effects on the calculation of solute chemical potentials from explicit-solvent molecular simulations of aqueous electrolyte solutions. National Science Foundation Graduate Research Fellowship Program, Grant No. DGE-1610403.
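The leading-order correction quoted above, a bias of -k_B T/(2V) in the Coulomb energy density, is easy to evaluate numerically. The box size and temperature below are illustrative choices, not values from the abstract:

```python
# Leading-order finite-size bias in the Coulomb energy density of a
# homogeneous classical charged fluid under periodic boundary conditions:
#   delta_u = -k_B * T / (2 * V)
KB = 1.380649e-23  # Boltzmann constant, J/K

def coulomb_energy_density_bias(temperature_K, box_volume_m3):
    """Negative bias (J/m^3) induced by periodicity for a box of
    volume V at temperature T."""
    return -KB * temperature_K / (2.0 * box_volume_m3)

# Illustrative (3 nm)^3 simulation box at 300 K:
V = (3e-9) ** 3
bias = coulomb_energy_density_bias(300.0, V)
```

The bias vanishes as V grows, consistent with its role as a finite-size correction.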

  9. Quantum fluctuating geometries and the information paradox

    NASA Astrophysics Data System (ADS)

    Eyheralde, Rodrigo; Campiglia, Miguel; Gambini, Rodolfo; Pullin, Jorge

    2017-12-01

    We study Hawking radiation on the quantum space-time of a collapsing null shell. We use the geometric optics approximation as in Hawking’s original papers to treat the radiation. The quantum space-time is constructed by superposing the classical geometries associated with collapsing shells with uncertainty in their position and mass. We show that there are departures from thermality in the radiation even though we are not considering a back reaction. One recovers the usual profile for the Hawking radiation as a function of frequency in the limit where the space-time is classical. However, when quantum corrections are taken into account, the profile of the Hawking radiation as a function of time contains information about the initial state of the collapsing shell. More work will be needed to determine whether all the information can be recovered. The calculations show that non-trivial quantum effects can occur in regions of low curvature when horizons are involved, as is proposed in the firewall scenario, for instance.

  10. Wall interference tests of a CAST 10-2/DOA 2 airfoil in an adaptive-wall test section

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.

    1987-01-01

    A wind-tunnel investigation of a CAST 10-2/DOA 2 airfoil model has been conducted in the adaptive-wall test section of the Langley 0.3-Meter Transonic Cryogenic Tunnel (TCT) and in the National Aeronautical Establishment High Reynolds Number Two-Dimensional Test Facility. The primary goal of the tests was to assess two different wall-interference correction techniques: adaptive test-section walls and classical analytical corrections. Tests were conducted over a Mach number range from 0.3 to 0.8 and over a chord Reynolds number range from 6 million to 70 million. The airfoil aerodynamic characteristics from the tests in the 0.3-m TCT have been corrected for wall interference by the movement of the adaptive walls. No additional corrections for any residual interference have been applied to the data, to allow comparison with the classically corrected data from the same model in the conventional National Aeronautical Establishment facility. The data are presented graphically in this report as integrated force-and-moment coefficients and chordwise pressure distributions.

  11. Corneal topometry by fringe projection: limits and possibilities

    NASA Astrophysics Data System (ADS)

    Windecker, Robert; Tiziani, Hans J.; Thiel, H.; Jean, Benedikt J.

    1996-01-01

    A fast and accurate measurement of corneal topography is an important task, especially since laser-induced corneal reshaping has been used for the correction of ametropia. The classical measuring system uses Placido rings for the measurement and calculation of the topography or local curvatures. Another approach is the projection of a known fringe map imaged onto the surface under a certain angle of incidence. We present a set-up using telecentric illumination and detection units. With a special grating we obtain a synthetic wavelength with a nearly sinusoidal profile. In combination with very fast data acquisition, the topography can be evaluated using a special self-normalizing phase evaluation algorithm. It calculates local Fourier coefficients and corrects errors caused by imperfect illumination or inhomogeneous scattering by fringe normalization. The topography can be determined over 700 x 256 pixels. The set-up is suitable for measuring optically rough silicon replicas of the human cornea as well as the cornea in vivo over a field of 8 mm and more. The resolution is mainly limited by noise and is better than two micrometers. We discuss the principal benefits and drawbacks compared with the standard Placido technique.

  12. The Five Dogs of Politically Correct Speech on Campus.

    ERIC Educational Resources Information Center

    Droge, David

    "Politically correct" has become an all-purpose pejorative epithet conflating and condemning a number of initiatives, such as affirmative action in hiring and admissions, multicultural education, broadening the "canon" of classical texts to include women and minority groups, protests against unpopular, usually conservative…

  13. Dynamic Network-Based Epistasis Analysis: Boolean Examples

    PubMed Central

    Azpeitia, Eugenio; Benítez, Mariana; Padilla-Longoria, Pablo; Espinosa-Soto, Carlos; Alvarez-Buylla, Elena R.

    2011-01-01

    In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets become available, the analysis of classical epistasis is being complemented with computer science tools and systems biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, access to relevant information and the correct inference of gene interaction topologies are hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on the use of Boolean networks, which, like classical epistasis analysis, rely on logical formalisms and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and discern among alternative regulatory schemes that would be impossible or difficult to infer without the elimination of these assumptions from classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis.
    Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, and specifically introducing the usefulness of Boolean network models and reviewing some key properties of network approaches. PMID:22645556
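The dynamic analysis the authors advocate can be illustrated with a toy Boolean network in which each gene's next state depends on both genes' current states, so neither a strict hierarchy nor a single regulatory path applies. The two-gene update rules below are invented for illustration and are not taken from the article:

```python
# A two-gene Boolean network: state is a (gene_a, gene_b) tuple of bools.
# Update rules (illustrative only): a' = a AND NOT b;  b' = a OR b.
def step(state):
    a, b = state
    return (a and not b, a or b)

def trajectory(state, steps):
    """Iterate the synchronous update rule and record each state."""
    states = [state]
    for _ in range(steps):
        state = step(state)
        states.append(state)
    return states

# From (True, False) the network settles into a fixed point:
traj = trajectory((True, False), 4)
```

Following the trajectory, rather than only comparing single-mutant and double-mutant endpoints, is what lets a dynamic model distinguish regulatory schemes that static epistasis analysis would conflate.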

  14. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    PubMed Central

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-01-01

    When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSAs), and especially Grover's QSA, will inevitably be among their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. In practice, decoherence will remain as unavoidable as the Gaussian noise of classical circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by the deleterious effects of depolarizing channels imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10^-3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865
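The depolarizing noise model used above replaces a qubit's state with the maximally mixed state with some probability p. A minimal single-qubit sketch using plain 2x2 matrices; the p value mirrors the 10^-3 probability quoted, and everything else is illustrative:

```python
# Depolarizing channel on a single-qubit density matrix (2x2 nested
# lists): rho -> (1 - p) * rho + p * I/2.
def depolarize(rho, p):
    out = [[(1 - p) * rho[i][j] for j in range(2)] for i in range(2)]
    out[0][0] += p / 2  # add p/2 on the diagonal (p * I/2)
    out[1][1] += p / 2
    return out

rho = [[1.0, 0.0], [0.0, 0.0]]   # pure |0><0|
noisy = depolarize(rho, 1e-3)    # population of |0> drops to 1 - p/2
```

The channel is trace-preserving, so the diagonal entries of the output still sum to one.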

  15. Topological order and memory time in marginally-self-correcting quantum memory

    NASA Astrophysics Data System (ADS)

    Siva, Karthik; Yoshida, Beni

    2017-03-01

    We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.

  16. Hypersurface-deformation algebroids and effective spacetime models

    NASA Astrophysics Data System (ADS)

    Bojowald, Martin; Büyükçam, Umut; Brahma, Suddhasattwa; D'Ambrosio, Fabio

    2016-11-01

    In canonical gravity, covariance is implemented by brackets of hypersurface-deformation generators forming a Lie algebroid. Lie-algebroid morphisms, therefore, allow one to relate different versions of the brackets that correspond to the same spacetime structure. An application to examples of modified brackets found mainly in models of loop quantum gravity can, in some cases, map the spacetime structure back to the classical Riemannian form after a field redefinition. For one type of quantum corrections (holonomies), signature change appears to be a generic feature of effective spacetime, and it is shown here to be a new quantum spacetime phenomenon which cannot be mapped to an equivalent classical structure. In low-curvature regimes, our constructions not only prove the existence of classical spacetime structures assumed elsewhere in models of loop quantum cosmology, they also show the existence of additional quantum corrections that have not always been included.

  17. Experimental quantum annealing: case study involving the graph isomorphism problem.

    PubMed

    Zick, Kenneth M; Shehab, Omar; French, Matthew

    2015-06-08

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N^2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.
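Classical post-processing majority voting of the kind evaluated above can be sketched as taking, at each bit position, the most common value across repeated annealing reads. This is an assumed scheme for illustration; the paper's exact voting procedure may differ:

```python
from collections import Counter

def majority_vote(samples):
    """Combine repeated annealing reads (tuples of bits) by taking the
    most common value at each bit position."""
    n = len(samples[0])
    return tuple(
        Counter(s[i] for s in samples).most_common(1)[0][0]
        for i in range(n)
    )

# Three noisy reads of a 3-bit solution; voting recovers (0, 1, 1).
reads = [(0, 1, 1), (0, 1, 0), (1, 1, 1)]
voted = majority_vote(reads)
```

Voting suppresses independent single-bit errors as long as a majority of reads is correct at each position.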

  18. Experimental quantum annealing: case study involving the graph isomorphism problem

    PubMed Central

    Zick, Kenneth M.; Shehab, Omar; French, Matthew

    2015-01-01

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N^2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers. PMID:26053973

  19. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  20. From quantum to classical modeling of radiation reaction: A focus on stochasticity effects

    NASA Astrophysics Data System (ADS)

    Niel, F.; Riconda, C.; Amiranoff, F.; Duclous, R.; Grech, M.

    2018-04-01

    Radiation reaction in the interaction of ultrarelativistic electrons with a strong external electromagnetic field is investigated using a kinetic approach in the nonlinear moderately quantum regime. Three complementary descriptions are discussed considering arbitrary geometries of interaction: a deterministic one relying on the quantum-corrected radiation reaction force in the Landau and Lifschitz (LL) form, a linear Boltzmann equation for the electron distribution function, and a Fokker-Planck (FP) expansion in the limit where the emitted photon energies are small with respect to that of the emitting electrons. The latter description is equivalent to a stochastic differential equation where the effect of the radiation reaction appears in the form of the deterministic term corresponding to the quantum-corrected LL friction force, and by a diffusion term accounting for the stochastic nature of photon emission. By studying the evolution of the energy moments of the electron distribution function with the three models, we are able to show that all three descriptions provide similar predictions on the temporal evolution of the average energy of an electron population in various physical situations of interest, even for large values of the quantum parameter χ . The FP and full linear Boltzmann descriptions also allow us to correctly describe the evolution of the energy variance (second-order moment) of the distribution function, while higher-order moments are in general correctly captured with the full linear Boltzmann description only. A general criterion for the limit of validity of each description is proposed, as well as a numerical scheme for the inclusion of the FP description in particle-in-cell codes. 
This work, not limited to the configuration of a monoenergetic electron beam colliding with a laser pulse, allows further insight into the relative importance of various effects of radiation reaction and in particular of the discrete and stochastic nature of high-energy photon emission and its back-reaction in the deformation of the particle distribution function.
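The stochastic-differential-equation form of the Fokker-Planck description, a deterministic friction drift plus a diffusion term accounting for the stochasticity of photon emission, can be integrated with a standard Euler-Maruyama step. The drift and diffusion functions below are illustrative placeholders, not the paper's quantum-corrected expressions:

```python
import math
import random

def euler_maruyama(gamma0, drift, diffusion, dt, steps, rng):
    """Integrate d(gamma) = drift(gamma) dt + sqrt(2 diffusion(gamma)) dW
    with the Euler-Maruyama scheme. Schematic stand-in for the
    Fokker-Planck description discussed above."""
    g = gamma0
    for _ in range(steps):
        dW = rng.gauss(0.0, 1.0) * math.sqrt(dt)
        g += drift(g) * dt + math.sqrt(max(2.0 * diffusion(g), 0.0)) * dW
    return g

rng = random.Random(0)
# Toy friction (steady energy loss) plus weak diffusion:
final = euler_maruyama(1000.0, lambda g: -0.01 * g,
                       lambda g: 0.5 * g, 1e-2, 100, rng)
```

With the diffusion term set to zero the scheme reduces to deterministic exponential-like decay, mirroring how the quantum-corrected friction force alone governs the mean energy evolution.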

  1. Classical, Generalizability, and Multifaceted Rasch Detection of Interrater Variability in Large, Sparse Data Sets.

    ERIC Educational Resources Information Center

    MacMillan, Peter D.

    2000-01-01

    Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…

  2. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  3. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits.

    PubMed

    Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M

    2015-04-29

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
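The classical bit-flip detection mentioned above reduces to a parity check: a single check over a bit string flags any odd number of flips, though with one check it cannot locate the flipped bit. A minimal classical sketch, not the superconducting protocol itself:

```python
# Classical analogue of syndrome extraction: XOR all bits to get the
# parity. Codewords have even parity; parity 1 signals an error.
def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

codeword = [0, 0, 0, 0]
corrupted = [0, 1, 0, 0]
clean_syndrome = parity(codeword)    # 0: no error detected
error_syndrome = parity(corrupted)   # 1: single flip detected
```

An even number of flips escapes a single parity check, which is one reason fault-tolerant schemes use multiple overlapping checks on a lattice.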

  4. Homogeneous nucleation and droplet growth in nitrogen. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dotson, E. H.

    1983-01-01

    A one-dimensional computer model of the homogeneous nucleation process and the growth of condensate for nitrogen flows over airfoils is developed to predict the onset of condensation, and thus to allow as much of the Reynolds number capability of cryogenic tunnels as possible to be exploited. Homogeneous nucleation data were taken using a DFVLR CAST-10 airfoil in the 0.3-Meter Transonic Cryogenic Tunnel and are used to evaluate the classical liquid droplet theory and several proposed corrections to it. For predicting liquid nitrogen condensation effects, use of the arbitrary Tolman constant of 0.25 x 250 billionth m or the Reiss or Kikuchi correction agrees with the CAST-10 data. Because no solid nitrogen condensation was found experimentally during the CAST-10 experiments, earlier nozzle data are used to evaluate corrections to the classical liquid droplet theory in the lower-temperature regime. A theoretical expression for the surface tension of solid nitrogen is developed.

  5. Technically natural vacuum energy at the tip of a supersymmetric teardrop

    NASA Astrophysics Data System (ADS)

    Williams, Matthew

    2014-04-01

    A minimal supersymmetric braneworld model is presented which has (i) zero classical four-dimensional vacuum curvature, despite the large naive vacuum energy due to contributions from Standard Model particles and (ii) one-(bulk)-loop quantum corrections to the vacuum energy with a size set by the radius of the extra-dimensional spheroid. These corrections are technically natural because a Bogomol'nyi-Prasad-Sommerfield-like relation between the brane tension and R charge—which would have preserved (half of) the bulk supersymmetry—is violated by the requirement that the stabilizing R-symmetry gauge flux be quantized. The extra-dimensional geometry is similar to previous rugby-ball geometries, but is simpler in that there is only one brane and so fewer free parameters. Although the sign of the renormalized vacuum energy ends up being the unphysical one for this model (in the limit considered here, where the massive bulk loop is the leading contribution), it serves as an illustrative example of the relevant physics.

  6. Keldysh approach for nonequilibrium phase transitions in quantum optics: Beyond the Dicke model in optical cavities

    NASA Astrophysics Data System (ADS)

    Torre, Emanuele G. Dalla; Diehl, Sebastian; Lukin, Mikhail D.; Sachdev, Subir; Strack, Philipp

    2013-02-01

    We investigate nonequilibrium phase transitions for driven atomic ensembles interacting with a cavity mode and coupled to a Markovian dissipative bath. In the thermodynamic limit and at low frequencies, we show that the distribution function of the photonic mode is thermal, with an effective temperature set by the atom-photon interaction strength. This behavior characterizes the static and dynamic critical exponents of the associated superradiance transition. Motivated by these considerations, we develop a general Keldysh path-integral approach that allows us to study physically relevant nonlinearities beyond the idealized Dicke model. Using standard diagrammatic techniques, we take into account the leading-order corrections due to the finite number N of atoms. For finite N, the photon mode behaves as a damped classical nonlinear oscillator at finite temperature. For the atoms, we propose a Dicke action that can be solved for any N and correctly captures the atoms’ depolarization due to dissipative dephasing.

  7. Theoretical prediction of a rotating magnon wave packet in ferromagnets.

    PubMed

    Matsumoto, Ryo; Murakami, Shuichi

    2011-05-13

    We theoretically show that the magnon wave packet has a rotational motion in two ways: a self-rotation and a motion along the boundary of the sample (edge current). They are similar to the cyclotron motion of electrons, but unlike electrons the magnons have no charge and the rotation is not due to the Lorentz force. These rotational motions are caused by the Berry phase in momentum space from the magnon band structure. Furthermore, the rotational motion of the magnon gives an additional correction term to the magnon Hall effect. We also discuss the Berry curvature effect in the classical limit of long-wavelength magnetostatic spin waves having macroscopic coherence length.

  8. Gene Therapy for Hemophilia and Duchenne Muscular Dystrophy in China.

    PubMed

    Liu, Xionghao; Liu, Mujun; Wu, Lingqian; Liang, Desheng

    2018-02-01

    Gene therapy is a new technology that offers the potential to cure monogenic diseases, which are caused by mutations in a single gene. Hemophilia and Duchenne muscular dystrophy (DMD) are ideal target diseases for gene therapy. Important advances have been made in clinical trials, including studies of adeno-associated virus vectors in hemophilia and antisense oligonucleotides in DMD. However, issues regarding the high doses of viral vectors required and the limited delivery efficiency of antisense oligonucleotides have not yet been fully addressed. As an alternative strategy to classic gene addition, genome editing based on programmable nucleases has also shown promise for correcting mutations in situ. This review describes the recent progress made by Chinese researchers in gene therapy for hemophilia and DMD.

  9. Uncertainty evaluation of mass values determined by electronic balances in analytical chemistry: a new method to correct for air buoyancy.

    PubMed

    Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph

    2003-06-01

    A new method to correct for the largest systematic influence in mass determination, air buoyancy, is outlined. A full description of the most relevant influence parameters is given, and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is needed to correct for air buoyancy. The classical approach demands the determination of the air density, and therefore suitable equipment to measure at least the air temperature, the air pressure and the relative air humidity within the demanded uncertainties (i.e. three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method; however, a field laboratory may not always possess fully traceable measurement systems for these room climate parameters. A comparison of three approaches to the calculation of the combined uncertainty of mass values is presented: the classical determination of air buoyancy, the artefact method, and neglect of this systematic effect as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry, and especially for the production of certified reference materials, reference values and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
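
    The classical route that the artefact method replaces can be sketched numerically. Below is a minimal Python illustration of the conventional buoyancy correction, using the simplified CIPM air-density approximation; the sample values and the conventional reference density of 8000 kg/m^3 for steel calibration weights are standard textbook assumptions, not figures from the paper.

```python
import math

def air_density(p_hpa, t_c, rh_pct):
    """Air density in kg/m^3 from pressure (hPa), temperature (degC) and
    relative humidity (%), via the simplified CIPM approximation formula."""
    return (0.34848 * p_hpa - 0.009 * rh_pct * math.exp(0.061 * t_c)) / (273.15 + t_c)

def buoyancy_corrected_mass(reading_g, rho_sample, p_hpa=1013.25, t_c=20.0,
                            rh_pct=50.0, rho_ref=8000.0):
    """Classical buoyancy correction: balances are adjusted with steel weights
    of conventional density 8000 kg/m^3, so a sample of different density
    displaces a different volume of air than the calibration weights did."""
    rho_a = air_density(p_hpa, t_c, rh_pct)
    return reading_g * (1 - rho_a / rho_ref) / (1 - rho_a / rho_sample)

# A 100 g balance reading of water (rho ~ 998 kg/m^3) under standard room
# conditions: the buoyancy correction amounts to roughly +0.1 %.
print(buoyancy_corrected_mass(100.0, 998.0))
```

    Note how the classical route needs three climate measurements (pressure, temperature, humidity) just to evaluate `air_density`, which is exactly the burden the artefact method avoids.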

  10. Fate of classical solitons in one-dimensional quantum systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pustilnik, M.; Matveev, K. A.

    We study one-dimensional quantum systems near the classical limit described by the Korteweg-de Vries (KdV) equation. The excitations near this limit are the well-known solitons and phonons. The classical description breaks down at long wavelengths, where quantum effects become dominant. Focusing on the spectra of the elementary excitations, we describe analytically the entire classical-to-quantum crossover. We show that the ultimate quantum fate of the classical KdV excitations is to become fermionic quasiparticles and quasiholes. We discuss in detail two exactly solvable models exhibiting such crossover, the Lieb-Liniger model of bosons with weak contact repulsion and the quantum Toda model, and argue that the results obtained for these models are universally applicable to all quantum one-dimensional systems with a well-defined classical limit described by the KdV equation.

  11. Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material

    NASA Astrophysics Data System (ADS)

    Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka; Soljačić, Marin; Mortensen, N. Asger

    2017-04-01

    The classical treatment of plasmonics is insufficient at the nanometer scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions, the Feibelman d parameters, in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; in particular, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated by the derivation of a modified sum rule for complementary structures, a rigorous reformulation of Kreibig's phenomenological damping prescription, and an account of the small-scale resonance shifting of simple and noble metal nanostructures.

  12. Quantization of the Szekeres system

    NASA Astrophysics Data System (ADS)

    Paliathanasis, A.; Zampeli, Adamantia; Christodoulakis, T.; Mustafa, M. T.

    2018-06-01

    We study the quantum corrections to the Szekeres system in the context of canonical quantization in the presence of symmetries. We start from an effective point-like Lagrangian with two integrals of motion, one corresponding to the Hamiltonian and the other to a second-rank Killing tensor. Imposing their quantum versions on the wave function results in a solution which is then interpreted in the context of Bohmian mechanics. In this semiclassical approach, it is shown that there are no quantum corrections; thus the classical trajectories of the Szekeres system are not affected at this level. Finally, we define a probability function which shows that a stationary surface of the probability corresponds to an exact classical solution.

  13. Correction of differential renal function for asymmetric renal area ratio in unilateral hydronephrosis.

    PubMed

    Aktaş, Gul Ege; Sarıkaya, Ali

    2015-11-01

    Children with unilateral hydronephrosis are followed up with anteroposterior pelvic diameter (APD), hydronephrosis grade, mercaptoacetyltriglycine (MAG-3) drainage pattern and differential renal function (DRF). An indeterminate drainage pattern with preserved DRF in higher grades of hydronephrosis can, in some situations, complicate the decision-making process. Because of an asymmetric renal area ratio, falsely negative DRF estimations can result in missed optimal surgery times. This study was designed to assess whether correcting the DRF estimation according to kidney area could reflect the clinical situation of a hydronephrotic kidney better than the classical DRF calculation, considered together with the hydronephrosis grade, APD and MAG-3 drainage pattern. We reviewed the MAG-3 scans, dimercaptosuccinic acid (DMSA) scans and ultrasonography (US) of 23 children (6 girls, 17 boys, mean age: 29 ± 50 months) with unilateral hydronephrosis. MAG-3 and DMSA scans were performed within 3 months of each other (mean 25.4 ± 30.7 days). The closest US findings (mean 41.5 ± 28.2 days) were used. DMSA DRF estimations were obtained using the geometric mean method. Secondary calculations were performed to correct the counts (the total counts divided by the number of pixels in the ROI) according to kidney area. The renogram patterns of the patients were evaluated and separated into subgroups. The visual assessment of the DMSA scans was noted and each hydronephrotic kidney was classified in comparison with the normal contralateral kidney's uptake. The correlations of the DRF values of the classical and area-corrected methods with the MAG-3 renogram patterns, the visual classification of the DMSA scan, the hydronephrosis grade and the APD were assessed. The DRF estimations of the two methods were statistically different (p = 0.001), and the categories of 12 hydronephrotic kidneys changed. There were no correlations between classical DRF estimations and the hydronephrosis grade, APD, visual classification of the DMSA scan or uptake evaluation. The DRF distributions according to MAG-3 drainage patterns did not differ. Area-corrected DRF estimations correlated with all of these parameters: with increasing hydronephrosis grade and APD, DRF estimations decreased and MAG-3 drainage patterns worsened. A decrease in DRF (< 45 %) was seen when APD was ≥ 10 mm; when APD was ≥ 26 mm, DRF fell below 40 %. Our results suggest that correcting the DRF estimation for an asymmetric renal area ratio in unilateral hydronephrosis can be more robust than the classical method, especially for higher grades of hydronephrotic kidneys under equivocal circumstances.
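
    The two calculations compared in the study can be sketched as follows; the counts and ROI sizes below are hypothetical, for illustration only, not the study's data.

```python
import math

def geometric_mean_drf(ant_l, post_l, ant_r, post_r):
    """Classical DRF: geometric mean of anterior/posterior counts per kidney,
    expressed as the left kidney's share of total uptake."""
    gm_l = math.sqrt(ant_l * post_l)
    gm_r = math.sqrt(ant_r * post_r)
    return gm_l / (gm_l + gm_r)

def area_corrected_drf(ant_l, post_l, px_l, ant_r, post_r, px_r):
    """Area-corrected DRF: divide each kidney's counts by its ROI pixel number
    before taking the split, so a dilated kidney is not credited for counts
    merely because it covers a larger area."""
    gm_l = math.sqrt(ant_l * post_l) / px_l
    gm_r = math.sqrt(ant_r * post_r) / px_r
    return gm_l / (gm_l + gm_r)

# Hypothetical counts: a hydronephrotic left kidney with twice the ROI area
# of the contralateral kidney. The area correction lowers its estimated DRF.
print(geometric_mean_drf(9000, 8500, 10000, 9500))
print(area_corrected_drf(9000, 8500, 1600, 10000, 9500, 800))
```

    With these illustrative numbers, the classical split looks near-normal while the area-corrected split falls well below 45 %, which is the kind of category change the study reports for 12 kidneys.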

  14. The Core: Teaching Your Child the Foundations of Classical Education

    ERIC Educational Resources Information Center

    Bortins, Leigh A.

    2010-01-01

    In the past, correct spelling, the multiplication tables, the names of the state capitals and the American presidents were basics that all children were taught in school. Today, many children graduate without this essential knowledge. Most curricula today follow a haphazard sampling of topics with a focus on political correctness instead of…

  15. Semi-classical analysis and pseudo-spectra

    NASA Astrophysics Data System (ADS)

    Davies, E. B.

    We prove an approximate spectral theorem for non-self-adjoint operators and investigate its applications to second-order differential operators in the semi-classical limit. This leads to the construction of a twisted FBI transform. We also investigate the connections between pseudo-spectra and boundary conditions in the semi-classical limit.

  16. Classical and sequential limit analysis revisited

    NASA Astrophysics Data System (ADS)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity (in the absence of hardening and within a linearized geometrical framework), sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  17. Cosine problem in EPRL/FK spinfoam model

    NASA Astrophysics Data System (ADS)

    Vojinović, Marko

    2014-01-01

    We calculate the classical limit effective action of the EPRL/FK spinfoam model of quantum gravity coupled to matter fields. By employing the standard QFT background field method adapted to the spinfoam setting, we find that the model has many different classical effective actions. Most notably, these include the ordinary Einstein-Hilbert action coupled to matter, but also an action which describes antigravity. All those multiple classical limits appear as a consequence of the fact that the EPRL/FK vertex amplitude has cosine-like large spin asymptotics. We discuss some possible ways to eliminate the unwanted classical limits.

  18. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffers from a limitation of the corrected Field of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is, however, a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: a simple integrator, an Optimized Modal Gain Integrator, and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can lead to the filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems, such as model errors, aliasing effect reduction, and the experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up, are then discussed.
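
    As a toy illustration of the Kalman filtering approach (not the paper's MCAO formulation), the sketch below tracks a single turbulent phase mode modelled as an AR(1) process from noisy wavefront-sensor readings; all parameter values are hypothetical.

```python
import random

# Hypothetical scalar model: a single turbulent mode evolving as an AR(1)
# process with coefficient a and process noise variance q, measured by a
# wavefront sensor with noise variance r.
a, q, r = 0.99, 0.01, 0.04

def kalman_step(x_est, p_est, y):
    """One predict/update cycle: propagate the turbulence model, then blend
    in the noisy measurement y with the optimal Kalman gain."""
    x_pred = a * x_est            # predict with the AR(1) model
    p_pred = a * a * p_est + q    # predicted error covariance
    k = p_pred / (p_pred + r)     # Kalman gain
    x_new = x_pred + k * (y - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

random.seed(0)
x_true, x_est, p_est = 1.0, 0.0, 1.0
for _ in range(200):
    x_true = a * x_true + random.gauss(0.0, q ** 0.5)
    y = x_true + random.gauss(0.0, r ** 0.5)
    x_est, p_est = kalman_step(x_est, p_est, y)

# The filter's steady-state error covariance ends up well below the raw
# measurement noise variance: the model-based prediction pays off.
print(p_est < r)
```

    The transfer-function view studied in the paper corresponds to analysing this same predict/update recursion in the frequency domain, where it can be compared with an integrator controller.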

  19. Beating the classical limits of information transmission using a quantum decoder

    NASA Astrophysics Data System (ADS)

    Chapman, Robert J.; Karim, Akib; Huang, Zixin; Flammia, Steven T.; Tomamichel, Marco; Peruzzo, Alberto

    2018-01-01

    Encoding schemes and error-correcting codes are widely used in information technology to improve the reliability of data transmission over real-world communication channels. Quantum information protocols can further enhance the performance in data transmission by encoding a message in quantum states; however, most proposals to date have focused on the regime of a large number of uses of the noisy channel, which is unfeasible with current quantum technology. We experimentally demonstrate quantum enhanced communication over an amplitude damping noisy channel with only two uses of the channel per bit and a single entangling gate at the decoder. By simulating the channel using a photonic interferometric setup, we experimentally increase the reliability of transmitting a data bit by more than 20% over a certain damping range, compared with classically sending the message twice. We show how our methodology can be extended to larger systems by simulating the transmission of a single bit with up to eight uses of the channel and a two-bit message with three uses of the channel, predicting a quantum enhancement in all cases.
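
    The classical baseline of simply repeating a message can be enumerated directly. The sketch below computes the majority-vote error rate of an n-use repetition code over a binary symmetric channel; this is a generic illustration of the repetition baseline, not the amplitude damping channel used in the experiment.

```python
from itertools import product

def majority_error_rate(p, n):
    """Probability that an n-use repetition code, decoded by majority vote,
    delivers the wrong bit over a binary symmetric channel with flip rate p.
    Brute-force enumeration over all 2^n error patterns."""
    err = 0.0
    for flips in product([0, 1], repeat=n):
        prob = 1.0
        for f in flips:
            prob *= p if f else (1 - p)
        if sum(flips) * 2 > n:  # a majority of uses flipped -> wrong decision
            err += prob
    return err

p = 0.1
print(majority_error_rate(p, 1))  # single use: just p
print(majority_error_rate(p, 3))  # three uses: 3*p^2*(1-p) + p^3
```

    Any quantum scheme claiming an enhancement must beat this kind of classical repetition bound at the same number of channel uses, which is the comparison the experiment makes with two uses per bit.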

  20. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. 
    Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
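
    A toy model illustrates why noise bias matters for code design (this is an illustrative calculation, not one from the thesis): a length-n phase-flip repetition code suppresses the dominant dephasing errors by majority voting, but leaves the rare bit-flip errors uncorrected, so its benefit saturates once bit flips dominate and an asymmetric code becomes worthwhile.

```python
from math import comb

def phase_flip_code_failure(pz, px, n):
    """Toy failure rate of a length-n phase-flip repetition code under biased,
    independent noise: majority voting fails when more than half the qubits
    suffer a Z (dephasing) error, while any single X (bit-flip) error goes
    uncorrected. Illustrative model only."""
    z_fail = sum(comb(n, k) * pz**k * (1 - pz)**(n - k)
                 for k in range(n // 2 + 1, n + 1))
    x_fail = 1 - (1 - px)**n
    return z_fail + x_fail

# Strongly biased noise: dephasing 100x more likely than bit flips.
pz, px = 1e-2, 1e-4
for n in (1, 3, 5):
    print(n, phase_flip_code_failure(pz, px, n))
```

    Going from n = 1 to n = 3 cuts the failure rate by over an order of magnitude, but further repetition gains little because the unprotected X-error contribution grows with n; tailoring the code asymmetry to the bias, as in Chapter 3, addresses exactly this.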

  1. Bull's-Eye and Nontarget Skin Lesions of Lyme Disease: An Internet Survey of Identification of Erythema Migrans

    PubMed Central

    Aucott, John N.; Crowder, Lauren A.; Yedlin, Victoria; Kortte, Kathleen B.

    2012-01-01

    Introduction. Lyme disease is an emerging worldwide infectious disease with major foci of endemicity in North America and regions of temperate Eurasia. The erythema migrans rash associated with early infection is found in approximately 80% of patients and can have a range of appearances including the classic target bull's-eye lesion and nontarget appearing lesions. Methods. A survey was designed to assess the ability of the general public to distinguish various appearances of erythema migrans from non-Lyme rashes. Participants were solicited from individuals who visited an educational website about Lyme disease. Results. Of 3,104 people who accessed a rash identification survey, 72.7% of participants correctly identified the classic target erythema migrans commonly associated with Lyme disease. A mean of 20.5% of participants was able to correctly identify the four nonclassic erythema migrans. 24.2% of participants incorrectly identified a tick bite reaction in the skin as erythema migrans. Conclusions. Participants were most familiar with the classic target erythema migrans of Lyme disease but were unlikely to correctly identify the nonclassic erythema migrans. These results identify an opportunity for educational intervention to improve early recognition of Lyme disease and to increase the patient's appropriate use of medical services for early Lyme disease diagnosis. PMID:23133445

  2. Simple improvements to classical bubble nucleation models.

    PubMed

    Tanaka, Kyoko K; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2015-08-01

    We revisit classical nucleation theory (CNT) for the homogeneous bubble nucleation rate and improve the classical formula using a correct prefactor in the nucleation rate. Most previous theoretical studies have used the constant prefactor determined by bubble growth due to evaporation from the bubble surface. However, the growth of bubbles is also regulated by thermal conduction, viscosity, and the inertia of liquid motion. These effects can decrease the prefactor significantly, especially when the liquid pressure is much smaller than the equilibrium one. The deviation in the nucleation rate between the improved formula and the CNT can be as large as several orders of magnitude. Our improved, accurate prefactor and recent advances in molecular dynamics simulations and laboratory experiments for argon bubble nucleation enable us to precisely constrain the free energy barrier for bubble nucleation. Assuming the correction to the CNT free energy is of the functional form suggested by Tolman, the precise evaluations of the free energy barriers suggest the Tolman length is ≃0.3σ independently of the temperature for argon bubble nucleation, where σ is the unit length of the Lennard-Jones potential. With this Tolman correction and our prefactor, one obtains accurate bubble nucleation rate predictions in the parameter range probed by current experiments and molecular dynamics simulations.
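
    The quantities involved can be sketched as follows. The reduced-unit values below are illustrative placeholders, with only the Tolman length of about 0.3 (in units of the Lennard-Jones length) taken from the abstract; the paper's actual fitted parameters are not reproduced here.

```python
import math

def cnt_barrier(sigma, dp):
    """Classical nucleation theory barrier for a critical bubble:
    dG* = 16*pi*sigma^3 / (3*dp^2), where sigma is the planar surface tension
    and dp the pressure difference between bubble interior and liquid."""
    return 16 * math.pi * sigma**3 / (3 * dp**2)

def tolman_corrected_barrier(sigma, dp, delta):
    """First-order Tolman correction: the surface tension of a small bubble
    of critical radius r* = 2*sigma/dp is reduced as
    sigma(r*) ~ sigma * (1 - 2*delta/r*)."""
    r_star = 2 * sigma / dp
    sigma_eff = sigma * (1 - 2 * delta / r_star)
    return 16 * math.pi * sigma_eff**3 / (3 * dp**2)

# Illustrative reduced (Lennard-Jones) units; delta ~ 0.3 as in the abstract.
sigma, dp, delta = 1.0, 0.25, 0.3
print(cnt_barrier(sigma, dp))
print(tolman_corrected_barrier(sigma, dp, delta))
```

    Because the nucleation rate depends exponentially on the barrier, even this modest reduction of the barrier translates into orders of magnitude in the predicted rate, which is why the Tolman correction can be constrained so tightly by rate measurements.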

  3. Quantum mean-field approximation for lattice quantum models: Truncating quantum correlations and retaining classical ones

    NASA Astrophysics Data System (ADS)

    Malpetti, Daniele; Roscilde, Tommaso

    2017-02-01

    The mean-field approximation is at the heart of our understanding of complex systems, despite its fundamental limitation of completely neglecting correlations between the elementary constituents. In a recent work [Phys. Rev. Lett. 117, 130401 (2016), 10.1103/PhysRevLett.117.130401], we have shown that in quantum many-body systems at finite temperature, two-point correlations can be formally separated into a thermal part and a quantum part, and that quantum correlations are generically found to decay exponentially at finite temperature, with a characteristic, temperature-dependent quantum coherence length. The existence of these two different forms of correlation in quantum many-body systems suggests the possibility of formulating an approximation which affects quantum correlations only, without preventing the correct description of classical fluctuations at all length scales. Focusing on lattice boson and quantum Ising models, we make use of the path-integral formulation of quantum statistical mechanics to introduce such an approximation, which we dub the quantum mean-field (QMF) approach, and which can be readily generalized to a cluster form (cluster QMF or cQMF). The cQMF approximation reduces to cluster mean-field theory at T = 0, while at any finite temperature it produces a family of systematically improved, semi-classical approximations to the quantum statistical mechanics of the lattice theory at hand. Contrary to standard MF approximations, the correct nature of thermal critical phenomena is captured by any cluster size. In the two exemplary cases of the two-dimensional quantum Ising model and of two-dimensional quantum rotors, we study systematically the convergence of the cQMF approximation towards the exact result, and show that the convergence is typically linear or sublinear in the boundary-to-bulk ratio of the clusters as T → 0, while it becomes faster than linear as T grows. 
These results pave the way towards the development of semiclassical numerical approaches based on an approximate, yet systematically improved account of quantum correlations.

  4. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.

  5. Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?

    NASA Astrophysics Data System (ADS)

    Penney, Mark D.; Enshan Koh, Dax; Spekkens, Robert W.

    2017-07-01

    It is straightforward to compute the transition amplitudes of a quantum circuit using the sum-over-paths methodology when the gates in the circuit are balanced, where a balanced gate is one for which all non-zero transition amplitudes are of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits.

  6. Influences on and Limitations of Classical Test Theory Reliability Estimates.

    ERIC Educational Resources Information Center

    Arnold, Margery E.

    It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…

  7. Naked in the Old and the New World: Differences and Analogies in Descriptions of European and American herbae nudae in the Sixteenth Century.

    PubMed

    Čermáková, Lucie; Černá, Jana

    2018-03-01

    The sixteenth century can be understood as a period of renewed interest in nature and of the development of natural history as a discipline. The spread of the printing press was connected to the preparation of new editions of Classical texts and to the correcting and commenting of these texts. This forced scholars to confront the texts with living nature and to subject nature to more careful investigation. The discovery of America uncovered new horizons and brought new natural products that were exotic and unknown to the Classical tradition. The aim of this study is to compare the strategies and categories used in describing the plants of the Old and the New World. Attention is paid to the first reactions to the new flora, to the methods of naming and describing plants, and to the ways of gaining knowledge about plants from local sources or by one's own observation. The confrontation with novelty put naturalists in the Old World and in the New World in a similar situation: it revealed the limits of traditional knowledge based on Classical authorities. A closer investigation, however, brings to light not only sometimes unexpected similarities, but also differences due to the radical otherness of American plants.

  8. Inelastic neutron scattering, Raman, vibrational analysis with anharmonic corrections, and scaled quantum mechanical force field for polycrystalline L-alanine

    NASA Astrophysics Data System (ADS)

    Williams, Robert W.; Schlücker, Sebastian; Hudson, Bruce S.

    2008-01-01

    A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.

  9. CLASSICAL AREAS OF PHENOMENOLOGY: Correcting dynamic residual aberrations of conformal optical systems using AO technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin

    2009-07-01

    This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology to such a system. An image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in computer simulation. The SPGD algorithm is run at seven zoom positions to calculate the optimized surface shape of the deformable mirror. After comparison of the performance of the corrected system with the baseline system, AO technology is shown to be an effective way of correcting the dynamic residual aberrations in conformal optical design.

  10. A toolkit for measurement error correction, with a focus on nutritional epidemiology

    PubMed Central

    Keogh, Ruth H; White, Ian R

    2014-01-01

Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
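A minimal simulation of the regression-calibration idea for classical error with two repeats, with made-up parameter values (illustrative only, not the toolkit's code): the naive slope from the averaged repeats is attenuated by the factor λ = var(X)/var(W̄), which can itself be estimated from the within-pair differences.

```python
import random, statistics

random.seed(2014)
n, beta_true = 20000, 2.0

# True exposure X, outcome Y, and two error-prone repeats (classical error).
X = [random.gauss(0, 1) for _ in range(n)]
Y = [beta_true * x + random.gauss(0, 1) for x in X]
W1 = [x + random.gauss(0, 1) for x in X]
W2 = [x + random.gauss(0, 1) for x in X]
Wbar = [(a + b) / 2 for a, b in zip(W1, W2)]

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# The naive slope of Y on Wbar is attenuated by lambda = var(X)/var(Wbar);
# the error variance var(U) is estimated as var(W1 - W2)/2.
beta_naive = cov(Wbar, Y) / cov(Wbar, Wbar)
var_u = statistics.variance([a - b for a, b in zip(W1, W2)]) / 2
lam = (cov(Wbar, Wbar) - var_u / 2) / cov(Wbar, Wbar)
beta_rc = beta_naive / lam
print(round(beta_naive, 2), round(beta_rc, 2))
```

With unit exposure and error variances, λ = 1/1.5 ≈ 0.67, so the naive slope sits near 1.33 while the calibrated slope recovers the true value of 2.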

  11. Quantum phase uncertainties in the classical limit

    NASA Technical Reports Server (NTRS)

    Franson, James D.

    1994-01-01

    Several sources of phase noise, including spontaneous emission noise and the loss of coherence due to which-path information, are examined in the classical limit of high field intensities. Although the origin of these effects may appear to be quantum-mechanical in nature, it is found that classical analogies for these effects exist in the form of chaos.

  12. Kinetics of the chiral phase transition in a linear σ model

    NASA Astrophysics Data System (ADS)

    Wesp, Christian; van Hees, Hendrik; Meistrenko, Alex; Greiner, Carsten

    2018-02-01

We study the dynamics of the chiral phase transition in a linear quark-meson σ model using a novel approach based on semiclassical wave-particle duality. The quarks are treated as test particles in a Monte Carlo simulation of elastic collisions, and their coupling to the σ meson, which is treated as a classical field, is described via a kinetic approach motivated by wave-particle duality. The exchange of energy and momentum between particles and fields is described in terms of appropriate Gaussian wave packets. It has been demonstrated that energy-momentum conservation and the principle of detailed balance are fulfilled, and that the dynamics leads to the correct equilibrium limit. First schematic studies of the dynamics of matter produced in heavy-ion collisions are presented.

  13. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.

  14. Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams

    NASA Technical Reports Server (NTRS)

    Steely, Sidney L.

    1993-01-01

The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes for Hermite-Gaussian laser beams; Sturm's theorem provides a direct proof.
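The correspondence-principle statement is easy to check numerically for the one-dimensional oscillator: integrate |ψₙ|² between the classical turning points ±√(2n+1) (dimensionless units) and watch the enclosed fraction approach unity. A small self-contained sketch (not the paper's Mathematica code):

```python
import math

def psi_sq(n, x):
    """|psi_n(x)|^2 for the dimensionless harmonic oscillator, via a
    numerically stable recurrence for the normalized Hermite functions."""
    phi_prev = 0.0
    phi = math.pi ** -0.25 * math.exp(-0.5 * x * x)  # phi_0(x)
    for k in range(n):
        phi, phi_prev = (x * math.sqrt(2.0 / (k + 1)) * phi
                         - math.sqrt(k / (k + 1)) * phi_prev), phi
    return phi * phi

def prob_inside_classical(n, steps=4000):
    """Fraction of |psi_n|^2 between the turning points +/-sqrt(2n+1),
    by composite Simpson integration (steps must be even)."""
    a = math.sqrt(2.0 * n + 1.0)
    h = 2.0 * a / steps
    s = psi_sq(n, -a) + psi_sq(n, a)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * psi_sq(n, -a + i * h)
    return s * h / 3.0

for n in (0, 5, 20):
    print(n, round(prob_inside_classical(n), 4))
```

For n = 0 the enclosed fraction is erf(1) ≈ 0.843, and it rises monotonically toward 1 for higher modes, which is the Correspondence Principle trend the record describes.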

  15. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to first achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.

  16. Semiclassical propagator of the Wigner function.

    PubMed

    Dittrich, Thomas; Viviescas, Carlos; Sandoval, Luis

    2006-02-24

    Propagation of the Wigner function is studied on two levels of semiclassical propagation: one based on the Van Vleck propagator, the other on phase-space path integration. Leading quantum corrections to the classical Liouville propagator take the form of a time-dependent quantum spot. Its oscillatory structure depends on whether the underlying classical flow is elliptic or hyperbolic. It can be interpreted as the result of interference of a pair of classical trajectories, indicating how quantum coherences are to be propagated semiclassically in phase space. The phase-space path-integral approach allows for a finer resolution of the quantum spot in terms of Airy functions.

  17. Classical Limit and Quantum Logic

    NASA Astrophysics Data System (ADS)

    Losada, Marcelo; Fortin, Sebastian; Holik, Federico

    2018-02-01

    The analysis of the classical limit of quantum mechanics usually focuses on the state of the system. The general idea is to explain the disappearance of the interference terms of quantum states appealing to the decoherence process induced by the environment. However, in these approaches it is not explained how the structure of quantum properties becomes classical. In this paper, we consider the classical limit from a different perspective. We consider the set of properties of a quantum system and we study the quantum-to-classical transition of its logical structure. The aim is to open the door to a new study based on dynamical logics, that is, logics that change over time. In particular, we appeal to the notion of hybrid logics to describe semiclassical systems. Moreover, we consider systems with many characteristic decoherence times, whose sublattices of properties become distributive at different times.

  18. Quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, G.; Leble, S.

    2014-03-01

The analytical form of quantum corrections to the quasi-periodic solution of the Sine-Gordon model and the periodic solution of the phi4 model is obtained through zeta-function regularisation, taking into account all remaining variables of a d-dimensional theory. The qualitative dependence of the quantum corrections on the parameters of the classical systems is also evaluated for a much broader class of potentials u(x) = b²f(bx) + C, with b and C arbitrary real constants.

  19. [Small infundibulectomy versus ventriculotomy in tetralogy of Fallot].

    PubMed

    Bojórquez-Ramos, Julio César

    2013-01-01

The surgical correction of tetralogy of Fallot (TOF) is standardized in the way the septal defect is closed, but differs in the way the right ventricular outflow tract (RVOT) is enlarged. The aim was to compare the early postoperative clinical course of RVOT obstruction enlargement by the classical ventriculotomy technique and by small infundibulectomy (SI). We analyzed the database of the pediatric heart surgery service from 2008 to 2011. Patients with non-complex TOF undergoing complete correction by classical ventriculotomy or SI were selected. ANOVA, χ² and Fisher statistical tests were applied. The data included 47 patients, 55 % (26) male, mean age 43 months (6-172); classical ventriculotomy was performed in 61.7 % (29). This group had higher peak lactate levels (9.07 versus 6.8 mmol/L), p = 0.049, and a greater bleeding index per kg in the first 12 hours (39.1 versus 20.3 mL/kg), p = 0.016. Death occurred in 9 cases (31.03 %) versus one (5.6 %) in the SI group, p = 0.037; complications exclusive to this group, such as acute renal failure, hemopneumothorax, pneumonia, permanent AV block and multiple organ failure, were observed. Morbidity and mortality were higher in the classical ventriculotomy group in comparison with SI, possibly associated with the greater bleeding volume.

  20. Classical and quantum simulations of warm dense carbon

    NASA Astrophysics Data System (ADS)

    Whitley, Heather; Sanchez, David; Hamel, Sebastien; Correa, Alfredo; Benedict, Lorin

    We have applied classical and DFT-based molecular dynamics (MD) simulations to study the equation of state of carbon in the warm dense matter regime (ρ = 3.7 g/cc, 0.86 eV

  1. Pauli structures arising from confined particles interacting via a statistical potential

    NASA Astrophysics Data System (ADS)

    Batle, Josep; Ciftja, Orion; Farouk, Ahmed; Alkhambashi, Majid; Abdalla, Soliman

    2017-09-01

There have been suggestions that the Pauli exclusion principle alone can lead a non-interacting (free) system of identical fermions to form crystalline structures dubbed Pauli crystals. Single-shot imaging experiments for the case of ultra-cold systems of free spin-polarized fermionic atoms in a two-dimensional harmonic trap appear to show geometric arrangements that cannot be characterized as Wigner crystals. This work explores this idea and considers a well-known approach that enables one to treat a quantum system of free fermions as a system of classical particles interacting with a statistical interaction potential. The model under consideration, though classical in nature, incorporates the quantum statistics by endowing the classical particles with an effective interaction potential. The reasonable expectation is that possible Pauli crystal features seen in experiments may manifest in this model, which captures the correct quantum statistics as a first-order correction. We use the Monte Carlo simulated annealing method to obtain the most stable configurations of finite two-dimensional systems of confined particles that interact with an appropriate statistical repulsion potential. We consider both an isotropic harmonic and a hard-wall confinement potential. Despite minor differences, the most stable configurations observed in our model correspond to the reported Pauli crystals in single-shot imaging experiments of free spin-polarized fermions in a harmonic trap. The crystalline configurations observed appear to be different from the expected classical Wigner crystal structures that would emerge had the confined classical particles interacted with a pair-wise Coulomb repulsion.
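The flavor of the procedure, simulated annealing of a few trapped particles under a repulsive statistical pair potential, can be sketched as follows; the potential form u(r) = −ln(1 − e^(−r²)), the harmonic trap, and all parameters are illustrative, not those of the paper:

```python
import math, random

def energy(pos):
    """Harmonic trap plus a short-range 'statistical' repulsion
    u(r) = -ln(1 - exp(-r^2)) (illustrative form, not the paper's)."""
    e = sum(x * x + y * y for x, y in pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            e -= math.log(1.0 - math.exp(-(dx * dx + dy * dy)) + 1e-12)
    return e

def anneal(n=6, steps=20000, t0=1.0, cooling=0.9997, seed=7):
    """Metropolis simulated annealing; particles start clustered and spread out."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-0.3, 0.3), rng.uniform(-0.3, 0.3)) for _ in range(n)]
    e, t = energy(pos), t0
    for _ in range(steps):
        i = rng.randrange(n)
        trial = list(pos)
        trial[i] = (pos[i][0] + rng.gauss(0, 0.1), pos[i][1] + rng.gauss(0, 0.1))
        de = energy(trial) - e
        if de < 0 or rng.random() < math.exp(-de / t):  # Metropolis acceptance
            pos, e = trial, e + de
        t *= cooling
    return pos, e

pos, e_final = anneal()
print(round(e_final, 3))
```

As the temperature is lowered, the accepted configurations settle into a low-energy arrangement balancing trap attraction against the statistical repulsion, the same mechanism by which the paper's annealing runs locate the stable geometric patterns.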

  2. On the weight of indels in genomic distances

    PubMed Central

    2011-01-01

Background Classical approaches to compute the genomic distance are usually limited to genomes with the same content, without duplicated markers. However, differences in gene content are frequently observed and can reflect important evolutionary aspects. A few polynomial-time algorithms that include genome rearrangements, insertions and deletions (or substitutions) have already been proposed. These methods often allow a block of contiguous markers to be inserted, deleted or substituted at once but result in distance functions that do not respect the triangular inequality and hence do not constitute metrics. Results In the present study we discuss the disruption of the triangular inequality in some of the available methods and give a framework to establish an efficient correction for two models recently proposed, one that includes insertions, deletions and double cut and join (DCJ) operations, and one that includes substitutions and DCJ operations. Conclusions We show that the proposed framework establishes the triangular inequality in both distances, by adding a surcharge on indel operations and on substitutions that depends only on the number of markers affected by these operations. This correction can be applied a posteriori, without interfering with the already available formulas to compute these distances. We claim that this correction leads to distances that are biologically more plausible. PMID:22151784

  3. Comparing laser-based open- and closed-path gas analyzers to measure methane fluxes using the eddy covariance method

    USGS Publications Warehouse

    Detto, Matteo; Verfaillie, Joseph; Anderson, Frank; Xu, Liukang; Baldocchi, Dennis

    2011-01-01

Closed- and open-path methane gas analyzers are used in eddy covariance systems to compare three potential methane-emitting ecosystems in the Sacramento-San Joaquin Delta (CA, USA): a rice field, a peatland pasture and a restored wetland. The study points out similarities and differences of the systems in field experiments and data processing. The closed-path system, despite a less intrusive placement with the sonic anemometer, required more care and power. In contrast, the open-path system appears more versatile for a remote and unattended experimental site. Overall, the two systems have comparable minimum detectable limits, but synchronization between wind speed and methane data, air density corrections and spectral losses have different impacts on the computed flux covariances. For the closed-path analyzer, air density effects are less important, but the synchronization and spectral losses may represent a problem when fluxes are small or when an undersized pump is used. For the open-path analyzer, air density corrections are greater, due to spectroscopy effects and the classic Webb–Pearman–Leuning correction. Comparison between the 30-min fluxes reveals good agreement in terms of magnitudes between open-path and closed-path flux systems. However, the scatter is large, as a consequence of the intensive data processing which both systems require.
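One processing step mentioned above, synchronizing the wind and gas time series before forming the flux covariance, amounts to finding the lag that maximizes their cross-covariance. A toy sketch with synthetic series and a hypothetical tubing delay:

```python
import random

random.seed(0)
n, true_lag = 3000, 12  # hypothetical closed-path tubing delay, in samples

# Synthetic turbulent vertical wind w and a gas signal c that tracks w
# after a transport delay (plus instrument noise).
w_full = [random.gauss(0, 1) for _ in range(n + true_lag)]
c = [0.8 * w_full[k] + random.gauss(0, 0.3) for k in range(n)]
w = w_full[true_lag:]  # now c[i + true_lag] tracks w[i]

def cross_cov(a, b, lag):
    """Covariance of a[i] with b[i + lag] over the overlapping range."""
    m = min(len(a), len(b) - lag)
    ma = sum(a[:m]) / m
    mb = sum(b[lag:lag + m]) / m
    return sum((a[i] - ma) * (b[i + lag] - mb) for i in range(m)) / m

# Pick the lag that maximizes the cross-covariance, then form the flux term.
best_lag = max(range(40), key=lambda L: cross_cov(w, c, L))
flux = cross_cov(w, c, best_lag)
print(best_lag, round(flux, 3))  # recovers the delay and covariance amplitude
```

A missed or drifting lag biases the covariance low, which is one reason the record notes that synchronization matters most when fluxes are small.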

  4. Time-dependent observables in heavy ion collisions. Part II. In search of pressure isotropization in the φ 4 theory

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.; Wu, Bin

    2018-03-01

    To understand the dynamics of thermalization in heavy ion collisions in the perturbative framework it is essential to first find corrections to the free-streaming classical gluon fields of the McLerran-Venugopalan model. The corrections that lead to deviations from free streaming (and that dominate at late proper time) would provide evidence for the onset of isotropization (and, possibly, thermalization) of the produced medium. To find such corrections we calculate the late-time two-point Green function and the energy-momentum tensor due to a single 2 → 2 scattering process involving two classical fields. To make the calculation tractable we employ the scalar φ 4 theory instead of QCD. We compare our exact diagrammatic results for these quantities to those in kinetic theory and find disagreement between the two. The disagreement is in the dependence on the proper time τ and, for the case of the two-point function, is also in the dependence on the space-time rapidity η: the exact diagrammatic calculation is, in fact, consistent with the free streaming scenario. Kinetic theory predicts a build-up of longitudinal pressure, which, however, is not observed in the exact calculation. We conclude that we find no evidence for the beginning of the transition from the free-streaming classical fields to the kinetic theory description of the produced matter after a single 2 → 2 rescattering.

  5. Self-force correction to geodetic spin precession in Kerr spacetime

    NASA Astrophysics Data System (ADS)

    Akcay, Sarp

    2017-08-01

    We present an expression for the gravitational self-force correction to the geodetic spin precession of a spinning compact object with small, but non-negligible mass in a bound, equatorial orbit around a Kerr black hole. We consider only conservative backreaction effects due to the mass of the compact object (m1), thus neglecting the effects of its spin s1 on its motion; i.e., we impose s1≪G m12/c and m1≪m2, where m2 is the mass parameter of the background Kerr spacetime. We encapsulate the correction to the spin precession in ψ , the ratio of the accumulated spin-precession angle to the total azimuthal angle over one radial orbit in the equatorial plane. Our formulation considers the gauge-invariant O (m1) part of the correction to ψ , denoted by Δ ψ , and is a generalization of the results of Akcay et al. [Classical Quantum Gravity 34, 084001 (2017), 10.1088/1361-6382/aa61d6] to Kerr spacetime. Additionally, we compute the zero-eccentricity limit of Δ ψ and show that this quantity differs from the circular orbit Δ ψcirc by a gauge-invariant quantity containing the gravitational self-force correction to general relativistic periapsis advance in Kerr spacetime. Our result for Δ ψ is expressed in a manner that readily accommodates numerical/analytical self-force computations, e.g., in the radiation gauge, and paves the way for the computation of a new eccentric-orbit Kerr gauge invariant beyond the generalized redshift.

  6. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.

  7. Classical evolution and quantum generation in generalized gravity theories including string corrections and tachyons: Unified analyses

    NASA Astrophysics Data System (ADS)

    Hwang, Jai-Chan; Noh, Hyerim

    2005-03-01

    We present cosmological perturbation theory based on generalized gravity theories including string theory correction terms and a tachyonic complication. The classical evolution as well as the quantum generation processes in these varieties of gravity theories are presented in unified forms. These apply both to the scalar- and tensor-type perturbations. Analyses are made based on the curvature variable in two different gauge conditions often used in the literature in Einstein’s gravity; these are the curvature variables in the comoving (or uniform-field) gauge and the zero-shear gauge. Applications to generalized slow-roll inflation and its consequent power spectra are derived in unified forms which include a wide range of inflationary scenarios based on Einstein’s gravity and others.

  8. Quantum Corrections to the 'Atomistic' MOSFET Simulations

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.

    2000-01-01

We have introduced quantum mechanical corrections into our 3D 'atomistic' MOSFET simulator in a simple and efficient manner using the density gradient formalism. In comparison with classical simulations, we have studied the effect of the quantum mechanical corrections on the simulation of random-dopant-induced threshold voltage fluctuations, the effect of single-charge trapping on interface states, and the effect of oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not significantly affect the amplitude of the random telegraph noise associated with single-carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.

  9. Can quantum transition state theory be defined as an exact t = 0+ limit?

    NASA Astrophysics Data System (ADS)

    Jang, Seogjoo; Voth, Gregory A.

    2016-02-01

    The definition of the classical transition state theory (TST) as a t → 0+ limit of the flux-side time correlation function relies on the assumption that simultaneous measurement of population and flux is a well defined physical process. However, the noncommutativity of the two measurements in quantum mechanics makes the extension of such a concept to the quantum regime impossible. For this reason, quantum TST (QTST) has been generally accepted as any kind of quantum rate theory reproducing the TST in the classical limit, and there has been a broad consensus that no unique QTST retaining all the properties of TST can be defined. Contrary to this widely held view, Hele and Althorpe (HA) [J. Chem. Phys. 138, 084108 (2013)] recently suggested that a true QTST can be defined as the exact t → 0+ limit of a certain kind of quantum flux-side time correlation function and that it is equivalent to the ring polymer molecular dynamics (RPMD) TST. This work seeks to question and clarify certain assumptions underlying these suggestions and their implications. First, the time correlation function used by HA as a starting expression is not related to the kinetic rate constant by virtue of linear response theory, which is the first important step in relating a t = 0+ limit to a physically measurable rate. Second, a theoretical analysis calls into question a key step in HA's proof which appears not to rely on an exact quantum mechanical identity. The correction of this makes the true t = 0+ limit of HA's QTST different from the RPMD-TST rate expression, but rather equal to the well-known path integral quantum transition state theory rate expression for the case of centroid dividing surface. An alternative quantum rate expression is then formulated starting from the linear response theory and by applying a recently developed formalism of real time dynamics of imaginary time path integrals [S. Jang, A. V. Sinitskiy, and G. A. Voth, J. Chem. Phys. 140, 154103 (2014)]. 
It is shown that the t → 0+ limit of the new rate expression vanishes in the exact quantum limit.

  10. Klippel-Feil syndrome associated with atrial septal defect.

    PubMed

    Bejiqi, Ramush; Retkoceri, Ragip; Bejiqi, Hana; Zeka, Naim; Maloku, Arlinda; Berisha, Majlinda

    2013-01-01

Klippel-Feil syndrome is characterized by three major features: a short neck, a limited range of motion in the neck, and a low hairline at the back of the head. Most affected people have one or two of these characteristic features; less than half of all individuals with Klippel-Feil syndrome have all three classic features of this condition. The etiology of Klippel-Feil syndrome and its associated conditions is unknown. The syndrome can present with a variety of other clinical syndromes, including fetal alcohol syndrome, Goldenhar syndrome, anomalies of the extremities, etc. Associated anomalies occur in the auditory system, neural axis, cardiovascular system, and the musculoskeletal system. Cardiovascular anomalies, mainly septal defects, were found in 7 patients in Hensinger's series, with 4 of these individuals requiring corrective surgery. In our case a nonrestrictive atrial septal defect was registered, and corrective surgery was performed successfully at age 18 months at the Santa Rosa Children's Hospital (USA). Careful specialist examinations excluded anomalies in other organs and systems. Radiographs and MRI of the thoracic and lumbosacral spine were obtained and other anomalies were excluded.

  11. Generalizing the ADM computation to quantum field theory

    NASA Astrophysics Data System (ADS)

    Mora, P. J.; Tsamis, N. C.; Woodard, R. P.

    2012-01-01

    The absence of recognizable, low energy quantum gravitational effects requires that some asymptotic series expansion be wonderfully accurate, but the correct expansion might involve logarithms or fractional powers of Newton’s constant. That would explain why conventional perturbation theory shows uncontrollable ultraviolet divergences. We explore this possibility in the context of the mass of a charged, gravitating scalar. The classical limit of this system was solved exactly in 1960 by Arnowitt, Deser and Misner, and their solution does exhibit nonanalytic dependence on Newton’s constant. We derive an exact functional integral representation for the mass of the quantum field theoretic system, and then develop an alternate expansion for it based on a correct implementation of the method of stationary phase. The new expansion entails adding an infinite class of new diagrams to each order and subtracting them from higher orders. The zeroth-order term of the new expansion has the physical interpretation of a first quantized Klein-Gordon scalar which forms a bound state in the gravitational and electromagnetic potentials sourced by its own probability current. We show that such bound states exist and we obtain numerical results for their masses.

  12. Machine-learned cluster identification in high-dimensional data.

    PubMed

    Ultsch, Alfred; Lötsch, Jörn

    2017-02-01

High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the cluster algorithm used works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM the distance structure in the high-dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical common cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data). Ward clustering also imposed structures on permuted real-world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure. However, ESOM/U-matrix was correct in identifying clusters in biomedical data truly containing subgroups. It was always correct in cluster structure identification in further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high-dimensional biomedical data. The present analyses emphasized that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results.
By contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a viable, unbiased method to identify true clusters in the high-dimensional space of complex data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
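The failure mode described in this record is easy to reproduce: a centroid-based algorithm will partition even perfectly uniform data into k "clusters". A minimal pure-Python k-means sketch (illustrative; the study itself used an R-based tool and also examined Ward and single linkage):

```python
import random

def kmeans(points, k, iters=50, seed=3):
    """Plain Lloyd's k-means; always returns a k-way partition,
    whether or not the data contain any real cluster structure."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, (x, y) in enumerate(points):  # assignment step
            labels[i] = min(range(k), key=lambda j: (x - centers[j][0]) ** 2
                                                    + (y - centers[j][1]) ** 2)
        for j in range(k):                   # update step
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels

random.seed(3)
cloud = [(random.random(), random.random()) for _ in range(300)]  # structureless
labels = kmeans(cloud, k=3)
sizes = [labels.count(j) for j in range(3)]
print(sizes)  # k nonempty "clusters" are reported regardless
```

The algorithm has no way to report "no structure"; that is precisely the gap the ESOM/U-matrix visualization is meant to fill.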

  13. Characterizing quantum channels with non-separable states of classical light

    NASA Astrophysics Data System (ADS)

    Ndagano, Bienvenu; Perez-Garcia, Benjamin; Roux, Filippus S.; McLaren, Melanie; Rosales-Guzman, Carmelo; Zhang, Yingwen; Mouane, Othmane; Hernandez-Aranda, Raul I.; Konrad, Thomas; Forbes, Andrew

    2017-04-01

    High-dimensional entanglement with spatial modes of light promises increased security and information capacity over quantum channels. Unfortunately, entanglement decays due to perturbations, corrupting quantum links that cannot be repaired without performing quantum tomography on the channel. Paradoxically, the channel tomography itself is not possible without a working link. Here we overcome this problem with a robust approach to characterize quantum channels by means of classical light. Using free-space communication in a turbulent atmosphere as an example, we show that the state evolution of classically entangled degrees of freedom is equivalent to that of quantum entangled photons, thus providing new physical insights into the notion of classical entanglement. The analysis of quantum channels by means of classical light in real time unravels stochastic dynamics in terms of pure state trajectories, and thus enables precise quantum error correction in short- and long-haul optical communication, in both free space and fibre.

  14. Toward simulating complex systems with quantum effects

    NASA Astrophysics Data System (ADS)

    Kenion-Hanrath, Rachel Lynn

    Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. 
Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.

  15. Quantum groups, roots of unity and particles on quantized Anti-de Sitter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinacker, Harold

    1997-05-23

    Quantum groups in general and the quantum Anti-de Sitter group U_q(so(2,3)) in particular are studied from the point of view of quantum field theory. The author shows that if q is a suitable root of unity, there exist finite-dimensional, unitary representations corresponding to essentially all the classical one-particle representations with (half) integer spin, with the same structure at low energies as in the classical case. In the massless case for spin ≥ 1, "naive" representations are unitarizable only after factoring out a subspace of "pure gauges", as classically. Unitary many-particle representations are defined, with the correct classical limit. Furthermore, the author identifies a remarkable element Q in the center of U_q(g), which plays the role of a BRST operator in the case of U_q(so(2,3)) at roots of unity, for any spin ≥ 1. The associated ghosts are an intrinsic part of the indecomposable representations. The author shows how to define an involution on algebras of creation and annihilation operators at roots of unity, in an example corresponding to non-identical particles. It is shown how nonabelian gauge fields appear naturally in this framework, without having to define connections on fiber bundles. Integration on Quantum Euclidean space and sphere and on Anti-de Sitter space is studied as well. The author conjectures how Q can be used in general to analyze the structure of indecomposable representations, and to define a new, completely reducible associative (tensor) product of representations at roots of unity, which generalizes the standard "truncated" tensor product as well as many-particle representations.

  16. Shade guide optimization--a novel shade arrangement principle for both ceramic and composite shade guides when identifying composite test objects.

    PubMed

    Østervemb, Niels; Jørgensen, Jette Nedergaard; Hørsted-Bindslev, Preben

    2011-02-01

    The most widely used shade guide for composite materials is made of ceramic and arranged according to a non-proven method. There is a need for a composite shade guide using a scientifically based arrangement principle. To compare the shade tab arrangement of the Vitapan Classical shade guide and an individually made composite shade guide using both the originally proposed arrangement principle and arranged according to ΔE2000 values with hue group division. An individual composite shade guide made from Filtek Supreme XT body colors was compared to the Vitapan Classical shade guide. Twenty-five students matched color samples made from Filtek Supreme XT body colors using the two shade guides arranged according to the two proposed principles--four shade guides in total. Age, sequence, gender, time, and number of correct matches were recorded. The proposed visually optimal composite shade guide was both fastest and had the highest number of correct matches. Gender was significantly associated with the time used for color sampling but not with the number of correct shade matches. A composite shade guide is superior to the ceramic Vitapan Classical guide when using composite test objects. A rearrangement of the shade guide according to hue, subdivided according to ΔE2000, significantly reduces the time needed to take a color sample and increases the number of correct shade matches. Total color difference in relation to the lightest tab with hue group division is recommended as a possible and universally applicable mode of tab arrangement in dental color standards. Moreover, a shade guide made of the composite material itself is to be preferred as both a faster and more accurate method of determining color. © 2011, COPYRIGHT THE AUTHORS. JOURNAL COMPILATION © 2011, WILEY PERIODICALS, INC.

  17. Publisher's Note: System of classical nonlinear oscillators as a coarse-grained quantum system [Phys. Rev. A 84, 022103 (2011)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radonjic, Milan; Prvanovic, Slobodan; Buric, Nikola

    2011-08-15

    This paper was published online on 2 August 2011 with a typographical error in an author name in the author list. The first author's name should be 'Milan Radonjić'. The name has been corrected as of 16 August 2011. The name is correct in the printed version of the journal.

  18. Thermodynamic integration from classical to quantum mechanics.

    PubMed

    Habershon, Scott; Manolopoulos, David E

    2011-12-14

    We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable. © 2011 American Institute of Physics.

  19. Space-charge-limited currents for cathodes with electric field enhanced geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Dingguo, E-mail: laidingguo@nint.ac.cn; Qiu, Mengtong; Xu, Qifu

    This paper presents approximate analytic solutions for the current density of annulus and circle cathodes. The current densities of annulus and circle cathodes are derived approximately from first principles and are in agreement with simulation results. These scaling laws can predict the current densities of high-current vacuum diodes, including annulus and circle cathodes, in practical applications. In order to discuss the relationship between the current density and the electric field on the cathode surface, the existing analytical solutions of the currents for concentric-cylinder and sphere diodes are refitted in terms of electric field enhancement factors. It is found that the space-charge-limited current density for a cathode with electric-field-enhanced geometry can be written in the general form J = g(β_E)^2 J_0, where J_0 is the classical (1D) Child-Langmuir current density, β_E is the electric field enhancement factor, and g is a geometrical correction factor depending on the cathode geometry.
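    The general form quoted in the abstract can be sketched numerically. The classical 1D Child-Langmuir law J_0 = (4·ε_0/9)·sqrt(2e/m)·V^(3/2)/d^2 is standard; the enhancement factor β_E and geometry factor g below are arbitrary illustrative inputs, not values from the paper:

```python
import math

# Physical constants (SI)
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_1d(voltage, gap):
    """Classical 1D Child-Langmuir space-charge-limited current density:
    J0 = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2   [A/m^2]."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

def corrected_current_density(voltage, gap, beta_e, g):
    """General form quoted in the abstract, J = g * beta_E**2 * J0, with
    beta_E the field-enhancement factor and g the geometry-dependent
    correction factor (both taken here as given inputs)."""
    return g * beta_e ** 2 * child_langmuir_1d(voltage, gap)

# Example: 500 kV across a 1 cm gap; hypothetical beta_E = 2.0, g = 0.8.
j0 = child_langmuir_1d(5e5, 0.01)
j = corrected_current_density(5e5, 0.01, beta_e=2.0, g=0.8)
print(j0, j / j0)  # the correction multiplies J0 by g * beta_E**2 = 3.2
```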

  20. Bulk entanglement gravity without a boundary: Towards finding Einstein's equation in Hilbert space

    NASA Astrophysics Data System (ADS)

    Cao, ChunJun; Carroll, Sean M.

    2018-04-01

    We consider the emergence from quantum entanglement of spacetime geometry in a bulk region. For certain classes of quantum states in an appropriately factorized Hilbert space, a spatial geometry can be defined by associating areas along codimension-one surfaces with the entanglement entropy between either side. We show how Radon transforms can be used to convert these data into a spatial metric. Under a particular set of assumptions, the time evolution of such a state traces out a four-dimensional spacetime geometry, and we argue using a modified version of Jacobson's "entanglement equilibrium" that the geometry should obey Einstein's equation in the weak-field limit. We also discuss how entanglement equilibrium is related to a generalization of the Ryu-Takayanagi formula in more general settings, and how quantum error correction can help specify the emergence map between the full quantum-gravity Hilbert space and the semiclassical limit of quantum fields propagating on a classical spacetime.

  1. Dynamic compaction of granular materials

    PubMed Central

    Favrie, N.; Gavrilyuk, S.

    2013-01-01

    An Eulerian hyperbolic multiphase flow model for dynamic and irreversible compaction of granular materials is constructed. The reversible model is first constructed on the basis of the classical Hertz theory. The irreversible model is then derived in accordance with the following two basic principles. First, the entropy inequality is satisfied by the model. Second, the corresponding ‘intergranular stress’ coming from elastic energy owing to contact between grains decreases in time (the granular media behave as Maxwell-type materials). The irreversible model admits an equilibrium state corresponding to a von Mises-type yield limit. The yield limit depends on the volume fraction of the solid. The sound velocity at the yield surface is smaller than that in the reversible model, which in turn is smaller than the sound velocity in the irreversible model. Such an embedded model structure assures a thermodynamically correct formulation of the model of granular materials. The model is validated on quasi-static experiments on loading–unloading cycles. The experimentally observed hysteresis phenomena were numerically confirmed with good accuracy by the proposed model. PMID:24353466

  2. The perils of the imperfect expectation of the perfect baby.

    PubMed

    Chervenak, Frank A; McCullough, Laurence B; Brent, Robert L

    2010-08-01

    Advances in modern medicine invite the assumption that medicine can control human biology. There is a perilous logic that leads from expectations of medicine's control over reproductive biology to the expectation of having a perfect baby. This article proposes that obstetricians should take a preventive ethics approach to the care of pregnant women with expectations for a perfect baby. We use Nathaniel Hawthorne's classic short story, "The Birthmark," to illustrate the perils of the logic of control and perfection through science and then identify possible contemporary sources of the expectation of the perfect baby. We propose that the informed consent process should be used as a preventive ethics tool throughout the course of pregnancy to educate pregnant women about the inherent errors of human reproduction, the highly variable clinical outcomes of these errors, the limited capacity of medicine to detect these errors, and the even more limited capacity to correct them. Copyright (c) 2010 Mosby, Inc. All rights reserved.

  3. Quantum protocols within Spekkens' toy model

    NASA Astrophysics Data System (ADS)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k-threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these last two quantum protocols are known to achieve better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  4. The second Quito astrolabe catalogue

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.; Davila, H.

    1994-03-01

    The paper contains 515 individual corrections Δα and 235 corrections Δδ to FK5 and FK5 Supp. stars, and 50 corrections to their proper motions, computed from observations made with the classical Danjon astrolabe OPL-13 at the Quito Astronomical Observatory of the Ecuador National Polytechnical School during the period from 1964 to 1983. These corrections cover the declination zone from -30° to +30°. Mean probable errors of the catalogue positions are 0.047" in α cos δ and 0.054" in δ. The systematic trends of the catalogue, Δα_α cos δ, Δα_δ cos δ, Δδ_α, and Δδ_δ, are presented for the observed zone.

  5. Disentangling the role of seed bank and dispersal in plant metapopulation dynamics using patch occupancy surveys.

    PubMed

    Manna, F; Pradel, R; Choquet, R; Fréville, H; Cheptou, P-O

    2017-10-01

    In plants, the presence of a seed bank challenges the application of classical metapopulation models to aboveground presence surveys; ignoring the seed bank leads to overestimated extinction and colonization rates. In this article, we explore the possibility of detecting the seed bank using hidden Markov models in the analysis of aboveground patch occupancy surveys of an annual plant with limited dispersal. Patch occupancy data were generated by simulation under two metapopulation sizes (N = 200 and N = 1,000 patches) and different metapopulation scenarios, each scenario being a combination of the presence/absence of a 1-yr seed bank and the presence/absence of limited dispersal in a circular one-dimensional configuration of patches. In addition, because local conditions often vary among patches in natural metapopulations, we simulated patch occupancy data with heterogeneous germination rate and patch disturbance. The seed bank is not observable from aboveground patch occupancy surveys, hence hidden Markov models were designed to account for uncertainty in patch occupancy. We explored their ability to retrieve the correct scenario. For 10-yr surveys and metapopulation sizes of N = 200 or 1,000 patches, the correct metapopulation scenario was detected at a rate close to 100%, whatever the underlying scenario considered. For shorter, more realistic survey durations, the length needed for reliable detection of the correct scenario depends on the metapopulation size: 3 yr for N = 1,000 and 6 yr for N = 200 are enough. Our method remained powerful enough to disentangle seed bank from dispersal in the presence of patch heterogeneity affecting either seed germination or patch extinction. Our work shows that seed bank and limited dispersal generate different signatures in aboveground patch occupancy surveys. Therefore, our method provides a powerful tool to infer metapopulation dynamics in a wide range of species with an undetectable life form. © 2017 by the Ecological Society of America.
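    The core idea -- scoring aboveground presence data against hidden states that include an unobservable seed bank -- can be sketched with a three-state hidden Markov model and the standard forward algorithm. All transition, emission, and initial probabilities below are hypothetical; the paper's actual models and estimation procedure are more elaborate:

```python
def forward_likelihood(obs, trans, init, emit):
    """Forward algorithm for a hidden Markov model: returns the likelihood
    of an observation sequence, marginalizing over the hidden states."""
    n_states = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[r] * trans[r][s] for r in range(n_states))
                 for s in range(n_states)]
    return sum(alpha)

# Hidden patch states: 0 = empty, 1 = seed bank only, 2 = plants aboveground.
# Observation each year: 1 if plants are seen aboveground, 0 otherwise --
# states 0 and 1 are indistinguishable in the survey (both emit 0).
# All probabilities below are hypothetical illustration values.
trans = [
    [0.80, 0.00, 0.20],  # empty -> empty / seed bank / aboveground
    [0.30, 0.00, 0.70],  # a 1-yr seed bank germinates or dies within a year
    [0.05, 0.45, 0.50],  # aboveground plants leave seeds or go extinct
]
emit = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # only state 2 is observable
init = [0.4, 0.1, 0.5]

# An absence year followed by recolonization gets likelihood mass from the
# seed-bank path (state 1) without the seed bank ever being observed.
print(forward_likelihood([1, 0, 1], trans, init, emit))
```

    Comparing such likelihoods across candidate models (with and without a seed-bank state, with and without limited dispersal) is what allows scenario selection from occupancy data alone.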

  6. Holographic calculation for large interval Rényi entropy at high temperature

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Wu, Jie-qiang

    2015-11-01

    In this paper, we study the holographic Rényi entropy of a large interval on a circle at high temperature for the two-dimensional conformal field theory (CFT) dual to pure AdS3 gravity. In the field theory, the Rényi entropy is encoded in the CFT partition function on an n-sheeted torus, with the sheets connected to each other by a large branch cut. As proposed by Chen and Wu [Large interval limit of Rényi entropy at high temperature, arXiv:1412.0763], the effective way to read the entropy in the large interval limit is to insert a complete set of state bases of the twist sector at the branch cut. The calculation then transforms into an expansion of four-point functions in the twist sector in powers of e^(-2πTR/n). By using the operator product expansion of the twist operators at the branch points, we read off the first few terms of the Rényi entropy, including the leading and next-to-leading contributions in the large central charge limit. Moreover, we show that the leading contribution is actually captured by the twist vacuum module. In this case, by the Ward identity, the four-point functions can be derived from the correlation function of four twist operators, which is related to the double interval entanglement entropy. Holographically, we apply the recipe in [T. Faulkner, The entanglement Rényi entropies of disjoint intervals in AdS/CFT, arXiv:1303.7221] and [T. Barrella et al., Holographic entanglement beyond classical gravity, J. High Energy Phys. 09 (2013) 109] to compute the classical Rényi entropy and its one-loop quantum correction, after imposing a new set of monodromy conditions. The holographic classical result matches exactly with the leading contribution in the field theory up to e^(-4πTR) and l^6, while the holographic one-loop contribution is in exact agreement with the next-to-leading results in field theory up to e^(-6πTR/n) and l^4 as well.

  7. Dimension of quantum phase space measured by photon correlations

    NASA Astrophysics Data System (ADS)

    Leuchs, Gerd; Glauber, Roy J.; Schleich, Wolfgang P.

    2015-06-01

    We show that the different values 1, 2 and 3 of the normalized second-order correlation function g^(2)(0) corresponding to a coherent state, a thermal state and a highly squeezed vacuum originate from the different dimensionality of these states in phase space. In particular, we derive an exact expression for g^(2)(0) in terms of the ratio of the moments of the classical energy evaluated with the Wigner function of the quantum state of interest and corrections proportional to the reciprocal of powers of the average number of photons. In this way we establish a direct link between g^(2)(0) and the shape of the state in phase space. Moreover, we illuminate this connection by demonstrating that in the semi-classical limit the familiar photon statistics of a thermal state arise from an area in phase space weighted by a two-dimensional Gaussian, whereas those of a highly squeezed state are governed by a line-integral of a one-dimensional Gaussian. We dedicate this article to Margarita and Vladimir Man’ko on the occasion of their birthdays. The topic of our contribution is deeply rooted in and motivated by their love for non-classical light, quantum mechanical phase space distribution functions and orthogonal polynomials. Indeed, through their articles, talks and most importantly by many stimulating discussions and intensive collaborations with us they have contributed much to our understanding of physics. Happy birthday to you both!
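    The three values can be checked directly from the photon-number distributions via g^(2)(0) = <n(n-1)>/<n>^2. A sketch with truncated distributions; note the squeezed-vacuum value is 3 + 1/n̄ (with n̄ = sinh²r), approaching 3 only for strong squeezing:

```python
import math

def g2(probs):
    """Normalized second-order correlation g2(0) = <n(n-1)> / <n>**2,
    computed from a (truncated) photon-number distribution."""
    n_mean = sum(n * p for n, p in enumerate(probs))
    n_fac = sum(n * (n - 1) * p for n, p in enumerate(probs))
    return n_fac / n_mean ** 2

N = 400  # truncation of the photon-number ladder

# Coherent state: Poissonian p(n) = e**-lam * lam**n / n!  ->  g2 = 1
lam = 4.0
coherent = [math.exp(-lam)]
for n in range(1, N):
    coherent.append(coherent[-1] * lam / n)

# Thermal state: Bose-Einstein p(n) = nbar**n / (1+nbar)**(n+1)  ->  g2 = 2
nbar = 4.0
thermal = [1.0 / (1.0 + nbar)]
for n in range(1, N):
    thermal.append(thermal[-1] * nbar / (1.0 + nbar))

# Squeezed vacuum: only even photon numbers occur; g2 = 3 + 1/nbar,
# which tends to 3 as the squeezing parameter r grows (nbar = sinh(r)**2)
r = 2.0
squeezed = [0.0] * N
squeezed[0] = 1.0 / math.cosh(r)
for m in range(1, N // 2):
    squeezed[2 * m] = (squeezed[2 * (m - 1)]
                       * (2 * m - 1) / (2 * m) * math.tanh(r) ** 2)

print(round(g2(coherent), 2), round(g2(thermal), 2), round(g2(squeezed), 2))
```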

  8. Reexamination of Induction Heating of Primitive Bodies in Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Menzel, Raymond L.; Roberge, Wayne G.

    2013-10-01

    We reexamine the unipolar induction mechanism for heating asteroids originally proposed in a classic series of papers by Sonett and collaborators. As originally conceived, induction heating is caused by the "motional electric field" that appears in the frame of an asteroid immersed in a fully ionized, magnetized solar wind and drives currents through its interior. However, we point out that classical induction heating contains a subtle conceptual error, in consequence of which the electric field inside the asteroid was calculated incorrectly. The problem is that the motional electric field used by Sonett et al. is the electric field in the freely streaming plasma far from the asteroid; in fact, the motional field vanishes at the asteroid surface for realistic assumptions about the plasma density. In this paper we revisit and improve the induction heating scenario by (1) correcting the conceptual error by self-consistently calculating the electric field in and around the boundary layer at the asteroid-plasma interface; (2) considering weakly ionized plasmas consistent with current ideas about protoplanetary disks; and (3) considering more realistic scenarios that do not require a fully ionized, powerful T Tauri wind in the disk midplane. We present exemplary solutions for two highly idealized flows that show that the interior electric field can either vanish or be comparable to the fields predicted by classical induction depending on the flow geometry. We term the heating driven by these flows "electrodynamic heating," calculate its upper limits, and compare them to heating produced by short-lived radionuclides.

  9. Onset of fractional-order thermal convection in porous media

    NASA Astrophysics Data System (ADS)

    Karani, Hamid; Rashtbehesht, Majid; Huber, Christian; Magin, Richard L.

    2017-12-01

    The macroscopic description of buoyancy-driven thermal convection in porous media is governed by advection-diffusion processes, which in the presence of thermophysical heterogeneities fail to predict the onset of thermal convection and the average rate of heat transfer. This work extends the classical model of heat transfer in porous media by including a fractional-order advective-dispersive term to account for the role of thermophysical heterogeneities in shifting the thermal instability point. The proposed fractional-order model overcomes limitations of the common closure approaches for the thermal dispersion term by replacing the diffusive assumption with a fractional-order model. Through a linear stability analysis and Galerkin procedure, we derive an analytical formula for the critical Rayleigh number as a function of the fractional model parameters. The resulting critical Rayleigh number reduces to the classical value in the absence of thermophysical heterogeneities when solid and fluid phases have similar thermal conductivities. Numerical simulations of the coupled flow equation with the fractional-order energy model near the primary bifurcation point confirm our analytical results. Moreover, data from pore-scale simulations are used to examine the potential of the proposed fractional-order model in predicting the amount of heat transfer across the porous enclosure. The linear stability and numerical results show that, unlike the classical thermal advection-dispersion models, the fractional-order model captures the advance and delay in the onset of convection in porous media and provides correct scalings for the average heat transfer in a thermophysically heterogeneous medium.

  10. Limits of Infinite Processes for Liberal Arts Majors: Two Classic Examples

    ERIC Educational Resources Information Center

    Jorgensen, Theresa A.; Shipman, Barbara A.

    2012-01-01

    This paper presents guided classroom activities that showcase two classic problems in which a finite limit exists and where there is a certain charm to engage liberal arts majors. The two scenarios build solely on students' existing knowledge of number systems and harness potential misconceptions about limits and infinity to guide their thinking.…

  11. The structure of aqueous sodium hydroxide solutions: a combined solution x-ray diffraction and simulation study.

    PubMed

    Megyes, Tünde; Bálint, Szabolcs; Grósz, Tamás; Radnai, Tamás; Bakó, Imre; Sipos, Pál

    2008-01-28

    To determine the structure of aqueous sodium hydroxide solutions, results obtained from x-ray diffraction and computer simulation (molecular dynamics and Car-Parrinello) have been compared. The capabilities and limitations of the methods in describing the solution structure are discussed. For the solutions studied, diffraction methods were found to perform very well in describing the hydration spheres of the sodium ion and yield structural information on the anion's hydration structure. Classical molecular dynamics simulations were not able to correctly describe the bulk structure of these solutions. However, Car-Parrinello simulation proved to be a suitable tool in the detailed interpretation of the hydration sphere of ions and bulk structure of solutions. The results of Car-Parrinello simulations were compared with the findings of diffraction experiments.

  12. On the semi-classical limit of scalar products of the XXZ spin chain

    NASA Astrophysics Data System (ADS)

    Jiang, Yunfeng; Brunekreef, Joren

    2017-03-01

    We study the scalar products between Bethe states in the XXZ spin chain with anisotropy |Δ| > 1 in the semi-classical limit where the length of the spin chain and the number of magnons tend to infinity with their ratio kept finite and fixed. Our method is a natural yet non-trivial generalization of similar methods developed for the XXX spin chain. The final result can be written in a compact form as a contour integral in terms of Faddeev's quantum dilogarithm function, which in the isotropic limit reduces to the classical dilogarithm function.

  13. Limit Theorems for Dispersing Billiards with Cusps

    NASA Astrophysics Data System (ADS)

    Bálint, P.; Chernov, N.; Dolgopyat, D.

    2011-12-01

    Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of √(n log n) replacing the standard √n. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
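    The billiard dynamics cannot be reproduced in a few lines, but the mechanism behind an anomalous sqrt(n log n) normalization -- a truncated second moment that grows logarithmically instead of converging -- can be illustrated with a toy density of tail index 2. This toy is an assumption for illustration, not the billiard observable itself:

```python
import math

# Toy symmetric density at the CLT boundary case: f(x) = |x|**-3 for
# |x| >= 1, so P(|X| > x) = x**-2 and the variance is infinite.
def f(x):
    return abs(x) ** -3 if abs(x) >= 1 else 0.0

def truncated_second_moment(M, steps=200000):
    """E[X**2 ; |X| <= M] by midpoint quadrature (doubled for symmetry)."""
    h = (M - 1.0) / steps
    total = 0.0
    for i in range(steps):
        x = 1.0 + (i + 0.5) * h
        total += x * x * f(x) * h
    return 2.0 * total

# The truncated second moment grows like 2*ln(M) and never converges,
# which is why partial sums need an extra logarithm in the normalization
# (sqrt(n log n)) rather than the classical sqrt(n).
for M in (10.0, 100.0, 1000.0):
    print(M, truncated_second_moment(M), 2.0 * math.log(M))
```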

  14. Anterior cingulate cortex and intuitive bias detection during number conservation.

    PubMed

    Simon, Grégory; Lubin, Amélie; Houdé, Olivier; De Neys, Wim

    2015-01-01

    Children's number conservation is often biased by misleading intuitions but the precise nature of these conservation errors is not clear. A key question is whether children detect that their erroneous conservation judgment is unwarranted. The present study reanalyzed available fMRI data to test the implication of the anterior cingulate cortex (ACC) in this detection process. We extracted mean BOLD (Blood Oxygen Level Dependent) signal values in an independently defined ACC region of interest (ROI) during presentation of classic and control number conservation problems. In classic trials, an intuitively cued visuospatial response conflicted with the correct conservation response, whereas this conflict was not present in the control trials. Results showed that ACC activation increased when solving the classic conservation problems. Critically, this increase did not differ between participants who solved the classic problems correctly (i.e., so-called conservers) and incorrectly (i.e., so-called non-conservers). Additional control analyses of inferior and lateral prefrontal ROIs showed that the group of conservers did show stronger activation in the right inferior frontal gyrus and right lateral middle frontal gyrus. In line with recent behavioral findings, these data lend credence to the hypothesis that even non-conserving children detect the biased nature of their judgment. The key difference between conservers and non-conservers seems to lie in a differential recruitment of inferior and lateral prefrontal regions associated with inhibitory control.

  15. Closed almost-periodic orbits in semiclassical quantization of generic polygons

    PubMed

    Biswas

    2000-05-01

    Periodic orbits are the central ingredients of modern semiclassical theories and corrections to these are generally nonclassical in origin. We show here that, for the class of generic polygonal billiards, the corrections are predominantly classical in origin owing to the contributions from closed almost-periodic (CAP) orbit families. Furthermore, CAP orbit families outnumber periodic families but have comparable weights. They are hence indispensable for semiclassical quantization.

  16. Analytic non-Maxwellian electron velocity distribution function in a Hall discharge plasma

    NASA Astrophysics Data System (ADS)

    Shagayda, Andrey; Tarasov, Alexey

    2017-10-01

    The electron velocity distribution function in low-pressure discharges with crossed electric and magnetic fields, which occur in magnetrons, plasma accelerators, and Hall thrusters with a closed electron drift, is not Maxwellian. A deviation from equilibrium is caused by a large electron mean free path relative to the Larmor radius and the size of the discharge channel. In this study, we derived, in the relaxation approximation, an analytical expression for the electron velocity distribution function in a weakly ionized Lorentz plasma with crossed electric and magnetic fields in the presence of electron density and temperature gradients in the direction of the electric field. The solution was obtained in the stationary approximation far from boundary surfaces, when diffusion and mobility are determined by the classical effective collision frequency of electrons with ions and atoms. The moments of the distribution function, including the average velocity, the stress tensor, and the heat flux, were calculated and compared with the classical hydrodynamic expressions. It was shown that a kinetic correction to the drift velocity stems from a contribution of the off-diagonal component of the stress tensor. This correction becomes essential when the drift velocity in the crossed electric and magnetic fields is comparable to the thermal velocity of the electrons. The electron temperature has three different components at a nonzero effective collision frequency and two different components in the limit where the collision frequency tends to zero. It is shown that, in the presence of ionization collisions, the components of the heat flux have contributions that are not related to the temperature gradient and that arise because of the electron drift.

  17. Metrics and textural features of MRI diffusion to improve classification of pediatric posterior fossa tumors.

    PubMed

    Rodriguez Gutierrez, D; Awwad, A; Meijer, L; Manita, M; Jaspan, T; Dineen, R A; Grundy, R G; Auer, D P

    2014-05-01

    Qualitative radiologic MR imaging review affords limited differentiation among types of pediatric posterior fossa brain tumors and cannot detect histologic or molecular subtypes, which could help to stratify treatment. This study aimed to improve current posterior fossa discrimination of histologic tumor type by using support vector machine classifiers on quantitative MR imaging features. This retrospective study included preoperative MRI in 40 children with posterior fossa tumors (17 medulloblastomas, 16 pilocytic astrocytomas, and 7 ependymomas). Shape, histogram, and textural features were computed from contrast-enhanced T2WI and T1WI and diffusivity (ADC) maps. Combinations of features were used to train tumor-type-specific classifiers for medulloblastoma, pilocytic astrocytoma, and ependymoma types in separation and as a joint posterior fossa classifier. A tumor-subtype classifier was also produced for classic medulloblastoma. The performance of different classifiers was assessed and compared by using randomly selected subsets of training and test data. ADC histogram features (25th and 75th percentiles and skewness) yielded the best classification of tumor type (on average >95.8% of medulloblastomas, >96.9% of pilocytic astrocytomas, and >94.3% of ependymomas by using 8 training samples). The resulting joint posterior fossa classifier correctly assigned >91.4% of the posterior fossa tumors. For subtype classification, 89.4% of classic medulloblastomas were correctly classified on the basis of ADC texture features extracted from the Gray-Level Co-Occurrence Matrix. Support vector machine-based classifiers using ADC histogram features yielded very good discrimination among pediatric posterior fossa tumor types, and ADC textural features show promise for further subtype discrimination. These findings suggest an added diagnostic value of quantitative feature analysis of diffusion MR imaging in pediatric neuro-oncology.
© 2014 by American Journal of Neuroradiology.
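    The classification idea in this abstract (tumor types separated by ADC histogram features: 25th/75th percentiles and skewness) can be sketched in a few lines. All numbers below are synthetic stand-ins, not patient data, and a nearest-centroid rule is used as a minimal stand-in for the support vector machine.

```python
import numpy as np

rng = np.random.default_rng(0)

def adc_features(mean_adc, n_cases):
    """25th/75th percentile and skewness of synthetic per-case ADC samples."""
    samples = rng.normal(mean_adc, 0.15, size=(n_cases, 200))
    p25 = np.percentile(samples, 25, axis=1)
    p75 = np.percentile(samples, 75, axis=1)
    centered = samples - samples.mean(axis=1, keepdims=True)
    skew = (centered ** 3).mean(axis=1) / samples.std(axis=1) ** 3
    return np.column_stack([p25, p75, skew])

# Assumed class means (medulloblastoma low ADC, pilocytic astrocytoma high),
# with the case counts from the abstract (17 / 16 / 7):
X = np.vstack([adc_features(0.7, 17), adc_features(1.6, 16), adc_features(1.1, 7)])
y = np.array([0] * 17 + [1] * 16 + [2] * 7)

# Nearest-centroid classification on the three histogram features
centroids = np.stack([X[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

With class means this well separated, even the crude centroid rule classifies essentially all cases; the point is only to show how few-dimensional histogram features can carry the discrimination.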

  18. Spatial problem-solving in a wheel-shaped maze: quantitative and qualitative analyses of the behavioural changes following damage to the hippocampus in the rat.

    PubMed

    Buhot, M C; Chapuis, N; Scardigli, P; Herrmann, T

    1991-07-01

    The behaviour of sham-operated rats and rats with damage to the dorsal hippocampus was compared in a complex spatial problem-solving task using a 'hub-spoke-rim' wheel type maze. Compared to the classical Olton 8-arm radial maze and Morris water maze, this apparatus presents the animal with a series of possible alternative routes, both direct and indirect, to the goal (food). The task included 3 main stages: exploration, feeding and testing, as do the classic problem-solving tasks. During exploration, hippocampal rats were found to be more active than sham rats. Nevertheless, they displayed habituation and a relatively efficient circumnavigation, though, in both cases, different from those of sham rats. During test trials, hippocampal rats were characterized as being less accurate, making more errors than sham rats. Nevertheless, both groups increased their accuracy of first choices over trials. The qualitative analyses of test trial performance indicated that hippocampal rats were less accurate in terms of the initial error's deviation from the goal, and less efficient in terms of corrective behaviour than sham rats, which used either the periphery or the spokes to reach the goal economically. Surprisingly, hippocampal rats were not limited to a taxon type orientation but learned to use the periphery, a tendency which developed over time. Seemingly, for sham rats, the problem-solving process took the form of updating information during transit. For hippocampal rats, the use of the periphery reflected both an ability to discriminate its usefulness in reaching the goal via a taxis type behaviour, and some sparing of the ability to generalize the closeness and the location of the goal. These results, especially the strategic correction patterns, are discussed in the light of Sutherland and Rudy's 'configurational association theory'.

  19. Communication: Note on detailed balance in symmetrical quasi-classical models for electronically non-adiabatic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com

    2015-04-07

    It is noted that the recently developed symmetrical quasi-classical (SQC) treatment of the Meyer-Miller (MM) model for the simulation of electronically non-adiabatic dynamics provides a good description of detailed balance, even though the dynamics which results from the classical MM Hamiltonian is “Ehrenfest dynamics” (i.e., the force on the nuclei is an instantaneous coherent average over all electronic states). This is seen to be a consequence of the SQC windowing methodology for “processing” the results of the trajectory calculation. For a particularly simple model discussed here, this is shown to be true regardless of the choice of windowing function employed in the SQC model, and for a more realistic full classical molecular dynamics simulation, it is seen to be maintained correctly for very long times.

  20. Supersaturated calcium carbonate solutions are classical

    PubMed Central

    Henzler, Katja; Fetisov, Evgenii O.; Galib, Mirza; Baer, Marcel D.; Legg, Benjamin A.; Borca, Camelia; Xto, Jacinta M.; Pin, Sonia; Fulton, John L.; Schenter, Gregory K.; Govind, Niranjan; Siepmann, J. Ilja; Mundy, Christopher J.; Huthwelker, Thomas; De Yoreo, James J.

    2018-01-01

    Mechanisms of CaCO3 nucleation from solutions that depend on multistage pathways and the existence of species far more complex than simple ions or ion pairs have recently been proposed. Herein, we provide a tightly coupled theoretical and experimental study on the pathways that precede the initial stages of CaCO3 nucleation. Starting from molecular simulations, we succeed in correctly predicting bulk thermodynamic quantities and experimental data, including equilibrium constants, titration curves, and detailed x-ray absorption spectra taken from the supersaturated CaCO3 solutions. The picture that emerges is in complete agreement with classical views of cluster populations in which ions and ion pairs dominate, with the concomitant free energy landscapes following classical nucleation theory. PMID:29387793
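    The classical nucleation theory that the free energy landscapes are found to follow has a closed-form barrier, which is easy to verify numerically. The sketch below uses illustrative reduced units, not values fitted to CaCO3.

```python
import math

def cnt_profile(r, gamma, dg_v):
    """Classical nucleation theory free energy of a spherical nucleus of
    radius r: surface cost 4*pi*r^2*gamma minus bulk gain (4/3)*pi*r^3*dg_v."""
    return 4.0 * math.pi * r**2 * gamma - (4.0 / 3.0) * math.pi * r**3 * dg_v

def cnt_barrier(gamma, dg_v):
    """Critical radius r* = 2*gamma/dg_v and barrier 16*pi*gamma^3/(3*dg_v^2)."""
    r_star = 2.0 * gamma / dg_v
    return r_star, 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2)

gamma, dg_v = 1.0, 1.5                     # illustrative reduced units
r_star, dg_star = cnt_barrier(gamma, dg_v)

# The analytic barrier matches the maximum of the profile on a fine grid
grid_max = max(cnt_profile(i * 1e-4, gamma, dg_v) for i in range(40000))
print(round(r_star, 3), round(dg_star, 3), round(grid_max, 3))
```

The grid maximum agrees with the analytic stationary point, which is all "classical" means for the landscape: a single smooth barrier at the critical cluster size.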

  1. Supersaturated calcium carbonate solutions are classical.

    PubMed

    Henzler, Katja; Fetisov, Evgenii O; Galib, Mirza; Baer, Marcel D; Legg, Benjamin A; Borca, Camelia; Xto, Jacinta M; Pin, Sonia; Fulton, John L; Schenter, Gregory K; Govind, Niranjan; Siepmann, J Ilja; Mundy, Christopher J; Huthwelker, Thomas; De Yoreo, James J

    2018-01-01

    Mechanisms of CaCO3 nucleation from solutions that depend on multistage pathways and the existence of species far more complex than simple ions or ion pairs have recently been proposed. Herein, we provide a tightly coupled theoretical and experimental study on the pathways that precede the initial stages of CaCO3 nucleation. Starting from molecular simulations, we succeed in correctly predicting bulk thermodynamic quantities and experimental data, including equilibrium constants, titration curves, and detailed x-ray absorption spectra taken from the supersaturated CaCO3 solutions. The picture that emerges is in complete agreement with classical views of cluster populations in which ions and ion pairs dominate, with the concomitant free energy landscapes following classical nucleation theory.

  2. VizieR Online Data Catalog: Second Quito Astrolabe Catalogue (Kolesnik+ 1994)

    NASA Astrophysics Data System (ADS)

    Kolesnik, Y. B.; Davila, H.

    1994-03-01

    The paper contains 515 individual corrections Δα and 235 corrections Δδ to FK5 and FK5Supp. stars and 50 corrections to their proper motions computed from observations made with the classical Danjon astrolabe OPL-13 at Quito Astronomical Observatory of Ecuador National Polytechnical School during a period from 1964 to 1983. These corrections cover the declination zone from -30° to +30°. Mean probable errors of catalogue positions are 0.047" in αcosδ and 0.054" in δ. The systematic trends of the catalogue Δα_α cosδ, Δα_δ cosδ, Δδ_α, Δδ_δ are presented for the observed zone. (2 data files).

  3. Preparation and measurement of three-qubit entanglement in a superconducting circuit.

    PubMed

    Dicarlo, L; Reed, M D; Sun, L; Johnson, B R; Chow, J M; Gambetta, J M; Frunzio, L; Girvin, S M; Devoret, M H; Schoelkopf, R J

    2010-09-30

    Traditionally, quantum entanglement has been central to foundational discussions of quantum mechanics. The measurement of correlations between entangled particles can have results at odds with classical behaviour. These discrepancies grow exponentially with the number of entangled particles. With the ample experimental confirmation of quantum mechanical predictions, entanglement has evolved from a philosophical conundrum into a key resource for technologies such as quantum communication and computation. Although entanglement in superconducting circuits has been limited so far to two qubits, the extension of entanglement to three, eight and ten qubits has been achieved among spins, ions and photons, respectively. A key question for solid-state quantum information processing is whether an engineered system could display the multi-qubit entanglement necessary for quantum error correction, which starts with tripartite entanglement. Here, using a circuit quantum electrodynamics architecture, we demonstrate deterministic production of three-qubit Greenberger-Horne-Zeilinger (GHZ) states with fidelity of 88 per cent, measured with quantum state tomography. Several entanglement witnesses detect genuine three-qubit entanglement by violating biseparable bounds by 830 ± 80 per cent. We demonstrate the first step of basic quantum error correction, namely the encoding of a logical qubit into a manifold of GHZ-like states using a repetition code. The integration of this encoding with decoding and error-correcting steps in a feedback loop will be the next step for quantum computing with integrated circuits.
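    The GHZ state and the fidelity figure quoted above are easy to make concrete with a small state-vector sketch. The noise model here (GHZ projector mixed with the maximally mixed state) and the weight p = 0.88 are illustrative stand-ins, not the experimental tomography result.

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron_all(*states):
    """Tensor product of single-qubit states."""
    out = states[0]
    for s in states[1:]:
        out = np.kron(out, s)
    return out

# |GHZ> = (|000> + |111>) / sqrt(2)
ghz = (kron_all(zero, zero, zero) + kron_all(one, one, one)) / np.sqrt(2)

def fidelity(psi, rho):
    """State fidelity <psi|rho|psi> of a pure target with a density matrix."""
    return float(np.real(psi.conj() @ rho @ psi))

# Toy noisy preparation: GHZ mixed with white noise (weight p is assumed)
p = 0.88
rho = p * np.outer(ghz, ghz.conj()) + (1 - p) * np.eye(8) / 8
f = fidelity(ghz, rho)
print(round(f, 3))
```

For this white-noise model the fidelity is p + (1 - p)/8, slightly above p itself, since the mixed component still overlaps the GHZ state.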

  4. Mass hierarchy, mass gap and corrections to Newton's law on thick branes with Poincaré symmetry

    NASA Astrophysics Data System (ADS)

    Barbosa-Cendejas, Nandinii; Herrera-Aguilar, Alfredo; Kanakoglou, Konstantinos; Nucamendi, Ulises; Quiros, Israel

    2014-01-01

    We consider a scalar thick brane configuration arising in a 5D theory of gravity coupled to a self-interacting scalar field in a Riemannian manifold. We start from known classical solutions of the corresponding field equations and elaborate on the physics of the transverse traceless modes of linear fluctuations of the classical background, which obey a Schrödinger-like equation. We further consider two special cases in which this equation can be solved analytically for any massive mode, in contrast with numerical approaches, allowing us to study in closed form the massive spectrum of Kaluza-Klein (KK) excitations and to analytically compute the corrections to Newton's law in the thin brane limit. In the first case we consider a novel solution with a mass gap in the spectrum of KK fluctuations with two bound states—the massless 4D graviton free of tachyonic instabilities and a massive KK excitation—as well as a tower of continuous massive KK modes which obey a Legendre equation. The mass gap is defined by the inverse of the brane thickness, allowing us to get rid of the potentially dangerous multiplicity of arbitrarily light KK modes. It is shown that due to this lucky circumstance, the solution of the mass hierarchy problem is much simpler and more transparent than in the thin Randall-Sundrum (RS) two-brane configuration. In the second case we present a smooth version of the RS model with a single massless bound state, which accounts for the 4D graviton, and a sector of continuous fluctuation modes with no mass gap, which obey a confluent Heun equation in the Ince limit. (The latter seems to have physical applications for the first time within braneworld models.) For this solution the mass hierarchy problem is solved with positive branes as in the Lykken-Randall (LR) model and the model is completely free of naked singularities.
We also show that the scalar-tensor system is stable under scalar perturbations with no scalar modes localized on the braneworld configuration.

  5. Rigorous derivation of electromagnetic self-force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gralla, Samuel E.; Harte, Abraham I.; Wald, Robert M.

    2009-07-15

    During the past century, there has been considerable discussion and analysis of the motion of a point charge in an external electromagnetic field in special relativity, taking into account 'self-force' effects due to the particle's own electromagnetic field. We analyze the issue of 'particle motion' in classical electromagnetism in a rigorous and systematic way by considering a one-parameter family of solutions to the coupled Maxwell and matter equations corresponding to having a body whose charge-current density J^a(λ) and stress-energy tensor T_ab(λ) scale to zero size in an asymptotically self-similar manner about a worldline γ as λ → 0. In this limit, the charge, q, and total mass, m, of the body go to zero, and q/m goes to a well-defined limit. The Maxwell field F_ab(λ) is assumed to be the retarded solution associated with J^a(λ) plus a homogeneous solution (the 'external field') that varies smoothly with λ. We prove that the worldline γ must be a solution to the Lorentz force equations of motion in the external field F_ab(λ=0). We then obtain self-force, dipole forces, and spin force as first-order perturbative corrections to the center-of-mass motion of the body. We believe that this is the first rigorous derivation of the complete first-order correction to Lorentz force motion. We also address the issue of obtaining a self-consistent perturbative equation of motion associated with our perturbative result, and argue that the self-force equations of motion that have previously been written down in conjunction with the 'reduction of order' procedure should provide accurate equations of motion for a sufficiently small charged body with negligible dipole moments and spin. (There is no corresponding justification for the non-reduced-order equations.) We restrict consideration in this paper to classical electrodynamics in flat spacetime, but there should be no difficulty in extending our results to the motion of a charged body in an arbitrary globally hyperbolic curved spacetime.

  6. The new classic data acquisition system for NPOI

    NASA Astrophysics Data System (ADS)

    Sun, B.; Jorgensen, A. M.; Landavazo, M.; Hutter, D. J.; van Belle, G. T.; Mozurkewich, David; Armstrong, J. T.; Schmitt, H. R.; Baines, E. K.; Restaino, S. R.

    2014-07-01

    The New Classic data acquisition system is an important portion of a new project of stellar surface imaging with the NPOI, funded by the National Science Foundation, and enables the data acquisition necessary for the project. The NPOI can simultaneously deliver beams from 6 telescopes to the beam combining facility, and in the Classic beam combiner these are combined 4 at a time on 3 separate spectrographs with all 15 possible baselines observed. The Classic data acquisition system is limited to 16 of 32 wavelength channels on two spectrographs and limited to 30 s integrations followed by a pause to flush data. Classic also has some limitations in its fringe-tracking capability. These factors, and the fact that Classic incorporates 1990s technology which cannot be easily replaced, are motivation for upgrading the data acquisition system. The New Classic data acquisition system is based around modern electronics, including a high-end Stratix FPGA, a 200 MB/s Direct Memory Access card, and a fast modern Linux computer. These allow for continuous recording of all 96 channels across three spectrographs, increasing the total amount of data recorded by an estimated order of magnitude. The additional computing power on the data acquisition system also allows for the implementation of more sophisticated fringe-tracking algorithms which are needed for the Stellar Surface Imaging project. In this paper we describe the New Classic system design and implementation, describe the background and motivation for the system, and show some initial results from using it.

  7. Test-state approach to the quantum search problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrawat, Arun; Nguyen, Le Huy; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 117597

    2011-05-15

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.
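    The "high probability, not certainty" behavior of Grover's algorithm is easy to see in a tiny state-vector simulation. This is generic Grover search, not the paper's test-state construction; the database size and marked index are arbitrary choices.

```python
import numpy as np

def grover_success(n_items, marked, iterations):
    """Probability of measuring the marked item after Grover iterations,
    simulated directly on the n_items-dimensional state vector."""
    psi = np.full(n_items, 1.0 / np.sqrt(n_items))  # uniform superposition
    for _ in range(iterations):
        psi[marked] *= -1.0              # oracle: phase flip on the target
        psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
    return float(psi[marked] ** 2)

n = 8                                    # toy database size
k = int(np.floor(np.pi / 4 * np.sqrt(n)))  # near-optimal iteration count
print(k, round(grover_success(n, 3, k), 3))
```

After roughly (π/4)√N oracle calls the success probability is high but below 1, which is exactly why the abstract's deterministic verification step is needed.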

  8. Current surgical management of mitral regurgitation.

    PubMed

    Calvinho, Paulo; Antunes, Manuel

    2008-04-01

    From Walton Lillehei, who performed the first successful open mitral valve surgery in 1956, to the advent of robotic surgery in the 21st century, only 50 years have passed. The introduction of the first heart valve prosthesis, in 1960, was the next major step forward. Correction of mitral disease by valvuloplasty results in better survival and ventricular performance than mitral valve replacement; however, the European Heart Survey demonstrated that only 40% of valves are repaired. The standard procedures (Carpentier's techniques and Alfieri's edge-to-edge suture) are the surgical basis for the new technical approaches. Minimally invasive surgery led to the development of video-assisted and robotic surgery, and interventional cardiology is already taking its first steps in endovascular procedures, applying the classical concepts in highly differentiated approaches. Correction of mitral regurgitation is a complex field that is still growing, and classic surgery remains under debate as the new era arises.

  9. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
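    Of the calibration methods listed, plain classical least squares (CLS) is the easiest to sketch: absorbances are modeled as concentrations times pure-component spectra (A = C·K), K is estimated from calibration mixtures, and unknown concentrations are recovered by least squares. The Gaussian band positions and concentrations below are hypothetical, not the drug spectra from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl = 50
wl = np.linspace(0, 1, n_wl)

# Hypothetical pure-component spectra for three analytes (Gaussian bands)
K = np.stack([np.exp(-((wl - c) / 0.08) ** 2) for c in (0.25, 0.5, 0.75)])

# Calibration set: known concentrations, measured (noisy) mixture spectra
C_cal = rng.uniform(0.1, 1.0, size=(12, 3))
A_cal = C_cal @ K + rng.normal(0, 1e-3, (12, n_wl))

# CLS step 1: estimate the pure spectra from the calibration data
K_hat = np.linalg.lstsq(C_cal, A_cal, rcond=None)[0]

# CLS step 2: predict concentrations of an "unknown" mixture
c_true = np.array([0.3, 0.6, 0.2])
a_mix = c_true @ K
c_hat = np.linalg.lstsq(K_hat.T, a_mix, rcond=None)[0]
print(np.round(c_hat, 2))
```

The NAP/OSC/DOSC variants in the abstract add preprocessing steps that remove interfering signal before this same least-squares core.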

  10. Identification of filamentous fungi isolates by MALDI-TOF mass spectrometry: clinical evaluation of an extended reference spectra library.

    PubMed

    Becker, Pierre T; de Bel, Annelies; Martiny, Delphine; Ranque, Stéphane; Piarroux, Renaud; Cassagne, Carole; Detandt, Monique; Hendrickx, Marijke

    2014-11-01

    The identification of filamentous fungi by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) relies mainly on a robust and extensive database of reference spectra. To this end, a large in-house library containing 760 strains and representing 472 species was built and evaluated on 390 clinical isolates by comparing MALDI-TOF MS with the classical identification method based on morphological observations. The use of MALDI-TOF MS resulted in the correct identification of 95.4% of the isolates at species level, without considering LogScore values. Taking into account Bruker's cutoff value for reliability (LogScore >1.70), 85.6% of the isolates were correctly identified. For a number of isolates, microscopic identification was limited to the genus, resulting in only 61.5% of the isolates correctly identified at species level while the correctness reached 94.6% at genus level. Using this extended in-house database, MALDI-TOF MS thus appears superior to morphology in order to obtain a robust and accurate identification of filamentous fungi. A continuous extension of the library is however necessary to further improve its reliability. Indeed, 15 isolates were still not represented while an additional three isolates were not recognized, probably because of a lack of intraspecific variability of the corresponding species in the database. © The Author 2014. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Application of classical simulations for the computation of vibrational properties of free molecules.

    PubMed

    Tikhonov, Denis S; Sharapa, Dmitry I; Schwabedissen, Jan; Rybkin, Vladimir V

    2016-10-12

    In this study, we investigate the ability of classical molecular dynamics (MD) and Monte-Carlo (MC) simulations to model intramolecular vibrational motion. These simulations were used to compute thermally averaged geometrical structures and infrared vibrational intensities for a benchmark set previously studied by gas electron diffraction (GED): CS2, benzene, chloromethylthiocyanate, pyrazinamide and 9,12-I2-1,2-closo-C2B10H10. The MD sampling of NVT ensembles was performed using chains of Nosé-Hoover thermostats (NH) as well as the generalized Langevin equation thermostat (GLE). The performance of the theoretical models based on the classical MD and MC simulations was compared with the experimental data and also with alternative computational techniques: a conventional approach based on the Taylor expansion of the potential energy surface, path-integral MD, and MD with a quantum thermal bath (QTB) based on the generalized Langevin equation (GLE). A straightforward application of the classical simulations resulted, as expected, in poor accuracy of the calculated observables due to the complete neglect of quantum effects. However, the introduction of a posteriori quantum corrections significantly improved the situation. The application of these corrections to MD simulations of systems with large-amplitude motions was demonstrated for chloromethylthiocyanate. The comparison of the theoretical vibrational spectra revealed that the GLE thermostat used in this work is not applicable for this purpose. On the other hand, the NH chains yielded reasonably good results.
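    One common a posteriori quantum correction (sketched here generically, not necessarily the authors' specific scheme) rescales classical harmonic mode amplitudes by the quantum/classical ratio of the mean-square displacement, (u/2)·coth(u/2) with u = ħω/kT.

```python
import math

def qc_factor(u):
    """Quantum/classical ratio of <x^2> for a harmonic mode, u = hbar*omega/kT.
    Classical: <x^2> = kT/(m*omega^2); quantum: (hbar/2m*omega)*coth(u/2)."""
    return (u / 2.0) / math.tanh(u / 2.0)

# High temperature (u -> 0): the classical result is recovered, factor -> 1
print(round(qc_factor(1e-4), 6))
# Low temperature (u = 10): zero-point motion dominates and a purely
# classical simulation underestimates the vibrational amplitude ~5-fold
print(round(qc_factor(10.0), 3))
```

This is exactly why the uncorrected classical MD/MC results in the abstract are poor for stiff modes at room temperature, where u is large, while the corrected ones improve markedly.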

  12. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partly yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
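    The key contrast between the two error types, classical error attenuates the regression slope while Berkson error does not, can be shown in a short simulation. The sketch below uses a plain linear regression with made-up variances, not the paper's mixed-model setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200_000, 2.0

def slope(w, y):
    """Ordinary least-squares slope of y on a single regressor w."""
    return float(np.cov(w, y)[0, 1] / np.var(w))

# Classical error: we observe w = x + u; the slope attenuates by the
# reliability ratio var(x) / (var(x) + var(u)) = 1/2 here
x = rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, 1, n)
b_classical = slope(w, y)

# Berkson error: the true exposure scatters around the assigned value w;
# regressing y on w stays (approximately) unbiased, only noisier
w2 = rng.normal(0, 1, n)
x2 = w2 + rng.normal(0, 1, n)
y2 = beta * x2 + rng.normal(0, 0.5, n)
b_berkson = slope(w2, y2)

print(round(b_classical, 2), round(b_berkson, 2))
```

With equal error and exposure variances, the classical-error slope lands near β/2 while the Berkson-error slope stays near β, which is why mixtures of the two types need the tailored bias analysis described above.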

  13. Classical multiparty computation using quantum resources

    NASA Astrophysics Data System (ADS)

    Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie

    2017-12-01

    In this work, we demonstrate a way to perform classical multiparty computing among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
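    Why AND is "beyond the individual capabilities" of XOR-limited clients can be checked by brute force: over GF(2), the functions reachable with XOR gates are affine, and no affine function of two bits equals their AND. This check is a generic illustration, not part of the paper's protocol.

```python
from itertools import product

def affine(a, b, c):
    """An affine Boolean function over GF(2): (a AND x) XOR (b AND y) XOR c."""
    return lambda x, y: (a & x) ^ (b & y) ^ c

# Try all 8 affine functions of two bits against the AND truth table
representable = any(
    all(affine(a, b, c)(x, y) == (x & y) for x, y in product((0, 1), repeat=2))
    for a, b, c in product((0, 1), repeat=3)
)
print(representable)
```

The check prints False: AND is nonlinear over GF(2), so the quantum resource genuinely extends what the XOR-restricted clients can compute.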

  14. Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Tan, Vincent Y. F.

    2015-08-01

    We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.

  15. Extending Bell's beables to encompass dissipation, decoherence, and the quantum-to-classical transition through quantum trajectories

    NASA Astrophysics Data System (ADS)

    Lorenzen, F.; de Ponte, M. A.; Moussa, M. H. Y.

    2009-09-01

    In this paper, employing the Itô stochastic Schrödinger equation, we extend Bell’s beable interpretation of quantum mechanics to encompass dissipation, decoherence, and the quantum-to-classical transition through quantum trajectories. For a particular choice of the source of stochasticity, the one leading to a dissipative Lindblad-type correction to the Hamiltonian dynamics, we find that the diffusive terms in Nelson’s stochastic trajectories are naturally incorporated into Bohm’s causal dynamics, yielding a unified Bohm-Nelson theory. In particular, by analyzing the interference between quantum trajectories, we clearly identify the decoherence time, as estimated from the quantum formalism. We also observe the quantum-to-classical transition in the convergence of the infinite ensemble of quantum trajectories to their classical counterparts. Finally, we show that our extended beables circumvent the problems in Bohm’s causal dynamics regarding stationary states in quantum mechanics.

  16. Multiscale Free Energy Simulations: An Efficient Method for Connecting Classical MD Simulations to QM or QM/MM Free Energies Using Non-Boltzmann Bennett Reweighting Schemes

    PubMed Central

    2015-01-01

    The reliability of free energy simulations (FES) is limited by two factors: (a) the need for correct sampling and (b) the accuracy of the computational method employed. Classical methods (e.g., force fields) are typically used for FES and present a myriad of challenges, with parametrization being a principal one. On the other hand, parameter-free quantum mechanical (QM) methods tend to be too computationally expensive for adequate sampling. One widely used approach is a combination of methods, where the free energy difference between the two end states is computed by, e.g., molecular mechanics (MM), and the end states are corrected by more accurate methods, such as QM or hybrid QM/MM techniques. Here we report two new approaches that significantly improve the aforementioned scheme, with a focus on how to compute corrections between, e.g., the MM and the more accurate QM calculations. First, a molecular dynamics trajectory that properly samples relevant conformational degrees of freedom is generated. Next, potential energies of each trajectory frame are generated with a QM or QM/MM Hamiltonian. Free energy differences are then calculated based on the QM or QM/MM energies using either a non-Boltzmann Bennett approach (QM-NBB) or non-Boltzmann free energy perturbation (NB-FEP). Both approaches are applied to calculate relative and absolute solvation free energies in explicit and implicit solvent environments. Solvation free energy differences (relative and absolute) between ethane and methanol in explicit solvent are used as the initial test case for QM-NBB. Next, implicit solvent methods are employed in conjunction with both QM-NBB and NB-FEP to compute absolute solvation free energies for 21 compounds. These compounds range from small molecules such as ethane and methanol to fairly large, flexible solutes, such as triacetyl glycerol. Several technical aspects were investigated.
Ultimately some best practices are suggested for improving methods that seek to connect MM to QM (or QM/MM) levels of theory in FES. PMID:24803863

  17. Multiscale Free Energy Simulations: An Efficient Method for Connecting Classical MD Simulations to QM or QM/MM Free Energies Using Non-Boltzmann Bennett Reweighting Schemes.

    PubMed

    König, Gerhard; Hudson, Phillip S; Boresch, Stefan; Woodcock, H Lee

    2014-04-08

The reliability of free energy simulations (FES) is limited by two factors: (a) the need for correct sampling and (b) the accuracy of the computational method employed. Classical methods (e.g., force fields) are typically used for FES and present a myriad of challenges, with parametrization being a principal one. On the other hand, parameter-free quantum mechanical (QM) methods tend to be too computationally expensive for adequate sampling. One widely used approach is a combination of methods, where the free energy difference between the two end states is computed by, e.g., molecular mechanics (MM), and the end states are corrected by more accurate methods, such as QM or hybrid QM/MM techniques. Here we report two new approaches that significantly improve the aforementioned scheme, with a focus on how to compute corrections between, e.g., the MM and the more accurate QM calculations. First, a molecular dynamics trajectory that properly samples relevant conformational degrees of freedom is generated. Next, potential energies of each trajectory frame are generated with a QM or QM/MM Hamiltonian. Free energy differences are then calculated based on the QM or QM/MM energies using either a non-Boltzmann Bennett approach (QM-NBB) or non-Boltzmann free energy perturbation (NB-FEP). Both approaches are applied to calculate relative and absolute solvation free energies in explicit and implicit solvent environments. Solvation free energy differences (relative and absolute) between ethane and methanol in explicit solvent are used as the initial test case for QM-NBB. Next, implicit solvent methods are employed in conjunction with both QM-NBB and NB-FEP to compute absolute solvation free energies for 21 compounds. These compounds range from small molecules such as ethane and methanol to fairly large, flexible solutes, such as triacetyl glycerol. Several technical aspects were investigated. 
Ultimately some best practices are suggested for improving methods that seek to connect MM to QM (or QM/MM) levels of theory in FES.
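The correction step described in the abstract (MM sampling, then single-point high-level energies, then exponential reweighting) can be sketched in a few lines. This is a minimal one-sided Zwanzig-style estimator in the spirit of NB-FEP; the temperature and the Gaussian toy energy differences are illustrative assumptions, not data from the paper.

```python
import numpy as np

def fep_correction(e_low, e_high, kT=0.596):
    """One-sided free energy perturbation (Zwanzig) estimate of the
    low->high level correction, Delta A = -kT ln < exp(-(E_high - E_low)/kT) >_low,
    with energies (kcal/mol) sampled from the low-level (e.g., MM) ensemble."""
    dE = np.asarray(e_high) - np.asarray(e_low)
    # Shift by the minimum for numerical stability of the exponential average.
    shift = dE.min()
    return shift - kT * np.log(np.mean(np.exp(-(dE - shift) / kT)))

# Synthetic example: pretend QM energies differ from MM by Gaussian noise.
rng = np.random.default_rng(0)
dE = rng.normal(loc=2.0, scale=0.5, size=5000)  # E_QM - E_MM, kcal/mol
corr = fep_correction(np.zeros_like(dE), dE)
# For Gaussian dE the analytic answer is mean - var/(2 kT).
print(round(float(corr), 2))
```

For Gaussian energy differences the estimator should land near 2.0 - 0.25/(2 x 0.596), about 1.79 kcal/mol, which is a useful sanity check on any reweighting implementation.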

  18. First-Principles Molecular Dynamics Simulations of NaCl in Water: Performance of Advanced Exchange-Correlation Approximations in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Yao, Yi; Kanai, Yosuke

Our ability to correctly model the association of oppositely charged ions in water is fundamental in physical chemistry and essential to various technological and biological applications of molecular dynamics (MD) simulations. MD simulations using classical force fields often show strong clustering of NaCl in aqueous ionic solutions as a consequence of a deep contact pair minimum in the potential of mean force (PMF) curve. First-principles molecular dynamics (FPMD) based on density functional theory (DFT) with the popular PBE exchange-correlation approximation, on the other hand, shows a different result, with a shallow contact pair minimum in the PMF. We employed two of the most promising exchange-correlation approximations, ωB97X-V by Mardirossian and Head-Gordon and SCAN by Sun, Ruzsinszky, and Perdew, to examine the PMF using FPMD simulations. ωB97X-V is a highly empirical functional, optimized in the space of range-separated hybrids with a dispersion correction, while SCAN is the most recent meta-GGA functional, constructed by satisfying various known conditions in well-defined physical limits. We will discuss our findings for the PMF, charge transfer, water dipoles, etc.
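The contact-pair minimum discussed above is read off the potential of mean force, which follows from the ion-ion radial distribution function via W(r) = -kT ln g(r). A minimal sketch with a synthetic g(r) (the peak positions and heights are illustrative, not simulation data):

```python
import numpy as np

kT = 0.596  # kcal/mol at ~300 K

# Synthetic Na-Cl radial distribution function with a contact peak and a
# solvent-separated peak (illustrative numbers, not from any simulation).
r = np.linspace(2.0, 8.0, 601)  # angstrom
g = 1.0 + 2.5 * np.exp(-((r - 2.8) / 0.25) ** 2) \
        + 0.6 * np.exp(-((r - 5.0) / 0.4) ** 2)

# Potential of mean force, up to an additive constant:
W = -kT * np.log(g)

# The contact-pair minimum of the PMF sits at the first maximum of g(r).
r_contact = r[np.argmin(W)]
depth = W.min()
print(round(float(r_contact), 2), round(float(depth), 2))
```

A deep contact minimum in W(r) corresponds to the strong ion pairing seen with classical force fields; a shallower one corresponds to the PBE-type FPMD picture.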

  19. Axioms for quantum mechanics: relativistic causality, retrocausality, and the existence of a classical limit

    NASA Astrophysics Data System (ADS)

    Rohrlich, Daniel

Y. Aharonov and A. Shimony both conjectured that two axioms - relativistic causality (``no superluminal signalling'') and nonlocality - so nearly contradict each other that only quantum mechanics reconciles them. Can we indeed derive quantum mechanics, at least in part, from these two axioms? No: ``PR-box'' correlations show that quantum correlations are not the most nonlocal correlations consistent with relativistic causality. Here we replace ``nonlocality'' with ``retrocausality'' and supplement the axioms of relativistic causality and retrocausality with a natural and minimal third axiom: the existence of a classical limit, in which macroscopic observables commute. That is, just as quantum mechanics has a classical limit, so must any generalization of quantum mechanics. In this limit, PR-box correlations violate relativistic causality. Generalized to all stronger-than-quantum bipartite correlations, this result is a derivation of Tsirelson's bound (a theorem of quantum mechanics) from the three axioms of relativistic causality, retrocausality and the existence of a classical limit. Although the derivation does not assume quantum mechanics, it points to the Hilbert space structure that underlies quantum correlations. I thank the John Templeton Foundation (Project ID 43297) and the Israel Science Foundation (Grant No. 1190/13) for support.

  20. Surface code quantum communication.

    PubMed

    Fowler, Austin G; Wang, David S; Hill, Charles D; Ladd, Thaddeus D; Van Meter, Rodney; Hollenberg, Lloyd C L

    2010-05-07

    Quantum communication typically involves a linear chain of repeater stations, each capable of reliable local quantum computation and connected to their nearest neighbors by unreliable communication links. The communication rate of existing protocols is low as two-way classical communication is used. By using a surface code across the repeater chain and generating Bell pairs between neighboring stations with probability of heralded success greater than 0.65 and fidelity greater than 0.96, we show that two-way communication can be avoided and quantum information can be sent over arbitrary distances with arbitrarily low error at a rate limited only by the local gate speed. This is achieved by using the unreliable Bell pairs to measure nonlocal stabilizers and feeding heralded failure information into post-transmission error correction. Our scheme also applies when the probability of heralded success is arbitrarily low.
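For intuition on the heralded-success threshold quoted above: with success probability p per attempt, the number of attempts needed per Bell pair is geometric with mean 1/p, so even at the p = 0.65 threshold each link needs fewer than two attempts on average. This is a back-of-envelope sketch, not the paper's analysis:

```python
# With heralded Bell-pair generation succeeding with probability p per attempt,
# the attempt count is geometric with mean 1/p. Heralded failures can then be
# absorbed into post-transmission error correction instead of requiring slow
# two-way classical signalling between stations.
def expected_attempts(p):
    if not 0 < p <= 1:
        raise ValueError("success probability must lie in (0, 1]")
    return 1.0 / p

print(round(expected_attempts(0.65), 3))
```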

  1. Polymer quantization of the Einstein-Rosen wormhole throat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunstatter, Gabor; Peltola, Ari; Louko, Jorma

    2010-01-15

We present a polymer quantization of spherically symmetric Einstein gravity in which the polymerized variable is the area of the Einstein-Rosen wormhole throat. In the classical polymer theory, the singularity is replaced by a bounce at a radius that depends on the polymerization scale. In the polymer quantum theory, we show numerically that the area spectrum is evenly spaced and in agreement with a Bohr-Sommerfeld semiclassical estimate, and this spectrum is not qualitatively sensitive to issues of factor ordering or boundary conditions except in the lowest few eigenvalues. In the limit of small polymerization scale we recover, within the numerical accuracy, the area spectrum obtained from a Schroedinger quantization of the wormhole throat dynamics. The prospects of recovering from the polymer throat theory a full quantum-corrected spacetime are discussed.

  2. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures

    NASA Astrophysics Data System (ADS)

    Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico

    2017-06-01

Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. 
According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to be between 1 and 100 GHz. Such constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology only to the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm^2 for the 10- and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.

  3. Lens correction algorithm based on the see-saw diagram to correct Seidel aberrations employing aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Rosete-Aguilar, Martha

    2000-06-01

In this paper a lens correction algorithm based on the see-saw diagram developed by Burch is described. The see-saw diagram describes the image correction in rotationally symmetric systems over a finite field of view by means of aspheric surfaces. The algorithm is applied to the design of some basic telescopic configurations such as the classical Cassegrain telescope, the Dall-Kirkham telescope, the Pressman-Camichel telescope and the Ritchey-Chretien telescope in order to show a physically visualizable concept of image correction for optical systems that employ aspheric surfaces. By using the see-saw method the student can visualize the different possible configurations of such telescopes as well as their performances, and can also understand that it is not always possible to correct more primary aberrations by aspherizing more surfaces.

  4. Logarithmic corrections to entropy of magnetically charged AdS4 black holes

    NASA Astrophysics Data System (ADS)

    Jeon, Imtak; Lal, Shailesh

    2017-11-01

    Logarithmic terms are quantum corrections to black hole entropy determined completely from classical data, thus providing a strong check for candidate theories of quantum gravity purely from physics in the infrared. We compute these terms in the entropy associated to the horizon of a magnetically charged extremal black hole in AdS4×S7 using the quantum entropy function and discuss the possibility of matching against recently derived microscopic expressions.

  5. The strengths and limitations of effective centroid force models explored by studying isotopic effects in liquid water

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Li, Jicun; Li, Xin-Zheng; Wang, Feng

    2018-05-01

    The development of effective centroid potentials (ECPs) is explored with both the constrained-centroid and quasi-adiabatic force matching using liquid water as a test system. A trajectory integrated with the ECP is free of statistical noises that would be introduced when the centroid potential is approximated on the fly with a finite number of beads. With the reduced cost of ECP, challenging experimental properties can be studied in the spirit of centroid molecular dynamics. The experimental number density of H2O is 0.38% higher than that of D2O. With the ECP, the H2O number density is predicted to be 0.42% higher, when the dispersion term is not refit. After correction of finite size effects, the diffusion constant of H2O is found to be 21% higher than that of D2O, which is in good agreement with the 29.9% higher diffusivity for H2O observed experimentally. Although the ECP is also able to capture the redshifts of both the OH and OD stretching modes in liquid water, there are a number of properties that a classical simulation with the ECP will not be able to recover. For example, the heat capacities of H2O and D2O are predicted to be almost identical and higher than the experimental values. Such a failure is simply a result of not properly treating quantized vibrational energy levels when the trajectory is propagated with classical mechanics. Several limitations of the ECP based approach without bead population reconstruction are discussed.

  6. REEXAMINATION OF INDUCTION HEATING OF PRIMITIVE BODIES IN PROTOPLANETARY DISKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menzel, Raymond L.; Roberge, Wayne G., E-mail: menzer@rpi.edu, E-mail: roberw@rpi.edu

    2013-10-20

We reexamine the unipolar induction mechanism for heating asteroids originally proposed in a classic series of papers by Sonett and collaborators. As originally conceived, induction heating is caused by the 'motional electric field' that appears in the frame of an asteroid immersed in a fully ionized, magnetized solar wind and drives currents through its interior. However, we point out that classical induction heating contains a subtle conceptual error, in consequence of which the electric field inside the asteroid was calculated incorrectly. The problem is that the motional electric field used by Sonett et al. is the electric field in the freely streaming plasma far from the asteroid; in fact, the motional field vanishes at the asteroid surface for realistic assumptions about the plasma density. In this paper we revisit and improve the induction heating scenario by (1) correcting the conceptual error by self-consistently calculating the electric field in and around the boundary layer at the asteroid-plasma interface; (2) considering weakly ionized plasmas consistent with current ideas about protoplanetary disks; and (3) considering more realistic scenarios that do not require a fully ionized, powerful T Tauri wind in the disk midplane. We present exemplary solutions for two highly idealized flows that show that the interior electric field can either vanish or be comparable to the fields predicted by classical induction depending on the flow geometry. We term the heating driven by these flows 'electrodynamic heating', calculate its upper limits, and compare them to heating produced by short-lived radionuclides.

  7. A Matched Filter Technique for Slow Radio Transient Detection and First Demonstration with the Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.

    2017-03-01

    Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies (<300 MHz) remains relatively unexplored. Blind surveys with new wide-field radio instruments are setting increasingly stringent limits on the transient surface density on various timescales. Although many of these instruments are limited by classical confusion noise from an ensemble of faint, unresolved sources, one can in principle detect transients below the classical confusion limit to the extent that the classical confusion noise is independent of time. We develop a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ˜20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
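The core of the technique above, a temporal matched filter applied directly to an image-pixel time series, can be sketched as follows. The top-hat transient, noise level, and template width are illustrative assumptions, not values from the survey:

```python
import numpy as np

rng = np.random.default_rng(1)

# Time series of flux measurements for one image pixel (arbitrary units):
# Gaussian noise of known rms sigma, plus an injected top-hat transient
# whose per-sample amplitude equals the noise rms.
n, sigma = 500, 1.0
flux = rng.normal(0.0, sigma, n)
flux[200:230] += 1.0  # transient: 30-sample top-hat

# Matched filter: correlate with a unit-norm top-hat template and express
# the result in units of the noise, giving a detection statistic in sigma.
width = 30
template = np.ones(width) / np.sqrt(width)
stat = np.correlate(flux, template, mode="valid") / sigma

peak = stat.max()
print(round(float(peak), 1), int(np.argmax(stat)))
```

The filter concentrates the transient's signal-to-noise: a per-sample 1-sigma bump becomes a roughly sqrt(30)-sigma peak in the detection statistic, which is what lets searches reach below the single-image confusion or noise limit when the confusion noise is stable in time.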

  8. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters that accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, which can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density functional and the 6-31+G(d,p) basis set. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained with these Lennard-Jones parameters is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
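The final step described above, adjusting Lennard-Jones parameters against higher-level reference energies, amounts to a least-squares fit of the 12-6 pair potential. In this sketch the "reference" curve is synthetic stand-in data and the epsilon/sigma values are illustrative assumptions, not the paper's methane parameters:

```python
import numpy as np

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Hypothetical reference data standing in for DCP/CCSD(T)-level pair energies
# (kcal/mol vs. angstrom), generated from eps=0.29, sigma=3.64 plus noise.
rng = np.random.default_rng(2)
r = np.linspace(3.4, 6.0, 14)
e_ref = lj(r, 0.29, 3.64) + rng.normal(0.0, 0.005, r.size)

# Least-squares fit by brute-force grid search (numpy only; a production
# workflow would use a gradient-based optimizer).
eps_grid = np.linspace(0.20, 0.40, 201)
sig_grid = np.linspace(3.50, 3.80, 301)
E = lj(r[None, None, :], eps_grid[:, None, None], sig_grid[None, :, None])
sse = ((E - e_ref) ** 2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(sse), sse.shape)
eps_fit, sigma_fit = float(eps_grid[i]), float(sig_grid[j])
print(round(eps_fit, 3), round(sigma_fit, 3))
```

With low-noise reference energies, the well depth (epsilon) and size parameter (sigma) are recovered to within a few grid steps, which is the essential content of fitting force-field nonbonded terms to higher-level data.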

  9. Large-amplitude collective motion in nuclei: a microscopic approach (Mouvements collectifs de grandes amplitudes dans les noyaux : une approche microscopique)

    NASA Astrophysics Data System (ADS)

    Giannoni, M.-J.

Various aspects of the adiabatic limit of the time-dependent Hartree-Fock approximation are studied. This formalism is a mean field theory for nuclear collective motion which provides microscopic foundations for the successful phenomenological collective models, and whose validity is not restricted to small amplitude phenomena. Emphasis is put on the classical Hamiltonian-like structure of the dynamical equations. Several limiting cases of the general formalism are considered: the Random Phase Approximation, nuclear hydrodynamics, and the case of a single collective variable. Applications to low-lying vibrational modes are described. Results are discussed in terms of sum rules. A quantitative comparison between self-consistent and Inglis cranking mass parameters is made. Important dynamical corrections to the Hartree-Fock ground state are expected for soft nuclei. (French abstract, translated:) We study various aspects of the time-dependent Hartree-Fock approximation in the adiabatic limit. This formalism is a mean-field theory suited to the description of collective phenomena in nuclei, whose domain of validity is not limited to small-amplitude motion; moreover, through the adiabatic approximation, it provides a microscopic understanding of the purely phenomenological collective models. The classical Hamiltonian structure of the equations of motion is studied in detail. Several limiting cases of the general formalism are considered: the random phase approximation (RPA), the hydrodynamic limit, and the reduction to a single collective variable. Within this last limiting case, the mass parameters for the quadrupole vibrational modes of several nuclei are computed. The results are discussed in terms of sum rules, and the self-consistent mass parameters are compared with the Inglis mass parameters. The formalism leads to important corrections, of dynamical origin, to the Hartree-Fock ground state for soft nuclei.

  10. Thermodynamics of finite systems: a key issues review

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2018-07-01

    A little over ten years ago, Campisi, and Dunkel and Hilbert, published papers claiming that the Gibbs (volume) entropy of a classical system was correct, and that the Boltzmann (surface) entropy was not. They claimed further that the quantum version of the Gibbs entropy was also correct, and that the phenomenon of negative temperatures was thermodynamically inconsistent. Their work began a vigorous debate of exactly how the entropy, both classical and quantum, should be defined. The debate has called into question the basis of thermodynamics, along with fundamental ideas such as whether heat always flows from hot to cold. The purpose of this paper is to sum up the present status—admittedly from my point of view. I will show that standard thermodynamics, with some minor generalizations, is correct, and the alternative thermodynamics suggested by Hilbert, Hänggi, and Dunkel is not. Heat does not flow from cold to hot. Negative temperatures are thermodynamically consistent. The small ‘errors’ in the Boltzmann entropy that started the whole debate are shown to be a consequence of the micro-canonical assumption of an energy distribution of zero width. Improved expressions for the entropy are found when this assumption is abandoned.

  11. Nucleation theory - Is replacement free energy needed?. [error analysis of capillary approximation

    NASA Technical Reports Server (NTRS)

    Doremus, R. H.

    1982-01-01

It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
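In the capillary approximation discussed above, classical (Volmer) nucleation theory gives a critical radius r* = 2*sigma*v / (kT ln S) and a barrier dG* = 16*pi*sigma^3*v^2 / (3*(kT ln S)^2). A sketch with water-like illustrative numbers; the surface tension, molecular volume, and supersaturation are assumptions, not values from the paper:

```python
from math import pi, log

# Classical nucleation of liquid from vapor in the capillary approximation.
kB = 1.380649e-23   # J/K
T = 300.0           # K
sigma = 0.072       # surface tension, J/m^2 (water-like, assumed)
v = 3.0e-29         # molecular volume of the liquid, m^3 (assumed)
S = 4.0             # supersaturation ratio p/p_sat (assumed)

dmu = kB * T * log(S)                                   # drive per molecule
r_star = 2.0 * sigma * v / dmu                          # critical radius
dG_star = 16.0 * pi * sigma**3 * v**2 / (3.0 * dmu**2)  # barrier height

print(round(r_star * 1e9, 2), "nm;", round(dG_star / (kB * T), 1), "kT")
```

With these numbers the critical nucleus is under a nanometer and the barrier is a few tens of kT, the regime where the exponential sensitivity of the nucleation rate makes any proposed "replacement free energy" correction observable in principle.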

  12. Effective model hierarchies for dynamic and static classical density functional theories

    NASA Astrophysics Data System (ADS)

    Majaniemi, S.; Provatas, N.; Nonomura, M.

    2010-09-01

    The origin and methodology of deriving effective model hierarchies are presented with applications to solidification of crystalline solids. In particular, it is discussed how the form of the equations of motion and the effective parameters on larger scales can be obtained from the more microscopic models. It will be shown that tying together the dynamic structure of the projection operator formalism with static classical density functional theories can lead to incomplete (mass) transport properties even though the linearized hydrodynamics on large scales is correctly reproduced. To facilitate a more natural way of binding together the dynamics of the macrovariables and classical density functional theory, a dynamic generalization of density functional theory based on the nonequilibrium generating functional is suggested.

  13. Tuberculosis detection and the challenges of integrated care in rural China: A cross-sectional standardized patient study.

    PubMed

    Sylvia, Sean; Xue, Hao; Zhou, Chengchao; Shi, Yaojiang; Yi, Hongmei; Zhou, Huan; Rozelle, Scott; Pai, Madhukar; Das, Jishnu

    2017-10-01

Despite recent reductions in prevalence, China still faces a substantial tuberculosis (TB) burden, with future progress dependent on the ability of rural providers to appropriately detect and refer TB patients for further care. This study (a) provides a baseline assessment of the ability of rural providers to correctly manage presumptive TB cases; (b) measures the gap between provider knowledge and practice; and (c) evaluates how ongoing reforms of China's health system, characterized by a movement toward "integrated care" and promotion of initial contact with grassroots providers, will affect the care of TB patients. Unannounced standardized patients (SPs) presenting with classic pulmonary TB symptoms were deployed in 3 provinces of China in July 2015. The SPs successfully completed 274 interactions across all 3 tiers of China's rural health system, interacting with providers in 46 village clinics, 207 township health centers, and 21 county hospitals. Interactions between providers and standardized patients were assessed against international and national standards of TB care. Using a lenient definition of correct management as at least a referral, chest X-ray or sputum test, 41% (111 of 274) of SPs were correctly managed. Although there were no cases of empirical anti-TB treatment, antibiotics unrelated to the treatment of TB were prescribed in 168 of 274 interactions, or 61.3% (95% CI: 55%-67%). Correct management proportions were significantly higher at county hospitals than at township health centers (OR 0.06, 95% CI: 0.01-0.25, p < 0.001) and village clinics (OR 0.02, 95% CI: 0.0-0.17, p < 0.001). Correct management in tests of knowledge administered to the same 274 physicians for the same case was 45 percentage points (95% CI: 37%-53%) higher, with 24 percentage points (95% CI: -33% to -15%) fewer antibiotic prescriptions. 
Relative to the current system, where patients can choose to bypass any level of care, simulations suggest that a system of managed referral with gatekeeping at the level of village clinics would reduce proportions of correct management from 41% to 16%, while gatekeeping at the level of the township hospital would retain correct management close to current levels at 37%. The main limitations of the study are 2-fold. First, we evaluate the management of a one-time new patient presenting with presumptive TB, which may not reflect how providers manage repeat patients or more complicated TB presentations. Second, simulations under alternate policies require behavioral and statistical assumptions that should be addressed in future applications of this method. There were significant quality deficits among village clinics and township health centers in the management of a classic case of presumptive TB, with higher proportions of correct case management in county hospitals. Poor clinical performance does not arise only from a lack of knowledge, a phenomenon known as the "know-do" gap. Given significant deficits in quality of care, reforms encouraging first contact with lower tiers of the health system can improve efficiency only with concomitant improvements in appropriate management of presumptive TB patients in village clinics and township health centers.
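The headline proportion above (111 of 274 interactions correctly managed, about 41%) can be checked in a few lines, here with a Wilson score interval; the interval is our illustration, not one reported in the study:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# 111 of 274 standardized-patient interactions correctly managed
# (lenient definition: at least a referral, chest X-ray, or sputum test).
p, lo, hi = wilson_ci(111, 274)
print(round(100 * p, 1), round(100 * lo, 1), round(100 * hi, 1))
```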

  14. Empirical resistive-force theory for slender biological filaments in shear-thinning fluids

    NASA Astrophysics Data System (ADS)

    Riley, Emily E.; Lauga, Eric

    2017-06-01

Many cells exploit the bending or rotation of flagellar filaments in order to self-propel in viscous fluids. While appropriate theoretical modeling is available to capture flagellar locomotion in simple, Newtonian fluids, formidable computations are required to address theoretically their locomotion in complex, nonlinear fluids, e.g., mucus. Based on experimental measurements for the motion of rigid rods in non-Newtonian fluids and on the classical Carreau fluid model, we propose empirical extensions of the classical Newtonian resistive-force theory to model the waving of slender filaments in non-Newtonian fluids. By assuming the flow near the flagellum to be locally Newtonian, we propose a self-consistent way to estimate the typical shear rate in the fluid, which we then use to construct correction factors to the Newtonian local drag coefficients. The resulting non-Newtonian resistive-force theory, while empirical, is consistent with the Newtonian limit, and with the experiments. We then use our models to address waving locomotion in non-Newtonian fluids and show that the resulting swimming speeds are systematically lowered, a result which we are able to capture asymptotically and to interpret physically. An application of the models to recent experimental results on the locomotion of Caenorhabditis elegans in polymeric solutions shows reasonable agreement and thus captures the main physics of swimming in shear-thinning fluids.
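The construction described above, estimating a typical local shear rate and rescaling the Newtonian drag coefficients by the local viscosity, can be sketched with the Carreau model. The Carreau parameters below are hypothetical, mucus-like values, not fitted data from the paper:

```python
def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
    """Carreau model: eta = eta_inf + (eta0 - eta_inf)*(1 + (lam*gdot)^2)^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

# Illustrative shear-thinning parameters (hypothetical):
eta0, eta_inf, lam, n = 1.0, 1e-3, 3.0, 0.5

# If the flow near the flagellum is locally Newtonian at a typical shear
# rate set by the beat, the Newtonian resistive-force drag coefficients can
# be rescaled by the local viscosity ratio eta(gdot)/eta0.
for gdot in (0.1, 1.0, 10.0):
    factor = carreau_viscosity(gdot, eta0, eta_inf, lam, n) / eta0
    print(gdot, round(float(factor), 3))
```

The correction factor decreases monotonically with shear rate, which is why swimming speeds predicted by the empirical theory drop relative to the Newtonian case.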

  15. Quantum Stabilizer Codes Can Realize Access Structures Impossible by Classical Secret Sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

We show a simple example of a secret sharing scheme encoding a classical secret into quantum shares that can realize an access structure impossible by classical information processing with a limitation on the size of each share. The example is based on quantum stabilizer codes.

  16. An unusual presentation of eosinophilic angiocentric fibrosis.

    PubMed

    Hardman, Joel; Toon, Christopher; Nirmalananda, Arjuna

    2017-12-01

    Eosinophilic angiocentric fibrosis (EAF) is a rare, benign condition affecting the respiratory mucosa and is generally characterized by locally destructive growth. We present the case of a woman with a saddle nose deformity that had for many years been treated as granulomatosis with polyangiitis (GPA), of which saddle nose deformity is a classic feature. At the time of surgery she was found to have subglottic stenosis, another classic feature of GPA; however, histology demonstrated EAF. We discuss the differences between the two conditions and highlight the importance of making the correct diagnosis.

  17. The equivalence principle in a quantum world

    NASA Astrophysics Data System (ADS)

    Bjerrum-Bohr, N. E. J.; Donoghue, John F.; El-Menoufi, Basem Kamal; Holstein, Barry R.; Planté, Ludovic; Vanhove, Pierre

    2015-09-01

    We show how modern methods can be applied to quantum gravity at low energy. We test how quantum corrections challenge the classical framework behind the equivalence principle (EP), for instance through the introduction of nonlocality from quantum physics, embodied in the uncertainty principle. When the energy is small, we now have the tools to address this conflict explicitly. Despite the violation of some classical concepts, the EP continues to provide the core of the quantum gravity framework through the symmetry of general coordinate invariance, which is used to organize the effective field theory (EFT).

  18. On the dynamical and geometrical symmetries of Keplerian motion

    NASA Astrophysics Data System (ADS)

    Wulfman, Carl E.

    2009-05-01

    The dynamical symmetries of classical, relativistic and quantum-mechanical Kepler systems are considered to arise from geometric symmetries in PQET phase space. To establish their interconnection, the symmetries are related with the aid of a Lie-algebraic extension of Dirac's correspondence principle, a canonical transformation containing a Cunningham-Bateman inversion, and a classical limit involving a preliminary canonical transformation in ET space. The Lie-algebraic extension establishes the conditions under which the uncertainty principle allows the local dynamical symmetry of a quantum-mechanical system to be the same as the geometrical phase-space symmetry of its classical counterpart. The canonical transformation converts Poincaré-invariant free-particle systems into ISO(3,1) invariant relativistic systems whose classical limit produces Keplerian systems. Locally Cartesian relativistic PQET coordinates are converted into a set of eight conjugate position and momentum coordinates whose classical limit contains Fock projective momentum coordinates and the components of Runge-Lenz vectors. The coordinate systems developed via the transformations are those in which the evolution and degeneracy groups of the classical system are generated by Poisson-bracket operators that produce ordinary rotation, translation and hyperbolic motions in phase space. The way in which these define classical Keplerian symmetries and symmetry coordinates is detailed. It is shown that for each value of the energy of a Keplerian system, the Poisson-bracket operators determine two invariant functions of positions and momenta, which together with its regularized Hamiltonian, define the manifold in six-dimensional phase space upon which motions evolve.
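
    The Runge-Lenz vector mentioned above is the conserved quantity underlying the hidden symmetry of Keplerian motion; its conservation can be checked numerically. This is a generic textbook illustration (m = k = 1, planar orbit), not code from the paper.

```python
import math

def accel(x, y, k=1.0):
    """Acceleration from the attractive Kepler potential V = -k/r."""
    r3 = (x * x + y * y) ** 1.5
    return -k * x / r3, -k * y / r3

def runge_lenz(x, y, px, py, m=1.0, k=1.0):
    """Planar Runge-Lenz vector A = p x L - m k r_hat (L along z)."""
    L = x * py - y * px
    r = math.hypot(x, y)
    return py * L - m * k * x / r, -px * L - m * k * y / r

# Integrate a bound orbit with velocity Verlet, starting at perihelion.
x, y, vx, vy, dt = 1.0, 0.0, 0.0, 1.2, 1e-3
ax, ay = accel(x, y)
A0 = runge_lenz(x, y, vx, vy)
for _ in range(5000):
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
A1 = runge_lenz(x, y, vx, vy)
print(A0, A1)  # A stays (nearly) constant along the orbit
```

    The vector points from the focus toward perihelion; its conservation is what fixes the orientation of the classical ellipse and generates the degeneracy group discussed above.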

  19. Non-Gaussianity in a quasiclassical electronic circuit

    NASA Astrophysics Data System (ADS)

    Suzuki, Takafumi J.; Hayakawa, Hisao

    2017-05-01

    We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuations of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable for correctly estimating the microscopic transfer events in the quantum point contact (QPC) with the quasiclassical inverse formula.

  20. Affordable, automatic quantitative fall risk assessment based on clinical balance scales and Kinect data.

    PubMed

    Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S

    2014-01-01

    The problem of correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the performance of the human movement control system, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales, i.e., limited granularity and limited inter-/intra-examiner reliability, and to obtain objective scores and more detailed information allowing fall risk to be predicted. We used a Microsoft Kinect to record subjects' movements while they performed challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained good accuracy (~82%) and especially a high sensitivity (~83%).
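
    The pipeline described (features extracted from recorded movements, fed to a supervised classifier trained on clinical scores) can be caricatured as follows. The feature names and the nearest-centroid classifier are illustrative stand-ins only, not the features or classifier actually used in the study.

```python
import math

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(col) for col in zip(*rows)]

def nearest_centroid_predict(x, centroids):
    """Assign x to the class whose centroid is closest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Hypothetical per-exercise features: [sway_amplitude_cm, completion_time_s]
train = {
    "low_risk":  [[1.0, 4.0], [1.2, 5.0], [0.8, 4.5]],
    "high_risk": [[3.5, 9.0], [4.0, 8.0], [3.0, 10.0]],
}
centroids = {label: centroid(rows) for label, rows in train.items()}
print(nearest_centroid_predict([3.8, 9.5], centroids))
```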

  1. Non-Equilibrium Turbulence and Two-Equation Modeling

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    2011-01-01

    Two-equation turbulence models are analyzed from the perspective of spectral closure theories. Kolmogorov theory provides useful information for models, but it is limited to equilibrium conditions in which the energy spectrum has relaxed to a steady state consistent with the forcing at large scales; it does not describe transient evolution between such states. Transient evolution is necessarily through nonequilibrium states, which can only be found from a theory of turbulence evolution, such as one provided by a spectral closure. When the departure from equilibrium is small, perturbation theory can be used to approximate the evolution by a two-equation model. The perturbation theory also gives explicit conditions under which this model can be valid, and when it will fail. Implications of the non-equilibrium corrections for the classic Tennekes-Lumley balance in the dissipation rate equation are drawn: it is possible to establish both the cancellation of the leading-order Re^(1/2) divergent contributions to vortex stretching and enstrophy destruction, and the existence of a nonzero difference which is finite in the limit of infinite Reynolds number.
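
    The equilibrium two-equation model discussed above can be illustrated with the standard k-epsilon system specialized to decaying homogeneous turbulence, dk/dt = -eps and d(eps)/dt = -C2*eps^2/k, which has an analytic power-law solution. This is a textbook sketch with the standard constant C2 = 1.92, not the paper's analysis.

```python
# Forward-Euler integration of the decaying k-epsilon model.
C_eps2 = 1.92            # standard model constant
k, eps = 1.0, 1.0        # initial turbulent kinetic energy and dissipation
dt, T = 1e-4, 5.0
t = 0.0
while t < T:
    dk = -eps
    deps = -C_eps2 * eps ** 2 / k
    k += dt * dk
    eps += dt * deps
    t += dt

# Analytic decay law: k(t) = k0 * (1 + t/tau)**(-n), n = 1/(C_eps2 - 1),
# tau = n * k0 / eps0 (here k0 = eps0 = 1).
n = 1.0 / (C_eps2 - 1.0)
tau = n
k_exact = (1 + T / tau) ** (-n)
print(k, k_exact)  # numerical and analytic decay agree closely
```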

  2. Worldsheet instantons and the amplitude for string pair production in an external field as a WKB exact functional integral

    NASA Astrophysics Data System (ADS)

    Gordon, James; Semenoff, Gordon W.

    2018-05-01

    We revisit the problem of charged string pair creation in a constant external electric field. The string states are massive and creation of pairs from the vacuum is a tunnelling process, analogous to the Schwinger process where charged particle-anti-particle pairs are created by an electric field. We find the instantons in the worldsheet sigma model which are responsible for the tunnelling events. We evaluate the sigma model partition function in the multi-instanton sector in the WKB approximation which keeps the classical action and integrates the quadratic fluctuations about the solution. We find that the summation of the result over all multi-instanton sectors reproduces the known amplitude. This suggests that corrections to the WKB limit must cancel. To show that they indeed cancel, we identify a fermionic symmetry of the sigma model which occurs in the instanton sectors and which is associated with collective coordinates. We demonstrate that the action is symmetric and that the interaction action is an exact form. These conditions are sufficient for localization of the worldsheet functional integral onto its WKB limit.

  3. The Dispersion Relation for the 1/sinh^2 Potential in the Classical Limit

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2009-01-01

    The dispersion relation for the inverse hyperbolic potential is calculated in the classical limit. This is shown for both the low-amplitude phonon branch and the high-amplitude soliton branch. It is shown that these results qualitatively follow those previously found for the inverse-squared potential, where explicit analytic solutions are known.

  4. Molecfit: A general tool for telluric absorption correction. II. Quantitative evaluation on ESO-VLT/X-Shooter spectra

    NASA Astrophysics Data System (ADS)

    Kausch, W.; Noll, S.; Smette, A.; Kimeswenger, S.; Barden, M.; Szyszka, C.; Jones, A. M.; Sana, H.; Horst, H.; Kerber, F.

    2015-04-01

    Context. Absorption by molecules in the Earth's atmosphere strongly affects ground-based astronomical observations. The resulting absorption line strength and shape depend on the highly variable physical state of the atmosphere, i.e. pressure, temperature, and mixing ratio of the different molecules involved. Usually, supplementary observations of so-called telluric standard stars (TSS) are needed to correct for this effect, which is expensive in terms of telescope time. We have developed the software package molecfit to provide synthetic transmission spectra based on parameters obtained by fitting narrow ranges of the observed spectra of scientific objects. These spectra are calculated by means of the radiative transfer code LBLRTM and an atmospheric model. In this way, the telluric absorption correction for suitable objects can be performed without any additional calibration observations of TSS. Aims: We evaluate the quality of the telluric absorption correction using molecfit with a set of archival ESO-VLT/X-Shooter visible and near-infrared spectra. Methods: Thanks to the wavelength coverage from the U to the K band, X-Shooter is well suited to investigate the quality of the telluric absorption correction with respect to the observing conditions, the instrumental set-up, input parameters of the code, the signal-to-noise of the input spectrum, and the atmospheric profiles. These investigations are based on two figures of merit, Ioff and Ires, that describe the systematic offsets and the remaining small-scale residuals of the corrections. We also compare the quality of the telluric absorption correction achieved with molecfit to the classical method based on a telluric standard star. Results: The evaluation of the telluric correction with molecfit shows a convincing removal of atmospheric absorption features. 
The comparison with the classical method reveals that molecfit performs better because it is not prone to the poor continuum reconstruction, noise, and intrinsic spectral features introduced by the telluric standard star. Conclusions: Fitted synthetic transmission spectra are an excellent alternative to the correction based on telluric standard stars. Moreover, molecfit offers wide flexibility for adaptation to various instruments and observing sites. http://www.eso.org/sci/software/pipelines/skytools/
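
    At its core, a telluric correction of this kind divides the observed spectrum by a fitted transmission spectrum. The sketch below is a generic illustration of that step only (the `floor` masking threshold is an assumption), not molecfit's actual implementation.

```python
def telluric_correct(observed, transmission, floor=0.05):
    """Divide out a synthetic transmission spectrum; mask saturated lines
    where the correction is unreliable (transmission below `floor`)."""
    corrected = []
    for flux, trans in zip(observed, transmission):
        corrected.append(flux / trans if trans > floor else float("nan"))
    return corrected

# Toy spectrum: a flat continuum of 1.0 absorbed by two telluric lines,
# one of them nearly saturated.
transmission = [1.0, 0.9, 0.4, 0.02, 0.4, 0.9, 1.0]
observed = [1.0 * t for t in transmission]
print(telluric_correct(observed, transmission))
```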

  5. Sequencing of bimaxillary surgery in the correction of vertical maxillary excess: retrospective study.

    PubMed

    Salmen, F S; de Oliveira, T F M; Gabrielli, M A C; Pereira Filho, V A; Real Gabrielli, M F

    2018-06-01

    The aim of this study was to evaluate the precision of bimaxillary surgery performed to correct vertical maxillary excess, when the procedure is sequenced with mandibular surgery first or maxillary surgery first. Thirty-two patients, divided into two groups, were included in this retrospective study. Group 1 comprised patients who received bimaxillary surgery following the classical sequence with repositioning of the maxilla first. Patients in group 2 received bimaxillary surgery, but the mandible was operated on first. The precision of the maxillomandibular repositioning was determined by comparison of the digital prediction and postoperative tracings superimposed on the cranial base. The data were tabulated and analyzed statistically. In this sample, both surgical sequences provided adequate clinical accuracy. The classical sequence, repositioning the maxilla first, resulted in greater accuracy for A-point and the upper incisor edge vertical position. Repositioning the mandible first allowed greater precision in the vertical position of pogonion. In conclusion, although both surgical sequences may be used, repositioning the mandible first will result in greater imprecision in relation to the predictive tracing than repositioning the maxilla first. The classical sequence resulted in greater accuracy in the vertical position of the maxilla, which is key for aesthetics. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  6. Elasticity of short DNA molecules: theory and experiment for contour lengths of 0.6-7 μm.

    PubMed

    Seol, Yeonee; Li, Jinyu; Nelson, Philip C; Perkins, Thomas T; Betterton, M D

    2007-12-15

    The wormlike chain (WLC) model currently provides the best description of double-stranded DNA elasticity for micron-sized molecules. This theory requires two intrinsic material parameters: the contour length L and the persistence length p. We measured and then analyzed the elasticity of double-stranded DNA as a function of L (632 nm-7.03 μm) using the classic solution to the WLC model. When the elasticity data were analyzed using this solution, the resulting fitted value for the persistence length p(wlc) depended on L; even for moderately long DNA molecules (L = 1300 nm), this apparent persistence length was 10% smaller than its limiting value for long DNA. Because p is a material parameter, and cannot depend on length, we sought a new solution to the WLC model, which we call the "finite wormlike chain (FWLC)," to account for effects not considered in the classic solution. Specifically we accounted for the finite chain length, the chain-end boundary conditions, and the bead rotational fluctuations inherent in optical trapping assays where beads are used to apply the force. After incorporating these corrections, we used our FWLC solution to generate force-extension curves, and then fit those curves with the classic WLC solution, as done in the standard experimental analysis. These results qualitatively reproduced the apparent dependence of p(wlc) on L seen in experimental data when analyzed with the classic WLC solution. Directly fitting experimental data to the FWLC solution reduces the apparent dependence of p(fwlc) on L by a factor of 3. Thus, the FWLC solution provides a significantly improved theoretical framework in which to analyze single-molecule experiments over a broad range of experimentally accessible DNA lengths, including both short (a few hundred nanometers in contour length) and very long (microns in contour length) molecules.
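
    The "classic solution to the WLC model" referred to above is commonly the Marko-Siggia interpolation formula for the force-extension relation, sketched here with room-temperature kT in pN·nm. This is a generic illustration of the classic relation, not the FWLC solution of the paper.

```python
def wlc_force(x, L, p, kT=4.114):
    """Marko-Siggia interpolation for the WLC force-extension relation.
    x: extension, L: contour length, p: persistence length (same units);
    kT = 4.114 pN*nm at T = 298 K, so the force is returned in pN."""
    s = x / L
    return (kT / p) * (0.25 / (1.0 - s) ** 2 - 0.25 + s)

# Force at half extension for a 1300 nm molecule with p = 50 nm
print(wlc_force(650.0, 1300.0, 50.0))
```

    The formula interpolates between the low-force entropic regime and the strong divergence as the extension approaches the contour length, which is why fits become sensitive to finite-length effects for short molecules.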

  7. Short- and medium-range structure of multicomponent bioactive glasses and melts: An assessment of the performances of shell-model and rigid-ion potentials.

    PubMed

    Tilocca, Antonio

    2008-08-28

    Classical and ab initio molecular dynamics (MD) simulations have been carried out to investigate the effect of a different treatment of interatomic forces in modeling the structural properties of multicomponent glasses and melts. The simulated system is a soda-lime phosphosilicate composition with bioactive properties. Because the bioactivity of these materials depends on their medium-range structural features, such as the network connectivity and the Q(n) distribution (where Q(n) is a tetrahedral species bonded to n bridging oxygens) of silicon and phosphorus network formers, it is essential to assess whether, and up to what extent, classical potentials can reproduce these properties. The results indicate that the inclusion of the oxide ion polarization through a shell-model (SM) approach provides a more accurate representation of the medium-range structure compared to rigid-ion (RI) potentials. Insight into the causes of these improvements has been obtained by comparing the melt-and-quench transformation of a small sample of the same system, modeled using Car-Parrinello MD (CPMD), to the classical MD runs with SM and RI potentials. Both classical potentials show some limitations in reproducing the highly distorted structure of the melt denoted by the CPMD runs; however, the inclusion of polarization in the SM potential results in a better and qualitatively correct dynamical balance between the interconversion of Q(n) species during the cooling of the melt. This effect seems to reflect the slower decay of the fraction of structural defects during the cooling with the SM potential. Because these transient defects have a central role in mediating the Q(n) transformations, as previously proposed and confirmed by the current simulations, their presence in the melt is essential to produce an accurate final distribution of Q(n) species in the glass.

  8. Driven topological systems in the classical limit

    NASA Astrophysics Data System (ADS)

    Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel

    2017-03-01

    Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this we simulate the dynamics of the classical system, which show that the interactions play the role of Markovian, or time-dependent disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.

  9. Population growth and economic growth.

    PubMed

    Narayana, D L

    1984-01-01

    This discussion of the issues relating to the problem posed by population explosion in the developing countries and economic growth in the contemporary world covers the following: predictions of economic and social trends; the Malthusian theory of population; the classical or stationary theory of population; the medical triage model; ecological disaster; the Global 2000 study; the limits to growth; critiques of the Limits to Growth model; nonrenewable resources; food and agriculture; population explosion and stabilization; space and ocean colonization; and the limits perspective. The Limits to Growth model, a general equilibrium anti-growth model, is the gloomiest economic model ever constructed. None of the doomsday models, the Malthusian theory, the classical stationary state, the neo-Malthusian medical triage model, the Global 2000 study, are so far reaching in their consequences. The course of events that followed the publication of the "Limits to Growth" in 1972 in the form of 2 oil shocks, food shock, pollution shock, and price shock seemed to bear out formally the gloomy predictions of the thesis with remarkable speed. The 12 years of economic experience and the knowledge of resource trends postulate that even if the economic pressures visualized by the model are at work they are neither far reaching nor so drastic. Appropriate action can solve them. There are several limitations to the Limits to Growth model. The central theme of the model, which is overshoot and collapse, is unlikely to be the course of events. The model is too aggregative to be realistic. It exaggerates the ecological disaster arising out of the exponential growth of population and industry. The gross underestimation of renewable resources is a basic flaw of the model. The most critical weakness of the model is its gross underestimation of the historical trend of technological progress and the technological possibilities within industry and agriculture.
The model does correctly emphasize the exponential growth of population as the source of several complications for economic growth and human welfare. Stabilization of population by reducing fertility is conducive to improving the quality of the population and also advances the long-term management of population growth and work force utilization. The perspective of long-term economic management involves population planning, control of environmental pollution, conservation of scarce resources, exploration of resources, realization of technological possibilities in agriculture and industry and in farm and factory, and achievement of economic growth and its equitable distribution.

  10. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
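
    Of the steps listed, privacy amplification is typically implemented by 2-universal hashing of the error-corrected key; a common choice is a random binary Toeplitz matrix. The sketch below is a minimal generic illustration (key and output sizes, and the random seed, are arbitrary), not the procedure of the paper.

```python
import random

def toeplitz_hash(key_bits, first_col, first_row):
    """Hash key_bits with a binary Toeplitz matrix defined by its first
    column and first row; output length = len(first_col). Arithmetic is
    over GF(2): AND for multiply, XOR for add."""
    n = len(key_bits)
    m = len(first_col)
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            # Toeplitz entry T[i][j] depends only on the diagonal i - j
            t = first_col[i - j] if i >= j else first_row[j - i]
            acc ^= t & key_bits[j]
        out.append(acc)
    return out

random.seed(0)
key = [random.getrandbits(1) for _ in range(16)]  # error-corrected raw key
col = [random.getrandbits(1) for _ in range(8)]   # final key is shorter
row = [random.getrandbits(1) for _ in range(16)]
print(toeplitz_hash(key, col, row))
```

    Shortening the key in this way compresses out the information an eavesdropper may hold, with the output length set by the parameter-estimation step.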

  11. Classical affine W-algebras associated to Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Suh, Uhi Rinn

    2016-02-01

    In this paper, we show that classical affine W-algebras associated to Lie superalgebras (W-superalgebras) can be constructed in two different ways: via affine classical Hamiltonian reductions and via taking quasi-classical limits of quantum affine W-superalgebras. Also, we show that a classical finite W-superalgebra can be obtained as a Zhu algebra of a classical affine W-superalgebra. Using the definition via Hamiltonian reduction, we find free generators of a classical W-superalgebra associated to a minimal nilpotent. Moreover, we compute generators of the classical W-algebra associated to spo(2|3) and its principal nilpotent. In the last part of this paper, we introduce a generalization of classical affine W-superalgebras called classical affine fractional W-superalgebras. We show that these have Poisson vertex algebra structures and find generators of a fractional W-superalgebra associated to a minimal nilpotent.

  12. Classical fluoroscopy criteria poorly predict right ventricular lead septal positioning by comparison with echocardiography.

    PubMed

    Squara, Fabien; Scarlatti, Didier; Riccini, Philippe; Garret, Gauthier; Moceri, Pamela; Ferrari, Emile

    2018-03-13

    Fluoroscopic criteria have been described for the documentation of septal right ventricular (RV) lead positioning, but their accuracy remains questioned. Consecutive patients undergoing pacemaker or defibrillator implantation were prospectively included. The RV lead was positioned using postero-anterior and left anterior oblique 40° incidences, and right anterior oblique 30° to rule out coronary sinus positioning when suspected. RV lead positioning using fluoroscopy was compared to true RV lead positioning as assessed by transthoracic echocardiography (TTE). Precise anatomical localizations were determined with both modalities; then, RV lead positioning was ultimately dichotomized into two simple clinically relevant categories: RV septal or RV free wall. Accuracy of fluoroscopy for RV lead positioning was then assessed by comparison with TTE. We included 100 patients. On TTE, 66/100 had a septal RV lead and 34/100 had a free wall RV lead. Fluoroscopy had moderate agreement with TTE for precise anatomical localization of the RV lead (k = 0.53), and poor agreement for septal/free wall localization (k = 0.36). For predicting septal RV lead positioning, classical fluoroscopy criteria had a high sensitivity (95.5%; 63/66 patients having a septal RV lead on TTE were correctly identified by fluoroscopy) but a very low specificity (35.3%; only 12/34 patients having a free wall RV lead on TTE were correctly identified by fluoroscopy). Classical fluoroscopy criteria have poor accuracy for identifying RV free wall leads, which are most of the time misclassified as septal. This raises important concerns about the efficacy and safety of RV lead positioning using classical fluoroscopy criteria.
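
    The quoted sensitivity and specificity follow directly from the reported counts (63 of 66 septal leads and 12 of 34 free-wall leads correctly identified):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts from the study, with "septal" as the positive class.
sens, spec = sensitivity_specificity(tp=63, fn=3, tn=12, fp=22)
print(round(100 * sens, 1), round(100 * spec, 1))  # 95.5 35.3
```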

  13. Towards the Fundamental Quantum Limit of Linear Measurements of Classical Signals

    NASA Astrophysics Data System (ADS)

    Miao, Haixing; Adhikari, Rana X.; Ma, Yiqiu; Pang, Belinda; Chen, Yanbei

    2017-08-01

    The quantum Cramér-Rao bound (QCRB) sets a fundamental limit for the measurement of classical signals with detectors operating in the quantum regime. Using linear-response theory and the Heisenberg uncertainty relation, we derive a general condition for achieving such a fundamental limit. When applied to classical displacement measurements with a test mass, this condition leads to an explicit connection between the QCRB and the standard quantum limit that arises from a tradeoff between the measurement imprecision and quantum backaction; the QCRB can be viewed as an outcome of a quantum nondemolition measurement with the backaction evaded. Additionally, we show that the test mass is more a resource for improving measurement sensitivity than a victim of the quantum backaction, which suggests a new approach to enhancing the sensitivity of a broad class of sensors. We illustrate these points with laser interferometric gravitational-wave detectors.
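
    The imprecision-backaction tradeoff behind the standard quantum limit can be summarized as follows; this is a generic textbook sketch in one common spectral-density convention (imprecision-backaction uncertainty product bounded by ℏ²), not a derivation from the paper. For a free test mass m probed at frequency Ω, with imprecision noise S_x^imp and backaction force noise S_F^BA,

```latex
S_x^{\mathrm{tot}}(\Omega)
  = S_x^{\mathrm{imp}}(\Omega) + \frac{S_F^{\mathrm{BA}}(\Omega)}{m^2 \Omega^4}
  \;\ge\; \frac{2\sqrt{S_x^{\mathrm{imp}}\, S_F^{\mathrm{BA}}}}{m \Omega^2}
  \;\ge\; \frac{2\hbar}{m \Omega^2}
  \;\equiv\; S_x^{\mathrm{SQL}}(\Omega).
```

    Evading the backaction term, as in a quantum nondemolition measurement, removes the second contribution and allows sensitivity below the SQL, down toward the QCRB discussed above.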

  14. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    NASA Astrophysics Data System (ADS)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

    A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on whether, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined, with four, on average, giving values within 4 kJ/mol (chemical accuracy) for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff, and classical simulations of kinetic energy release should therefore be viewed with caution.
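
    The central idea, running trajectories on Veff = V + ZPE, can be caricatured in one dimension with a toy Morse potential and a local-curvature harmonic ZPE. The paper instead builds the ZPE surface in projected Cartesian coordinates with Shepard interpolation; everything below is an illustrative stand-in.

```python
import math

def morse(r, D=0.2, a=1.0, r0=1.0):
    """Toy one-dimensional Morse potential, a stand-in for the real PES."""
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

def local_harmonic_zpe(V, r, m=1.0, hbar=1.0, h=1e-4):
    """Local harmonic ZPE from the curvature of V: (hbar/2)*sqrt(V''/m).
    Returns 0 where the curvature is negative (no bound local mode)."""
    d2 = (V(r + h) - 2 * V(r) + V(r - h)) / h ** 2
    return 0.5 * hbar * math.sqrt(d2 / m) if d2 > 0 else 0.0

def v_eff(r):
    """ZPE-corrected potential: classical trajectories run on V + ZPE."""
    return morse(r) + local_harmonic_zpe(morse, r)

print(v_eff(1.0) - morse(1.0))  # ZPE shift at the potential minimum
```

    Trajectories on v_eff cannot drain energy below the local ZPE, which is how channels near dissociation thresholds are kept correctly open or closed.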

  15. Adam Smith's invisible hand is unstable: physics and dynamics reasoning applied to economic theorizing

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2002-11-01

    Neo-classical economic theory is based on the postulated, nonempirical notion of utility. Neo-classical economists assume that prices, dynamics, and market equilibria can be derived from utility, and the results are supposed to represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics, however, a utility function generally does not exist mathematically due to nonintegrability. Price as a function of demand does not exist, and all equilibria are unstable. Qualitatively, and empirically, the neo-classical prediction of price as a function of demand describes neither consumer nor trader demand. We also discuss five inconsistent definitions of equilibrium used in economics and finance, only one of which is correct, and then explain the fallacy in the economists' notion of 'temporary price equilibria'.

  16. On the hypothesis that quantum mechanics manifests classical mechanics: Numerical approach to the correspondence in search of quantum chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sang-Bong

    1993-09-01

    Quantum manifestation of classical chaos has been one of the most extensively studied subjects for more than a decade. Yet a clear understanding of its nature still remains an open question, partly due to the lack of a canonical definition of quantum chaos. The classical definition seems to be unsuitable in quantum mechanics, partly because of the Heisenberg uncertainty. In this regard, quantum chaos is somewhat misleading and needs to be clarified at the very fundamental level of physics. Since it is well known that quantum mechanics is more fundamental than classical mechanics, the quantum description of classically chaotic nature should be attainable in the limit of large quantum numbers. The focus of my research, therefore, lies on the correspondence principle for classically chaotic systems. The chaotic damped driven pendulum is mainly studied numerically using the split operator method that solves the time-dependent Schroedinger equation. For classically dissipative chaotic systems in which (multi)fractal strange attractors often emerge, several quantum dissipative mechanisms are also considered. For instance, Hoover's and Kubo-Fox-Keizer's approaches are studied with some computational analyses. But the notion of complex energy with non-Hermiticity is extensively applied. Moreover, the Wigner and Husimi distribution functions are examined with an equivalent classical distribution in phase-space, and dynamical properties of the wave packet in configuration and momentum spaces are also explored. The results indicate that quantum dynamics embraces classical dynamics, although the classical-quantum correspondence fails to be observed in the classically chaotic regime. Even in the semi-classical limits, classically chaotic phenomena would eventually be suppressed by the quantum uncertainty.
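
    The split operator method named above propagates the time-dependent Schroedinger equation by alternating potential-energy phase factors in position space with a kinetic-energy phase factor in momentum space. A minimal sketch (naive DFT in place of an FFT, toy harmonic well, no damping or driving):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive discrete Fourier transform (stand-in for an FFT)."""
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def split_operator_step(psi, V, dx, dt, hbar=1.0, m=1.0):
    """One Strang-split step: half kick by V, full kinetic drift in
    momentum space, then another half kick by V."""
    N = len(psi)
    psi = [p * cmath.exp(-0.5j * V[n] * dt / hbar) for n, p in enumerate(psi)]
    phi = dft(psi)
    for k in range(N):
        kk = k if k < N // 2 else k - N            # FFT wavenumber ordering
        p = 2 * math.pi * kk / (N * dx) * hbar     # momentum on the grid
        phi[k] *= cmath.exp(-0.5j * p * p * dt / (m * hbar))
    psi = dft(phi, inverse=True)
    return [p * cmath.exp(-0.5j * V[n] * dt / hbar) for n, p in enumerate(psi)]

# Gaussian packet in a harmonic well; the propagation is unitary,
# so the norm stays at 1.
N, dx, dt = 64, 0.25, 0.01
xs = [(n - N // 2) * dx for n in range(N)]
V = [0.5 * x * x for x in xs]
psi = [cmath.exp(-x * x / 2) for x in xs]
norm = math.sqrt(sum(abs(p) ** 2 for p in psi) * dx)
psi = [p / norm for p in psi]
for _ in range(10):
    psi = split_operator_step(psi, V, dx, dt)
print(sum(abs(p) ** 2 for p in psi) * dx)
```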

  17. A compact quantum correction model for symmetric double gate metal-oxide-semiconductor field-effect transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu, E-mail: iyun@yonsei.ac.kr

    2014-11-07

A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV_TH^QM) and gate capacitance (C_g) degradation. First, ΔV_TH^QM induced by quantum mechanical (QM) effects is modeled. The C_g degradation is then modeled by introducing the inversion layer centroid. With ΔV_TH^QM and the C_g degradation, the QM effects are implemented in a previously reported classical model, and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model is applicable to the compact model of the DG MOSFET.
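A minimal numeric sketch of the two ingredients the abstract names, using the generic inversion-layer-centroid picture rather than the authors' compact model (all values below are illustrative assumptions): the QM threshold shift is added to the classical V_TH, and the gate capacitance is degraded because the centroid depth z0 acts like extra oxide thickness.

```python
# Hedged sketch: QM corrections folded into a classical MOSFET model as
# (i) a threshold-voltage shift and (ii) centroid-degraded gate capacitance.
eps0 = 8.854e-12            # F/m
eps_ox = 3.9 * eps0         # SiO2 permittivity
eps_si = 11.7 * eps0        # Si permittivity

t_ox = 1.0e-9               # oxide thickness, m (assumed)
z0 = 1.2e-9                 # inversion-layer centroid depth, m (assumed)

c_classical = eps_ox / t_ox                            # F/m^2, classical Cg
# centroid adds an equivalent oxide thickness (eps_ox/eps_si) * z0
c_quantum = eps_ox / (t_ox + (eps_ox / eps_si) * z0)   # degraded Cg

vth_classical = 0.25        # V (assumed)
dvth_qm = 0.06              # V, QM-induced threshold shift (assumed)
vth_qm = vth_classical + dvth_qm
```

The quantum-corrected capacitance is always below the classical value, and the threshold voltage is shifted upward, which is the qualitative behavior the compact model captures.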

  18. Loop quantum corrected Einstein Yang-Mills black holes

    NASA Astrophysics Data System (ADS)

    Protter, Mason; DeBenedictis, Andrew

    2018-05-01

In this paper, we study the homogeneous interiors of black holes possessing SU(2) Yang-Mills fields subject to corrections inspired by loop quantum gravity. The systems studied possess both magnetic and induced electric Yang-Mills fields. We consider the system of equations both with and without Wilson loop corrections to the Yang-Mills potential. The structure of the Yang-Mills Hamiltonian, along with the restriction to homogeneity, allows for an anomaly-free effective quantization. In particular, we study the bounce which replaces the classical singularity and the behavior of the Yang-Mills fields in the quantum corrected interior, which possesses topology R × S². Beyond the bounce, the magnitude of the Yang-Mills electric field asymptotically grows monotonically. This results in an ever-expanding R sector even though the two-sphere volume is asymptotically constant. The results are similar with and without Wilson loop corrections on the Yang-Mills potential.

  19. Two-Way Communication with a Single Quantum Particle.

    PubMed

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-09

In this Letter we show that communication, when restricted to a single information carrier (i.e., a single particle) and a finite speed of propagation, is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in a superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between the classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and show that the probability of success decreases asymptotically to zero as the number of parties grows, for all classical strategies. In contrast, a quantum strategy allows the players to win the game with certainty.

  20. Two-Way Communication with a Single Quantum Particle

    NASA Astrophysics Data System (ADS)

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-01

In this Letter we show that communication, when restricted to a single information carrier (i.e., a single particle) and a finite speed of propagation, is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in a superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between the classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and show that the probability of success decreases asymptotically to zero as the number of parties grows, for all classical strategies. In contrast, a quantum strategy allows the players to win the game with certainty.

  1. Capacities of quantum amplifier channels

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Wilde, Mark M.

    2017-01-01

Quantum amplifier channels are at the core of several physical processes. Not only do they model the optical process of spontaneous parametric down-conversion, but the transformation corresponding to an amplifier channel also describes the physics of the dynamical Casimir effect in superconducting circuits, the Unruh effect, and Hawking radiation. Here we study the communication capabilities of quantum amplifier channels. Invoking a recently established minimum output-entropy theorem for single-mode phase-insensitive Gaussian channels, we determine capacities of quantum-limited amplifier channels in three different scenarios. First, we establish the capacities of quantum-limited amplifier channels for one of the most general communication tasks, characterized by the trade-off between classical communication, quantum communication, and entanglement generation or consumption. Second, we establish capacities of quantum-limited amplifier channels for the trade-off between public classical communication, private classical communication, and secret key generation. Third, we determine the capacity region for a broadcast channel induced by the quantum-limited amplifier channel, and we also show that a fully quantum strategy outperforms those achieved by classical coherent-detection strategies. In all three scenarios, we find that the capacities significantly exceed the communication rates achieved with a naive time-sharing strategy.

  2. Lack of a thermodynamic finite-temperature spin-glass phase in the two-dimensional randomly coupled ferromagnet

    NASA Astrophysics Data System (ADS)

    Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.

    2018-05-01

    The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.

  3. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE PAGES

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; ...

    2018-02-12

Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H 2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  4. FAST TRACK COMMUNICATION: Semiclassical Klein Kramers and Smoluchowski equations for the Brownian motion of a particle in an external potential

    NASA Astrophysics Data System (ADS)

    Coffey, W. T.; Kalmykov, Yu P.; Titov, S. V.; Mulligan, B. P.

    2007-01-01

The quantum Brownian motion of a particle in an external potential V(x) is treated using the master equation for the Wigner distribution function W(x, p, t) in phase space (x, p). A heuristic method of determination of the diffusion coefficients in the master equation is proposed. The time evolution equation so obtained contains explicit quantum correction terms up to O(ℏ⁴) and in the classical limit, ℏ → 0, reduces to the Klein-Kramers equation. For a quantum oscillator, the method yields an evolution equation for W(x, p, t) coinciding with that of Agarwal (1971 Phys. Rev. A 4 739). In the non-inertial regime, by applying the Brinkman expansion of the momentum distribution in Weber functions (Brinkman 1956 Physica 22 29), the corresponding semiclassical Smoluchowski equation is derived.
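For reference, the Klein-Kramers equation recovered in the ℏ → 0 limit can be written in its standard textbook form (with mass m, friction coefficient γ, and temperature T; this is the standard form, not transcribed from the paper):

```latex
\frac{\partial W}{\partial t}
  + \frac{p}{m}\frac{\partial W}{\partial x}
  - \frac{\partial V}{\partial x}\frac{\partial W}{\partial p}
  = \gamma \frac{\partial}{\partial p}
    \left( p\, W + m k_B T \frac{\partial W}{\partial p} \right)
```

The left side is the classical Liouville flow in phase space; the right side is the Fokker-Planck dissipation term whose stationary solution is the Maxwell-Boltzmann distribution.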

  5. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    NASA Astrophysics Data System (ADS)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  6. Asymptotics for metamaterials and photonic crystals

    PubMed Central

    Antonakakis, T.; Craster, R. V.; Guenneau, S.

    2013-01-01

Metamaterial and photonic crystal structures are central to modern optics and are typically created from multiple elementary repeating cells. We demonstrate how one replaces such structures asymptotically by a continuum, and therefore by a set of equations, that captures the behaviour of potentially high-frequency waves propagating through a periodic medium. The high-frequency homogenization that we use recovers the classical homogenization coefficients in the low-frequency long-wavelength limit. The theory is specifically developed in electromagnetics for two-dimensional square lattices where every cell contains an arbitrary hole with Neumann boundary conditions at its surface and implemented numerically for cylinders and split-ring resonators. Illustrative numerical examples include lensing via all-angle negative refraction, as well as omni-directive antenna, endoscope and cloaking effects. We also highlight the importance of choosing the correct Brillouin zone and the potential for missing interesting physical effects depending upon the path chosen. PMID:23633908

  7. Advanced diffusion MRI and biomarkers in the central nervous system: a new approach.

    PubMed

    Martín Noguerol, T; Martínez Barbero, J P

    The introduction of diffusion-weighted sequences has revolutionized the detection and characterization of central nervous system (CNS) disease. Nevertheless, the assessment of diffusion studies of the CNS is often limited to qualitative estimation. Moreover, the pathophysiological complexity of the different entities that affect the CNS cannot always be correctly explained through classical models. The development of new models for the analysis of diffusion sequences provides numerous parameters that enable a quantitative approach to both diagnosis and prognosis as well as to monitoring the response to treatment; these parameters can be considered potential biomarkers of health and disease. In this update, we review the physical bases underlying diffusion studies and diffusion tensor imaging, advanced models for their analysis (intravoxel coherent motion and kurtosis), and the biological significance of the parameters derived. Copyright © 2017 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.

  8. Laboratory and telescope demonstration of the TP3-WFS for the adaptive optics segment of AOLI

    NASA Astrophysics Data System (ADS)

    Colodro-Conde, C.; Velasco, S.; Fernández-Valdivia, J. J.; López, R.; Oscoz, A.; Rebolo, R.; Femenía, B.; King, D. L.; Labadie, L.; Mackay, C.; Muthusubramanian, B.; Pérez Garrido, A.; Puga, M.; Rodríguez-Coira, G.; Rodríguez-Ramos, L. F.; Rodríguez-Ramos, J. M.; Toledo-Moreo, R.; Villó-Pérez, I.

    2017-05-01

Adaptive Optics Lucky Imager (AOLI) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths on mid- and large-sized ground-based telescopes. The key innovation of AOLI is the development and use of the new Two Pupil Plane Positions Wavefront Sensor (TP3-WFS). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack-Hartmann WFS because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms and procedures that enabled AOLI to become the first astronomical instrument performing real-time AO corrections in a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope.

  9. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.

Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H 2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  10. Quantum power source: putting in order of a Brownian motion without Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Aristov, Vitaly V.; Nikulov, A. V.

    2003-07-01

The problem of a possible violation of the second law of thermodynamics is discussed. It is noted that the task of Maxwell's demon, the well-known challenge to the second law, is to put in order a chaotic perpetual motion; if any ordered Brownian motion exists, then the second law can be broken without this hypothetical intelligent entity. The postulate of absolute randomness of any Brownian motion saved the second law at the beginning of the 20th century, when Brownian motion was recognized as perpetual motion. This postulate can be proven within the limits of classical mechanics but is not correct according to quantum mechanics. Moreover, some well-known quantum phenomena, such as the persistent current at non-zero resistance, are experimental evidence of non-chaotic Brownian motion with non-zero average velocity. An experimental observation of a dc quantum power source is interpreted as evidence of a violation of the second law.

  11. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching

    PubMed Central

    Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng

    2017-01-01

Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for cases in dense urban areas. Our solution is also direction-independent, which gives it better adaptability and robustness for stitching images. PMID:28885547
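For readers unfamiliar with seam-line search by dynamic programming, here is a toy single-channel version in the spirit of the classical (Duplaquet-style) approach the authors improve upon; the paper's stereo dual-channel energy accumulation is not reproduced here.

```python
import numpy as np

# Toy seam search: find the 8-connected top-to-bottom path through an
# energy map (e.g. the difference image of two overlapping frames) that
# minimizes the accumulated energy.
def min_seam(energy):
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):                     # forward accumulation pass
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest cell in the last row
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]

energy = np.ones((5, 5))
energy[:, 2] = 0.0            # a zero-cost vertical corridor (perfect seam)
seam = min_seam(energy)       # follows the corridor: column 2 in every row
```

Stitching then blends the two frames on either side of the recovered seam; a good seam passes through regions where the overlapping images already agree.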

  12. Thin-shell wormholes in rainbow gravity

    NASA Astrophysics Data System (ADS)

    Amirabi, Z.; Halilsoy, M.; Mazharimousavi, S. Habib

    2018-03-01

At the Planck scale of length ~10⁻³⁵ m, where the energy is comparable with the Planck energy, the quantum gravity corrections to the classical background spacetime result in gravity's rainbow, or rainbow gravity. In this modified theory of gravity, the geometry depends on the energy of the test particle used to probe the spacetime, such that in the low-energy limit it yields standard general relativity. In this work, we study thin-shell wormholes in spherically symmetric rainbow gravity. We find the corresponding properties in terms of the rainbow functions, which are essential in rainbow gravity, and the stability of such thin-shell wormholes is investigated. In particular, it is shown that there are exact solutions in which high-energy particles crossing the throat encounter a smaller amount of total exotic matter. This may be used as an advantage over general relativity to reduce the amount of exotic matter.

  13. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    PubMed

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for cases in dense urban areas. Our solution is also direction-independent, which gives it better adaptability and robustness for stitching images.

  14. Field balancing in the real world

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bracher, B.

Field balancing can achieve significant results when other problems are present in the frequency spectrum and multiple vibrations are evident in the waveform. Many references suggest eliminating other problems before attempting to balance. That's great - if you can do it. There are valid reasons for this approach, and it would be much easier to balance machinery when other problems have been corrected. It is the theoretical ideal in field balancing. However, in the real world of machinery maintained for years by reacting to immediate problems, the classic vibration signature for unbalance is rarely seen. Maintenance personnel make most of their decisions with limited information. The decision to balance or not to balance is usually made the same way. This paper will demonstrate significant results of field balancing in the presence of multiple problems. By examining the data available and analyzing the probabilities, a reasonable chance for success can be assured.

  15. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  16. Mapping the Perceptual Grain of the Human Retina

    PubMed Central

    Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.

    2014-01-01

    In humans, experimental access to single sensory receptors is difficult to achieve, yet it is crucial for learning how the signals arising from each receptor are transformed into perception. By combining adaptive optics microstimulation with high-speed eye tracking, we show that retinal function can be probed at the level of the individual cone photoreceptor in living eyes. Classical psychometric functions were obtained from cone-sized microstimuli targeted to single photoreceptors. Revealed psychophysically, the cone mosaic also manifests a variable sensitivity to light across its surface that accords with a simple model of cone light capture. Because this microscopic grain of vision could be detected on the perceptual level, it suggests that photoreceptors can act individually to shape perception, if the normally suboptimal relay of light by the eye's optics is corrected. Thus the precise arrangement of cones and the exact placement of stimuli onto those cones create the initial retinal limits on signals mediating spatial vision. PMID:24741057
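As an illustration of the "classical psychometric functions" mentioned above, a common parametric choice is a Weibull function of stimulus intensity; the parameters below are illustrative, not fitted to the paper's data.

```python
import numpy as np

# Weibull psychometric function: probability of detecting a stimulus of
# intensity I, rising from a guess rate toward 1 - lapse rate.
def weibull(I, threshold, slope, guess=0.0, lapse=0.0):
    """P(seen) as a function of intensity I (all parameters illustrative)."""
    return guess + (1 - guess - lapse) * (1 - np.exp(-(I / threshold) ** slope))

intensity = np.linspace(0.1, 3.0, 30)
p_seen = weibull(intensity, threshold=1.0, slope=3.0)

# at I = threshold the Weibull has covered 1 - e^-1 of its range
p_at_threshold = weibull(np.array([1.0]), threshold=1.0, slope=3.0)[0]
```

Fitting such a curve per targeted cone is what yields the per-receptor sensitivity map the abstract describes; the threshold parameter then varies across the mosaic with cone light capture.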

  17. Analytic Methods for Adjusting Subjective Rating Schemes.

    ERIC Educational Resources Information Center

    Cooper, Richard V. L.; Nelson, Gary R.

    Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…

  18. Entropy, temperature and internal energy of trapped gravitons and corrections to the Black Hole entropy

    NASA Astrophysics Data System (ADS)

    Viaggiu, Stefano

    2017-12-01

In this paper we study the proposal presented in Viaggiu (2017) concerning the statistical description of trapped gravitons, applied to derive the semi-classical black hole (BH) entropy S_BH. We study the possible configurations depending on physically reasonable expressions for the internal energy U. In particular, we show that expressions for U ~ R^k, k ≥ 1, with R the radius of the confining spherical box, can have a semi-classical description, while behaviors with k < 1 derive from thermodynamic or quantum fluctuations. Then, by taking a suitable physically motivated expression for U(R), we obtain the well-known logarithmic corrections to the BH entropy, with the usual behavior found in the literature on BH entropy. Moreover, a phase transition emerges with a positive specific heat C at Planckian lengths instead of the usual negative one at non-Planckian scales, in agreement with results present in the literature. Finally, we show that evaporation stops at a radius R of the order of the Planck length.

  19. Li+ solvation and kinetics of Li+-BF4-/PF6- ion pairs in ethylene carbonate. A molecular dynamics study with classical rate theories

    NASA Astrophysics Data System (ADS)

    Chang, Tsun-Mei; Dang, Liem X.

    2017-10-01

Using our polarizable force-field models and employing classical rate theories of chemical reactions, we examine the ethylene carbonate (EC) exchange process between the first and second solvation shells around Li+ and the dissociation kinetics of the ion pairs Li+-[BF4] and Li+-[PF6] in this solvent. We calculate the exchange rates using transition state theory and correct them with transmission coefficients computed by the reactive flux method, the Impey-Madden-McDonald approach, and Grote-Hynes theory. We found that the residence times of EC around Li+ ions varied from 60 to 450 ps, depending on the correction method used. We also found that the relaxation times changed significantly from the Li+-[BF4] to the Li+-[PF6] ion pair in EC. Our results further show that, in addition to affecting the free energy of dissociation in EC, the anion type also significantly influences the dissociation kinetics of ion pairing.
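The correction scheme described here can be summarized numerically: a transmission coefficient κ ≤ 1 (obtained from reactive-flux, Impey-Madden-McDonald, or Grote-Hynes calculations) scales the transition-state-theory rate, and the residence time is the inverse of the corrected rate. The numbers below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hedged sketch: kappa-corrected TST rate and the resulting residence time.
kB_T = 2.479        # kJ/mol at ~298 K
dG_act = 12.0       # activation free energy, kJ/mol (hypothetical)
nu = 1.0e12         # attempt frequency, 1/s (hypothetical)

k_tst = nu * np.exp(-dG_act / kB_T)   # uncorrected TST rate, 1/s
kappa = 0.4                           # transmission coefficient (hypothetical)
k_corr = kappa * k_tst                # recrossing-corrected rate
tau_ps = 1.0e12 / k_corr              # residence time in picoseconds
```

Because different recrossing treatments give different κ, the same TST rate maps to a spread of residence times, which is why the quoted 60-450 ps range depends on the correction method.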

  20. Classical affine W-algebras associated to Lie superalgebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Uhi Rinn, E-mail: uhrisu1@math.snu.ac.kr

    2016-02-15

In this paper, we prove that classical affine W-algebras associated to Lie superalgebras (W-superalgebras) can be constructed in two different ways: via affine classical Hamiltonian reductions and via taking quasi-classical limits of quantum affine W-superalgebras. Also, we show that a classical finite W-superalgebra can be obtained as a Zhu algebra of a classical affine W-superalgebra. Using the definition by Hamiltonian reductions, we find free generators of a classical W-superalgebra associated to a minimal nilpotent. Moreover, we compute generators of the classical W-algebra associated to spo(2|3) and its principal nilpotent. In the last part of this paper, we introduce a generalization of classical affine W-superalgebras called classical affine fractional W-superalgebras. We show these have Poisson vertex algebra structures and find generators of a fractional W-superalgebra associated to a minimal nilpotent.

  1. The evolution of the midface lift in aesthetic plastic surgery.

    PubMed

    Paul, Malcolm D; Calvert, Jay W; Evans, Gregory R D

    2006-05-01

    The midface lift has recently gained significant popularity with many surgeons. It allows the surgeon an opportunity to achieve greater facial harmony with facial rejuvenation procedures by correcting midfacial atrophy, addressing the tear trough deformity, and correcting the perceived malposition of the malar fat pad. This article examines the history of midfacial procedures. Surgical attempts at improving the aging face have evolved from minimal excisions and skin closure to aggressive dissections at multiple planes. The midface target area is peripheral to classic approaches, and its correction has required further anterior dissection from a distance or direct access centrally. Ultimately, conquering the stigmata of midface aging is entirely related to vectors and volume.

  2. Magnetic torque on a rotating superconducting sphere

    NASA Technical Reports Server (NTRS)

    Holdeman, L. B.

    1975-01-01

    The London theory of superconductivity is used to calculate the torque on a superconducting sphere rotating in a uniform applied magnetic field. The London theory is combined with classical electrodynamics for a calculation of the direct effect of excess charge on a rotating superconducting sphere. Classical electrodynamics, with the assumption of a perfect Meissner effect, is used to calculate the torque on a superconducting sphere rotating in an arbitrary magnetic induction; this macroscopic approach yields results which are correct to first order. Using the same approach, the torque due to a current loop encircling the rotating sphere is calculated.

  3. On simulations of rarefied vapor flows with condensation

    NASA Astrophysics Data System (ADS)

    Bykov, Nikolay; Gorbachev, Yuriy; Fyodorov, Stanislav

    2018-05-01

Results of the direct simulation Monte Carlo of 1D spherical and 2D axisymmetric expansions into vacuum of condensing water vapor are presented. Two models, based on the kinetic approach and on the size-corrected classical nucleation theory, are employed for the simulations. The difference in the obtained results is discussed, and the advantages of the kinetic approach in comparison with the modified classical theory are demonstrated. The impact of clusterization on flow parameters is observed when the volume fraction of clusters in the expansion region exceeds 5%. Comparison of the simulation data with experimental results demonstrates good agreement.
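For orientation, the uncorrected classical nucleation theory that the size-corrected model refines gives a critical cluster radius and nucleation barrier from the surface tension and supersaturation. Below is a rough room-temperature sketch for water; all parameters are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Classical nucleation theory estimates (uncorrected, illustrative).
kB = 1.380649e-23        # J/K
T = 300.0                # K
sigma = 0.072            # N/m, planar surface tension of water (assumed)
v_mol = 2.99e-29         # m^3 per molecule (18 cm^3/mol / Avogadro)
S = 5.0                  # supersaturation ratio (illustrative)

dg = kB * T * np.log(S) / v_mol                    # driving force per volume
r_star = 2.0 * sigma / dg                          # critical radius, m
dG_star = 16.0 * np.pi * sigma**3 / (3.0 * dg**2)  # nucleation barrier, J
barrier_kT = dG_star / (kB * T)                    # barrier in units of kT
```

Size-corrected variants replace the planar surface tension with a curvature-dependent one for sub-nanometer clusters, which is precisely the regime where r_star lands in this estimate.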

  4. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline with a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% on cost function evaluation.

  5. Quantum machine learning: a classical perspective

    NASA Astrophysics Data System (ADS)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  6. A new class of ensemble conserving algorithms for approximate quantum dynamics: Theoretical formulation and model problems.

    PubMed

    Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J

    2015-06-28

    We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.

  7. Quantum machine learning: a classical perspective

    PubMed Central

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed. PMID:29434508

  8. Quantum machine learning: a classical perspective.

    PubMed

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  9. Why trace and delay conditioning are sometimes (but not always) hippocampal dependent: A computational model

    PubMed Central

    Moustafa, Ahmed A.; Wufong, Ella; Servatius, Richard J.; Pang, Kevin C. H.; Gluck, Mark A.; Myers, Catherine E.

    2013-01-01

    A recurrent-network model provides a unified account of the hippocampal region in mediating the representation of temporal information in classical eyeblink conditioning. Much empirical research is consistent with a general conclusion that delay conditioning (in which the conditioned stimulus CS and unconditioned stimulus US overlap and co-terminate) is independent of the hippocampal system, while trace conditioning (in which the CS terminates before US onset) depends on the hippocampus. However, recent studies show that, under some circumstances, delay conditioning can be hippocampal-dependent and trace conditioning can be spared following hippocampal lesion. Here, we present an extension of our prior trial-level models of hippocampal function and stimulus representation that can explain these findings within a unified framework. Specifically, the current model includes adaptive recurrent collateral connections that aid in the representation of intra-trial temporal information. With this model, as in our prior models, we argue that the hippocampus is not specialized for conditioned response timing, but rather is a general-purpose system that learns to predict the next state of all stimuli given the current state of variables encoded by activity in recurrent collaterals. As such, the model correctly predicts that hippocampal involvement in classical conditioning should be critical not only when there is an intervening trace interval, but also when there is a long delay between CS onset and US onset. Our model simulates empirical data from many variants of classical conditioning, including delay and trace paradigms in which the length of the CS, the inter-stimulus interval, or the trace interval is varied. Finally, we discuss model limitations, future directions, and several novel empirical predictions of this temporal processing model of hippocampal function and learning. PMID:23178699

  10. Spherical subjective refraction with a novel 3D virtual reality based system.

    PubMed

    Pujol, Jaume; Ondategui-Parra, Juan Carlos; Badiella, Llorenç; Otero, Carles; Vilaseca, Meritxell; Aldaba, Mikel

    To conduct a clinical validation of a virtual reality-based experimental system that is able to assess the spherical subjective refraction, simplifying the methodology of ocular refraction. For the agreement assessment, spherical refraction measurements were obtained from 104 eyes of 52 subjects using three different methods: subjectively with the experimental prototype (Subj.E) and the classical subjective refraction (Subj.C); and objectively with the WAM-5500 autorefractor (WAM). To evaluate the precision (intra- and inter-observer variability) of each refractive tool independently, 26 eyes were measured on four occasions. With regard to agreement, the mean difference (±SD) for the spherical equivalent (M) between the new experimental subjective method (Subj.E) and the classical subjective refraction (Subj.C) was -0.034D (±0.454D). The corresponding 95% Limits of Agreement (LoA) were (-0.856D, 0.924D). In relation to precision, the intra-observer mean difference for the M component was 0.034±0.195D for the Subj.C, 0.015±0.177D for the WAM and 0.072±0.197D for the Subj.E. Inter-observer variability showed worse precision values, although still clinically valid (below 0.25D) in all instruments. The spherical equivalent obtained with the new experimental system was precise and in good agreement with the classical subjective routine. The algorithm implemented in this new system and its optical configuration have been shown to be a first valid step for spherical error correction in a semiautomated way. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
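    The agreement figures above follow the standard Bland-Altman construction: the 95% limits of agreement are the mean paired difference ± 1.96 times its standard deviation. A minimal sketch, using hypothetical paired measurements rather than the study's data:

    ```python
    # Sketch: 95% limits of agreement (Bland-Altman) between two refraction
    # methods. The paired data below are hypothetical; the study's actual
    # per-eye measurements are not reproduced here.
    from statistics import mean, stdev

    def limits_of_agreement(a, b):
        """Mean difference and 95% limits of agreement between paired methods."""
        d = [x - y for x, y in zip(a, b)]
        md, sd = mean(d), stdev(d)
        return md, (md - 1.96 * sd, md + 1.96 * sd)

    subj_e = [-0.25, 0.50, 1.25, -1.00, 0.75]   # hypothetical spherical equivalents (D)
    subj_c = [-0.50, 0.25, 1.50, -0.75, 0.50]
    md, (lo, hi) = limits_of_agreement(subj_e, subj_c)
    print(f"mean diff {md:+.3f} D, 95% LoA ({lo:+.3f} D, {hi:+.3f} D)")
    ```

    With real data, roughly 95% of the paired differences should fall inside the interval (lo, hi), which is how the clinical validity of the 0.25D threshold is judged.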

  11. Extending In Vitro Conditioning in "Aplysia" to Analyze Operant and Classical Processes in the Same Preparation

    ERIC Educational Resources Information Center

    Brembs, Bjorn; Baxter, Douglas A.; Byrne, John H.

    2004-01-01

    Operant and classical conditioning are major processes shaping behavioral responses in all animals. Although the understanding of the mechanisms of classical conditioning has expanded significantly, the understanding of the mechanisms of operant conditioning is more limited. Recent developments in "Aplysia" are helping to narrow the gap in the…

  12. Einstein observations of three classical Cepheids

    NASA Technical Reports Server (NTRS)

    Bohm-Vitense, E.; Parsons, S. B.

    1983-01-01

    We have looked for X-ray emission from the classical Cepheids delta Cep, beta Dor, and zeta Gem during phases when the latter two stars show emission in low excitation chromospheric lines. No X-ray flux was detected except possibly from zeta Gem at phase 0.26. Derived upper limits are in line with emission flux or upper limits obtained for other F and G supergiants.

  13. Casimir free energy of dielectric films: classical limit, low-temperature behavior and control.

    PubMed

    Klimchitskaya, G L; Mostepanenko, V M

    2017-07-12

    The Casimir free energy of dielectric films, both free-standing in vacuum and deposited on metallic or dielectric plates, is investigated. It is shown that the values of the free energy depend considerably on whether the calculation approach used neglects or takes into account the dc conductivity of film material. We demonstrate that there are material-dependent and universal classical limits in the former and latter cases, respectively. The analytic behavior of the Casimir free energy and entropy for a free-standing dielectric film at low temperature is found. According to our results, the Casimir entropy goes to zero when the temperature vanishes if the calculation approach with neglected dc conductivity of a film is employed. If the dc conductivity is taken into account, the Casimir entropy takes the positive value at zero temperature, depending on the parameters of a film, i.e. the Nernst heat theorem is violated. By considering the Casimir free energy of SiO2 and Al2O3 films deposited on a Au plate in the framework of two calculation approaches, we argue that physically correct values are obtained by disregarding the role of dc conductivity. A comparison with the well known results for the configuration of two parallel plates is made. Finally, we compute the Casimir free energy of SiO2, Al2O3 and Ge films deposited on high-resistivity Si plates of different thicknesses and demonstrate that it can be positive, negative and equal to zero. The effect of illumination of a Si plate with laser light is considered. Possible applications of the obtained results to thin films used in microelectronics are discussed.

  14. Casimir free energy of dielectric films: classical limit, low-temperature behavior and control

    NASA Astrophysics Data System (ADS)

    Klimchitskaya, G. L.; Mostepanenko, V. M.

    2017-07-01

    The Casimir free energy of dielectric films, both free-standing in vacuum and deposited on metallic or dielectric plates, is investigated. It is shown that the values of the free energy depend considerably on whether the calculation approach used neglects or takes into account the dc conductivity of film material. We demonstrate that there are material-dependent and universal classical limits in the former and latter cases, respectively. The analytic behavior of the Casimir free energy and entropy for a free-standing dielectric film at low temperature is found. According to our results, the Casimir entropy goes to zero when the temperature vanishes if the calculation approach with neglected dc conductivity of a film is employed. If the dc conductivity is taken into account, the Casimir entropy takes the positive value at zero temperature, depending on the parameters of a film, i.e. the Nernst heat theorem is violated. By considering the Casimir free energy of SiO2 and Al2O3 films deposited on a Au plate in the framework of two calculation approaches, we argue that physically correct values are obtained by disregarding the role of dc conductivity. A comparison with the well known results for the configuration of two parallel plates is made. Finally, we compute the Casimir free energy of SiO2, Al2O3 and Ge films deposited on high-resistivity Si plates of different thicknesses and demonstrate that it can be positive, negative and equal to zero. The effect of illumination of a Si plate with laser light is considered. Possible applications of the obtained results to thin films used in microelectronics are discussed.

  15. Thermophysical properties of krypton-helium gas mixtures from ab initio pair potentials

    PubMed Central

    2017-01-01

    A new potential energy curve for the krypton-helium atom pair was developed using supermolecular ab initio computations for 34 interatomic distances. Values for the interaction energies at the complete basis set limit were obtained from calculations with the coupled-cluster method with single, double, and perturbative triple excitations and correlation consistent basis sets up to sextuple-zeta quality augmented with mid-bond functions. Higher-order coupled-cluster excitations up to the full quadruple level were accounted for in a scheme of successive correction terms. Core-core and core-valence correlation effects were included. Relativistic corrections were considered not only at the scalar relativistic level but also using full four-component Dirac–Coulomb and Dirac–Coulomb–Gaunt calculations. The fitted analytical pair potential function is characterized by a well depth of 31.42 K with an estimated standard uncertainty of 0.08 K. Statistical thermodynamics was applied to compute the krypton-helium cross second virial coefficients. The results show a very good agreement with the best experimental data. Kinetic theory calculations based on classical and quantum-mechanical approaches for the underlying collision dynamics were utilized to compute the transport properties of krypton-helium mixtures in the dilute-gas limit for a large temperature range. The results were analyzed with respect to the orders of approximation of kinetic theory and compared with experimental data. Especially the data for the binary diffusion coefficient confirm the predictive quality of the new potential. Furthermore, inconsistencies between two empirical pair potential functions for the krypton-helium system from the literature could be resolved. PMID:28595411

  16. Thermophysical properties of krypton-helium gas mixtures from ab initio pair potentials

    NASA Astrophysics Data System (ADS)

    Jäger, Benjamin; Bich, Eckard

    2017-06-01

    A new potential energy curve for the krypton-helium atom pair was developed using supermolecular ab initio computations for 34 interatomic distances. Values for the interaction energies at the complete basis set limit were obtained from calculations with the coupled-cluster method with single, double, and perturbative triple excitations and correlation consistent basis sets up to sextuple-zeta quality augmented with mid-bond functions. Higher-order coupled-cluster excitations up to the full quadruple level were accounted for in a scheme of successive correction terms. Core-core and core-valence correlation effects were included. Relativistic corrections were considered not only at the scalar relativistic level but also using full four-component Dirac-Coulomb and Dirac-Coulomb-Gaunt calculations. The fitted analytical pair potential function is characterized by a well depth of 31.42 K with an estimated standard uncertainty of 0.08 K. Statistical thermodynamics was applied to compute the krypton-helium cross second virial coefficients. The results show a very good agreement with the best experimental data. Kinetic theory calculations based on classical and quantum-mechanical approaches for the underlying collision dynamics were utilized to compute the transport properties of krypton-helium mixtures in the dilute-gas limit for a large temperature range. The results were analyzed with respect to the orders of approximation of kinetic theory and compared with experimental data. Especially the data for the binary diffusion coefficient confirm the predictive quality of the new potential. Furthermore, inconsistencies between two empirical pair potential functions for the krypton-helium system from the literature could be resolved.

  17. On the Anticipatory Aspects of the Four Interactions: what the Known Classical and Semi-Classical Solutions Teach us

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lusanna, Luca

    2004-08-19

    The four (electro-magnetic, weak, strong and gravitational) interactions are described by singular Lagrangians and by the Dirac-Bergmann theory of Hamiltonian constraints. As a consequence, a subset of the original configuration variables are gauge variables, not determined by the equations of motion. Only at the Hamiltonian level is it possible to separate the gauge variables from the deterministic physical degrees of freedom, the Dirac observables, and to formulate a well-posed Cauchy problem for them both in special and general relativity. Then the requirement of causality dictates the choice of retarded solutions at the classical level. However, both the problems of the classical theory of the electron, leading to the choice of (1/2)(retarded + advanced) solutions, and the regularization of quantum field theory, leading to the Feynman propagator, introduce anticipatory aspects. The determination of the relativistic Darwin potential as a semi-classical approximation to the Lienard-Wiechert solution for particles with Grassmann-valued electric charges, regularizing the Coulomb self-energies, shows that these anticipatory effects live beyond the semi-classical approximation (tree level) under the form of radiative corrections, at least for the electro-magnetic interaction. Talk and 'best contribution' at The Sixth International Conference on Computing Anticipatory Systems CASYS'03, Liege, August 11-16, 2003.

  18. Actinic cheilitis: aesthetic and functional comparative evaluation of vermilionectomy using the classic and W-plasty techniques.

    PubMed

    Rossoe, Ed Wilson Tsuneo; Tebcherani, Antonio José; Sittart, José Alexandre; Pires, Mario Cezar

    2011-01-01

    Chronic actinic cheilitis is actinic keratosis located on the vermilion border. Treatment is essential because of the potential for malignant transformation. To evaluate the aesthetic and functional results of vermilionectomy using the classic and W-plasty techniques in actinic cheilitis. In the classic technique, the scar is linear and in the W-plasty one, it is a broken line. 32 patients with clinical and histopathological diagnosis of actinic cheilitis were treated. Out of the 32 patients, 15 underwent the W-plasty technique and 17 underwent the classic one. We evaluated parameters such as scar retraction and functional changes. A statistically significant association between the technique used and scar retraction was found, which was positive when using the classic technique (p = 0.01 with Yates' correction). The odds ratio was calculated at 11.25, i.e., there was a greater chance of retraction in patients undergoing the classic technique. Both techniques revealed no functional changes. We evaluated postoperative complications such as the presence of crusts, dry lips, paresthesia, and suture dehiscence. There was no statistically significant association between complications and the technique used (p = 0.69). We concluded that vermilionectomy using the W-plasty technique shows better cosmetic results and similar complication rates.
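    The two statistics reported above come from a 2×2 table: the odds ratio is the cross-product ratio, and the p-value is from a chi-square test with Yates' continuity correction. A minimal sketch with hypothetical counts (the paper reports OR = 11.25 but not the underlying table, so the numbers below are illustrative only):

    ```python
    # Sketch: odds ratio and Yates-corrected chi-square for a 2x2 table of
    # technique (classic vs W-plasty) against scar retraction. The counts are
    # hypothetical, not the study's data.

    def odds_ratio(a, b, c, d):
        """OR for table [[a, b], [c, d]] (rows: technique; cols: retraction yes/no)."""
        return (a * d) / (b * c)

    def chi2_yates(a, b, c, d):
        """Pearson chi-square statistic with Yates' continuity correction."""
        n = a + b + c + d
        num = n * max(abs(a * d - b * c) - n / 2, 0) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        return num / den

    # hypothetical counts: retraction yes/no under each technique
    a, b = 9, 8     # classic: retraction / no retraction
    c, d = 1, 14    # W-plasty: retraction / no retraction
    print(odds_ratio(a, b, c, d))            # 15.75 for these counts
    print(round(chi2_yates(a, b, c, d), 2))  # 5.93 for these counts
    ```

    The Yates correction subtracts n/2 from |ad − bc| before squaring, which makes the test more conservative for small samples such as this 32-patient series.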

  19. Representational Realism, Closed Theories and the Quantum to Classical Limit

    NASA Astrophysics Data System (ADS)

    de Ronde, Christian

    In this chapter, we discuss the representational realist stance as a pluralist ontic approach to inter-theoretic relationships. Our stance stresses the fact that physical theories require the necessary consideration of a conceptual level of discourse which determines and configures the specific field of phenomena discussed by each particular theory. We will criticize the orthodox line of research which has grounded the analysis of QM in two (Bohrian) metaphysical presuppositions - accepted in the present as dogmas that all interpretations must follow. We will also examine how the orthodox project of "bridging the gap" between the quantum and the classical domains has constrained the possibilities of research, producing only a limited set of interpretational problems which focus on the justification of "classical reality" and exclude the possibility of analyzing non-classical conceptual representations of QM. The representational realist stance introduces two new problems, namely, the superposition problem and the contextuality problem, which consider explicitly the conceptual representation of orthodox QM beyond the mere reference to mathematical structures and measurement outcomes. In the final part of the chapter, we revisit, from a representational realist perspective, the quantum to classical limit and the orthodox claim that this inter-theoretic relation can be explained through the principle of decoherence.

  20. Evaluation of Wall Interference Effects in a Two-Dimensional Transonic Wind Tunnel by Subsonic Linear Theory,

    DTIC Science & Technology

    1979-02-01

    tests were conducted on two geometrically similar models of each of two aerofoil sections - the NACA 0012 and the BGK-1 sections - and covered a...and slotted-wall test sections are corrected for wind tunnel wall interference effects by the application of classical linearized theory. For the...solid wall results, these corrections appear to produce data which are very close to being free of the effects of interference. In the case of

  1. Immunohistochemical testing for colon cancer--what do New Zealand surgeons know?

    PubMed

    Harper, Simon J; McEwen, Alison R; Dennett, Elizabeth R

    2010-11-05

    8-12% of colorectal cancers are associated with genetic syndromes. The most common of these is Lynch syndrome (also known as Hereditary Non-Polyposis Colorectal Cancer). Clinical criteria (the Bethesda criteria) exist that can be used to identify colorectal cancer patients who may benefit from immunohistochemical screening of their tumour for Lynch syndrome. Treating surgeons need to know these criteria in order to request appropriate testing. The aim of this study was to assess the knowledge of New Zealand surgeons about the Bethesda criteria. We conducted a postal survey of all New Zealand General Surgical Fellows of the Royal Australasian College of Surgeons. Of the surgeons returning surveys, 88% knew that screening using immunohistochemistry was available; 7% would not refer an abnormal result to a genetic service. Results of the practice-based questions showed only 45% of respondents knew that a colorectal cancer diagnosed before the age of 50 was one of the Bethesda criteria. The correct response rates for the rest of the survey ranged from 32-96%. Questions about Lynch syndrome-associated cancers returned the fewest correct answers. In general, surgeons are poorly informed about cancers associated with Lynch syndrome. The study demonstrates limited awareness of the Bethesda criteria amongst New Zealand General Surgeons. Those treating colorectal cancer should be aware of the classic features of Lynch syndrome and test appropriately.

  2. Improving Hierarchical Models Using Historical Data with Applications in High-Throughput Genomics Data Analysis.

    PubMed

    Li, Ben; Li, Yunxiao; Qin, Zhaohui S

    2017-06-01

    Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, thus the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available which should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrated the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
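    The shrinkage effect described above can be illustrated with the simplest normal-normal hierarchical model; this is an illustration only, not the paper's actual model, and the numbers are made up:

    ```python
    # Sketch: shrinkage in a normal-normal hierarchical model. Each feature's
    # estimate is pulled toward the prior mean in proportion to how noisy its
    # own data are; with tiny per-feature n this can over-correct an extreme
    # but genuine signal. All parameter values below are illustrative.

    def shrink(y_bar, n, sigma2, mu0, tau2):
        """Posterior mean for one feature under y_ij ~ N(theta, sigma2),
        theta ~ N(mu0, tau2): a precision-weighted average of data and prior."""
        w = (n / sigma2) / (n / sigma2 + 1 / tau2)
        return w * y_bar + (1 - w) * mu0

    # A feature with a large observed effect but only n = 3 replicates is
    # shrunk strongly toward the prior mean mu0 = 0:
    print(shrink(y_bar=4.0, n=3, sigma2=9.0, mu0=0.0, tau2=1.0))   # -> 1.0
    # A wider prior (tau2 = 25), e.g. one informed by historical data
    # suggesting large effects are plausible, shrinks far less:
    print(shrink(y_bar=4.0, n=3, sigma2=9.0, mu0=0.0, tau2=25.0))
    ```

    The second call is the intuition behind using historical data: it informs the prior so that genuine outlying features are not over-corrected.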

  3. Improving Hierarchical Models Using Historical Data with Applications in High-Throughput Genomics Data Analysis

    PubMed Central

    Li, Ben; Li, Yunxiao; Qin, Zhaohui S.

    2016-01-01

    Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, thus the classical ‘large p, small n’ problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available which should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrated the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package “adaptiveHM”, which is freely available from https://github.com/benliemory/adaptiveHM. PMID:28919931

  4. Loop corrections to primordial fluctuations from inflationary phase transitions

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Peng; Yokoyama, Jun'ichi

    2018-05-01

    We investigate loop corrections to the primordial fluctuations in the single-field inflationary paradigm from spectator fields that experience a smooth transition of their vacuum expectation values. We show that when the phase transition involves a classical evolution effectively driven by a negative mass term from the potential, important corrections to the curvature perturbation can be generated by field perturbations that are frozen outside the horizon by the time of the phase transition, yet the correction to tensor perturbation is naturally suppressed by the spatial derivative couplings between spectator fields and graviton. At one-loop level, the dominant channel for the production of primordial fluctuations comes from a pair-scattering of free spectator fields that decay into the curvature perturbations, and this decay process is only sensitive to field masses comparable to the Hubble scale of inflation.

  5. Generalized relative entropies in the classical limit

    NASA Astrophysics Data System (ADS)

    Kowalski, A. M.; Martin, M. T.; Plastino, A.

    2015-03-01

    Our protagonists are (i) the Cressie-Read family of divergences (characterized by the parameter γ), (ii) Tsallis' generalized relative entropies (characterized by the q one), and, as a particular instance of both, (iii) the Kullback-Leibler (KL) relative entropy. In their normalized versions, we ascertain the equivalence between (i) and (ii). Additionally, we employ these three entropic quantifiers in order to provide a statistical investigation of the classical limit of a semiclassical model, whose properties are well known from a purely dynamic viewpoint. This places us in a good position to assess the appropriateness of our statistical quantifiers for describing involved systems. We compare the behaviour of (i), (ii), and (iii) as one proceeds towards the classical limit. We determine optimal ranges for γ and/or q. It is shown that the Tsallis quantifier is better than KL's for 1.5 < q < 2.5.
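    For concreteness, the KL relative entropy and Tsallis' generalized relative entropy for discrete distributions can be sketched as follows; the distributions here are illustrative, not taken from the paper's semiclassical model:

    ```python
    # Sketch: Kullback-Leibler and Tsallis generalized relative entropies
    # between two discrete distributions p and r; D_q reduces to KL as q -> 1.
    import math

    def kl(p, r):
        """Kullback-Leibler relative entropy D(p || r) = sum p_i ln(p_i / r_i)."""
        return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r) if pi > 0)

    def tsallis(p, r, q):
        """Tsallis relative entropy D_q(p || r) = (sum p_i^q r_i^(1-q) - 1) / (q - 1)."""
        s = sum(pi**q * ri**(1 - q) for pi, ri in zip(p, r) if pi > 0)
        return (s - 1) / (q - 1)

    p = [0.5, 0.3, 0.2]
    r = [0.4, 0.4, 0.2]
    print(kl(p, r))
    print(tsallis(p, r, 2.0))      # a q value inside the 1.5 < q < 2.5 range
    print(tsallis(p, r, 1.0001))   # approaches the KL value as q -> 1
    ```

    The last line illustrates why KL is a particular instance of the family: in the limit q → 1 the Tsallis expression converges to the KL relative entropy.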

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Uhi Rinn, E-mail: uhrisu1@math.snu.ac.kr

    We introduce a classical BRST complex (see Definition 3.2) and show that one can construct a classical affine W-algebra via the complex. This definition clarifies that classical affine W-algebras can be considered as quasi-classical limits of quantum affine W-algebras. We also give a definition of a classical affine fractional W-algebra as a Poisson vertex algebra. As in the classical affine case, a classical affine fractional W-algebra has two compatible λ-brackets and is isomorphic to an algebra of differential polynomials as a differential algebra. When a classical affine fractional W-algebra is associated to a minimal nilpotent, we describe explicit forms of free generators and compute λ-brackets between them. Provided some assumptions on a classical affine fractional W-algebra, we find an infinite sequence of integrable systems related to the algebra, using the generalized Drinfel’d and Sokolov reduction.

  7. Reduction of elevated plasma globotriaosylsphingosine in patients with classic Fabry disease following enzyme replacement therapy.

    PubMed

    van Breemen, Mariëlle J; Rombach, Saskia M; Dekker, Nick; Poorthuis, Ben J; Linthorst, Gabor E; Zwinderman, Aeilko H; Breunig, Frank; Wanner, Christoph; Aerts, Johannes M; Hollak, Carla E

    2011-01-01

    Fabry disease is treated by two-weekly infusions with α-galactosidase A, which is deficient in this X-linked globotriaosylceramide (Gb3) storage disorder. Elevated plasma globotriaosylsphingosine (lysoGb3) is a hallmark of classical Fabry disease. We investigated effects of enzyme replacement therapy (ERT) on plasma levels of lysoGb3 and Gb3 in patients with classical Fabry disease treated with agalsidase alfa at 0.2 mg/kg, agalsidase beta at 0.2 mg/kg or at 1.0 mg/kg body weight. Each treatment regimen led to prominent reductions of plasma lysoGb3 in Fabry males within 3 months (P=0.0313), followed by relative stability later on. Many males developed antibodies against α-galactosidase A, particularly those treated with agalsidase beta. Patients with antibodies tended towards smaller correction in plasma lysoGb3 concentration, whereas treatment with high dose agalsidase beta allowed a reduction comparable to patients without antibodies. Pre-treatment plasma lysoGb3 concentrations of Fabry females were relatively low. In all females and with each treatment regimen, ERT gave reduction or stabilisation of plasma lysoGb3. Our investigation revealed that ERT of Fabry patients reduces plasma lysoGb3, regardless of the recombinant enzyme used. This finding shows that ERT can correct a characteristic biochemical abnormality in Fabry patients. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. [Study on expression styles of meridian diseases in the Internal Classic].

    PubMed

    Jia-Jie; Zhao, Jing-sheng

    2007-01-01

    To probe expression styles of meridian diseases in the Internal Classic. Expression styles for meridian diseases in the Internal Classic were classified by using literature study methods. Expression styles of meridian diseases in the Internal Classic include four types, i.e. the twelve meridians, the six channels of the foot, indications of acupoints, and diseases of the zang and fu organs. The recognition by later generations of the meridian diseases in the Lingshu Channels has certain historical limitations.

  9. On the correct representation of bending and axial deformation in the absolute nodal coordinate formulation with an elastic line approach

    NASA Astrophysics Data System (ADS)

    Gerstmayr, Johannes; Irschik, Hans

    2008-12-01

    In finite element methods based on position and slope coordinates, representing axial and bending deformation by means of an elastic line approach has become popular. Beam and plate formulations of this kind, based on the so-called absolute nodal coordinate formulation, have not yet been sufficiently verified against analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature-proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, more seriously, that the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain-energy and bending-moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; the strain energy is therefore derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The example of a large-deformation cantilever beam shows that the normal force is incorrect under the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica is obtained when the proposed modifications are applied. The numerical examples show very good agreement between reference analytical and numerical solutions and the solutions of the proposed beam formulation for large-deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages over classical beam elements regarding energy-momentum conservation.
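    The corrected element is benchmarked against the extensible elastica; for orientation, the strain energy underlying that benchmark has the classical form (a sketch in standard rod-theory notation, not reproduced from the paper: $EA$ and $EI$ are the axial and bending stiffnesses, $\varepsilon$ the axial strain of the centerline, $\kappa$ the material curvature, and $s$ the arc length):

$$
U \;=\; \frac{1}{2}\int_0^L \left( EA\,\varepsilon^2 \;+\; EI\,\kappa^2 \right)\,\mathrm{d}s
$$

    A bending strain measure inconsistent with this energy is precisely what produces the incorrect normal force discussed in the abstract.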

  10. The Biharmonic Oscillator and Asymmetric Linear Potentials: From Classical Trajectories to Momentum-Space Probability Densities in the Extreme Quantum Limit

    ERIC Educational Resources Information Center

    Ruckle, L. J.; Belloni, M.; Robinett, R. W.

    2012-01-01

    The biharmonic oscillator and the asymmetric linear well are two confining power-law-type potentials for which complete bound-state solutions are possible in both classical and quantum mechanics. We examine these problems in detail, beginning with studies of their trajectories in position and momentum space, evaluation of the classical probability…

  11. 40 CFR 146.64 - Corrective action for wells in the area of review.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requiring corrective action other than pressure limitations shall include a compliance schedule requiring... require observance of appropriate pressure limitations under paragraph (d)(3) until all other corrective... have been taken. (3) The Director may require pressure limitations in lieu of plugging. If pressure...

  12. Cognitive Diagnostic Attribute-Level Discrimination Indices

    ERIC Educational Resources Information Center

    Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming

    2008-01-01

    Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…

  13. Computer Simulation of Classic Studies in Psychology.

    ERIC Educational Resources Information Center

    Bradley, Drake R.

    This paper describes DATASIM, a comprehensive software package which generates simulated data for actual or hypothetical research designs. DATASIM is primarily intended for use in statistics and research methods courses, where it is used to generate "individualized" datasets for students to analyze, and later to correct their answers.…

  14. Probing for quantum speedup on D-Wave Two

    NASA Astrophysics Data System (ADS)

    Rønnow, Troels F.; Wang, Zhihui; Job, Joshua; Isakov, Sergei V.; Boixo, Sergio; Lidar, Daniel; Martinis, John; Troyer, Matthias

    2014-03-01

    Quantum speedup refers to the advantage quantum devices can have over classical ones in solving classes of computational problems. In this talk we show how to correctly define and measure quantum speedup in experimental devices. We show how to avoid issues that might mask or fake quantum speedup.
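    The scaling-based definition discussed here can be sketched numerically: the speedup exponent is the slope of log(T_classical/T_quantum) against log(problem size), and a positive slope indicates an asymptotic speedup. A minimal illustration with synthetic timings (our own sketch, not data or code from the talk; the authors also stress that solver parameters must be optimized per problem size):

```python
import numpy as np

def speedup_exponent(sizes, t_classical, t_quantum):
    """Fit the slope of log(speedup) vs log(size).

    speedup(N) = T_classical(N) / T_quantum(N); a positive slope suggests
    a genuine asymptotic quantum speedup, while a constant ratio (slope 0)
    can mask or fake one.
    """
    logs = np.log(np.asarray(t_classical, float) / np.asarray(t_quantum, float))
    logn = np.log(np.asarray(sizes, float))
    slope, _ = np.polyfit(logn, logs, 1)
    return slope

# Synthetic example: classical solver scales as N^2, quantum as N.
sizes = np.array([32, 64, 128, 256, 512])
print(speedup_exponent(sizes, sizes**2.0, sizes**1.0))  # slope ~ 1
```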

  15. A Photonic Basis for Deriving Nonlinear Optical Response

    ERIC Educational Resources Information Center

    Andrews, David L.; Bradshaw, David S.

    2009-01-01

    Nonlinear optics is generally first presented as an extension of conventional optics. Typically the subject is introduced with reference to a classical oscillatory electric polarization, accommodating correction terms that become significant at high intensities. The material parameters that quantify the extent of the nonlinear response are cast as…

  16. Surface hopping simulation of vibrational predissociation of methanol dimer

    NASA Astrophysics Data System (ADS)

    Jiang, Ruomu; Sibert, Edwin L.

    2012-06-01

    The mixed quantum-classical surface hopping method is applied to the vibrational predissociation of methanol dimer, and the results are compared to more exact quantum calculations. Utilizing the vibrational SCF basis, the predissociation problem is cast into a curve crossing problem between dissociative and quasibound surfaces with different vibrational character. The varied features of the dissociative surfaces, arising from the large amplitude OH torsion, generate rich predissociation dynamics. The fewest switches surface hopping algorithm of Tully [J. Chem. Phys. 93, 1061 (1990), 10.1063/1.459170] is applied to both diabatic and adiabatic representations. The comparison affords new insight into the criterion for selecting the suitable representation. The adiabatic method's difficulty with low energy trajectories is highlighted. In the normal crossing case, the diabatic calculations yield good results, albeit with limitations in situations where tunneling is important. The quadratic scaling of the rates with coupling strength is confirmed. An interesting resonance behavior is identified and is dealt with using a simple decoherence scheme. For low lying dissociative surfaces that do not cross the quasibound surface, the diabatic method tends to overestimate the predissociation rate whereas the adiabatic method is qualitatively correct. Analysis reveals that the major culprits involve Rabi-like oscillation, the treatment of classically forbidden hops, and overcoherence. Improvements of the surface hopping results are achieved by a few changes to the original surface hopping algorithms.

  17. Minimum length from quantum mechanics and classical general relativity.

    PubMed

    Calmet, Xavier; Graesser, Michael; Hsu, Stephen D H

    2004-11-19

    We derive fundamental limits on measurements of position, arising from quantum mechanics and classical general relativity. First, we show that any primitive probe or target used in an experiment must be larger than the Planck length lP. This suggests a Planck-size minimum ball of uncertainty in any measurement. Next, we study interferometers (such as LIGO) whose precision is much finer than the size of any individual components and hence are not obviously limited by the minimum ball. Nevertheless, we deduce a fundamental limit on their accuracy of order lP. Our results imply a device independent limit on possible position measurements.

  18. Topics in quantum chaos

    NASA Astrophysics Data System (ADS)

    Jordan, Andrew Noble

    2002-09-01

    In this dissertation, we study the quantum mechanics of classically chaotic dynamical systems. We begin by considering the decoherence effects a quantum chaotic system has on a simple quantum few state system. Typical time evolution of a quantum system whose classical limit is chaotic generates structures in phase space whose size is much smaller than Planck's constant. A naive application of Heisenberg's uncertainty principle indicates that these structures are not physically relevant. However, if we take the quantum chaotic system in question to be an environment which interacts with a simple two state quantum system (qubit), we show that these small phase-space structures cause the qubit to generically lose quantum coherence if and only if the environment has many degrees of freedom, such as a dilute gas. This implies that many-body environments may be crucial for the phenomenon of quantum decoherence. Next, we turn to an analysis of statistical properties of time correlation functions and matrix elements of quantum chaotic systems. A semiclassical evaluation of matrix elements of an operator indicates that the dominant contribution will be related to a classical time correlation function over the energy surface. For a highly chaotic class of dynamics, these correlation functions may be decomposed into sums of Ruelle resonances, which control exponential decay to the ergodic distribution. The theory is illustrated both numerically and theoretically on the Baker map. For this system, we are able to isolate individual Ruelle modes. We further consider dynamical systems whose approach to ergodicity is given by a power law rather than an exponential in time. We propose a billiard with diffusive boundary conditions, whose classical solution may be calculated analytically. We go on to compare the exact solution with an approximation scheme, as well as to calculate asymptotic corrections. Quantum spectral statistics are calculated assuming the validity of the Agam, Altshuler, and Andreev ansatz. We find singular behavior of the two-point spectral correlator in the limit of small spacing. Finally, we analyse the effect that slow decay to ergodicity has on the structure of the quantum propagator, as well as on wavefunction localization. We introduce a statistical quantum description of systems that are composed of both an orderly region and a random region. By averaging over the random region only, we find that measures of localization in momentum space semiclassically diverge with the dimension of the Hilbert space. We illustrate this numerically with quantum maps and suggest various other systems where this behavior should be important.

  19. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interactions of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  20. Parametric interactions in presence of different size colloids in semiconductor quantum plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanshpal, R., E-mail: ravivanshpal@gmail.com; Sharma, Uttam; Dubey, Swati

    2015-07-31

    The present work is an attempt to investigate the effect of different size colloids on parametric interaction in semiconductor quantum plasma. The quantum effect is included in this analysis through a quantum correction term in the classical hydrodynamic model of homogeneous semiconductor plasma. The effect is of purely quantum origin, arising from the quantum Bohm potential and quantum statistics. Colloidal size and the quantum correction term modify the parametric dispersion characteristics of the ion-implanted semiconductor plasma medium. It is found that the quantum effect on colloids is inversely proportional to their size. Moreover, the critical size of implanted colloids for an effective quantum correction is determined, which is found to be equal to the lattice spacing of the crystal.

  1. Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity

    NASA Astrophysics Data System (ADS)

    Esler, J. G.

    2017-01-01

    The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N =6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. Further, in the N →∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N =50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.

  2. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
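    The Kramers-Kronig route to phase retrieval used here relates the phase of the susceptibility to its log-amplitude via a Hilbert transform (under a minimum-phase assumption). A minimal FFT-based sketch of that transform (illustrative only; the function names are ours, and the paper's phase detrending and scaling corrections are not reproduced):

```python
import numpy as np

def hilbert_analytic(x):
    """Analytic signal of a real sequence via the FFT (periodic Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def kk_phase(amplitude):
    """Phase estimate from a measured amplitude spectrum:
    phase = Hilbert transform of log|amplitude| (minimum-phase assumption)."""
    return np.imag(hilbert_analytic(np.log(amplitude)))
```

    Errors in the assumed nonresonant background enter through `amplitude`, which is why the phase detrending step emphasized in the paper matters for sample-to-sample comparability.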

  3. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
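    The Bayesian construction preferred here has a simple numerical form: with a flat prior on the source rate s ≥ 0 and a known mean background b, the posterior for n observed counts is proportional to the Poisson likelihood evaluated at mean s + b, and the confidence limit is the corresponding posterior quantile. A grid-based sketch (our own illustration, not the paper's tabulation code):

```python
import numpy as np

def bayes_upper_limit(n, b, cl=0.95, s_max=50.0, num=200001):
    """Bayesian upper limit on a Poisson source rate.

    n -- observed counts, b -- known mean background counts;
    flat prior on the source rate s >= 0 (posterior truncated at s_max).
    Assumes b > 0 whenever n > 0, to avoid log(0) at s = 0.
    """
    s = np.linspace(0.0, s_max, num)
    # log of the unnormalized posterior: Poisson likelihood with mean s + b
    log_post = n * np.log(s + b) - (s + b) if n > 0 else -(s + b)
    post = np.exp(log_post - np.max(log_post))
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]

# n = 0 observed counts: the 95% limit is -ln(1 - 0.95) ~ 2.996,
# independent of the background level.
print(bayes_upper_limit(0, 0.0))
```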

  4. Quantum Landauer erasure with a molecular nanomagnet

    NASA Astrophysics Data System (ADS)

    Gaudenzi, R.; Burzurí, E.; Maegawa, S.; van der Zant, H. S. J.; Luis, F.

    2018-06-01

    The erasure of a bit of information is an irreversible operation whose minimal entropy production of kB ln 2 is set by the Landauer limit1. This limit has been verified in a variety of classical systems, including particles in traps2,3 and nanomagnets4. Here, we extend it to the quantum realm by using a crystal of molecular nanomagnets as a quantum spin memory and showing that its erasure is still governed by the Landauer principle. In contrast to classical systems, maximal energy efficiency is achieved while preserving fast operation owing to its high-speed spin dynamics. The performance of our spin register in terms of energy-time cost is orders of magnitude better than existing memory devices to date. The result shows that thermodynamics sets a limit on the energy cost of certain quantum operations and illustrates a way to enhance classical computations by using a quantum system.
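    The kB ln 2 bound quoted above is straightforward to evaluate: at temperature T, erasing one bit must dissipate at least kB T ln 2 of heat. A quick numeric check (our own illustration, using the exact SI value of the Boltzmann constant):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_energy(temperature_kelvin):
    """Minimum heat dissipated when erasing one bit at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is ~2.87e-21 J, i.e. about 18 meV.
print(landauer_energy(300.0))
```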

  5. Additive Classical Capacity of Quantum Channels Assisted by Noisy Entanglement.

    PubMed

    Zhuang, Quntao; Zhu, Elton Yechao; Shor, Peter W

    2017-05-19

    We give a capacity formula for the classical information transmission over a noisy quantum channel, with separable encoding by the sender and limited resources provided by the receiver's preshared ancilla. Instead of a pure state, we consider the signal-ancilla pair in a mixed state, purified by a "witness." Thus, the signal-witness correlation limits the resource available from the signal-ancilla correlation. Our formula characterizes the utility of different forms of resources, including noisy or limited entanglement assistance, for classical communication. With separable encoding, the sender's signals across multiple channel uses are still allowed to be entangled, yet our capacity formula is additive. In particular, for generalized covariant channels, our capacity formula has a simple closed form. Moreover, our additive capacity formula upper bounds the general coherent attack's information gain in various two-way quantum key distribution protocols. For Gaussian protocols, the additivity of the formula indicates that the collective Gaussian attack is the most powerful.

  6. Explorations of Space-Charge Limits in Parallel-Plate Diodes and Associated Techniques for Automation

    NASA Astrophysics Data System (ADS)

    Ragan-Kelley, Benjamin

    Space-charge limited flow is a topic of much interest and varied application. We extend existing understanding of space-charge limits by simulations, and develop new tools and techniques for doing these simulations along the way. The Child-Langmuir limit is a simple analytic solution for space-charge limited current density in a one-dimensional diode. It has been previously extended to two dimensions by numerical calculation in planar geometries. By considering an axisymmetric cylindrical system with axial emission from a circular cathode of finite radius r and outer drift tube R > r and gap length L, we further examine the space charge limit in two dimensions. We simulate a two-dimensional axisymmetric parallel plate diode of various aspect ratios (r/L), and develop a scaling law for the measured two-dimensional space-charge limit (2DSCL) relative to the Child-Langmuir limit as a function of the aspect ratio of the diode. These simulations are done with a large (100T) longitudinal magnetic field to restrict electron motion to 1D, with the two-dimensional particle-in-cell simulation code OOPIC. We find a scaling law that is a monotonically decreasing function of this aspect ratio, and the one-dimensional result is recovered in the limit as r >> L. The result is in good agreement with prior results in planar geometry, where the emission area is proportional to the cathode width. We find a weak contribution from the effects of the drift tube for current at the beam edge, and a strong contribution of high current-density "wings" at the outer-edge of the beam, with a very large relative contribution when the beam is narrow. Mechanisms for enhancing current beyond the Child-Langmuir limit remain a matter of great importance. We analyze the enhancement effects of upstream ion injection on the transmitted current in a one-dimensional parallel plate diode. Electrons are field-emitted at the cathode, and ions are injected at a controlled current from the anode. 
An analytic solution is derived for maximizing the electron current throughput in terms of the ion current. This analysis accounts for various energy regimes, from classical to fully relativistic. The analytical result is then confirmed by simulation of the diode in each energy regime. Field-limited emission is an approach for using Gauss's law to satisfy the space charge limit for emitting current in particle-in-cell simulations. We find that simple field-limited emission models make several assumptions, which introduce small, systematic errors in the system. We make a thorough analysis of each assumption, and ultimately develop and test a new emission scheme that accounts for each. The first correction we make is to allow for a non-zero surface field at the boundary. Since traditional field-emission schemes only aim to balance Gauss's law at the surface, a zero surface field is an assumed condition. But for many systems, this is not appropriate, so the addition of a target surface field is made. The next correction is to account for nonzero initial velocity, which, if neglected, results in a systematic underestimation of the current, due to assuming that all emitted charge will be weighted to the boundary, when in fact it will be weighted as a fraction strictly less than unity, depending on the distance across the initial cell the particle travels in its initial fractional timestep. A correction is made to the scheme, to use the actual particle weight to adjust the target emission. The final analyses involve geometric terms, analyzing the effects of cylindrical coordinates, and taking particular care to analyze the center of a cylindrical beam, as well as the outer edge of the beam, in Cartesian coordinates. We find that balancing Gauss's law at the edge of the beam is not the correct behavior, and that it is important to resolve the profile of the emitted current, in order to avoid systematic errors. 
A thorough analysis is done of the assumptions made in prior implementations, and corrections are introduced for cylindrical geometry, non-zero injection velocity, and non-zero surface field. Particular care is taken to determine special conditions for the outermost node, where we find that forcing a balance of Gauss's law would be incorrect. (Abstract shortened by UMI.)
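    The one-dimensional Child-Langmuir limit that the two-dimensional scaling law reduces to for r >> L has the closed form J = (4/9) ε0 sqrt(2e/m) V^(3/2) / d². A numeric sketch of that formula (standard constants; this is our illustration, not the dissertation's simulation code):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q_E = 1.602176634e-19     # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def child_langmuir_j(voltage, gap):
    """1D space-charge-limited current density (A/m^2) for a planar diode.

    voltage -- anode-cathode potential difference in volts,
    gap     -- anode-cathode spacing in meters.
    """
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * Q_E / M_E) * voltage**1.5 / gap**2

# 1 kV across a 1 mm gap: roughly 7.4e4 A/m^2 (about 7.4 A/cm^2).
print(child_langmuir_j(1.0e3, 1.0e-3))
```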

  7. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  8. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

    The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as the classical processing technique. The computation, however, has to be performed off-line because of its expense. An innovative algorithm with real-time capability is proposed in this work. The algorithm resembles a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced considerably, since any defect in a test can be corrected instantaneously.
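    Conventional (delay-and-sum) beamforming, the baseline the observer-based scheme accelerates, can be sketched in the frequency domain: form the cross-spectral matrix from snapshot spectra and scan steering vectors over candidate directions. A minimal far-field example for a linear array (our own illustration, not the paper's algorithm):

```python
import numpy as np

def das_power(snapshots, mic_x, angles_deg, freq, c=343.0):
    """Delay-and-sum output power over scan angles for a linear array.

    snapshots -- (n_mics, n_blocks) complex spectra at one frequency bin,
    mic_x     -- microphone positions along the array axis, in meters.
    """
    k = 2.0 * np.pi * freq / c
    csm = snapshots @ snapshots.conj().T / snapshots.shape[1]  # cross-spectral matrix
    power = []
    for th in np.deg2rad(angles_deg):
        v = np.exp(-1j * k * mic_x * np.sin(th)) / np.sqrt(len(mic_x))
        power.append(np.real(v.conj() @ csm @ v))
    return np.array(power)

# Synthetic plane wave from +20 degrees on an 8-mic array (0.1 m pitch, 1 kHz).
rng = np.random.default_rng(0)
mic_x = np.arange(8) * 0.1
steer = np.exp(-1j * 2 * np.pi * 1000.0 / 343.0 * mic_x * np.sin(np.deg2rad(20.0)))
snaps = steer[:, None] * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
angles = np.arange(-90.0, 90.5, 0.5)
print(angles[np.argmax(das_power(snaps, mic_x, angles, 1000.0))])  # near 20
```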

  9. Classical verification of quantum circuits containing few basis changes

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.; Ouyang, Yingkai; Fitzsimons, Joseph F.

    2018-04-01

    We consider the task of verifying the correctness of quantum computation for a restricted class of circuits which contain at most two basis changes. This contains circuits giving rise to the second level of the Fourier hierarchy, the lowest level for which there is an established quantum advantage. We show that when the circuit has an outcome with probability at least the inverse of some polynomial in the circuit size, the outcome can be checked in polynomial time with bounded error by a completely classical verifier. This verification procedure is based on random sampling of computational paths and is only possible given knowledge of the likely outcome.

  10. Quantum diffusion during inflation and primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pattison, Chris; Assadullahi, Hooshyar; Wands, David

    We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ∼ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.

  11. Quantum diffusion during inflation and primordial black holes

    NASA Astrophysics Data System (ADS)

    Pattison, Chris; Vennin, Vincent; Assadullahi, Hooshyar; Wands, David

    2017-10-01

    We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ~ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donangelo, R.J.

    An integral representation for the classical limit of the quantum mechanical S-matrix is developed and applied to heavy-ion Coulomb excitation and Coulomb-nuclear interference. The method combines the quantum principle of superposition with exact classical dynamics to describe the projectile-target system. A detailed consideration of the classical trajectories and of the dimensionless parameters that characterize the system is carried out. The results are compared, where possible, to exact quantum mechanical calculations and to conventional semiclassical calculations. It is found that in the case of backscattering the classical limit S-matrix method is able to almost exactly reproduce the quantum-mechanical S-matrix elements, and therefore the transition probabilities, even for projectiles as light as protons. The results also suggest that this approach should be a better approximation for heavy-ion multiple Coulomb excitation than earlier semiclassical methods, due to a more accurate description of the classical orbits in the electromagnetic field of the target nucleus. Calculations using this method indicate that the rotational excitation probabilities in the Coulomb-nuclear interference region should be very sensitive to the details of the potential at the surface of the nucleus, suggesting that heavy-ion rotational excitation could constitute a sensitive probe of the nuclear potential in this region. The application to other problems as well as the present limits of applicability of the formalism are also discussed.

  13. Statistical mechanics in the context of special relativity. II.

    PubMed

    Kaniadakis, G

    2005-09-01

    The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.
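    The κ-deformed exponential at the heart of this generalized statistical mechanics has a simple closed form; a minimal numerical sketch (function name and tolerances are ours, not the paper's):

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis kappa-deformed exponential:
    exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k),
    which tends to exp(x) as k -> 0 (the classical limit) and to a
    power law for large |x|, giving the heavy, power-law tails."""
    if kappa == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

# classical limit: small kappa recovers the ordinary exponential
print(abs(kappa_exp(1.0, 1e-6) - math.e) < 1e-5)  # True
```

    The base `sqrt(1 + k^2 x^2) + k x` is strictly positive for all real x, so the fractional power is well defined for negative arguments too.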

  14. Detecting gravitational decoherence with clocks: Limits on temporal resolution from a classical-channel model of gravity

    NASA Astrophysics Data System (ADS)

    Khosla, Kiran E.; Altamirano, Natacha

    2017-05-01

    The notion of time is given a different footing in quantum mechanics and general relativity, treated as a parameter in the former and being an observer-dependent property in the latter. From an operational point of view, time is simply the correlation between a system and a clock, where an idealized clock can be modeled as a two-level system. We investigate the dynamics of clocks interacting gravitationally by treating the gravitational interaction as a classical information channel. This model, known as classical-channel gravity (CCG), postulates that gravity is mediated by a fundamentally classical force carrier and is therefore unable to entangle particles gravitationally. In particular, we focus on the decoherence rates and temporal resolution of arrays of N clocks, showing how the minimum dephasing rate scales with N and with the spatial configuration. Furthermore, we consider the gravitational redshift between a clock and a massive particle and show that a classical-channel model of gravity predicts a finite dephasing rate from the nonlocal interaction. In our model we obtain a fundamental limitation in time accuracy that is intrinsic to each clock.

  15. Signatures of bifurcation on quantum correlations: Case of the quantum kicked top

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.; Santhanam, M. S.

    2017-01-01

    Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.

  16. Classical conformal blocks and accessory parameters from isomonodromic deformations

    NASA Astrophysics Data System (ADS)

    Lencsés, Máté; Novaes, Fábio

    2018-04-01

    Classical conformal blocks appear in the large central charge limit of 2D Virasoro conformal blocks. In the AdS3/CFT2 correspondence, they are related to classical bulk actions and used to calculate entanglement entropy and geodesic lengths. In this work, we discuss the identification of classical conformal blocks and the Painlevé VI action, showing how isomonodromic deformations naturally appear in this context. We recover the accessory parameter expansion of Heun's equation from the isomonodromic τ-function. We also discuss how the c = 1 expansion of the τ-function leads to a novel approach to calculate the 4-point classical conformal block.

  17. Cosmology, Cosmomicrophysics and Gravitation: Properties of the Gravitational Lens Mapping in the Vicinity of a Cusp Caustic

    NASA Astrophysics Data System (ADS)

    Alexandrov, A. N.; Zhdanov, V. I.; Koval, S. M.

    We derive approximate formulas for the coordinates and magnification of critical images of a point source in the vicinity of a cusp caustic arising in the gravitational lens mapping. In the lowest (zero-order) approximation, these formulas were obtained in the classical work by Schneider & Weiss (1992) and then studied by a number of authors; first-order corrections in powers of the proximity parameter were treated by Congdon, Keeton and Nordgren. We have shown that the first-order corrections are solely due to the asymmetry of the cusp. We found expressions for the second-order corrections in the case of a general lens potential and for an arbitrary position of the source near a symmetric cusp. Applications to a lensing galaxy model represented by a singular isothermal sphere with an external shear γ are studied and the role of the second-order corrections is demonstrated.

  18. Phantom of the Hartle–Hawking instanton: Connecting inflation with dark energy

    DOE PAGES

    Chen, Pisin; Qiu, Taotao; Yeom, Dong -han

    2016-02-20

    If the Hartle–Hawking wave function is the correct boundary condition of our universe, the history of our universe will be well approximated by an instanton. Although this instanton should be classicalized at infinity, as long as we are observing a process of each history, we may detect a non-classicalized part of field combinations. When we apply it to a dark energy model, this non-classicalized part of fields can be well embedded to a quintessence and a phantom model, i.e., a quintom model. Because of the property of complexified instantons, the phantomness will be naturally free from a big rip singularity. This phantomness does not cause perturbative instabilities, as it is an effect emergent from the entire wave function. Lastly, our work may thus provide a theoretical basis for the quintom models, whose equation of state can cross the cosmological constant boundary phenomenologically.

  19. Cascading and local-field effects in non-linear optics revisited: a quantum-field picture based on exchange of photons.

    PubMed

    Bennett, Kochise; Mukamel, Shaul

    2014-01-28

    The semi-classical theory of radiation-matter coupling misses local-field effects that may alter the pulse time-ordering and cascading that leads to the generation of new signals. These are then introduced macroscopically by solving Maxwell's equations. This procedure is convenient and intuitive but ad hoc. We show that both effects emerge naturally by including coupling to quantum modes of the radiation field that are initially in the vacuum state to second order. This approach is systematic and suggests a more general class of corrections that only arise in a QED framework. In the semi-classical theory, which only includes classical field modes, the susceptibility of a collection of N non-interacting molecules is additive and scales as N. Second-order coupling to a vacuum mode generates an effective retarded interaction that leads to cascading and local field effects both of which scale as N(2).

  20. Change in the diagnosis from classical Hodgkin's lymphoma to anaplastic large cell lymphoma by (18)F fluorodeoxyglucose positron emission tomography/computed tomography: Importance of recognising disease pattern on imaging and immunohistochemistry.

    PubMed

    Senthil, Raja; Mohapatra, Ranjan Kumar; Sampath, Mouleeswaran Koramadai; Sundaraiya, Sumati

    2016-01-01

    Anaplastic large cell lymphoma (ALCL) is a rare type of non-Hodgkin's lymphoma (NHL), but one of the most common subtypes of T-cell lymphoma. It is an aggressive T-cell lymphoma, and some ALCL may mimic the less aggressive classical Hodgkin's lymphoma (HL) histopathologically. It may be misdiagnosed unless careful immunohistochemical examination is performed. As the prognosis and management of these two lymphomas differ significantly, it is important to make a correct diagnosis. We describe a patient diagnosed with classical HL on histopathological examination of a cervical lymph node, in whom (18)F-fluorodeoxyglucose positron emission tomography/computed tomography appearances were unusual for HL and warranted review of the histopathology, which revealed anaplastic lymphoma kinase-1-negative anaplastic large T-cell lymphoma, Hodgkin-like variant, thereby changing the management.

  1. An in vitro study to evaluate the difference in shade between commercially available shade guides and glazed porcelain.

    PubMed

    Manimaran, P; Sadan, D Sai

    2016-10-01

    A smile is one of the most important interactive communication skills of a person and a key factor in an aesthetic appearance. Aesthetics is therefore one of the motivating factors for patients to seek dental care, and correction of an unaesthetic appearance has a positive effect on the patient's self-esteem. The aim of this study was to compare the difference in shade between the commercially available shade guides, namely Vita Classical and Ivoclar Chromascop, and fired porcelain samples fabricated using Vita Zahnfabrik VMK 95 and Ivoclar Classic materials, respectively. The objective was to identify, among the brands used, the material that best matches a particular shade tab. To conclude, the Ivoclar material matched the Chromascop shade guide better than the Vita material matched the Vita Classical shade guide.

  2. The energy separation between the classical and nonclassical isomers of protonated acetylene - An extensive study in one- and n-particle space saturation

    NASA Technical Reports Server (NTRS)

    Lindh, Roland; Rice, Julia E.; Lee, Timothy J.

    1991-01-01

    The energy separation between the classical and nonclassical forms of protonated acetylene has been reinvestigated in light of the recent experimentally deduced lower bound to this value of 6.0 kcal/mol. The objective of the present study is to use state-of-the-art ab initio quantum mechanical methods to establish this energy difference to within chemical accuracy (i.e., about 1 kcal/mol). The one-particle basis sets include up to g-type functions and the electron correlation methods include single and double excitation coupled-cluster (CCSD), the CCSD(T) extension, multireference configuration interaction, and the averaged coupled-pair functional methods. A correction for zero-point vibrational energies has also been included, yielding a best estimate for the energy difference between the classical and nonclassical forms of 3.7 ± 1.3 kcal/mol.

  3. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation with QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU-accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
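    QSPIN itself is a Java API; as a language-neutral illustration of the classical simulated-annealing QUBO baseline it compares against, here is a hypothetical single-flip annealer in Python (all names, schedule, and parameters are ours, not QSPIN's):

```python
import math
import random

def qubo_energy(Q, x):
    """E(x) = sum over pairs (i, j) of Q[i, j] * x[i] * x[j], x binary."""
    return sum(q * x[i] * x[j] for (i, j), q in Q.items())

def anneal(Q, n, steps=5000, t0=2.0, seed=1):
    """Single-flip simulated annealing with a linear temperature schedule."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = qubo_energy(Q, x)
    best, best_e = x[:], e
    for step in range(steps):
        t = max(t0 * (1.0 - step / steps), 1e-3)
        i = rng.randrange(n)
        x[i] ^= 1                      # propose a single bit flip
        e_new = qubo_energy(Q, x)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                  # accept (Metropolis criterion)
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                  # reject: undo the flip
    return best, best_e

# tiny instance: minimum energy -1 at x = (1, 0) or (0, 1)
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
print(anneal(Q, 2)[1])  # -1.0
```

    Real solvers would add restarts, tabu moves, or incremental energy updates; this sketch only shows the core accept/reject loop.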

  4. Phantom of the Hartle-Hawking instanton: connecting inflation with dark energy

    NASA Astrophysics Data System (ADS)

    Chen, Pisin; Qiu, Taotao; Yeom, Dong-han

    2016-02-01

    If the Hartle-Hawking wave function is the correct boundary condition of our universe, the history of our universe will be well approximated by an instanton. Although this instanton should be classicalized at infinity, as long as we are observing a process of each history, we may detect a non-classicalized part of field combinations. When we apply it to a dark energy model, this non-classicalized part of fields can be well embedded to a quintessence and a phantom model, i.e., a quintom model. Because of the property of complexified instantons, the phantomness will be naturally free from a big rip singularity. This phantomness does not cause perturbative instabilities, as it is an effect emergent from the entire wave function. Our work may thus provide a theoretical basis for the quintom models, whose equation of state can cross the cosmological constant boundary phenomenologically.

  5. Exploring Ultrahigh-Intensity Laser-Plasma Interaction Physics with QED Particle-in-Cell Simulations

    NASA Astrophysics Data System (ADS)

    Luedtke, S. V.; Yin, L.; Labun, L. A.; Albright, B. J.; Stark, D. J.; Bird, R. F.; Nystrom, W. D.; Hegelich, B. M.

    2017-10-01

    Next generation high-intensity lasers are reaching intensity regimes where new physics becomes important: quantum electrodynamics (QED) corrections to otherwise classical plasma dynamics. Modeling laser-plasma interactions in these extreme settings presents a challenge to traditional particle-in-cell (PIC) codes, which either do not have radiation reaction or include only classical radiation reaction. We discuss a semi-classical approach to adding quantum radiation reaction and photon production to the PIC code VPIC. We explore these intensity regimes with VPIC, compare with results from the PIC code PSC, and report on ongoing work to expand the capability of VPIC in these regimes. This work was supported by the U.S. DOE, Los Alamos National Laboratory Science program, LDRD program, NNSA (DE-NA0002008), and AFOSR (FA9550-14-1-0045). HPC resources provided by TACC, XSEDE, and LANL Institutional Computing.

  6. Quantum localization of classical mechanics

    NASA Astrophysics Data System (ADS)

    Batalin, Igor A.; Lavrov, Peter M.

    2016-07-01

    Quantum localization of classical mechanics within the BRST-BFV and BV (or field-antifield) quantization methods is studied. It is shown that a special choice of gauge fixing functions (or of the BRST-BFV charge) together with the unitary limit leads to Hamiltonian localization in the path integral of the BRST-BFV formalism. In turn, we find that a special choice of gauge fixing functions proportional to extremals of an initial non-degenerate classical action, together with a very special solution of the classical master equation, results in Lagrangian localization in the partition function of the BV formalism.

  7. Emergent Geometry from Entropy and Causality

    NASA Astrophysics Data System (ADS)

    Engelhardt, Netta

    In this thesis, we investigate the connections between the geometry of spacetime and aspects of quantum field theory such as entanglement entropy and causality. This work is motivated by the idea that spacetime geometry is an emergent phenomenon in quantum gravity, and that the physics responsible for this emergence is fundamental to quantum field theory. Part I of this thesis is focused on the interplay between spacetime and entropy, with a special emphasis on entropy due to entanglement. In general spacetimes, there exist locally-defined surfaces sensitive to the geometry that may act as local black hole boundaries or cosmological horizons; these surfaces, known as holographic screens, are argued to have a connection with the second law of thermodynamics. Holographic screens obey an area law, suggestive of an association with entropy; they are also distinguished surfaces from the perspective of the covariant entropy bound, a bound on the total entropy of a slice of the spacetime. This construction is shown to be quite general, and is formulated in both classical and perturbatively quantum theories of gravity. The remainder of Part I uses the Anti-de Sitter/ Conformal Field Theory (AdS/CFT) correspondence to both expand and constrain the connection between entanglement entropy and geometry. The AdS/CFT correspondence posits an equivalence between string theory in the "bulk" with AdS boundary conditions and certain quantum field theories. In the limit where the string theory is simply classical General Relativity, the Ryu-Takayanagi and more generally, the Hubeny-Rangamani-Takayanagi (HRT) formulae provide a way of relating the geometry of surfaces to entanglement entropy. A first-order bulk quantum correction to HRT was derived by Faulkner, Lewkowycz and Maldacena. This formula is generalized to include perturbative quantum corrections in the bulk at any (finite) order. 
Hurdles to spacetime emergence from entanglement entropy as described by HRT and its quantum generalizations are discussed, both in the classical and perturbatively quantum limits. In particular, several No Go Theorems are proven, indicative of a conclusion that supplementary approaches or information may be necessary to recover the full spacetime geometry. Part II of this thesis involves the relation between geometry and causality, the property that information cannot travel faster than light. Requiring this of any quantum field theory results in constraints on string theory setups that are dual to quantum field theories via the AdS/CFT correspondence. At the level of perturbative quantum gravity, it is shown that causality in the field theory constrains the causal structure in the bulk. At the level of nonperturbative quantum string theory, we find that constraints on causal signals restrict the possible ways in which curvature singularities can be resolved in string theory. Finally, a new program of research is proposed for the construction of bulk geometry from the divergences of correlation functions in the dual field theory. This divergence structure is linked to the causal structure of the bulk and of the field theory.

  8. A modified elastance model to control mock ventricles in real-time: numerical and experimental validation.

    PubMed

    Colacino, Francesco Maria; Moscato, Francesco; Piedimonte, Fabio; Danieli, Guido; Nicosia, Salvatore; Arabia, Maurizio

    2008-01-01

    This article describes an elastance-based mock ventricle able to reproduce the correct ventricular pressure-volume relationship and its correct interaction with the hydraulic circuit connected to it. Real-time control of the mock ventricle was obtained with a new left ventricular mathematical model in which resistive and inductive terms are added to the classical Suga-Sagawa elastance model throughout the whole cardiac cycle. A valved piston pump was used to mimic the left ventricle. The pressure measured in the pump chamber was fed back into the mathematical model, and the calculated reference left ventricular volume was used to drive the piston. Results show that the classical model is very sensitive to pressure disturbances, especially during the filling phase, while the modified model is able to filter out the oscillations, thus eliminating their detrimental effects. The presented model is thus suitable for controlling mock ventricles in real-time, where sudden pressure disturbances represent a key issue and are not negligible. This real-time controlled mock ventricle is able to reproduce the elastance mechanism of a natural ventricle by mimicking its preload (mean atrial pressure) and afterload (mean aortic pressure) sensitivity, i.e., the Starling law. Therefore, it can be used for designing and testing cardiovascular prostheses due to its capability to reproduce the correct ventricle-vascular system interaction.
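    The classical Suga-Sagawa relation that the modified model extends is P(t) = E(t)·(V(t) − V0). A sketch with an assumed raised-cosine activation waveform and made-up parameter values (not the article's identified model), adding only the resistive term and omitting the inductive one:

```python
import math

def elastance(t, T=0.8, t_sys=0.3, e_min=0.06, e_max=2.5):
    """Time-varying elastance E(t) [mmHg/ml] over a cardiac period T [s]:
    raised-cosine activation during systole (0..t_sys), diastolic
    baseline e_min otherwise. Waveform and values are illustrative."""
    ts = t % T
    if ts < t_sys:
        act = 0.5 * (1.0 - math.cos(2.0 * math.pi * ts / t_sys))
    else:
        act = 0.0
    return e_min + (e_max - e_min) * act

def ventricular_pressure(t, v, v0=10.0, r=0.05, dv_dt=0.0):
    """Elastance pressure plus a resistive correction (inductive term
    omitted for brevity): P = E(t)*(V - V0) - R*dV/dt."""
    return elastance(t) * (v - v0) - r * dv_dt
```

    In the classical model the resistive term is absent (r = 0), which is what makes it sensitive to pressure disturbances during filling.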

  9. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated steps in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classical method of rotating-object imaging, the two reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to uncouple the coupling term when conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a scattering point can be chosen. Envelope alignment and phase compensation are subsequently conducted using the selected scattering point as the common reference point. The keystone transform is then smoothly applied to further improve imaging quality. Both simulation experiments and real-data processing are provided to demonstrate the performance of the proposed method compared with the classical method.
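    The classical envelope-alignment step that the joint method builds on can be sketched as integer-lag cross-correlation of each range profile against a reference profile (a simplified NumPy illustration, not the paper's PPP algorithm):

```python
import numpy as np

def align_profiles(profiles, ref_index=0):
    """Shift each range profile so its envelope lines up with a reference
    profile, using the peak of the full cross-correlation (integer lags)."""
    ref = profiles[ref_index]
    n = len(ref)
    aligned = []
    for p in profiles:
        # peak index minus (n - 1) gives the lag of p relative to ref
        lag = int(np.argmax(np.correlate(p, ref, mode="full"))) - (n - 1)
        aligned.append(np.roll(p, -lag))  # undo the estimated misalignment
    return np.array(aligned)

# synthetic profiles: the same pulse at different range-bin offsets
base = np.exp(-0.5 * ((np.arange(64) - 20.0) / 2.0) ** 2)
stack = np.array([np.roll(base, s) for s in (0, 3, -5, 7)])
aligned = align_profiles(stack)
print({int(np.argmax(row)) for row in aligned})  # {20}
```

    Subpixel alignment and phase compensation would follow in a real pipeline; this shows only the envelope step.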

  10. Calculating Probabilistic Distance to Solution in a Complex Problem Solving Domain

    ERIC Educational Resources Information Center

    Sudol, Leigh Ann; Rivers, Kelly; Harris, Thomas K.

    2012-01-01

    In complex problem solving domains, correct solutions are often comprised of a combination of individual components. Students usually go through several attempts, each attempt reflecting an individual solution state that can be observed during practice. Classic metrics to measure student performance over time rely on counting the number of…

  11. Li+ solvation and kinetics of Li+–BF4-/PF6- ion pairs in ethylene carbonate. A molecular dynamics study with classical rate theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Tsun-Mei; Dang, Liem X.

    Using our polarizable force-field models and employing classical rate theories of chemical reactions, we examine in this paper the ethylene carbonate (EC) exchange process between the first and second solvation shells around Li+ and the dissociation kinetics of the ion pairs Li+–[BF4] and Li+–[PF6] in this solvent. We calculate the exchange rates using transition state theory and correct them with transmission coefficients computed by the reactive flux, Impey, Madden, and McDonald approaches, and Grote-Hynes theory. We found that the residence times of EC around Li+ ions varied from 60 to 450 ps, depending on the correction method used. We found that the relaxation times changed significantly from Li+–[BF4] to Li+–[PF6] ion pairs in EC. Finally, our results also show that, in addition to affecting the free energy of dissociation in EC, the anion type also significantly influences the dissociation kinetics of ion pairing.
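    The correction scheme used here is, schematically, k = κ·k_TST: a transmission coefficient κ (from reactive flux or Grote-Hynes theory) multiplies the transition-state-theory estimate. A numerical sketch with illustrative barrier values, not the paper's computed ones:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def k_tst(dg_act, temp=298.15):
    """Transition-state-theory rate constant (1/s) for a free-energy
    barrier dg_act in J/mol (Eyring form)."""
    return (KB * temp / H) * math.exp(-dg_act / (R * temp))

def k_corrected(dg_act, kappa, temp=298.15):
    """Dynamically corrected rate: k = kappa * k_TST, where kappa <= 1
    accounts for barrier recrossings missed by TST."""
    return kappa * k_tst(dg_act, temp)

def residence_time_ps(rate):
    """Mean residence time in picoseconds from a first-order rate in 1/s."""
    return 1e12 / rate
```

    Different estimates of κ for the same barrier thus translate directly into the spread of residence times the abstract reports.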

  12. Li+ solvation and kinetics of Li+–BF4-/PF6- ion pairs in ethylene carbonate. A molecular dynamics study with classical rate theories

    DOE PAGES

    Chang, Tsun-Mei; Dang, Liem X.

    2017-07-19

    Using our polarizable force-field models and employing classical rate theories of chemical reactions, we examine in this paper the ethylene carbonate (EC) exchange process between the first and second solvation shells around Li+ and the dissociation kinetics of the ion pairs Li+–[BF4] and Li+–[PF6] in this solvent. We calculate the exchange rates using transition state theory and correct them with transmission coefficients computed by the reactive flux, Impey, Madden, and McDonald approaches, and Grote-Hynes theory. We found that the residence times of EC around Li+ ions varied from 60 to 450 ps, depending on the correction method used. We found that the relaxation times changed significantly from Li+–[BF4] to Li+–[PF6] ion pairs in EC. Finally, our results also show that, in addition to affecting the free energy of dissociation in EC, the anion type also significantly influences the dissociation kinetics of ion pairing.

  13. Simple model dielectric functions for insulators

    NASA Astrophysics Data System (ADS)

    Vos, Maarten; Grande, Pedro L.

    2017-05-01

    The Drude dielectric function is a simple way of describing, in a classical way, the dielectric function of free electron materials, which have a uniform electron density. The Mermin dielectric function also describes a free electron gas, but is based on quantum physics. More complex metals have varying electron densities and are often described by a sum of Drude dielectric functions, the weight of each function being taken proportional to the volume with the corresponding density. Here we describe a slight variation on the Drude dielectric function that describes insulators in a semi-classical way, and a form of the Levine-Louie dielectric function including a relaxation time that does the same within the framework of quantum physics. In the optical limit the semi-classical description of an insulator and the quantum physics description coincide, in the same way as the Drude and Mermin dielectric functions coincide in the optical limit for metals. There is a simple relation between the coefficients used in the classical and quantum approaches, a relation that ensures that the obtained dielectric function corresponds to the right static refractive index. For water we give a comparison of the model dielectric function at non-zero momentum with inelastic X-ray measurements, both at relatively small momenta and in the Compton limit. The Levine-Louie dielectric function including a relaxation time describes the spectra at small momentum quite well, but in the Compton limit there are significant deviations.
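    The classical Drude form referred to above is ε(ω) = 1 − ω_p²/(ω² + iγω), with plasma frequency ω_p and damping γ; a minimal sketch (function name and test values are illustrative):

```python
def drude_epsilon(w, w_p, gamma):
    """Classical Drude dielectric function for a free-electron material:
    eps(w) = 1 - w_p^2 / (w^2 + i*gamma*w),
    with plasma frequency w_p and damping gamma in the same units as w."""
    return 1.0 - w_p**2 / (w * (w + 1j * gamma))

# far above the plasma frequency the free-electron response vanishes: eps -> 1
print(abs(drude_epsilon(100.0, 1.0, 0.1) - 1.0) < 1e-3)  # True
```

    The insulator variant discussed in the paper modifies this form to move the response to a finite gap frequency; the undamped metal limit γ → 0 gives Re ε = 0 exactly at ω = ω_p.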

  14. SU-F-R-32: Evaluation of MRI Acquisition Parameter Variations On Texture Feature Extraction Using ACR Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Y; Wang, J; Wang, C

    Purpose: To investigate the sensitivity of classic texture features to variations of MRI acquisition parameters. Methods: This study was performed on the American College of Radiology (ACR) MRI Accreditation Program Phantom. MR imaging was acquired on a GE 750 3T scanner with XRM gradient, employing T1-weighted images (TR/TE = 500/20 ms) with the following parameters as the reference standard: number of signal averages (NEX) = 1, matrix size = 256×256, flip angle = 90°, slice thickness = 5 mm. The effect of the acquisition parameters on texture features with and without non-uniformity correction was investigated, while all the other parameters were kept as in the reference standard. Protocol parameters were set as follows: (a) NEX = 0.5, 2 and 4; (b) phase encoding steps = 128, 160 and 192; (c) matrix size = 128×128, 192×192 and 512×512. 32 classic texture features were generated using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM) from each image data set. The normalized range ((maximum − minimum)/mean) was calculated to determine variation among the scans with different protocol parameters. Results: For different NEX, 31 out of 32 texture features' ranges are within 10%. For different phase encoding steps, 31 out of 32 texture features' ranges are within 10%. For different acquisition matrix sizes without non-uniformity correction, 14 out of 32 texture features' ranges are within 10%; with non-uniformity correction, 16 out of 32 texture features' ranges are within 10%. Conclusion: Initial results indicated that the texture features whose range is within 10% are less sensitive to variations in T1-weighted MRI acquisition parameters. This might suggest that certain texture features might be more reliable to use as potential biomarkers in MR quantitative image analysis.
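    The stability metric used in this study is simple to state in code; a sketch with hypothetical feature values, not the study's measurements:

```python
def normalized_range(values):
    """Normalized range (max - min) / mean of a texture feature measured
    across repeated scans; features staying within 0.10 were considered
    robust to the acquisition-parameter change."""
    mean = sum(values) / len(values)
    return (max(values) - min(values)) / mean

# hypothetical GLCM contrast values measured under three NEX settings
print(normalized_range([9.8, 10.0, 10.2]))  # ~0.04, i.e. within the 10% cut
```

    A metric of 0 means the feature was identical across protocols; values above 0.10 flag features too sensitive for use as biomarkers under that protocol variation.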

  15. Report on the Implementation of Homogeneous Nucleation Scheme in MARMOT-based Phase Field Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yulan; Hu, Shenyang Y.; Sun, Xin

    2013-09-30

    In this report, we summarize our effort in developing mesoscale phase field models for predicting precipitation kinetics in alloys during thermal aging and/or under irradiation in nuclear reactors. The first part focuses on developing a method to predict the thermodynamic properties of critical nuclei, such as their sizes and concentration profiles, and the nucleation barrier. These properties are crucial for quantitative simulations of precipitate evolution kinetics with phase field models. The Fe-Cr alloy was chosen as a model alloy because it has valid thermodynamic and kinetic data and because it is an important structural material in nuclear reactors. A constrained shrinking dimer dynamics (CSDD) method was developed to search for the minimum energy path during nucleation. With this method we are able to predict the concentration profiles of the critical nuclei of Cr-rich precipitates and the nucleation energy barriers. Simulations showed that the Cr concentration distribution in the critical nucleus strongly depends on the overall Cr concentration as well as on temperature. The Cr concentration inside the critical nucleus is much smaller than the equilibrium concentration given by the equilibrium phase diagram. This implies that a non-classical nucleation theory should be used to treat the nucleation of Cr precipitates in Fe-Cr alloys. The growth kinetics of both classical and non-classical nuclei was investigated by the phase field approach. A number of interesting phenomena were observed in the simulations: 1) a critical classical nucleus first shrinks toward its non-classical counterpart and then grows; 2) a non-classical nucleus has much slower growth kinetics at the earlier growth stage compared to diffusion-controlled growth kinetics; and 3) a critical classical nucleus grows faster at the earlier growth stage than the non-classical nucleus.
All of these results demonstrate that it is critical to introduce the correct critical nuclei into phase field modeling in order to correctly capture the kinetics of precipitation. In most alloys the matrix phase and precipitate phase have different concentrations as well as different crystal structures. For example, Cu precipitates in Fe-Cu alloys have an fcc crystal structure while the matrix Fe-Cu solid solution has a bcc structure at low temperature. The WBM model and the KimS model, in which both concentrations and order parameters are chosen to describe the microstructures, are commonly used to model precipitation in such alloys. The WBM and KimS models had not yet been implemented in Marmot. In the second part of this report, we focus on implementing the WBM and KimS models in Marmot. Fe-Cu alloys, which are important structural materials in nuclear reactors, were taken as the model system to test the implementations.

  16. Does Planck really rule out monomial inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Kari; Karčiauskas, Mindaugas, E-mail: kari.enqvist@helsinki.fi, E-mail: mindaugas.karciauskas@helsinki.fi

    2014-02-01

    We consider the modifications of monomial chaotic inflation models due to radiative corrections induced by inflaton couplings to bosons and/or fermions necessary for reheating. To lowest order, ignoring gravitational corrections and treating the inflaton as a classical background field, they are of the Coleman-Weinberg type and parametrized by the renormalization scale μ. In cosmology, there are not enough measurements to fix μ, so we end up with a family of models, each having a slightly different slope of the potential. We demonstrate by explicit calculation that within the family of chaotic φ² models, some may be ruled out by Planck whereas others remain perfectly viable. In contrast, radiative corrections do not seem to help chaotic φ⁴ models meet the Planck constraints.

  17. The Ponzano-Regge Model and Parametric Representation

    NASA Astrophysics Data System (ADS)

    Li, Dan

    2014-04-01

    We give a parametric representation of the effective noncommutative field theory derived from a -deformation of the Ponzano-Regge model and define a generalized Kirchhoff polynomial with -correction terms, obtained in a -linear approximation. We then consider the corresponding graph hypersurfaces and the question of how the presence of the correction term affects their motivic nature. We look in particular at the tetrahedron graph, which is the basic case of relevance to quantum gravity. With the help of computer calculations, we verify that the number of points over finite fields of the corresponding hypersurface does not fit a polynomial with integer coefficients; hence the hypersurface of the tetrahedron is not polynomially countable. This shows that the correction term can significantly change the motivic properties of the hypersurfaces with respect to the classical case.

  18. Hysteresis and thermal limit cycles in MRI simulations of accretion discs

    NASA Astrophysics Data System (ADS)

    Latter, H. N.; Papaloizou, J. C. B.

    2012-10-01

    The recurrent outbursts that characterize low-mass binary systems reflect thermal state changes in their associated accretion discs. The observed outbursts are connected to the strong variation in disc opacity as hydrogen ionizes near 5000 K. This physics leads to accretion disc models that exhibit bistability and thermal limit cycles, whereby the disc jumps between a family of cool and low-accreting states and a family of hot and efficiently accreting states. Previous models have parametrized the disc turbulence via an alpha (or 'eddy') viscosity. In this paper we treat the turbulence more realistically via a suite of numerical simulations of the magnetorotational instability (MRI) in local geometry. Radiative cooling is included via a simple but physically motivated prescription. We show the existence of bistable equilibria and thus the prospect of thermal limit cycles, and in so doing demonstrate that MRI-induced turbulence is compatible with the classical theory. Our simulations also show that the turbulent stress and pressure perturbations are only weakly dependent on each other on orbital times; as a consequence, thermal instability connected to variations in turbulent heating (as opposed to radiative cooling) is unlikely to operate, in agreement with previous numerical results. Our work presents a first step towards unifying simulations of full magnetohydrodynamic turbulence with the correct thermal and radiative physics of the outbursting discs associated with dwarf novae, low-mass X-ray binaries and possibly young stellar objects.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paz-Soldan, C.; La Haye, R. J.; Shiraki, D.

    DIII-D plasmas at very low density exhibit onset of n=1 error field (EF) penetration (the 'low-density locked mode') not at a critical density or EF, but instead at a critical level of runaway electron (RE) intensity. Raising the density during a discharge does not avoid EF penetration, so long as RE growth proceeds to the critical level. Penetration is preceded by non-thermalization of the electron cyclotron emission, anisotropization of the total pressure, synchrotron emission shape changes, as well as decreases in the loop voltage and bulk thermal electron temperature. The same phenomena occur despite various types of optimal EF correction, and in some cases modes are born rotating. Similar phenomena are also found at the low-density limit in JET. These results stand in contrast to the conventional interpretation of the low-density stability limit as being due to residual EFs and demonstrate a new pathway to EF penetration instability due to REs. Existing scaling laws for penetration project to increasing EF sensitivity as bulk temperatures decrease, though other possible mechanisms include classical tearing instability, thermo-resistive instability, and pressure-anisotropy driven instability. Regardless of the first-principles mechanism, known scaling laws for Ohmic energy confinement combined with theoretical RE production rates allow rough extrapolation of the RE criticality condition, and thus the low-density limit, to other tokamaks. Furthermore, the low-density limit extrapolated by this pathway decreases with increasing machine size and is considerably below the expected operating conditions for ITER. While likely unimportant for ITER, this effect can explain the low-density limit of existing tokamaks operating with small residual EFs.

  20. Efficient energy transfer in light-harvesting systems: Quantum-classical comparison, flux network, and robustness analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Jianlan; Department of Chemistry, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, Massachusetts 02139; Liu Fan

    2012-11-07

    Following the calculation of optimal energy transfer in a thermal environment in our first paper [J. L. Wu, F. Liu, Y. Shen, J. S. Cao, and R. J. Silbey, New J. Phys. 12, 105012 (2010)], full quantum dynamics and leading-order 'classical' hopping kinetics are compared in the seven-site Fenna-Matthews-Olson (FMO) protein complex. The difference between these two dynamic descriptions is due to higher-order quantum corrections. Two thermal bath models, classical white noise (the Haken-Strobl-Reineker (HSR) model) and the quantum Debye model, are considered. In the seven-site FMO model, we observe that higher-order corrections lead to negligible changes in the trapping time or in the energy transfer efficiency around the optimal and physiological conditions (2% in the HSR model and 0.1% in the quantum Debye model for the initial site at BChl 1). However, using the concept of integrated flux, we can identify significant differences in the branching probabilities of the energy transfer network between hopping kinetics and quantum dynamics (26% in the HSR model and 32% in the quantum Debye model for the initial site at BChl 1). This observation indicates that quantum coherence can significantly change the distribution of energy transfer pathways in the flux network while leaving the efficiency nearly the same. The quantum-classical comparison of the average trapping time with the removal of the bottleneck site, BChl 4, demonstrates the robustness of the efficient energy transfer by the mechanism of multi-site quantum coherence. To reconcile with the latest eight-site FMO model, which is also investigated in the third paper [J. Moix, J. L. Wu, P. F. Huo, D. F. Coker, and J. S. Cao, J. Phys. Chem. Lett. 2, 3045 (2011)], the quantum-classical comparison with the flux network analysis is summarized in Appendix C. The eight-site FMO model yields a trapping time and network structure similar to those of the seven-site FMO model but leads to a more dispersed distribution of energy transfer pathways.

  1. Simple proof of the quantum benchmark fidelity for continuous-variable quantum devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Namiki, Ryo

    2011-04-15

    An experimental success criterion for continuous-variable quantum teleportation and memory is to surpass the limit of the average fidelity achieved by classical measure-and-prepare schemes with respect to a Gaussian-distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of state-channel duality and partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.

  2. 75 FR 37738 - 1-Naphthaleneacetic Acid; Time-Limited Tolerance, Technical Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-30

    ...-Naphthaleneacetic Acid; Time-Limited Tolerance, Technical Correction AGENCY: Environmental Protection Agency (EPA..., ethylene oxide, fenvalerate, et al.; tolerance actions. Today's rule restores the time-limited tolerance...-3) establishing a time-limited tolerance for residues of 1-naphthaleneacetic acid ethyl ester in or...

  3. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  4. Quantum localization for a kicked rotor with accelerator mode islands.

    PubMed

    Iomin, A; Fishman, S; Zaslavsky, G M

    2002-03-01

    Dynamical localization of classical superdiffusion for the quantum kicked rotor is studied in the semiclassical limit. Both classical and quantum dynamics of the system become more complicated under the conditions of mixed phase space with accelerator mode islands. Recently, long time quantum flights due to the accelerator mode islands have been found. By exploring their dynamics, it is shown here that the classical-quantum duality of the flights leads to their localization. The classical mechanism of superdiffusion is due to accelerator mode dynamics, while quantum tunneling suppresses the superdiffusion and leads to localization of the wave function. Coupling of the regular-type dynamics inside the accelerator mode island structures to the dynamics in the chaotic sea is shown to increase the localization length. A numerical procedure and an analytical method are developed to obtain an estimate of the localization length which, as shown, becomes exponentially large in the dimensionless Planck's constant h̃ < 1 in the semiclassical limit. Conditions for the validity of the developed method are specified.

  5. Universal scaling for the quantum Ising chain with a classical impurity

    NASA Astrophysics Data System (ADS)

    Apollaro, Tony J. G.; Francica, Gianluca; Giuliano, Domenico; Falcone, Giovanni; Palma, G. Massimo; Plastina, Francesco

    2017-10-01

    We study finite-size scaling for the magnetic observables of an impurity residing at the end point of an open quantum Ising chain with transverse magnetic field, realized by locally rescaling the field by a factor μ ≠ 1. In the homogeneous chain limit at μ = 1, we find the expected finite-size scaling for the longitudinal impurity magnetization, with no specific scaling for the transverse magnetization. In contrast, in the classical impurity limit μ = 0, we recover finite-size scaling for the longitudinal magnetization, while the transverse one essentially does not scale. We provide both approximate analytic expressions for the magnetization and the susceptibility as well as numerical evidence for the scaling behavior. At intermediate values of μ, finite-size scaling is violated, and we provide a possible explanation of this result in terms of the appearance of a second, impurity-related length scale. Finally, by applying the standard quantum-to-classical mapping between statistical models, we derive the classical counterpart of the quantum Ising chain with an end-point impurity as a classical Ising model on a square lattice wrapped on a half-infinite cylinder, with the links along the first circle modified as a function of μ.

  6. Probing Higgs self-coupling of a classically scale invariant model in e+e- → Zhh: Evaluation at physical point

    NASA Astrophysics Data System (ADS)

    Fujitani, Y.; Sumino, Y.

    2018-04-01

    A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute missing contributions in previous studies for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, which incorporate the one-loop corrections between zero external momenta and their physical values. Discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.

  7. Axiomatic Geometrical Optics, Abraham-Minkowski Controversy, and Photon Properties Derived Classically

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    I.Y. Dodin and N.J. Fisch

    2012-06-18

    By restating geometrical optics within the field-theoretical approach, the classical concept of a photon in an arbitrary dispersive medium is introduced, and photon properties are calculated unambiguously. In particular, the canonical and kinetic momenta carried by a photon, as well as the two corresponding energy-momentum tensors of a wave, are derived straightforwardly from first principles of Lagrangian mechanics. The Abraham-Minkowski controversy pertaining to the definitions of these quantities is thereby resolved for linear waves of arbitrary nature, and corrections to the traditional formulas for the photon kinetic quantities are found. An application of axiomatic geometrical optics to electromagnetic waves is also presented as an example.

  8. Nonmonotonic Classical Magnetoconductivity of a Two-Dimensional Electron Gas in a Disordered Array of Obstacles

    NASA Astrophysics Data System (ADS)

    Siboni, N. H.; Schluck, J.; Pierz, K.; Schumacher, H. W.; Kazazis, D.; Horbach, J.; Heinzel, T.

    2018-02-01

    Magnetotransport measurements in combination with molecular dynamics simulations on two-dimensional disordered Lorentz gases in the classical regime are reported. In quantitative agreement between experiment and simulation, the magnetoconductivity displays a pronounced peak as a function of the perpendicular magnetic field B which cannot be explained by existing kinetic theories. This peak is linked to the onset of a directed motion of the electrons along the contour of the disordered obstacle matrix when the cyclotron radius becomes smaller than the size of the obstacles. This directed motion leads to transient superdiffusive motion and strong scaling corrections in the vicinity of the insulator-to-conductor transitions of the Lorentz gas.

  9. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…
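For reference, the two classical results named in this abstract take the following standard textbook forms (generic CTT notation; these formulas are not reproduced from the article itself, and both presuppose uncorrelated errors):

```latex
% Spearman-Brown prophecy: reliability of a test lengthened by a factor k,
% given the reliability \rho_{xx'} of the original test
\rho_{kk'} = \frac{k\,\rho_{xx'}}{1 + (k-1)\,\rho_{xx'}}
% Correction for attenuation: estimated true-score correlation between
% measures X and Y, given their observed correlation and reliabilities
\rho_{T_X T_Y} = \frac{\rho_{XY}}{\sqrt{\rho_{xx'}\,\rho_{yy'}}}
```

If errors on the component measures were correlated, the error covariance would enter both expressions, which is the sensitivity the abstract points to.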

  10. Humboldtian Values in a Changing World: Staff and Students in German Universities

    ERIC Educational Resources Information Center

    Pritchard, Rosalind

    2004-01-01

    The globalisation of higher education implies the application of a neo-liberal market forces model based on competition and choice. This is happening in Germany by gradual stages, and is often, but not necessarily correctly, assumed to be antagonistic to the Humboldtian model that underlies the classical German university tradition. This paper…

  11. The Disappearance of Independence in Textbook Coverage of Asch's Social Pressure Experiments

    ERIC Educational Resources Information Center

    Griggs, Richard A.

    2015-01-01

    Asch's classic social pressure experiments are discussed in almost all introductory and social psychology textbooks. However, the results of these experiments have been shown to be misrepresented in textbooks. An analysis of textbooks from 1953 to 1984 revealed that although most of the responses on critical trials were independent correct ones,…

  12. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  13. Fuzzy logic controller for the LOLA AO tip-tilt corrector system

    NASA Astrophysics Data System (ADS)

    Sotelo, Pablo D.; Flores, Ruben; Garfias, Fernando; Cuevas, Salvador

    1998-09-01

    At the INSTITUTO DE ASTRONOMIA we developed an adaptive optics system for the correction of the first two orders of the Zernike polynomials by measuring the image centroid. Here we discuss the two system modalities, based on two different control strategies, and we present simulations comparing the systems. For the classic control system we present telescope results.

  14. Pitfalls in classical nuclear medicine: myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    Fragkaki, C.; Giannopoulou, Ch

    2011-09-01

    Scintigraphic imaging is a complex functional procedure subject to a variety of artefacts and pitfalls that may limit its clinical and diagnostic accuracy. It is important to be aware of them, to recognize them when present, and to eliminate them whenever possible. Pitfalls may occur at any stage of the imaging procedure and can be related to the γ-camera or other equipment, personnel handling, patient preparation, image processing or the procedure itself. Often, potential causes of artefacts and pitfalls may overlap. In this short review, special attention is given to cardiac scintigraphic imaging. The most common causes of artefacts in myocardial perfusion imaging are soft tissue attenuation as well as motion and gating errors. Additionally, clinical problems such as cardiac abnormalities may cause interpretation pitfalls, and nuclear medicine physicians should be familiar with these in order to ensure the correct evaluation of the study. Artefacts or suboptimal image quality can also result from infiltrated injections, misalignment in patient positioning, power instability or interruption, flood field non-uniformities, a cracked crystal and several other technical reasons.

  15. Nudging toward a stable retirement.

    PubMed

    Kroncke, Charles

    2018-01-01

    The classical economics perspective is that public policy should be used to allow, not hinder, economic freedom. In some cases it may be possible for government to gently nudge individuals to change their behavior without hindering freedom. One example is changing the default on pension program enrollment forms from "not contribute" to "contribute." This is generally viewed as a good nudge that gets people to do what the majority of people view as generally the correct behavior. However, a choice to contribute to a pension fund is not always in the individual's best interest - thus, it is a nudge, not a mandate. To maintain personal liberty, individuals should be fully informed about the consequences of their choice and the motives of the political authority. Saving for retirement is a complex issue, and pension contribution decisions are often made with little foresight or information. Pension contribution nudges may not always be freedom preserving because of complexity and unintended consequences. The benefits, risks, and limitations of default contribution pension nudges are discussed.

  16. The problems in quantum foundations in the light of gauge theories

    NASA Astrophysics Data System (ADS)

    Ne'Eman, Yuval

    1986-04-01

    We review the issues of nonseparability and seemingly acausal propagation of information in EPR, as displayed by experiments and the failure of Bell's inequalities. We show that global effects are in the very nature of the geometric structure of modern physical theories, occurring even at the classical level. The Aharonov-Bohm effect, magnetic monopoles, instantons, etc. result from the topology and homotopy features of the fiber bundle manifolds of gauge theories. The conservation of probabilities, a supposedly highly quantum effect, is also achieved through global geometry equations. The EPR observables all fit in such geometries, and space-time is a truncated representation and is not the correct arena for their understanding. Relativistic quantum field theory represents the global action of the measurement operators as the zero-momentum (and therefore spatially infinitely spread) limit of their wave functions (form factors). We also analyze the collapse of the state vector as a case of spontaneous symmetry breakdown in the apparatus-observed state interaction.

  17. Generic buckling curves for specially orthotropic rectangular plates

    NASA Technical Reports Server (NTRS)

    Brunnelle, E. J.; Oyibo, G. A.

    1983-01-01

    Using a double affine transformation, the classical buckling equation for specially orthotropic plates and the corresponding virtual work theorem are presented in a particularly simple fashion. These dual representations are characterized by a single material constant, called the generalized rigidity ratio, whose range is predicted to be the closed interval from 0 to 1 (if this prediction is correct, then the numerical results in the specially orthotropic plate literature that use a ratio greater than 1 are incorrect); when natural boundary conditions are considered, a generalized Poisson's ratio is introduced. Thus the buckling results are valid for any specially orthotropic material; hence the curves presented in the text are generic rather than specific. The solution trends are twofold: the buckling coefficients decrease with decreasing generalized rigidity ratio and, when applicable, they decrease with increasing generalized Poisson's ratio. Since the isotropic plate is one limiting case of the above analysis, it is also true that isotropic buckling coefficients decrease with increasing Poisson's ratio.

  18. Bosonization of nonrelativistic fermions on a circle: Tomonaga's problem revisited

    NASA Astrophysics Data System (ADS)

    Dhar, Avinash; Mandal, Gautam

    2006-11-01

    We use the recently developed tools for an exact bosonization of a finite number N of nonrelativistic fermions to discuss the classic Tomonaga problem. In the case of noninteracting fermions, the bosonized Hamiltonian naturally splits into an O(N) piece and an O(1) piece. We show that in the large-N and low-energy limit, the O(N) piece in the Hamiltonian describes a massless relativistic boson, while the O(1) piece gives rise to cubic self-interactions of the boson. At finite N and high energies, the low-energy effective description breaks down and the exact bosonized Hamiltonian must be used. We also comment on the connection between the Tomonaga problem and pure Yang-Mills theory on a cylinder. In the dual context of baby universes and multiple black holes in string theory, we point out that the O(N) piece in our bosonized Hamiltonian provides a simple understanding of the origin of two different kinds of nonperturbative O(e-N) corrections to the black hole partition function.

  19. Human adipose-derived stem cells: definition, isolation, tissue-engineering applications.

    PubMed

    Nae, S; Bordeianu, I; Stăncioiu, A T; Antohi, N

    2013-01-01

    Recent research has demonstrated that the most effective repair system of the body is represented by stem cells - unspecialized cells, capable of self-renewal through successive mitoses, which also have the ability to transform into different cell types through differentiation. The discovery of adult stem cells represented an important step in regenerative medicine because they no longer raise ethical or legal issues and are more accessible. Only in 2002 were stem cells isolated from adipose tissue described as multipotent stem cells. The benefits of adipose tissue stem cells in tissue engineering and regenerative medicine are numerous. The development of adipose tissue engineering techniques offers great potential for surpassing the existing limits faced by the classical approaches used in plastic and reconstructive surgery. The clinical applications of adipose tissue engineering are wide and varied, including reconstructive, corrective and cosmetic procedures. Nowadays, adipose tissue engineering is a fast-developing field, both in terms of fundamental research and medical applications, addressing issues related to current clinical pathology or trauma management of soft tissue injuries in different body locations.

  20. Transfusion-associated cytomegalovirus mononucleosis.

    PubMed Central

    Lerner, P I; Sampliner, J E

    1977-01-01

    Transfusion-associated cytomegalovirus mononucleosis is generally considered only as a complication of extracorporeal circulation following cardiac surgery. Three cases following trauma were recognized in less than one year. Both massive and limited-volume blood transfusions were involved. Hectic fever was a characteristic feature in these otherwise remarkably asymptomatic individuals, without the classic features of heterophile-positive infectious mononucleosis. Since the illness developed several weeks into the post-operative period after extensive thoracic or abdominal trauma surgery, the presence of an undrained abscess was naturally the major diagnostic concern. Atypical lymphocytosis, markers of altered immunity (cold agglutinins, rheumatoid factor) and moderate hepatic dysfunction were important laboratory clues. In one case, focal isotope defects in the spleen scan misleadingly suggested a septic complication. A false-positive monospot test initially obscured the correct serologic diagnosis in the same patient. Failure to consider this self-limited viral infection may be a critical factor leading to unnecessary surgery. Other viral agents capable of eliciting a similar syndrome are cited. PMID:190955

  1. Finding exact constants in a Markov model of Zipf's law generation

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu.; Nikiforov, A. A.; Pismenskiy, A. A.

    2017-12-01

    According to the classical Zipf's law, word frequency is a power function of word rank with exponent -1. The objective of this work is to find the multiplicative constant in a Markov model of word generation. Previously, the case of independent letters was investigated with full mathematical rigor in [Bochkarev V V and Lerner E Yu 2017 International Journal of Mathematics and Mathematical Sciences Article ID 914374]. Unfortunately, the methods used in that paper cannot be generalized to the case of Markov chains. The search for the correct formulation of the Markov generalization of these results was performed using experiments with different ergodic transition probability matrices P. A combinatorial technique allowed all words with probability greater than e^(-300) to be taken into account in the case of 2×2 matrices. It was experimentally shown that the required constant in the limit equals the reciprocal of the conditional entropy of the rows of P, weighted by the elements of the stationary distribution vector π of the Markov chain.
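The limiting constant described above is straightforward to evaluate numerically. The sketch below computes the stationary distribution and the π-weighted row entropy of a transition matrix; the particular 2×2 matrix and function names are illustrative choices, not taken from the paper:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary vector pi of an ergodic transition matrix P (pi @ P = pi)."""
    vals, vecs = np.linalg.eig(P.T)
    # For a stochastic matrix the largest eigenvalue is 1; take its eigenvector.
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def conditional_entropy(P):
    """Entropy rate of the chain: H = -sum_i pi_i sum_j P_ij ln P_ij."""
    pi = stationary_distribution(P)
    return -np.sum(pi[:, None] * P * np.log(P))

# Hypothetical ergodic 2x2 transition matrix (all entries positive).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Per the abstract, the limiting multiplicative constant is the
# reciprocal of this weighted conditional entropy.
constant = 1.0 / conditional_entropy(P)
```

For the symmetric uniform matrix the entropy reduces to ln 2, so the constant would be 1/ln 2, which makes a convenient sanity check.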

  2. Design of pharmaceutical products to meet future patient needs requires modification of current development paradigms and business models.

    PubMed

    Stegemann, S; Baeyens, J-P; Becker, R; Maio, M; Bresciani, M; Shreeves, T; Ecker, F; Gogol, M

    2014-06-01

    Drugs represent the most common intervention strategy for managing acute and chronic medical conditions. In light of demographic change and the increasing age of patients, the classic model of drug research and development by the pharmaceutical industry and drug prescription by physicians is reaching its limits. Different stakeholders, e.g. industry, regulatory authorities, health insurance systems, physicians etc., have at least partially differing interests regarding the process of healthcare provision. The primary responsibility for the correct handling of medication and adherence to treatment schedules lies with the recipient of a drug-based therapy, i.e. the patient. It is thus necessary to interactively involve elderly patients, as well as the other stakeholders, in the development of medication and medication application devices, and in clinical trials. This approach will provide the basis for developing a strategy that better meets patients' needs, thus resulting in improved adherence to treatment schedules and better therapeutic outcomes.

  3. Robust lung identification in MSCT via controlled flooding and shape constraints: dealing with anatomical and pathological specificity

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Tarando, Sebastian; Brillet, Pierre-Yves; Grenier, Philippe A.

    2016-03-01

    Correct segmentation and labeling of the lungs in thorax MSCT is a requirement in pulmonary/respiratory disease analysis, as a basis for further processing or for direct quantitative measures: lung texture classification, respiratory functional simulations, intrapulmonary vascular remodeling evaluation, and detection of pleural effusion or subpleural opacities are only a few clinical applications related to this requirement. Whereas lung segmentation appears trivial under normal anatomo-pathological conditions, the presence of disease may complicate this task for fully-automated algorithms. The challenges come either from regional changes of lung texture opacity or from complex anatomic configurations (e.g., a thin septum between the lungs making proper lung separation difficult). They can make the use of classic algorithms based on adaptive thresholding, 3-D connected component analysis and shape regularization difficult or even impossible. The objective of this work is to provide a robust segmentation approach for the pulmonary field, with individualized labeling of the lungs, able to overcome the mentioned limitations. The proposed approach relies on 3-D mathematical morphology and exploits the concept of controlled relief flooding (to identify contrasted lung areas) together with patient-specific shape properties for peripheral dense tissue detection. Tested on a database of 40 MSCT scans of pathological lungs, the proposed approach showed correct identification of lung areas, with high sensitivity and specificity in locating peripheral dense opacities.

  4. Identifying biologically relevant putative mechanisms in a given phenotype comparison

    PubMed Central

    Hanoudi, Samer; Donato, Michele; Draghici, Sorin

    2017-01-01

    A major challenge in life science research is understanding the mechanism involved in a given phenotype. The ability to identify the correct mechanisms is needed in order to understand fundamental and very important phenomena such as mechanisms of disease, immune system responses to various challenges, and mechanisms of drug action. Current data analysis methods focus on the identification of the differentially expressed (DE) genes using their fold change and/or p-values. Major shortcomings of this approach are that: i) it does not consider the interactions between genes; ii) its results are sensitive to the selection of the threshold(s) used; and iii) the set of genes produced by this approach is not always conducive to formulating mechanistic hypotheses. Here we present a method that can construct networks of genes that can be considered putative mechanisms. The putative mechanisms constructed by this approach are not limited to the set of DE genes, but also incorporate all known and relevant gene-gene interactions. We analyzed three real datasets for which both the causes of the phenotype and the true mechanisms were known. We show that the method identified the correct mechanisms when applied to microarray datasets from mouse. We compared the results of our method with those of the classical approach, showing that our method produces more meaningful biological insights. PMID:28486531
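
    The general idea of extending a DE gene set into a putative mechanism via known interactions can be sketched as follows. This is an illustrative toy version, not the authors' exact algorithm: a non-DE gene joins the mechanism when it links at least two DE genes. The gene names and interaction list are hypothetical.

```python
from collections import defaultdict

def putative_mechanism(de_genes, interactions):
    """de_genes: set of gene names; interactions: iterable of (gene, gene) pairs."""
    de = set(de_genes)
    # Count, for each non-DE gene, which DE genes it interacts with.
    de_neighbors = defaultdict(set)
    for a, b in interactions:
        if a in de and b not in de:
            de_neighbors[b].add(a)
        if b in de and a not in de:
            de_neighbors[a].add(b)
    # A non-DE "connector" gene joins the mechanism if it links >= 2 DE genes.
    connectors = {g for g, nbrs in de_neighbors.items() if len(nbrs) >= 2}
    nodes = de | connectors
    edges = [(a, b) for a, b in interactions if a in nodes and b in nodes]
    return nodes, edges

# Hypothetical example: Nfkb1 is not DE but bridges two DE genes.
de = {"Tnf", "Il6"}
ppi = [("Tnf", "Nfkb1"), ("Il6", "Nfkb1"), ("Tnf", "Il6"), ("Nfkb1", "Actb")]
nodes, edges = putative_mechanism(de, ppi)
```

    In this toy run, Nfkb1 joins the mechanism because it connects both DE genes, while Actb stays out: its only neighbor in the network is itself non-DE.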

  5. Psychometric properties of the Hare Psychopathy Checklist-Revised (PCL-R) in a representative sample of Canadian federal offenders.

    PubMed

    Storey, Jennifer E; Hart, Stephen D; Cooke, David J; Michie, Christine

    2016-04-01

    The Hare Psychopathy Checklist-Revised (PCL-R; Hare, 2003) is a commonly used psychological test for assessing traits of psychopathic personality disorder. Despite the abundance of research using the PCL-R, the vast majority of studies have used samples of convenience rather than systematic methods to minimize sampling bias and maximize the generalizability of findings. This potentially complicates the interpretation of test scores and research findings, including the "norms" for offenders from the United States and Canada included in the PCL-R manual. In the current study, we evaluated the psychometric properties of PCL-R scores for all male offenders admitted to a regional reception center of the Correctional Service of Canada during a 1-year period (n = 375). Because offenders were admitted for assessment prior to institutional classification, they comprise a sample that was heterogeneous with respect to correctional risks and needs yet representative of all offenders in that region of the service. We examined the distribution of PCL-R scores, classical test theory indices of its structural reliability, the factor structure of test items, and the external correlates of test scores. The findings were highly consistent with those typically reported in previous studies. We interpret these results as indicating that it is unlikely that the sampling limitations of past research using the PCL-R resulted in findings that were, overall, strongly biased or unrepresentative.
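
    A standard classical test theory index of structural reliability is Cronbach's alpha; a minimal sketch of its computation follows. The item scores below are synthetic illustrations on a PCL-R-like 0-2 rating scale, not actual PCL-R data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic 0-2 item ratings for five subjects on four items
scores = np.array([[0, 1, 0, 1],
                   [2, 2, 1, 2],
                   [1, 1, 1, 1],
                   [2, 1, 2, 2],
                   [0, 0, 0, 1]])
alpha = cronbach_alpha(scores)  # higher alpha -> more internally consistent items
```

    Alpha approaches 1 when items covary strongly relative to their individual variances, which is the sense in which a scale is "structurally reliable" in classical test theory.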

  6. EPRL/FK asymptotics and the flatness problem

    NASA Astrophysics Data System (ADS)

    Oliveira, José Ricardo

    2018-05-01

    Spin foam models are an approach to quantum gravity based on the concept of a sum over states, which aims to describe quantum spacetime dynamics in a way that its parent framework, loop quantum gravity, has not yet achieved. Since these models' relation to classical Einstein gravity is not explicit, an important test of their viability is the study of asymptotics: the classical theory should be obtained in a limit where quantum effects are negligible, taken to be the limit of large triangle areas in a triangulated manifold with boundary. In this paper we briefly introduce the EPRL/FK spin foam model and known results about its asymptotics, proceeding then to describe a practical computation of spin foam and semiclassical geometric data for a simple triangulation with only one interior triangle. The results are used to comment on the 'flatness problem', a hypothesis raised by Bonzom (2009 Phys. Rev. D 80 064028) suggesting that EPRL/FK's classical limit only describes flat geometries in vacuum.

  7. Higher spin gauge theory on fuzzy S^4_N

    NASA Astrophysics Data System (ADS)

    Sperling, Marcus; Steinacker, Harold C.

    2018-02-01

    We examine in detail the higher spin fields which arise on the basic fuzzy sphere S^4_N in the semi-classical limit. The space of functions can be identified with functions on classical S^4 taking values in a higher spin algebra associated to …

  8. Quid pro quo: a mechanism for fair collaboration in networked systems.

    PubMed

    Santos, Agustín; Fernández Anta, Antonio; López Fernández, Luis

    2013-01-01

    Collaboration may be understood as the execution of coordinated tasks (in the most general sense) by groups of users who cooperate to achieve a common goal. Collaboration is a fundamental assumption and requirement for the correct operation of many communication systems. The main challenge when creating collaborative systems in a decentralized manner is dealing with the fact that users may behave in selfish ways, trying to obtain the benefits of the tasks without participating in their execution. In this context, Game Theory has been instrumental in modeling collaborative systems and the task allocation problem, and in designing mechanisms for optimal allocation of tasks. In this paper, we revise the classical assumptions of these models and propose a new approach to this problem. First, we establish a system model based on heterogeneous nodes (users, players), and propose a basic distributed mechanism so that, when a new task appears, it is assigned to the most suitable node. The classical technique for compensating a node that executes a task is the use of payments (which in most networks are hard or impossible to implement). Instead, we propose a distributed mechanism for the optimal allocation of tasks without payments. We prove this mechanism to be robust even in the presence of independent selfish or rationally limited players. Additionally, our model is based on very weak assumptions, which makes the proposed mechanisms amenable to implementation in networked systems (e.g., the Internet).

  9. Viable inflationary evolution from Einstein frame loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    de Haro, Jaume; Odintsov, S. D.; Oikonomou, V. K.

    2018-04-01

    In this work we construct a bottom-up reconstruction technique for loop quantum cosmology scalar-tensor theories, starting from the observational indices. Particularly, the reconstruction technique is based on fixing the functional form of the tensor-to-scalar ratio as a function of the e-foldings number. The aim of the technique is to realize viable inflationary scenarios, and the only assumption that must hold true in order for the reconstruction technique to work is that the dynamical evolution of the scalar field obeys the slow-roll conditions. We use two functional forms for the tensor-to-scalar ratio, one of which corresponds to a popular class of inflationary models, the α-attractors. For the latter, we calculate the leading-order behavior of the spectral index and we demonstrate that the resulting inflationary theory is viable and compatible with the latest Planck and BICEP2/Keck-Array data. In addition, we find the classical limit of the theory, and as we demonstrate, the loop quantum cosmology corrected theory and the classical theory are identical at leading order in the perturbative expansion quantified by the parameter ρc, the critical density of the quantum theory. Finally, by using the formalism of slow-roll scalar-tensor loop quantum cosmology, we investigate how several inflationary potentials can be realized by the quantum theory, and we calculate directly the slow-roll indices and the corresponding observational indices. In addition, the f(R) gravity frame picture is presented.

  10. A unifying model for adsorption and nucleation of vapors on solid surfaces.

    PubMed

    Laaksonen, Ari

    2015-04-23

    Vapor interaction with solid surfaces is traditionally described with adsorption isotherms in the undersaturated regime and with heterogeneous nucleation theory in the supersaturated regime. A class of adsorption isotherms is based on the idea of vapor molecule clustering around so-called active sites. However, as the isotherms do not account for the surface curvature effects of the clusters, they predict an infinitely thick adsorption layer at saturation and do not recognize the existence of the supersaturated regime. The classical heterogeneous nucleation theory also builds on the idea of cluster formation, but describes the interactions between the surface and the cluster with a single parameter, the contact angle, which provides limited information compared with adsorption isotherms. Here, a new model of vapor adsorption on nonporous solid surfaces is derived. The basic assumption is that adsorption proceeds via formation of molecular clusters, modeled as liquid caps. The equilibrium of the individual clusters with the vapor phase is described with the Frenkel-Halsey-Hill (FHH) adsorption theory modified with the Kelvin equation that corrects for the curvature effect on vapor pressure. The new model extends the FHH adsorption isotherm to be applicable both at submonolayer surface coverages and at supersaturated conditions. It shows good agreement with experimental adsorption data from 12 different adsorbent-adsorbate systems. The model predictions are also compared against heterogeneous nucleation data, and they show much better agreement than predictions of the classical heterogeneous nucleation theory.
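
    The combined equilibrium condition described above can be sketched numerically. This is a minimal sketch assuming the combined form ln S = 2σv_m/(k_B T r) − A/θ^B (FHH attraction plus Kelvin curvature term); the FHH parameters A, B and the water-like physical constants are illustrative assumptions, not the paper's fitted values.

```python
# Sketch of a Kelvin-corrected FHH equilibrium for a liquid-cap cluster.
# theta: coverage in monolayers; r: cap radius of curvature in meters.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def ln_saturation_ratio(theta, r, A=2.25, B=1.20,
                        sigma=0.072,   # N/m, water-like surface tension (assumed)
                        v_m=3.0e-29,   # m^3, molecular volume of water (assumed)
                        T=298.15):
    kelvin = 2.0 * sigma * v_m / (r * K_B * T)  # curvature raises equilibrium vapor pressure
    fhh = -A / theta**B                         # surface attraction lowers it
    return kelvin + fhh
```

    For a flat film (r → ∞) the Kelvin term vanishes and the classic FHH isotherm is recovered; for small clusters the curvature term can push the equilibrium saturation ratio above 1 (ln S > 0), which is how one model can cover both the undersaturated and supersaturated regimes.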

  11. Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum

    NASA Astrophysics Data System (ADS)

    Rips, Ilya

    2017-01-01

    Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of the thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] was extended to account for the second order terms in the classical equations of motion (EOM) for the stable modes. Accounting for the secular terms reduces the EOM for the stable modes to those of a forced oscillator with time-dependent frequency (TDF oscillator). An analytic expression for the characteristic function of the energy loss of the unstable mode has been derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of a continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for one-dimensional systems, when the density of phonon states is constant. Explicit expressions for the fourth order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined (γ/ω_b < 0.26), which includes the turnover region. The dominant correction to the linear response theory result is associated with the "work function" and leads to a reduction of the average energy loss and its dispersion. This reduction increases with increasing dissipation strength (up to ~10%) within the range of validity of the approach. We have also calculated corrections to the depopulation factor and the escape rate for the quantum and the classical Kramers models. Results for the classical escape rate are in very good agreement with numerical simulations for high barriers. The results can serve as an additional proof of the robustness and accuracy of the linear response theory.

  13. The non-thermal origin of the tokamak low-density stability limit

    DOE PAGES

    Paz-Soldan, C.; La Haye, R. J.; Shiraki, D.; ...

    2016-04-13

    DIII-D plasmas at very low density exhibit onset of n=1 error field (EF) penetration (the `low-density locked mode') not at a critical density or EF, but instead at a critical level of runaway electron (RE) intensity. Raising the density during a discharge does not avoid EF penetration, so long as RE growth proceeds to the critical level. Penetration is preceded by non-thermalization of the electron cyclotron emission, anisotropization of the total pressure, synchrotron emission shape changes, as well as decreases in the loop voltage and bulk thermal electron temperature. The same phenomena occur despite various types of optimal EF correction, and in some cases modes are born rotating. Similar phenomena are also found at the low-density limit in JET. These results stand in contrast to the conventional interpretation of the low-density stability limit as being due to residual EFs and demonstrate a new pathway to EF penetration instability due to REs. Existing scaling laws for penetration project to increasing EF sensitivity as bulk temperatures decrease, though other possible mechanisms include classical tearing instability, thermo-resistive instability, and pressure-anisotropy driven instability. Regardless of the first-principles mechanism, known scaling laws for Ohmic energy confinement combined with theoretical RE production rates allow rough extrapolation of the RE criticality condition, and thus the low-density limit, to other tokamaks. Furthermore, the extrapolated low-density limit by this pathway decreases with increasing machine size and is considerably below expected operating conditions for ITER. While likely unimportant for ITER, this effect can explain the low-density limit of existing tokamaks operating with small residual EFs.

  14. The classical limit of minimal length uncertainty relation: revisit with the Hamilton-Jacobi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Xiaobo; Wang, Peng; Yang, Haitang, E-mail: guoxiaobo@swust.edu.cn, E-mail: pengw@scu.edu.cn, E-mail: hyanga@scu.edu.cn

    2016-05-01

    The existence of a minimum measurable length could deform not only standard quantum mechanics but also classical physics. The effects of the minimal length on classical orbits of particles in a gravitational field have been investigated before, using the deformed Poisson bracket or Schwarzschild metric. In this paper, we first use the Hamilton-Jacobi method to derive the deformed equations of motion in the context of Newtonian mechanics and general relativity. We then employ them to study the precession of planetary orbits, deflection of light, and time delay in radar propagation. We also set limits on the deformation parameter by comparing our results with observational measurements. Finally, a comparison with results from previous papers is given at the end of this paper.

  15. Two-dimensional electromagnetic Child-Langmuir law of a short-pulse electron flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S. H.; Tai, L. C.; Liu, Y. L.

    Two-dimensional electromagnetic particle-in-cell simulations were performed to study the effect of the displacement current and the self-magnetic field on the space charge limited current density, or the Child-Langmuir law, of a short-pulse electron flow with a propagation distance ζ and an emitting width W, from the classical regime to the relativistic regime. A numerical scaling of the two-dimensional electromagnetic Child-Langmuir law was constructed; it scales with (ζ/W) and (ζ/W)² in the classical and relativistic regimes, respectively. Our findings reveal that the displacement current can considerably enhance the space charge limited current density as compared to the well-known two-dimensional electrostatic Child-Langmuir law, even in the classical regime.
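
    For reference, the classical one-dimensional electrostatic Child-Langmuir law, the baseline against which such 2D electromagnetic scalings are compared, can be evaluated directly. This sketch implements only the standard textbook formula, not the paper's 2D results.

```python
import math

# Classical 1D electrostatic Child-Langmuir current density (SI units):
#   J = (4*eps0/9) * sqrt(2*e/m_e) * V^(3/2) / d^2
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def child_langmuir_1d(voltage, gap):
    """Space-charge-limited current density (A/m^2) for a planar vacuum diode."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
           * voltage**1.5 / gap**2

# Example: 1 kV across a 1 cm gap gives roughly 7.4e2 A/m^2
J = child_langmuir_1d(1.0e3, 1.0e-2)
```

    The prefactor works out to the familiar perveance constant of about 2.33e-6 A·V^(-3/2), so J scales as V^(3/2)/d²; the paper's point is that 2D electromagnetic effects (displacement current, self-magnetic field) modify this limit.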

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Yasunori; Salzetta, Nico; Sanches, Fabio

    We study the Hilbert space structure of classical spacetimes under the assumption that entanglement in holographic theories determines semiclassical geometry. We show that this simple assumption has profound implications; for example, a superposition of classical spacetimes may lead to another classical spacetime. Despite its unconventional nature, this picture admits the standard interpretation of superpositions of well-defined semiclassical spacetimes in the limit that the number of holographic degrees of freedom becomes large. We illustrate these ideas using a model for the holographic theory of cosmological spacetimes.

  17. Quantum information processing by a continuous Maxwell demon

    NASA Astrophysics Data System (ADS)

    Stevens, Josey; Deffner, Sebastian

    Quantum computing is believed to be fundamentally superior to classical computing; however, quantifying the specific thermodynamic advantage has been elusive. Experimentally motivated, we generalize previous minimal models of discrete demons to a continuous state space. Analyzing our model allows one to quantify the thermodynamic resources necessary to process quantum information. By further invoking the semi-classical limit we compare the quantum demon with its classical analogue. Finally, this model also serves as a starting point for studying open quantum systems.

  18. Classical sociology and cosmopolitanism: a critical defence of the social.

    PubMed

    Turner, Bryan S

    2006-03-01

    It is frequently argued that classical sociology, if not sociology as a whole, cannot provide any significant insight into globalization, primarily because its assumptions about the nation-state, national cultures and national societies are no longer relevant to a global world. Sociology cannot consequently contribute to a normative debate about cosmopolitanism, which invites us to consider loyalties and identities that reach beyond the nation-state. My argument considers four principal topics. First, I defend the classical legacy by arguing that classical sociology involved the study of 'the social', not national societies. This argument is illustrated by reference to Emile Durkheim and Talcott Parsons. Secondly, Durkheim specifically developed the notion of a cosmopolitan sociology to challenge the nationalist assumptions of his day. Thirdly, I attempt to develop a critical version of Max Weber's verstehende Soziologie to consider the conditions for critical recognition theory in sociology as a necessary precondition of cosmopolitanism. Finally, I consider the limitations of some contemporary versions of global sociology in the example of 'flexible citizenship' to provide an empirical case study of the limitations of globalization processes and 'sociology beyond society'. While many institutions have become global, some cannot make this transition. Hence, we should consider the limitations on, as well as the opportunities for, cosmopolitan sociology.

  19. Comprehensive study of the dynamics of a classical Kitaev Spin Liquid

    NASA Astrophysics Data System (ADS)

    Samarakoon, Anjana; Banerjee, Arnab; Batista, Cristian; Kamiya, Yoshitomo; Tennant, Alan; Nagler, Stephen

    Quantum spin liquids (QSLs) have attracted great interest in both theoretical and experimental condensed matter physics due to their remarkable topological properties. Among many different candidates, the Kitaev model on the honeycomb lattice is a prototypical 2D QSL which can be experimentally studied in materials based on iridium or ruthenium. Here we study the spin-1/2 Kitaev model using classical Monte-Carlo and semiclassical spin dynamics of classical spins on a honeycomb lattice. Both real- and reciprocal-space pictures highlighting the differences and similarities of the results to linear spin wave theory will be discussed in terms of dispersion relations in the pure-Kitaev limit and beyond. Interestingly, this technique captures some of the salient features of the exact quantum solution of the Kitaev model: a feature resembling the Majorana-like mode at energies comparable to the Kitaev energy, which is spectrally narrowed compared to the quantum result, can be explained by magnon excitations on fluctuating one-dimensional manifolds (loops). Hence the difference between the classical limit and the quantum limit can be understood through the fractionalization of a magnon into Majorana fermions. The calculations will be directly compared with our neutron scattering data on α-RuCl3, a prime candidate for the experimental realization of Kitaev physics.

  20. Zooming in: high resolution 3D reconstruction of differently stained histological whole slide images

    NASA Astrophysics Data System (ADS)

    Lotz, Johannes; Berger, Judith; Müller, Benedikt; Breuhahn, Kai; Grabe, Niels; Heldmann, Stefan; Homeyer, André; Lahrmann, Bernd; Laue, Hendrik; Olesch, Janine; Schwier, Michael; Sedlaczek, Oliver; Warth, Arne

    2014-03-01

    Much insight into metabolic interactions, tissue growth, and tissue organization can be gained by analyzing differently stained histological serial sections. One opportunity unavailable to classic histology is three-dimensional (3D) examination and computer-aided analysis of tissue samples. In this case, registration is needed to reestablish the spatial correspondence between adjacent slides that is lost during the sectioning process. Furthermore, the sectioning introduces various distortions like cuts, folding, tearing, and local deformations to the tissue, which need to be corrected in order to exploit the additional information arising from the analysis of neighboring slide images. In this paper we present a novel image-registration-based method for reconstructing a 3D tissue block, implementing a zooming strategy around a user-defined point of interest. We efficiently align consecutive slides at increasingly fine resolution, up to cell level. We use a two-step approach where, after a macroscopic, coarse alignment of the slides as preprocessing, a nonlinear, elastic registration is performed to correct local, non-uniform deformations. Being driven by the optimization of the normalized gradient field (NGF) distance measure, our method is suitable for differently stained and thus multi-modal slides. We applied our method to ultra-thin serial sections (2 μm) of a human lung tumor. In total, 170 slides, stained alternately with four different stains, have been registered. Thorough visual inspection of virtual cuts through the reconstructed block, perpendicular to the cutting plane, shows accurate alignment of vessels and other tissue structures. This observation is confirmed by a quantitative analysis. Using nonlinear image registration, our method is able to correct locally varying deformations in tissue structures and overcomes the limitations of globally linear transformations.
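
    The NGF distance driving the registration can be sketched in a few lines. This is a minimal 2D version of the normalized-gradient-field idea (after Haber and Modersitzki's formulation); the edge parameter eta and the image sizes are illustrative choices, not values from the paper.

```python
import numpy as np

def ngf_distance(ref, tpl, eta=0.1):
    """Normalized gradient field distance between two 2D images.

    Small where edges in ref and tpl are aligned, regardless of the
    (possibly multi-modal) intensity values on either side of the edge.
    """
    def normalized_gradient(img):
        gy, gx = np.gradient(img.astype(float))
        norm = np.sqrt(gx**2 + gy**2 + eta**2)  # eta damps noise in flat regions
        return gx / norm, gy / norm

    rx, ry = normalized_gradient(ref)
    tx, ty = normalized_gradient(tpl)
    inner = rx * tx + ry * ty               # cosine-like alignment of gradients
    return float(np.sum(1.0 - inner**2))    # penalize misaligned edge directions

# Two synthetic ramps: parallel gradients score far lower than perpendicular ones
x_ramp, y_ramp = np.meshgrid(np.arange(32.0), np.arange(32.0))
```

    Because only gradient directions enter the measure, two slides with entirely different stain intensities but matching tissue boundaries still score as well aligned, which is what makes NGF suitable for multi-modal slides.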

  1. Reactive Force Fields via Explicit Valency

    NASA Astrophysics Data System (ADS)

    Kale, Seyit

    Computational simulations are invaluable in elucidating the dynamics of biological macromolecules. Unfortunately, reactions present a fundamental challenge. Calculations based on quantum mechanics can predict bond formation and rupture; however they suffer from severe length- and time-limitations. At the other extreme, classical approaches provide orders of magnitude faster simulations; however they regard chemical bonds as immutable entities. A few exceptions exist, but these are not always trivial to adopt for routine use. We bridge this gap by providing a novel, pseudo-classical approach, based on explicit valency. We unpack molecules into valence electron pairs and atomic cores. Particles bear ionic charges and interact via pairwise-only potentials. The potentials are informed of quantum effects in the short-range and obey dissociation limits in the long-range. They are trained against a small set of isolated species, including geometries and thermodynamics of small hydrides and of dimers formed by them. The resulting force field captures the essentials of reactivity, polarizability and flexibility in a simple, seamless setting. We call this model LEWIS, after the chemical theory that inspired the use of valence pairs. Following the introduction in Chapter 1, we initially focus on the properties of water. Chapter 2 considers gas phase clusters. To transition to the liquid phase, Chapter 3 describes a novel pairwise long-range compensation that performs comparably to infinite lattice summations. The approach is suited to ionic solutions in general. In Chapters 4 and 5, LEWIS is shown to correctly predict the dipolar and quadrupolar response in bulk liquid, and can accommodate proton transfers in both acid and base. Efficiency permits the study of proton defects at dilutions not accessible to experiment or quantum mechanics. Chapter 6 discusses explicit valency approaches in other hydrides, forming the basis of a reactive organic force field. 
Examples of simple proton transfer and more complex reactions are discussed. Chapter 7 provides a framework for variable electron spread. This addition resolves some of the inherent limitations of the former model which implicitly assumed that electron spread was not affected by the environment. A brief summary is provided in Chapter 8.

  2. Optimal quantum error correcting codes from absolutely maximally entangled states

    NASA Astrophysics Data System (ADS)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states, with the property that all reduced states of at most half the system size are maximally mixed. AME states are of interest for multipartite teleportation and quantum secret sharing, and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension …

  3. 78 FR 26112 - Limitation on Claims Against Proposed Public Transportation Projects; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-03

    ... DEPARTMENT OF TRANSPORTATION Federal Transit Administration Limitation on Claims Against Proposed Public Transportation Projects; Correction AGENCY: Federal Transit Administration (FTA), DOT. ACTION... Register on April 22, 2013, concerning a limitation on claims for certain specified public transportation...

  4. A multilevel correction adaptive finite element method for Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with a multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  5. Nonuniform fluids in the grand canonical ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, J.K.

    1982-01-01

    Nonuniform simple classical fluids are considered quite generally. The grand canonical ensemble is particularly suitable, conceptually, in the leading approximation of local thermodynamics, which figuratively divides the system into approximately uniform spatial subsystems. The procedure is reviewed by which this approach is systematically corrected for slowly varying density profiles, and a model is suggested that carries the correction into the domain of local fluctuations. The latter is assessed for substrate-bounded fluids, as well as for two-phase interfaces. The peculiarities of the grand ensemble in a two-phase region stem from the inherent very large number fluctuations. A primitive model shows how these are quenched in the canonical ensemble. This is taken advantage of by applying the Kac-Siegert representation of the van der Waals decomposition, with petit canonical corrections, to the two-phase regime.

  6. Completion of the universal I-Love-Q relations in compact stars including the mass

    NASA Astrophysics Data System (ADS)

    Reina, Borja; Sanchis-Gual, Nicolas; Vera, Raül; Font, José A.

    2017-09-01

    In a recent paper, we applied a rigorous perturbed matching framework to derive the correction to the mass of rotating stars in Hartle's model. Here, we apply this framework to the tidal problem in binary systems. Our approach fully accounts for the correction to the Love numbers needed to obtain the universal I-Love-Q relations. We compute the corrected mass versus radius configurations of rotating quark stars, revisiting a classical paper on the subject. These corrections allow us to find a universal relation involving the second-order contribution to the mass δM. We thus complete the set of universal relations for the tidal problem in binary systems, involving four perturbation parameters, namely I, Love, Q and δM. These relations can be used to obtain the perturbation parameters directly from observational data.

  7. The importance of atmospheric correction for airborne hyperspectral remote sensing of shallow waters: application to depth estimation

    NASA Astrophysics Data System (ADS)

    Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe

    2017-10-01

    Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different applications require different accuracies, so the type of atmospheric correction most appropriate for depth estimation must be determined. Accuracy in bathymetric information is highly dependent on the atmospheric correction made to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5 % because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.

  8. In an occupational health surveillance study, auxiliary data from administrative health and occupational databases effectively corrected for nonresponse.

    PubMed

    Santin, Gaëlle; Geoffroy, Béatrice; Bénézet, Laetitia; Delézire, Pauline; Chatelot, Juliette; Sitta, Rémi; Bouyer, Jean; Gueguen, Alice

    2014-06-01

    To show how reweighting can correct for unit nonresponse bias in an occupational health surveillance survey by using data from administrative databases in addition to classic sociodemographic data. In 2010, about 10,000 workers covered by a French health insurance fund were randomly selected and were sent a postal questionnaire. Simultaneously, auxiliary data from routine health insurance and occupational databases were collected for all these workers. To model the probability of response to the questionnaire, logistic regressions were performed with these auxiliary data to compute weights for correcting unit nonresponse. Corrected prevalences of questionnaire variables were estimated under several assumptions regarding the missing data process. The impact of reweighting was evaluated by a sensitivity analysis. Respondents had more reimbursement claims for medical services than nonrespondents but fewer reimbursements for medical prescriptions or hospitalizations. Salaried workers, workers in service companies, or those who had held their job longer than 6 months were more likely to respond. Corrected prevalences after reweighting were slightly different from crude prevalences for some variables but meaningfully different for others. Linking health insurance and occupational data effectively corrects for nonresponse bias using reweighting techniques. Sociodemographic variables alone may not be sufficient to correct for nonresponse. Copyright © 2014 Elsevier Inc. All rights reserved.
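
    The reweighting scheme the abstract describes can be sketched as inverse-probability-of-response weighting: fit a logistic response model on auxiliary data available for the whole sample, then weight respondents by the inverse of their estimated response probability. The variables, sample size, and coefficients below are hypothetical, not taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50000

    # Hypothetical auxiliary variable (e.g. number of reimbursement claims)
    # and a health outcome correlated with it.
    claims = rng.poisson(3, n)
    outcome = (rng.random(n) < 0.2 + 0.05 * np.minimum(claims, 6)).astype(float)

    # Response probability depends on the auxiliary variable -> nonresponse bias.
    p_resp = 1 / (1 + np.exp(-(-0.5 + 0.3 * claims)))
    responded = rng.random(n) < p_resp

    # Fit a logistic response model on the full sample via Newton-Raphson.
    X = np.column_stack([np.ones(n), claims])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1 / (1 + np.exp(-(X @ beta)))
        grad = X.T @ (responded - p)                 # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])    # observed information
        beta += np.linalg.solve(hess, grad)

    # Inverse-probability-of-response weights for respondents only.
    p_hat = 1 / (1 + np.exp(-(X @ beta)))
    w = 1 / p_hat[responded]

    naive = outcome[responded].mean()                 # biased crude prevalence
    weighted = np.average(outcome[responded], weights=w)
    true_prev = outcome.mean()                        # full-sample benchmark
    ```

    Because respondents here over-represent high-claim workers, the naive prevalence is biased upward, while the reweighted estimate recovers the full-sample value.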

  9. Large mirror surface control by corrective coating

    NASA Astrophysics Data System (ADS)

    Bonnand, Romain; Degallaix, Jerome; Flaminio, Raffaele; Giacobone, Laurent; Lagrange, Bernard; Marion, Fréderique; Michel, Christophe; Mours, Benoit; Mugnier, Pierre; Pacaud, Emmanuel; Pinard, Laurent

    2013-08-01

    The Advanced Virgo gravitational wave detector aims at a sensitivity ten times better than the initial LIGO and Virgo detectors. This implies very stringent requirements on the optical losses in the interferometer arm cavities. In this paper we focus on the mirrors which form the interferometer arm cavities and that require a surface figure error well below one nanometre on a diameter of 150 mm. This ‘sub-nanometric flatness’ is not achievable by classical polishing on such a large diameter. Therefore we present the corrective coating technique which has been developed to reach this requirement. Its principle is to add a non-uniform thin film on top of the substrate in order to flatten its surface. In this paper we will introduce the Advanced Virgo requirements and present the basic principle of the corrective coating technique. Then we show the results obtained experimentally on an initial Virgo substrate. Finally we provide an evaluation of the round-trip losses in the Fabry-Perot arm cavities once the corrected surface is used.

  10. Snijders's correction of Infit and Outfit indexes with estimated ability level: an analysis with the Rasch model.

    PubMed

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2014-01-01

    The Infit mean square W and the Outfit mean square U are commonly used person fit indexes under Rasch measurement. However, they suffer from two major weaknesses. First, their asymptotic distribution is usually derived by assuming that the true ability levels are known. Second, such distributions are not even clearly stated for the indexes U and W. Both issues can seriously affect the selection of an appropriate cut-score for person fit identification. Snijders (2001) proposed a general approach to correct some person fit indexes when specific ability estimators are used. The purpose of this paper is to adapt this approach to the U and W indexes. First, a brief sketch of the methodology and its application to U and W is presented. Then, the corrected indexes are compared to their classical versions through a simulation study. The suggested correction yields Type I error rates that are controlled against both conservatism and inflation, while the power to detect specific misfitting response patterns is significantly increased.
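
    The U and W statistics themselves are straightforward to compute from squared residuals under the Rasch model; a minimal sketch, with item difficulties and response patterns invented for illustration (Snijders-type corrections to the reference distribution are not shown):

    ```python
    import numpy as np

    def person_fit(x, b, theta):
        """Outfit (U) and Infit (W) mean squares for one 0/1 response
        pattern x under the Rasch model, given item difficulties b and
        a (possibly estimated) ability theta."""
        p = 1 / (1 + np.exp(-(theta - b)))        # Rasch success probabilities
        var = p * (1 - p)                          # Bernoulli variances
        U = np.mean((x - p) ** 2 / var)            # Outfit: unweighted mean square
        W = np.sum((x - p) ** 2) / np.sum(var)     # Infit: variance-weighted
        return U, W

    b = np.linspace(-2, 2, 5)                      # items ordered easy -> hard
    fit_ok = person_fit(np.array([1, 1, 1, 0, 0]), b, 0.0)   # Guttman-like pattern
    fit_bad = person_fit(np.array([0, 0, 0, 1, 1]), b, 0.0)  # aberrant pattern
    ```

    Both indexes stay below 1 for the consistent pattern and rise well above 1 for the aberrant one, which is what cut-score selection relies on.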

  11. Quantum corrections to Bekenstein-Hawking black hole entropy and gravity partition functions

    NASA Astrophysics Data System (ADS)

    Bytsenko, A. A.; Tureanu, A.

    2013-08-01

    Algebraic aspects of the computation of partition functions for quantum gravity and black holes in AdS3 are discussed. We compute the sub-leading quantum corrections to the Bekenstein-Hawking entropy. It is shown that the quantum corrections to the classical result can be included systematically by making use of the comparison with conformal field theory partition functions, via the AdS3/CFT2 correspondence. This leads to a better understanding of the role of modular and spectral functions, from the point of view of the representation theory of infinite-dimensional Lie algebras. Besides, the sum of known quantum contributions to the partition function can be presented in a closed form, involving the Patterson-Selberg spectral function. These contributions can be reproduced in a holomorphically factorized theory whose partition functions are associated with the formal characters of the Virasoro modules. We propose a spectral function formulation for quantum corrections to the elliptic genus from supergravity states.

  12. The probabilistic origin of Bell's inequality

    NASA Technical Reports Server (NTRS)

    Krenn, Guenther

    1994-01-01

    The concept of local realism entails certain restrictions concerning the possible occurrence of correlated events. Although these restrictions are inherent in classical physics, they were not noticed until Bell showed in 1964 that general correlations in quantum mechanics cannot be interpreted in a classical way. We demonstrate how a local realistic way of thinking about measurement results necessarily leads to limitations with regard to the possible appearance of correlated events. These limitations, which are equivalent to Bell's inequality, can be easily formulated as an immediate consequence of our discussion.
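
    The restriction can be made concrete with the CHSH form of Bell's inequality. In the sketch below (settings and angles chosen for illustration, not taken from the paper), every deterministic local hidden-variable assignment gives a per-trial CHSH combination of exactly ±2, so its average obeys |S| ≤ 2, while the quantum singlet correlations reach 2√2:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100000

    # Local realism: each "particle pair" carries predetermined +/-1 outcomes
    # for both of Alice's settings (a, a') and both of Bob's (b, b').
    A = rng.choice([-1, 1], size=(n, 2))
    B = rng.choice([-1, 1], size=(n, 2))

    # CHSH combination trial by trial: algebraically it equals
    # A0*(B0+B1) + A1*(B0-B1), which is always +/-2 for +/-1 outcomes.
    per_trial = A[:, 0] * (B[:, 0] + B[:, 1]) + A[:, 1] * (B[:, 0] - B[:, 1])
    S_lhv = per_trial.mean()          # hence |S| <= 2 for any LHV model

    # Quantum singlet correlations E(a, b) = -cos(a - b) at the optimal
    # angles violate the bound, reaching magnitude 2*sqrt(2).
    a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    E = lambda x, y: -np.cos(x - y)
    S_qm = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
    ```

    The per-trial identity is the whole content of the inequality: no assignment of predetermined outcomes can exceed 2 on average, whatever distribution the hidden variable follows.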

  13. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, the optimization performance of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.
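
    For reference, classical simulated annealing on a random Euclidean TSP instance looks like the following; the instance size, temperature length (proposals per temperature), and cooling rate are illustrative choices, not the parameters studied in the paper:

    ```python
    import math, random

    random.seed(0)

    n = 30
    pts = [(random.random(), random.random()) for _ in range(n)]

    def dist(i, j):
        return math.dist(pts[i], pts[j])

    def tour_length(t):
        return sum(dist(t[k], t[(k + 1) % n]) for k in range(n))

    tour = list(range(n))
    cur = tour_length(tour)
    best = cur
    T = 1.0
    while T > 1e-3:
        for _ in range(200):                 # "temperature length"
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
            d = tour_length(cand) - cur
            if d < 0 or random.random() < math.exp(-d / T):        # Metropolis rule
                tour, cur = cand, cur + d
                best = min(best, cur)
        T *= 0.95                             # geometric cooling schedule
    ```

    The temperature length (here 200 proposals per temperature) is exactly the parameter whose empirical choice the paper relates to the algorithm's functionality limit.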

  14. The Thermal Equilibrium Solution of a Generic Bipolar Quantum Hydrodynamic Model

    NASA Astrophysics Data System (ADS)

    Unterreiter, Andreas

    The thermal equilibrium state of a bipolar, isothermic quantum fluid confined to a bounded domain (d = 1, 2 or 3) is entirely described by the particle densities n, p minimizing an energy functional in which G1,2 are strictly convex real-valued functions. It is shown that this variational problem has a unique minimizer, and some regularity results are proven. The semi-classical limit is carried out, recovering the minimizer of the limiting functional. The subsequent zero-space-charge limit leads to extensions of the classical boundary conditions. Due to the lack of regularity, the asymptotics cannot be settled by Sobolev embedding arguments. The limit is carried out by means of a compactness-by-convexity principle.

  15. Local patches of turbulent boundary layer behaviour in classical-state vertical natural convection

    NASA Astrophysics Data System (ADS)

    Ng, Chong Shen; Ooi, Andrew; Lohse, Detlef; Chung, Daniel

    2016-11-01

    We present evidence of local patches in vertical natural convection that are reminiscent of Prandtl-von Kármán turbulent boundary layers, for Rayleigh numbers 105-109 and Prandtl number 0.709. These local patches exist in the classical state, where boundary layers exhibit a laminar-like Prandtl-Blasius-Pohlhausen scaling at the global level, and are distinguished by regions dominated by high shear and low buoyancy flux. Within these patches, the locally averaged mean temperature profiles appear to obey a log-law with the universal constants of Yaglom (1979). We find that the local Nusselt number versus Rayleigh number scaling relation agrees with the logarithmically corrected power-law scaling predicted in the ultimate state of thermal convection, with an exponent consistent with Rayleigh-Bénard convection and Taylor-Couette flows. The local patches grow in size with increasing Rayleigh number, suggesting that the transition from the classical state to the ultimate state is characterised by increasingly larger patches of the turbulent boundary layers.

  16. Revised standards for statistical evidence.

    PubMed

    Johnson, Valen E

    2013-11-26

    Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
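
    The qualitative link between P values and evidence can be illustrated with the well-known −e·p·ln(p) minimum-Bayes-factor bound of Sellke, Bayarri and Berger. This is a different device from Johnson's uniformly most powerful Bayesian tests, but it makes the same point: conventional significance levels correspond to surprisingly weak evidence against the null.

    ```python
    import math

    def max_evidence_against_null(p):
        """Upper bound on the Bayes factor against the null implied by a
        P value, via the -e*p*ln(p) minimum-Bayes-factor bound
        (valid for p < 1/e)."""
        return 1.0 / (-math.e * p * math.log(p))

    # p = 0.05 corresponds to at most roughly 2.5:1 evidence against the
    # null; p = 0.005 to roughly 14:1; p = 0.001 to roughly 53:1.
    ```

    Under this bound, even the stricter 0.005 and 0.001 levels cap the attainable evidence well below certainty, consistent with the abstract's call for much higher evidence thresholds.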

  17. [Comparison of classical 2D measurement of scoliosis and 3D measurement using vertebral vectors; advantages for prognosis and treatment evaluation].

    PubMed

    Illés, Tamás

    2011-03-01

    The EOS system is a new medical imaging device based on low-dose X-rays, gaseous detectors and dedicated software for 3D reconstruction. It was developed by Nobel prizewinner Georges Charpak. A new concept--the vertebral vector--is used to facilitate the interpretation of EOS data, especially in the horizontal plane. We studied 95 cases of idiopathic scoliosis before and after surgery by means of classical methods and using vertebral vectors, in order to compare the accuracy of the two approaches. The vertebral vector permits simultaneous analysis of the scoliotic curvature in the frontal, sagittal and horizontal planes, as precisely as classical methods. The use of the vertebral vector simplifies and facilitates the interpretation of the mass of information provided by EOS. After analyzing the horizontal data, the first goal of corrective intervention would be to reduce the lateral vertebral deviation. The reduction in vertebral rotation seems less important. This is a new element in the therapeutic management of spinal deformations.

  18. [Laparoscopy coupled with classical abdominoplasty in 10 cases of large rectus diastasis].

    PubMed

    Huguier, V; Faure, J-L; Doucet, C; Giot, J-P; Dagregorio, G

    2012-08-01

    In 10 cases of abdominoplasty where an important rectus diastasis had to be corrected, we completed the plication of the rectus sheath included in a classical abdominoplasty with the laparoscopic positioning of an intraperitoneal prosthesis. To assess the mid-term results of this technique and present its advantages and drawbacks. Fifteen patients were operated on between 2007 and 2011 by two surgical teams. Ten of them agreed to be included in our survey. All the patients said they were satisfied with their surgery. Four of them reported mild pain during the first postoperative weeks, and two of them mentioned very moderate pain at the time of the survey. The surgeons were not satisfied with the results obtained in two cases. Only one of these two patients accepted revision abdominoplasty, with a good result. Laparoscopic positioning of an intraperitoneal prosthesis, coupled with a classical plication of the rectus sheath, gives excellent results in difficult cases of rectus diastasis. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  19. [Thought and method of classic formulae in treatment of chronic cough].

    PubMed

    Su, Ke-Lei; Zhang, Ye-Qing

    2018-06-01

    Chronic cough is a common clinical disease with complex etiology, which is easily misdiagnosed and mistreated. Chronic cough guidelines have been developed based on the modern anatomical etiology classification, and they may improve the level of diagnosis and treatment. Common causes of chronic cough are as follows: cough variant asthma, upper airway cough syndrome, eosinophilic bronchitis, gastroesophageal reflux-related cough, post-infectious cough, etc. There is a long history and rich experience in the treatment of cough in traditional Chinese medicine, which is characterized by syndrome differentiation. The four elements of pathogenesis for chronic cough are wind, phlegm, fire, and deficiency. Classic formulae are widely used in the treatment of chronic cough, and the focus is on prescriptions corresponding to syndromes. This article attempts to explore the thought and method of classic formulae in the treatment of chronic cough from three perspectives: differentiation of etiology, pathogenesis and formula-syndrome. Three medical cases are selected at the end in order to demonstrate its validity. Copyright© by the Chinese Pharmaceutical Association.

  20. Thermal helium clusters at 3.2 Kelvin in classical and semiclassical simulations

    NASA Astrophysics Data System (ADS)

    Schulte, J.

    1993-03-01

    The thermodynamic stability of 4He4-13 clusters at 3.2 K is investigated with the classical Monte Carlo method, with the semiclassical path-integral Monte Carlo (PIMC) method, and with the semiclassical all-order many-body method. In the all-order many-body simulation the dipole-dipole approximation including a short-range correction is used. The resulting stability plots are discussed and related to recent TOF experiments by Stephens and King. It is found that, as expected, classical Monte Carlo cannot resolve the characteristics of the measured mass spectrum. With PIMC, switching on more and more quantum mechanics by raising the number of virtual time steps results in more structure in the stability plot, but this does not lead to sufficient agreement with the TOF experiment. Only the all-order many-body method resolved the characteristic structures of the measured mass spectrum, including magic numbers. The result shows the influence of quantum statistics and quantum mechanics on the stability of small neutral helium clusters.

  1. Scalar field quantum cosmology: A Schrödinger picture

    NASA Astrophysics Data System (ADS)

    Vakili, Babak

    2012-11-01

    We study the classical and quantum models of a scalar field Friedmann-Robertson-Walker (FRW) cosmology with an eye to the problem of time in quantum cosmology. We introduce a canonical transformation on the scalar field sector of the action such that the momentum conjugate to the new canonical variable appears linearly in the transformed Hamiltonian. Using this canonical transformation, we show that it may lead to the identification of a time parameter for the corresponding dynamical system. In the cases of flat, closed and open FRW universes the classical cosmological solutions are obtained in terms of the introduced time parameter. Moreover, this formalism gives rise to a Schrödinger-Wheeler-DeWitt equation for the quantum-mechanical description of the model under consideration, the eigenfunctions of which can be used to construct the wave function of the universe. We use the resulting wave functions in order to investigate the possible corrections to the classical cosmologies due to quantum effects by means of the many-worlds and ontological interpretations of quantum cosmology.

  2. Closed string tachyon driving f(R) cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Wu, Houwen; Yang, Haitang

    2018-05-01

    To study quantum effects on the bulk tachyon dynamics, we replace R with f(R) in the low-energy effective action that couples gravity, the dilaton, and the bulk closed string tachyon of bosonic closed string theory, and study properties of their classical solutions. The α' corrections of the graviton-dilaton-tachyon system are implemented in the f(R). We obtain the tachyon-induced rolling solutions and show that the string metric does not need to remain fixed in some cases. In the case with H(t = −∞) = , only the R and R2 terms in f(R) play a role in obtaining the rolling solutions with nontrivial metric. The singular behavior of further classical solutions is investigated and found to be modified by quantum effects. In particular, there could exist some classical solutions in which the tachyon field rolls down from a maximum of the tachyon potential while the dilaton expectation value is always bounded from above during the rolling process.

  3. Epistemic View of Quantum States and Communication Complexity of Quantum Channels

    NASA Astrophysics Data System (ADS)

    Montina, Alberto

    2012-09-01

    The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previously known upper bound was 1.85 bits.

  4. How Could Foreign Teachers in Turkey Pronounce Their Turkish Students' Names Correctly

    ERIC Educational Resources Information Center

    Yurtbasi, Metin

    2016-01-01

    Most of us have read Dale Carnegie's classic "How to Make Friends and Influence People," in which he reveals a secret of human psychology: giving people the "feeling of importance" that they seek. He claims in that work that people feel more friendly toward those who allow them this feeling by caring about them and showing…

  5. Use of Operational Criteria in an Office Practice for Diagnosis of Children Referred for Evaluation of Learning or Behavior Disorders.

    ERIC Educational Resources Information Center

    Brumback, Roger A.

    1979-01-01

    Operational criteria for childhood depression, specific learning disability, developmental hyperactivity, and Gilles de la Tourette syndrome were used to establish the correct diagnosis in 55 of 100 school age Ss. Forty-five Ss were diagnosed as having one of three classical neurological syndromes (epilepsy, sensorineural deafness, and childhood…

  6. Filovirus-Like Particles as Vaccines and Discovery Tools

    DTIC Science & Technology

    2005-06-01

    or MARV strains. Classic methods for vaccine development have been tried, including producing and testing attenuated and inactivated viral... MARV challenge [52]. However, an attenuated virus vaccine is undesirable for filoviruses due to the danger of reversion to wild-type virulence... correct structural proteins is sufficient for forming VLPs. This is true for both nonenveloped viruses, such as parvovirus, papillomavirus, rotavirus

  7. Transient chaos - a resolution of breakdown of quantum-classical correspondence in optomechanics.

    PubMed

    Wang, Guanglei; Lai, Ying-Cheng; Grebogi, Celso

    2016-10-17

    Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law.

  8. Transient chaos - a resolution of breakdown of quantum-classical correspondence in optomechanics

    PubMed Central

    Wang, Guanglei; Lai, Ying-Cheng; Grebogi, Celso

    2016-01-01

    Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law. PMID:27748418

  9. Are historical values of ionospheric parameters from ionosondes overestimated?

    NASA Astrophysics Data System (ADS)

    Laštovička, J.; Koucká Knížová, P.; Kouba, D.

    2012-04-01

    Ionogram-scaled values from pre-digital ionosonde times were derived from ionograms under the assumption of vertical reflection of the ordinary mode of the sounding radio waves. Classical ionosondes were unable to distinguish between vertical and oblique reflections and, in the case of the Es layer, between ordinary- and extraordinary-mode reflections, due to mirror-like reflections. Modern digisondes, however, clearly identify oblique and extraordinary-mode reflections. Evaluating the Pruhonice digisonde ionograms in the "classical" and in the "correct" way, we found for seven summers (2004-2010) that among strong foEs (> 6 MHz) only 10% of foEs values were correct and 90% were artificially enhanced, on average by 1 MHz and in extreme cases by more than 3 MHz (some oblique reflections). 34% of all reflections were oblique. With other ionospheric parameters such as foF2 or foE the problem is less severe, because a non-mirror reflection delays the extraordinary mode with respect to the ordinary mode so that the two are separated on ionograms, and oblique reflections are less frequent than with the patchy Es layer. At high latitudes another problem is caused by the z-mode, which is sometimes difficult to distinguish from the ordinary mode.

  10. Line Interference Effects Using a Refined Robert-Bonamy Formalism: the Test Case of the Isotropic Raman Spectra of Autoperturbed N2

    NASA Technical Reports Server (NTRS)

    Boulet, Christian; Ma, Qiancheng; Thibault, Franck

    2014-01-01

    A symmetrized version of the recently developed refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)] is proposed. This model takes into account line coupling effects and hence allows the calculation of the off-diagonal elements of the relaxation matrix, without neglecting the rotational structure of the perturbing molecule. The formalism is applied to the isotropic Raman spectra of autoperturbed N2, for which a benchmark quantum relaxation matrix has recently been proposed. The consequences of the classical path approximation are carefully analyzed. Methods correcting for effects of inelasticity are considered. While in the right direction, these corrections appear to be too crude to provide off-diagonal elements which would yield, via the sum rule, diagonal elements in good agreement with the quantum results. In order to overcome this difficulty, a re-normalization procedure is applied, which ensures that the off-diagonal elements do lead to the exact quantum diagonal elements. The agreement between the (re-normalized) semi-classical and quantum relaxation matrices is excellent, at least for the Raman spectra of N2, opening the way to the analysis of more complex molecular systems.

  11. Tracking of Ball and Players in Beach Volleyball Videos

    PubMed Central

    Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern

    2014-01-01

    This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points. PMID:25426936
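
    The parabolic estimation-and-interpolation step for the ball can be sketched as an ordinary least-squares fit of a quadratic in the frame index; the frame count, trajectory coefficients, noise level, and dropout rate below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical ball detections: the image y-coordinate follows a parabola
    # in frame number t (projectile motion), with pixel noise and dropped frames.
    t_all = np.arange(30)
    y_true = -0.4 * t_all**2 + 12 * t_all + 50
    seen = rng.random(t_all.size) > 0.3            # ~30% of detections missing
    t_obs = t_all[seen]
    y_obs = y_true[seen] + rng.normal(0, 1.0, t_obs.size)

    # Least-squares parabola fit to the surviving candidates.
    coeffs = np.polyfit(t_obs, y_obs, 2)

    # Interpolate the trajectory at every frame, including dropped ones,
    # which is what allows contact frames to be estimated from trajectory
    # intersections.
    y_fit = np.polyval(coeffs, t_all)
    ```

    Fitting a whole parabola rather than tracking frame by frame is what makes the method robust to missed detections between contacts.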

  12. Conservative classical and quantum resolution limits for incoherent imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-06-01

    I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 6, 031033 (2016)] remains superior to direct imaging for sufficiently high photon numbers.

  13. Limb Lengthening and Then Insertion of an Intramedullary Nail: A Case-matched Comparison

    PubMed Central

    Kleinman, Dawn; Fragomen, Austin T.; Ilizarov, Svetlana

    2008-01-01

    Distraction osteogenesis is an effective method for lengthening, deformity correction, and treatment of nonunions and bone defects. The classic method uses an external fixator for both distraction and consolidation leading to lengthy times in frames and there is a risk of refracture after frame removal. We suggest a new technique: lengthening and then nailing (LATN) technique in which the frame is used for gradual distraction and then a reamed intramedullary nail inserted to support the bone during the consolidation phase, allowing early removal of the external fixator. We performed a retrospective case-matched comparison of patients lengthened with LATN (39 limbs in 27 patients) technique versus the classic (34 limbs in 27 patients). The LATN group wore the external fixator for less time than the classic group (12 versus 29 weeks). The LATN group had a lower external fixation index (0.5 versus 1.9) and a lower bone healing index (0.8 versus 1.9) than the classic group. LATN confers advantages over the classic method including shorter times needed in external fixation, quicker bone healing, and protection against refracture. There are also advantages over the lengthening over a nail and internal lengthening nail techniques. Level of Evidence: Level III, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:18800209

  14. Learning, Realizability and Games in Classical Arithmetic

    NASA Astrophysics Data System (ADS)

    Aschieri, Federico

    2010-12-01

    In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of Predicative Arithmetic. First, we extend Kreisel's modified realizability to a classical fragment of first-order Arithmetic: Heyting Arithmetic plus EM1 (the excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Second, we extend the class of learning-based realizers to a classical version PCFclass of PCF, then compare the resulting notion of realizability with Coquand's game semantics and prove a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-Backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning-based realizability for HA+EM1, in Avigad's update procedures, and in the epsilon substitution method for Peano Arithmetic (PA). We present new constructive techniques to bound the length of learning processes, and we apply them to reprove, by means of our theory, the classic result of Gödel that the provably total functions of PA can be represented in Gödel's system T. Last, we give an axiomatization of the kind of learning that is needed to computationally interpret Predicative classical second-order Arithmetic. Our work extends Avigad's and generalizes the concept of update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non-computable functions in order to extract witnesses from classical proofs.

  15. Hamiltonian structure of classical N-body systems of finite-size particles subject to EM interactions

    NASA Astrophysics Data System (ADS)

    Cremaschini, C.; Tessarotto, M.

    2012-01-01

    An open issue in classical relativistic mechanics is the consistent treatment of the dynamics of classical N-body systems of mutually interacting particles. This refers, in particular, to charged particles subject to EM interactions, including both binary interactions and self-interactions (EM-interacting N-body systems). The correct solution to this question is an overriding prerequisite for the consistency between classical and quantum mechanics. In this paper it is shown that such a description can be consistently obtained in the context of classical electrodynamics for the case of an N-body system of classical finite-size charged particles. A variational formulation of the problem is presented, based on the N-body hybrid synchronous Hamilton variational principle. Covariant Lagrangian and Hamiltonian equations of motion for the dynamics of the interacting N-body system are derived, which are proved to be delay-type ODEs. Then, a representation in both standard Lagrangian and Hamiltonian forms is proved to hold, the latter expressed by means of classical Poisson brackets. The theory developed retains both the covariance with respect to the Lorentz group and the exact Hamiltonian structure of the problem, which is shown to be intrinsically non-local. Different applications of the theory are investigated. The first concerns the development of a suitable Hamiltonian approximation of the exact equations that retains the finite delay-time effects characteristic of the binary interactions and self-EM-interactions. Second, basic consequences concerning the validity of the Dirac generator formalism are pointed out, with particular reference to the instant-form representation of Poincaré generators. Finally, a discussion is presented on both the validity and possible extension of the Dirac generator formalism, as well as the failure of the so-called Currie "no-interaction" theorem for the non-local Hamiltonian system considered here.

  16. An efficient direct solver for rarefied gas flows with arbitrary statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Manuel A., E-mail: f99543083@ntu.edu.tw; Yang, Jaw-Yen, E-mail: yangjy@iam.ntu.edu.tw; Center of Advanced Study in Theoretical Science, National Taiwan University, Taipei 10167, Taiwan

    2016-01-15

    A new numerical methodology with a unified treatment is presented to solve the Boltzmann–BGK equation of gas dynamics for classical and quantum gases described by Bose–Einstein and Fermi–Dirac statistics. Utilizing a class of globally stiffly accurate implicit–explicit Runge–Kutta schemes for the temporal evolution, together with the discrete ordinate method for the quadratures in momentum space and the weighted essentially non-oscillatory method for the spatial discretization, the proposed scheme is asymptotic-preserving and requires neither a non-linear solver nor knowledge of fugacity and temperature to capture the flow structures in the hydrodynamic (Euler) limit. The proposed treatment overcomes the limitations found in the work by Yang and Muljadi (2011) [33] due to the non-linear nature of the quantum relations, and can be applied in studying the dynamics of a gas with internal degrees of freedom with correct values of the ratio of specific heats for flow regimes at all Knudsen numbers and energy wavelengths. The present methodology is numerically validated against the unified treatment on the one-dimensional shock tube problem and the two-dimensional Riemann problems for gases of arbitrary statistics. Descriptions of ideal quantum gases including rotational degrees of freedom have been successfully achieved under the proposed methodology.

  17. Current Management of Presbyopia

    PubMed Central

    Papadopoulos, Pandelis A.; Papadopoulos, Alexandros P.

    2014-01-01

    Presbyopia is a physiologic inevitability that causes gradual loss of accommodation during the fifth decade of life. The correction of presbyopia and the restoration of accommodation are considered the final frontier of refractive surgery. Different approaches involving the cornea, the crystalline lens, and the sclera are being pursued to achieve surgical correction of this disability. There are, however, a number of limitations and considerations that have prevented widespread acceptance of surgical correction for presbyopia. The quality of vision, optical and visual distortions, regression of effect, complications such as corneal ectasia and haze, anisometropia after monovision correction, impaired distance vision, and the invasive nature of current techniques have limited the utilization of presbyopia surgery. The purpose of this paper is to provide an update on current procedures available for presbyopia correction and their limitations. PMID:24669140

  18. Trajectory-based understanding of the quantum-classical transition for barrier scattering

    NASA Astrophysics Data System (ADS)

    Chou, Chia-Chun

    2018-06-01

    The quantum-classical transition of wave packet barrier scattering is investigated using a hydrodynamic description in the framework of a nonlinear Schrödinger equation. The nonlinear equation provides a continuous description for the quantum-classical transition of physical systems by introducing a degree of quantumness. Based on the transition equation, the transition trajectory formalism is developed to establish the connection between classical and quantum trajectories. The quantum-classical transition is then analyzed for the scattering of a Gaussian wave packet from an Eckart barrier and the decay of a metastable state. Computational results for the evolution of the wave packet and the transmission probabilities indicate that classical results are recovered when the degree of quantumness tends to zero. Classical trajectories are in excellent agreement with the transition trajectories in the classical limit, except in some regions where transition trajectories cannot cross because of the single-valuedness of the transition wave function. As the computational results demonstrate, the process that the Planck constant tends to zero is equivalent to the gradual removal of quantum effects originating from the quantum potential. This study provides an insightful trajectory interpretation for the quantum-classical transition of wave packet barrier scattering.

  19. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares (CLS) multivariate spectral analysis method adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of prediction-augmented classical least squares (PACLS) is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
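
    A minimal numerical sketch of the idea above, assuming the usual linear CLS mixture model. The spectra, the drift shape, and the concentrations are invented for illustration; no claim is made that this mirrors the patented implementation, only that augmenting the prediction step with an extra spectral shape removes the bias that plain CLS suffers from an unmodeled component.

```python
import numpy as np

# CLS model: mixture spectrum d = K @ c, with the columns of K the calibrated
# pure-component spectra and c the concentrations (noise-free sketch).
rng = np.random.default_rng(0)
n_wavelengths = 50
K = rng.random((n_wavelengths, 2))            # two calibrated components
drift = np.linspace(0.0, 1.0, n_wavelengths)  # un-modelled spectral shape

c_true = np.array([0.3, 0.7])
d = K @ c_true + 0.5 * drift                  # measurement contaminated by drift

# Plain CLS ignores the drift shape, so its prediction is biased.
c_cls, *_ = np.linalg.lstsq(K, d, rcond=None)

# PACLS-style prediction: augment K with the drift shape at prediction time.
K_aug = np.column_stack([K, drift])
c_pacls, *_ = np.linalg.lstsq(K_aug, d, rcond=None)

print(np.round(c_pacls[:2], 6))  # recovers [0.3, 0.7]
```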

  20. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  1. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachos, C. K.; High Energy Physics

    Following ref [1], a classical upper bound for quantum entropy is identified and illustrated, 0 ≤ S_q ≤ ln(eσ²/2ℏ), involving the variance σ² in phase space of the classical limit distribution of a given system. A fortiori, this further bounds the corresponding information-theoretical generalizations of the quantum entropy proposed by Rényi.

  3. Soprano and source: A laryngographic analysis

    NASA Astrophysics Data System (ADS)

    Bateman, Laura Anne

    2005-04-01

    Popular music in the 21st century uses a particular singing quality for the female voice that is quite different from the trained classical singing quality. Classical quality has been the subject of a vast body of research, whereas research on non-classical qualities is limited. In order to learn more about these issues, the author chose to study singing qualities using a variety of standard voice-quality tests. This paper looks at voice qualities found in different styles of singing: Classical, Belt, Legit, R&B, Jazz, Country, and Pop. The data were elicited from a professional soprano, and the voice qualities reflect industry standards. The data set for this paper is limited to samples using the vowel [i]. Laryngographic (LGG) data were generated simultaneously with the audio samples. This paper focuses on the results of the LGG analysis; an audio analysis was also performed using spectrogram, LPC, and FFT. Data from the LGG are used to calculate the contact quotient, speed quotient, and ascending slope. The LGG waveform is also visually assessed. The LGG analysis gives insights into the source vibration for the different singing styles.

  4. Back to Classics: Teaching Limits through Infinitesimals.

    ERIC Educational Resources Information Center

    Todorov, Todor D.

    2001-01-01

    Criticizes the method of using calculators for the purpose of selecting candidates for L, the limit value of a function. Suggests an alternative: a working formula for calculating the limit value L of a real function in terms of infinitesimals. (Author/ASK)
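
    A numeric caricature of the infinitesimal idea, under the assumption that the article's formula amounts to taking the standard part of f evaluated infinitesimally close to a; floating-point arithmetic can only mimic this with a small finite h, so the helper below is hypothetical, not the article's formula.

```python
import math

# Hypothetical helper: mimic the standard part of f(a + eps) for an
# infinitesimal eps by a two-sided evaluation at a small finite h.
def limit_estimate(f, a, h=1e-7):
    return 0.5 * (f(a + h) + f(a - h))

L = limit_estimate(lambda x: math.sin(x) / x, 0.0)
print(round(L, 9))  # 1.0
```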

  5. Anomalous Subsidence at the Ocean Continent Transition of the Gulf of Aden Rifted Continental Margin

    NASA Astrophysics Data System (ADS)

    Cowie, Leanne; Kusznir, Nick; Leroy, Sylvie

    2013-04-01

    It has been proposed that some rifted continental margins have anomalous subsidence and that at break-up they were elevated at shallower bathymetries than the isostatic response predicted by classical rift models (McKenzie, 1978). The existence of anomalous syn- or early-post break-up subsidence of this form would have important implications for our understanding of the geodynamics of continental break-up and sea-floor spreading initiation. We have investigated subsidence of the young rifted continental margin of the eastern Gulf of Aden, focussing on the western Oman margin (break-up age 17.6 Ma). Lucazeau et al. (2008) have found that the observed bathymetry here is approximately 1 km shallower than the predicted bathymetry. In order to examine the proposition of an anomalous early post break-up subsidence history of the Omani Gulf of Aden rifted continental margin, we have determined the subsidence of the oldest oceanic crust adjacent to the continent-ocean boundary (COB) using residual depth anomaly (RDA) analysis corrected for sediment loading and oceanic crustal thickness variation. RDAs corrected for sediment loading using flexural backstripping and decompaction have been calculated by comparing observed and age predicted oceanic bathymetries in order to identify anomalous subsidence of the Gulf of Aden rifted continental margin. Age predicted bathymetric anomalies have been calculated using the thermal plate model predictions of Crosby and McKenzie (2009). Non-zero RDAs at the Omani Gulf of Aden rifted continental margin can be the result of non standard oceanic crustal thickness or the effect of mantle dynamic topography or a non-classical rift and break-up model. Oceanic crustal basement thicknesses from gravity inversion together with Airy isostasy have been used to predict a "synthetic" gravity RDA, in order to determine the RDA contribution from non-standard oceanic crustal thickness. 
Gravity inversion, used to determine crustal basement thickness, incorporates a lithosphere thermal gravity anomaly correction and uses sediment thicknesses from 2D seismic data. Reference Moho depths used in the gravity inversion have been calibrated against seismic refraction Moho depths. The difference between the sediment corrected RDA and the "synthetic" gravity derived RDA gives the component of the RDA which is not due to variations in oceanic crustal thickness. This RDA corrected for sediment loading and crustal thickness variation has a magnitude between +600m and +1000m (corresponding to anomalous uplift) and is comparable to that reported (+1km) by Lucazeau et al. (2008). We are unable to distinguish whether this anomalous uplift is due to mantle dynamic topography or anomalous subsidence with respect to classical rift model predictions.
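
    The bookkeeping described above reduces to simple arithmetic. The depths and corrections in this sketch are invented round numbers chosen only so that the residual lands inside the reported +600 m to +1000 m band; they are not the study's data.

```python
# All depths in km; negative is below sea level. Values are illustrative.
observed_bathymetry = -3.2       # backstripped (sediment-unloaded) seafloor depth
age_predicted_bathymetry = -4.1  # thermal plate-model prediction for this crustal age

# Residual depth anomaly (RDA) corrected for sediment loading:
rda_sediment_corrected = observed_bathymetry - age_predicted_bathymetry  # +0.9 km

# "Synthetic" RDA from gravity-derived crustal thickness plus Airy isostasy:
rda_synthetic_gravity = 0.1      # km attributable to non-standard crustal thickness

# Component of the RDA not explained by oceanic crustal thickness variation:
rda_residual = rda_sediment_corrected - rda_synthetic_gravity
print(f"{rda_residual:+.1f} km")  # +0.8 km
```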

  6. Computational quantum-classical boundary of noisy commuting quantum circuits

    PubMed Central

    Fujii, Keisuke; Tamate, Shuhei

    2016-01-01

    It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence; equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separability criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We find that when each qubit is subject to a single-qubit completely-positive-trace-preserving noise, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region. PMID:27189039

  7. Computational quantum-classical boundary of noisy commuting quantum circuits.

    PubMed

    Fujii, Keisuke; Tamate, Shuhei

    2016-05-18

    It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence; equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separability criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We find that when each qubit is subject to a single-qubit completely-positive-trace-preserving noise, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region.

  8. Computational quantum-classical boundary of noisy commuting quantum circuits

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Tamate, Shuhei

    2016-05-01

    It is often said that the transition from the quantum to the classical world is caused by decoherence originating from an interaction between a system of interest and its surrounding environment. Here we establish a computational quantum-classical boundary from the viewpoint of classical simulatability of a quantum system under decoherence. Specifically, we consider commuting quantum circuits subject to decoherence; equivalently, we can regard them as measurement-based quantum computation on decohered weighted graph states. To show the intractability of classical simulation on the quantum side, we utilize the postselection argument and crucially strengthen it by taking the noise effect into account. Classical simulatability on the classical side is also shown constructively by using both separability criteria in a projected-entangled-pair-state picture and the Gottesman-Knill theorem for mixed-state Clifford circuits. We find that when each qubit is subject to a single-qubit completely-positive-trace-preserving noise, the computational quantum-classical boundary is sharply given by the noise rate required for the distillability of a magic state. The obtained quantum-classical boundary of noisy quantum dynamics reveals a complexity landscape of controlled quantum systems. This paves the way to an experimentally feasible verification of quantum mechanics in a high-complexity limit beyond the classically simulatable region.

  9. Laser-only Adaptive Optics Achieves Significant Image Quality Gains Compared to Seeing-limited Observations over the Entire Sky

    NASA Astrophysics Data System (ADS)

    Howard, Ward S.; Law, Nicholas M.; Ziegler, Carl A.; Baranec, Christoph; Riddle, Reed

    2018-02-01

    Adaptive optics laser guide-star systems perform atmospheric correction of stellar wavefronts in two parts: stellar tip-tilt and high-spatial-order laser correction. The requirement of a sufficiently bright guide star in the field-of-view to correct tip-tilt limits sky coverage. In this paper, we show an improvement to effective seeing without the need for nearby bright stars, enabling full sky coverage by performing only laser-assisted wavefront correction. We used Robo-AO, the first robotic AO system, to comprehensively demonstrate this laser-only correction. We analyze observations from four years of efficient robotic operation covering 15000 targets and 42000 observations, each realizing different seeing conditions. Using an autoguider (or a post-processing software equivalent) and the laser to improve effective seeing independent of the brightness of a target, Robo-AO observations show a 39% ± 19% improvement to effective FWHM, without any tip-tilt correction. We also demonstrate that 50% encircled energy performance without tip-tilt correction remains comparable to diffraction-limited, standard Robo-AO performance. Faint-target science programs primarily limited by 50% encircled energy (e.g., those employing integral field spectrographs placed behind the AO system) may see significant benefits to sky coverage from employing laser-only AO.

  10. Nonlocal correlations in a macroscopic measurement scenario

    NASA Astrophysics Data System (ADS)

    Kunkri, Samir; Banik, Manik; Ghosh, Sibasish

    2017-02-01

    Nonlocality is one of the main characteristic features of quantum systems involving more than one spatially separated subsystem. It is manifested theoretically as well as experimentally through violation of some local realistic inequality. On the other hand, the classical behavior of all physical phenomena in the macroscopic limit gives a general intuition that any physical theory describing microscopic phenomena should resemble classical physics in the macroscopic regime, the so-called macrorealism. In the 2-2-2 scenario (two parties, each performing two measurements with two outcomes each), contemplating all the no-signaling correlations, we characterize which of them would exhibit classical (local realistic) behavior in the macroscopic limit. Interestingly, we find correlations which at the single-copy level violate the Bell-Clauser-Horne-Shimony-Holt inequality by an amount less than the optimal quantum violation (i.e., the Cirel'son bound 2√2), but in the macroscopic limit give rise to a value higher than 2√2. Such correlations are therefore not considered physical. Our study thus provides a sufficient criterion to identify some unphysical correlations.
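
    A minimal sketch of the single-copy bounds mentioned above, using a hypothetical isotropic no-signaling box (a Popescu-Rohrlich box mixed with white noise); the visibility 0.65 is chosen only to illustrate a correlation that is nonlocal yet still below the Cirel'son bound, and is not a correlation from the paper.

```python
import math

def chsh(E):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1) for correlators E[x][y]."""
    return E[0][0] + E[0][1] + E[1][0] - E[1][1]

def noisy_pr_box(v):
    """Hypothetical isotropic no-signaling box: a Popescu-Rohrlich box
    mixed with white noise, parametrized by visibility 0 <= v <= 1."""
    return [[v, v], [v, -v]]

classical_bound = 2.0                   # local realistic (CHSH) bound
tsirelson_bound = 2.0 * math.sqrt(2.0)  # optimal quantum violation, 2*sqrt(2)

# v = 0.65 gives S = 2.6: nonlocal (S > 2) yet below the Cirel'son bound.
S = chsh(noisy_pr_box(0.65))
print(S, S > classical_bound, S < tsirelson_bound)
```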

  11. Experimental Demonstration of Higher Precision Weak-Value-Based Metrology Using Power Recycling

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Tao; Tang, Jian-Shun; Hu, Gang; Wang, Jian; Yu, Shang; Zhou, Zong-Quan; Cheng, Ze-Di; Xu, Jin-Shi; Fang, Sen-Zhi; Wu, Qing-Lin; Li, Chuan-Feng; Guo, Guang-Can

    2016-12-01

    The weak-value-based metrology is very promising and has attracted a lot of attention in recent years because of its remarkable ability in signal amplification. However, it is suggested that the upper limit of the precision of this metrology cannot exceed that of classical metrology because of the low sample size caused by the probe loss during postselection. Nevertheless, a recent proposal shows that this probe loss can be reduced by the power-recycling technique, and thus enhance the precision of weak-value-based metrology. Here we experimentally realize the power-recycled interferometric weak-value-based beam-deflection measurement and obtain the amplitude of the detected signal and white noise by discrete Fourier transform. Our results show that the detected signal can be strengthened by power recycling, and the power-recycled weak-value-based signal-to-noise ratio can surpass the upper limit of the classical scheme, corresponding to the shot-noise limit. This work sheds light on higher precision metrology and explores the real advantage of the weak-value-based metrology over classical metrology.

  12. Effective field theory of dissipative fluids (II): classical limit, dynamical KMS symmetry and entropy current

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glorioso, Paolo; Crossley, Michael; Liu, Hong

    2017-09-20

    In this paper we further develop the previously proposed fluctuating hydrodynamics in a number of ways. We first work out in detail the classical limit of the hydrodynamical action, which exhibits many simplifications. In particular, this enables a transparent formulation of the action in physical spacetime in the presence of arbitrary external fields. It also helps to clarify issues related to field redefinitions and frame choices. We then propose that the action is invariant under a Z2 symmetry, to which we refer as the dynamical KMS symmetry. The dynamical KMS symmetry is physically equivalent to the previously proposed local KMS condition in the classical limit, but is more convenient to implement and more general: it is applicable to any state in local equilibrium rather than just a thermal density matrix perturbed by external background fields. Finally, we elaborate the formulation for a conformal fluid, which contains some new features, and work out the explicit form of the entropy current to second order in derivatives for a neutral conformal fluid.

  13. Time-dependent variational approach in terms of squeezed coherent states: Implication to semi-classical approximation

    NASA Technical Reports Server (NTRS)

    Tsue, Yasuhiko

    1994-01-01

    A general framework for time-dependent variational approach in terms of squeezed coherent states is constructed with the aim of describing quantal systems by means of classical mechanics including higher order quantal effects with the aid of canonicity conditions developed in the time-dependent Hartree-Fock theory. The Maslov phase occurring in a semi-classical quantization rule is investigated in this framework. In the limit of a semi-classical approximation in this approach, it is definitely shown that the Maslov phase has a geometric nature analogous to the Berry phase. It is also indicated that this squeezed coherent state approach is a possible way to go beyond the usual WKB approximation.

  14. Computation of the properties of liquid neon, methane, and gas helium at low temperature by the Feynman-Hibbs approach.

    PubMed

    Tchouar, N; Ould-Kaddour, F; Levesque, D

    2004-10-15

    The properties of liquid methane, liquid neon, and gaseous helium are calculated at low temperatures over a large range of pressure from classical molecular-dynamics simulations. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach. The equations of state and the diffusion and shear viscosity coefficients are determined for neon at 45 K, helium at 80 K, and methane at 110 K. A comparison is made with the existing experimental data and, for thermodynamical quantities, with results computed from quantum numerical simulations when available. The theoretical variation of the viscosity coefficient with pressure is in good agreement with the experimental data when the quantum corrections are taken into account, considerably reducing the 60% discrepancy between simulations and experiments observed in the absence of these corrections.
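
    The quadratic Feynman-Hibbs correction referred to above can be sketched directly. In reduced Lennard-Jones units, the effective pair potential is U_FH = U_LJ + (ℏ²/24μk_BT)(U″ + 2U′/r); the prefactor value below is illustrative, not fitted to neon, helium, or methane.

```python
def lj(r, eps=1.0, sig=1.0):
    """Lennard-Jones pair potential in reduced units."""
    x6 = (sig / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

def lj_d1(r, eps=1.0, sig=1.0):
    """First radial derivative U'(r)."""
    x6 = (sig / r) ** 6
    return 4.0 * eps * (-12.0 * x6 * x6 + 6.0 * x6) / r

def lj_d2(r, eps=1.0, sig=1.0):
    """Second radial derivative U''(r)."""
    x6 = (sig / r) ** 6
    return 4.0 * eps * (156.0 * x6 * x6 - 42.0 * x6) / r**2

def feynman_hibbs(r, prefactor, eps=1.0, sig=1.0):
    """Quadratic Feynman-Hibbs effective potential
    U_FH = U_LJ + prefactor * (U'' + 2 U' / r),
    where prefactor stands for hbar^2 / (24 mu k_B T) (illustrative value)."""
    laplacian = lj_d2(r, eps, sig) + 2.0 * lj_d1(r, eps, sig) / r
    return lj(r, eps, sig) + prefactor * laplacian

# At the classical minimum r = 2^(1/6) sigma, U' = 0 and U'' > 0, so the
# quantum correction raises the well: delocalization makes the fluid less bound.
r_min = 2.0 ** (1.0 / 6.0)
print(lj(r_min), feynman_hibbs(r_min, 0.01))
```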

  15. Automatic cortical segmentation in the developing brain.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V

    2007-01-01

    The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
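
    The Dice similarity score used above to compare automatic and manual segmentations is straightforward to compute; the toy voxel-label sets below are illustrative, not real MR data.

```python
# Dice similarity between a predicted and a manual segmentation mask,
# here on toy sets of voxel indices rather than real MR volumes.
def dice(a, b):
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

pred = {1, 2, 3, 4}    # voxels labelled grey matter by the algorithm
manual = {2, 3, 4, 5}  # voxels labelled grey matter by the expert
print(dice(pred, manual))  # 0.75
```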

  16. Simplified projection technique to correct geometric and chromatic lens aberrations using plenoptic imaging.

    PubMed

    Dallaire, Xavier; Thibault, Simon

    2017-04-01

    Plenoptic imaging has been used in the past decade mainly for 3D reconstruction or digital refocusing. It has also been shown that this technology has potential for correcting monochromatic aberrations in a standard optical system. In this paper, we present an algorithm for reconstructing images using a projection technique while correcting defects present in them; the method applies to chromatic aberrations and to wide-angle optical systems. We show that the impact of noise on the reconstruction procedure is minimal. Trade-offs between the sampling of the optical system needed for characterization and image quality are presented. Examples are shown for aberrations in a classic optical system and for chromatic aberrations. The technique is also applied to a wide-angle optical system with a 140° full field of view (FFOV 140°). This technique could be used to further simplify or minimize optical systems.

  17. Nonlinear responses of chiral fluids from kinetic theory

    NASA Astrophysics Data System (ADS)

    Hidaka, Yoshimasa; Pu, Shi; Yang, Di-Lun

    2018-01-01

    The second-order nonlinear responses of inviscid chiral fluids near local equilibrium are investigated by applying the chiral kinetic theory (CKT) incorporating side-jump effects. It is shown that the local equilibrium distribution function can be nontrivially introduced in a comoving frame with respect to the fluid velocity when the quantum corrections in collisions are involved. For the study of anomalous transport, contributions from both quantum corrections in anomalous hydrodynamic equations of motion and those from the CKT and Wigner functions are considered under the relaxation-time (RT) approximation. These result in anomalous charge Hall currents propagating along two cross products: that of the background electric field with the temperature (or chemical-potential) gradient, and that of the temperature gradient with the chemical-potential gradient. On the other hand, the nonlinear quantum correction to the charge density vanishes in the classical RT approximation, which in fact satisfies the matching condition given by the anomalous equation obtained from the CKT.

  18. To Punish or Not to Punish-That Is the Question.

    PubMed

    Chen, Gila; Einat, Tomer

    2017-02-01

    Attitudes toward punishment have long been of interest to policymakers, researchers, and criminal justice practitioners. The current study examined the relationship between academic education in criminology and attitudes toward punishment among 477 undergraduate students in three subgroups: police officers, correctional officers, and criminology students who were not employed by the criminal justice system (CJS). Our main findings were that (a) the punitive attitudes of the correctional officers and police officers at the beginning of their academic studies were harsher than those of the criminology and criminal justice students who were not employed by the CJS, (b) the punitive attitudes of the correctional officers at the end of their academic studies were less severe than those of their first-year counterparts, (c) fear of crime was higher among women than among men, and (d) the strongest predictor of punitive attitudes was a firm belief in the principles of the classical and labeling theories (beyond group). Implications of these results are discussed.

  19. Validation of Blockage Interference Corrections in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2007-01-01

    A validation test has recently been constructed for wall interference methods as applied to the National Transonic Facility (NTF). The goal of this study was to begin to address the uncertainty of wall-induced-blockage interference corrections, which will make it possible to address the overall quality of data generated by the facility. The validation test itself is not specific to any particular model. For the present effort, the Transonic Wall Interference Correction System (TWICS) as implemented at the NTF is the mathematical model being tested. TWICS uses linear, potential boundary conditions that must first be calibrated. These boundary conditions include three different classical, linear, homogeneous forms that have historically been used to approximate the physical behavior of longitudinally slotted test section walls. Results of the application of the calibrated wall boundary conditions are discussed in the context of the validation test.

  20. The Potential Energy Density in Transverse String Waves Depends Critically on Longitudinal Motion

    ERIC Educational Resources Information Center

    Rowland, David R.

    2011-01-01

    The question of the correct formula for the potential energy density in transverse waves on a taut string continues to attract attention (e.g. Burko 2010 "Eur. J. Phys." 31 L71), and at least three different formulae can be found in the literature, with the classic text by Morse and Feshbach ("Methods of Theoretical Physics" pp 126-127) stating…

  1. Periodically Self Restoring Redundant Systems for VLSI Based Highly Reliable Design,

    DTIC Science & Technology

    1984-01-01

    fault tolerance technique for realizing highly reliable computer systems for critical control applications. However, VLSI technology has imposed a...operating correctly; failed critical real time control applications. n modules are discarded from the vote. the classical "static" voted redundancy...redundant modules are failure number of interconnections. This results in f aree. However, for applications requiring high modular complexity because

  2. Invariants for correcting field polarisation effect in MT-VLF resistivity mapping

    NASA Astrophysics Data System (ADS)

    Guérin, Roger; Tabbagh, Alain; Benderitter, Yves; Andrieux, Pierre

    1994-12-01

    MT-VLF resistivity mapping is well suited to hydrological and environmental studies. However, the apparent anisotropy generated by the polarisation of the primary field requires the use of two transmitters at a right angle to each other in order to prevent errors in interpretation. We propose a processing technique that uses approximate invariants derived from classical developments in tensor magnetotellurics. They consist of the calculation at each station of ?. Both synthetic and field cases show that they give identical results and correct perfectly for the apparent anisotropy generated by the polarisation of the transmitted field. They should be preferred to verticalization of the electric field, which remains of interest when data from only one transmitter are available.

  3. Relativistic Corrections to the Energy of the Electron in a Hydrogenlike Atom

    NASA Astrophysics Data System (ADS)

    Skobelev, V. V.

    2017-11-01

    Using the previously found solution of the Dirac equation for an electron in the field of a nucleus (Ze), expressed in terms of the eigenfunction of the spin projection operator Σ3, relativistic and spin-orbit corrections to the energy of the electron in a hydrogen-like atom are calculated as an expansion in the small parameter Zα, where α = e²/ħc ≈ 1/137. In our view, the latter are represented in a form easier to visualize than previously known classical results. This work may be of methodological interest as a modification of the corresponding sections of the traditional course on quantum mechanics.
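The corrections referred to here are, at leading order, the standard textbook (Zα)⁴ fine-structure result. The sketch below uses that well-known formula with my own symbol choices, not the paper's specific expressions:

```python
# Standard textbook fine-structure correction for a hydrogen-like atom
# (a sketch of the well-known (Z*alpha)^4 result, not the paper's derivation).
ALPHA = 7.2973525693e-3    # fine-structure constant, alpha = e^2/(hbar*c)
RYDBERG_EV = 13.605693     # hydrogen ground-state binding energy, eV

def bohr_energy(Z, n):
    """Non-relativistic Bohr level E_n = -Z^2 * Ry / n^2, in eV."""
    return -RYDBERG_EV * Z**2 / n**2

def fine_structure_shift(Z, n, j):
    """Combined relativistic + spin-orbit shift at order (Z*alpha)^4:
    dE = -Ry * Z^4 * alpha^2 / n^3 * (1/(j + 1/2) - 3/(4n)), in eV."""
    return (-RYDBERG_EV * Z**4 * ALPHA**2 / n**3
            * (1.0 / (j + 0.5) - 3.0 / (4.0 * n)))
```

For the hydrogen ground state (Z = 1, n = 1, j = 1/2) this gives a shift of about -1.8e-4 eV, small compared with the -13.6 eV Bohr energy, as expected for an expansion in (Zα)².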

  4. [Therapy of scoliosis from a historical perspective].

    PubMed

    Harms, J; Rauschmann, M; Rickert, M

    2015-12-01

    Scoliosis can be considered one of the classical orthopedic diseases of the spine. The history of orthopedics is closely connected to the development of the therapy of scoliosis. In the eighteenth and the beginning of the nineteenth centuries the therapy of scoliosis was mainly a conservative corrective orthopedic treatment using a variety of corset forms and extension bed treatment. In the middle of the nineteenth century physiotherapy (movement therapy) became established as a supplementary active treatment. The first operations for the treatment of scoliosis were carried out in 1839. Real success in surgical correction came with the introduction of metal spinal implants in the early 1960s.

  5. Secure quantum communication using classical correlated channel

    NASA Astrophysics Data System (ADS)

    Costa, D.; de Almeida, N. G.; Villas-Boas, C. J.

    2016-10-01

    We propose a secure protocol to send quantum information from one party to another without a quantum channel. In our protocol, which resembles quantum teleportation, a sender (Alice) and a receiver (Bob) share classically correlated states instead of EPR ones, with Alice performing measurements in two different bases and then communicating her results to Bob through a classical channel. Our secure quantum communication protocol requires the same amount of classical bits as the standard quantum teleportation protocol. In our scheme, as in the usual quantum teleportation protocol, once the classical channel is established in a secure way, a spy (Eve) will never be able to recover the information of the unknown quantum state, even if she is aware of Alice's measurement results. Security, advantages, and limitations of our protocol are discussed and compared with the standard quantum teleportation protocol.

  6. Nematode taxonomy: from morphology to metabarcoding

    NASA Astrophysics Data System (ADS)

    Ahmed, M.; Sapp, M.; Prior, T.; Karssen, G.; Back, M.

    2015-11-01

    Nematodes represent a species rich and morphologically diverse group of metazoans inhabiting both aquatic and terrestrial environments. Their role as biological indicators and as key players in nutrient cycling has been well documented. Some groups of nematodes are also known to cause significant losses to crop production. In spite of this, knowledge of their diversity is still limited due to the difficulty in achieving species identification using morphological characters. Molecular methodology has provided very useful means of circumventing the numerous limitations associated with classical morphology based identification. We discuss herein the history and the progress made within the field of nematode systematics, the limitations of classical taxonomy and how the advent of high throughput sequencing is facilitating advanced ecological and molecular studies.

  7. Quantum chaos in nuclear physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E., E-mail: bunakov@VB13190.spb.edu

    A definition of classical and quantum chaos on the basis of the Liouville–Arnold theorem is proposed. According to this definition, a chaotic quantum system that has N degrees of freedom should have M < N independent first integrals of motion (good quantum numbers) that are determined by the symmetry of the Hamiltonian for the system being considered. Quantitative measures of quantum chaos are established. In the classical limit, they go over to the Lyapunov exponent or the classical stability parameter. The use of quantum-chaos parameters in nuclear physics is demonstrated.

  8. Non-linear quantum-classical scheme to simulate non-equilibrium strongly correlated fermionic many-body dynamics

    PubMed Central

    Kreula, J. M.; Clark, S. R.; Jaksch, D.

    2016-01-01

    We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673

  9. Understanding quantum work in a quantum many-body system.

    PubMed

    Wang, Qian; Quan, H T

    2017-03-01

    Based on previous studies in a single-particle system in both the integrable [Jarzynski, Quan, and Rahav, Phys. Rev. X 5, 031038 (2015)] and the chaotic [Zhu, Gong, Wu, and Quan, Phys. Rev. E 93, 062108 (2016)] regimes, we study the correspondence principle between quantum and classical work distributions in a quantum many-body system. Even though the interaction and the indistinguishability of identical particles increase the complexity of the system, we find that for a quantum many-body system the quantum work distribution still converges to its classical counterpart in the semiclassical limit. Our results imply that there exists a correspondence principle between quantum and classical work distributions in an interacting quantum many-body system, especially in the large particle number limit, and further justify the definition of quantum work via two-point energy measurements in quantum many-body systems.
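The two-point-measurement definition of work referenced above assigns W = E'_m - E_n with probability p_n |⟨m'|n⟩|². A minimal NumPy sketch for a sudden quench, using a toy qubit example of my own choosing rather than the many-body systems studied in the paper:

```python
import numpy as np

def work_distribution(H0, H1, beta):
    """Two-point-measurement work distribution for a sudden quench H0 -> H1:
    W = E1[m] - E0[n] occurs with probability p_n * |<m'|n>|^2."""
    E0, V0 = np.linalg.eigh(H0)
    E1, V1 = np.linalg.eigh(H1)
    p0 = np.exp(-beta * E0)
    p0 /= p0.sum()                         # initial thermal occupations
    T = np.abs(V1.conj().T @ V0) ** 2      # transition probabilities |<m'|n>|^2
    works = (E1[:, None] - E0[None, :]).ravel()
    probs = (T * p0[None, :]).ravel()
    return works, probs

# Toy qubit quench sigma_z -> sigma_x (illustrative choice, not the paper's model)
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
beta = 0.7
W, P = work_distribution(sz, sx, beta)
avg = np.sum(P * np.exp(-beta * W))   # Jarzynski average <exp(-beta W)> = Z1/Z0
```

Because σz and σx have identical spectra here, Z1/Z0 = 1 and the Jarzynski average evaluates to 1 exactly, a quick sanity check on the two-point-measurement construction.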

  10. Quantum theory for 1D X-ray free electron laser

    NASA Astrophysics Data System (ADS)

    Anisimov, Petr M.

    2018-06-01

    Classical 1D X-ray Free Electron Laser (X-ray FEL) theory has stood the test of time by guiding FEL design and development prior to any full-scale analysis. Future X-ray FELs and inverse-Compton sources, where photon recoil approaches an electron energy spread value, push the classical theory to its limits of applicability. After substantial efforts by the community to find what those limits are, there is no universally agreed upon quantum approach to design and development of future X-ray sources. We offer a new approach to formulate the quantum theory for 1D X-ray FELs that has an obvious connection to the classical theory, which allows for immediate transfer of knowledge between the two regimes. We exploit this connection in order to draw quantum mechanical conclusions about the quantum nature of electrons and generated radiation in terms of FEL variables.

  11. Manipulating Images of Popular Culture upon Neo-Classical Theatre: "Tartuffe" at Susquehanna University.

    ERIC Educational Resources Information Center

    Sodd, Mary Jo

    Moliere's "Tartuffe" is an attack, not on religion, but on people who hide behind religion and exploit it. As a college professor in charge of a student production searched for a director's concept for "Tartuffe," she realized that it would be unwise to attempt a museum staging of neo-classical theater with limited funding. She…

  12. Exploring and Listening to Chinese Classical Ensembles in General Music

    ERIC Educational Resources Information Center

    Zhang, Wenzhuo

    2017-01-01

    Music diversity is valued in theory, but the extent to which it is efficiently presented in music class remains limited. Within this article, I aim to bridge this gap by introducing four genres of Chinese classical ensembles--Qin and Xiao duets, Jiang Nan bamboo and silk ensembles, Cantonese ensembles, and contemporary Chinese orchestras--into the…

  13. Q-balls in flat potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copeland, Edmund J.; Tsumagari, Mitsuo I.

    2009-07-15

    We study the classical and absolute stability of Q-balls in scalar field theories with flat potentials arising in both gravity-mediated and gauge-mediated models. We show that the associated Q-matter formed in gravity-mediated potentials can be stable against decay into their own free particles as long as the coupling constant of the nonrenormalizable term is small, and that all of the possible three-dimensional Q-ball configurations are classically stable against linear fluctuations. Three-dimensional gauge-mediated Q-balls can be absolutely stable in the thin-wall limit, but are completely unstable in the thick-wall limit.

  14. Semi-automatic assessment of skin capillary density: proof of principle and validation.

    PubMed

    Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M

    2013-11-01

    Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since it is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-)automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs) and differences in analysis time were assessed. We found good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P<0.001) and a Deming regression coefficient of 1.01 (95%CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm(2). The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6%, respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min, versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has better reproducibility than the classic manual counting procedure. As a result, the use of skin capillaroscopy is feasible in large-scale studies, which importantly extends the possibilities for microcirculation research in humans. © 2013.
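The Bland-Altman analysis used above has a standard recipe: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch with hypothetical density values (my own toy numbers, not the study's data):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics between two measurement methods:
    mean difference (bias) and 95% limits of agreement, bias +/- 1.96*SD."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical capillary densities (capillaries/mm^2) from two methods
capiana = [100.0, 110.0, 120.0, 130.0]
manual = [98.0, 112.0, 119.0, 131.0]
bias, lo, hi = bland_altman(capiana, manual)
```

A bias near zero with narrow limits of agreement, as reported in the abstract, indicates the two counting methods can be used interchangeably.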

  15. 2D Quantum Transport Modeling in Nanoscale MOSFETs

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan

    2001-01-01

    With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, the behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions, oxide tunneling and phase-breaking scattering are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Quantum simulations are focused on the MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compared to classical and quantum-corrected models. An important feature of the quantum model is the smaller slope of the Id-Vg curve and, consequently, a higher threshold voltage. These results are quantitatively consistent with 1D Schrödinger-Poisson calculations. The effect of gate length on gate-oxide leakage and sub-threshold current has been studied. The shorter gate length device has an order of magnitude smaller current at zero gate bias than the longer gate length device, without a significant trade-off in on-current. This should be a device design consideration.
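The NEGF machinery named here reduces, in its simplest 1-D tight-binding form, to a few lines. The toy sketch below (a uniform chain with semi-infinite leads, my own drastic simplification of the 2-D device problem) computes the coherent transmission T(E) = Tr[Γ_L G Γ_R G†]:

```python
import numpy as np

def negf_transmission(E, N=5, eps=None, t=1.0):
    """Coherent transmission through an N-site tight-binding chain coupled
    to two semi-infinite 1-D leads, T(E) = Tr[Gamma_L G Gamma_R G^dagger].
    Valid for in-band energies |E| < 2t."""
    onsite = np.zeros(N) if eps is None else np.asarray(eps, dtype=float)
    H = np.diag(onsite) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    # Retarded surface Green's function of a semi-infinite uniform lead
    g_s = (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / (2.0 * t**2)
    Sig_L = np.zeros((N, N), dtype=complex)
    Sig_L[0, 0] = t**2 * g_s
    Sig_R = np.zeros((N, N), dtype=complex)
    Sig_R[-1, -1] = t**2 * g_s
    G = np.linalg.inv(E * np.eye(N) - H - Sig_L - Sig_R)   # retarded device GF
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)                  # level broadenings
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    return float(np.real(np.trace(Gam_L @ G @ Gam_R @ G.conj().T)))
```

For the perfect chain the transmission is 1 at any in-band energy; adding an on-site barrier in the middle of the chain suppresses it, the 1-D analogue of a tunneling bottleneck.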

  16. Multiband corrections for the semi-classical simulation of interband tunneling in GaAs tunnel junctions

    NASA Astrophysics Data System (ADS)

    Louarn, K.; Claveau, Y.; Hapiuk, D.; Fontaine, C.; Arnoult, A.; Taliercio, T.; Licitra, C.; Piquemal, F.; Bounouh, A.; Cavassilas, N.; Almuneau, G.

    2017-09-01

    The aim of this study is to investigate the impact of multiband corrections on the current density in GaAs tunnel junctions (TJs) calculated with a refined yet simple semi-classical interband tunneling model (SCITM). The non-parabolicity of the considered bands and the spin-orbit effects are taken into account by using a recently revisited SCITM available in the literature. The model is confronted with experimental results from a series of molecular beam epitaxy grown GaAs TJs and with numerical results obtained with a full quantum model based on the non-equilibrium Green's function formalism and a 6-band k.p Hamiltonian. We emphasize the importance of considering the non-parabolicity of the conduction band through two different measurements of the energy-dependent electron effective mass in N-doped GaAs. We also propose an innovative method to compute the non-uniform electric field in the TJ for the SCITM simulations, which is of prime importance for successful operation of the model. We demonstrate that, when the multiband corrections and this new computation of the non-uniform electric field are considered, the SCITM succeeds in predicting the electrical characteristics of GaAs TJs and is also in agreement with the quantum model. Besides the fundamental study of the tunneling phenomenon in TJs, the main benefit of this SCITM is that it can easily be embedded into drift-diffusion software, the most widely used simulation tools for electronic and opto-electronic devices such as multi-junction solar cells, tunnel field-effect transistors, or vertical-cavity surface-emitting lasers.

  17. Implementation of the WICS Wall Interference Correction System at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit; Everhart, Joel L.; Bir, Pamela J.; Ulbrich, Norbert

    2000-01-01

    The Wall Interference Correction System (WICS) is operational at the National Transonic Facility (NTF) of NASA Langley Research Center (NASA LaRC) for semispan and full span tests in the solid wall (slots covered) configuration. The method is based on the wall pressure signature method for computing corrections to the measured parameters. It is an adaptation of the WICS code operational at the 12 ft pressure wind tunnel (12ft PWT) of NASA Ames Research Center (NASA ARC). This paper discusses the details of implementation of WICS at the NTF including tunnel calibration, code modifications for tunnel and support geometry, changes made for the NTF wall orifices layout, details of interfacing with the tunnel data processing system, and post-processing of results. Example results of applying WICS to a semispan test and a full span test are presented. Comparison with classical correction results and an analysis of uncertainty in the corrections are also given. As a special application of the code, the Mach number calibration data from a centerline pipe test was computed by WICS. Finally, future work for expanding the applicability of the code including online implementation is discussed.

  18. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data from a metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, the moving window minimum value strategy, and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
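The three-step procedure described in this abstract (local minima as a baseline vector, iterative outlier suppression, linear interpolation back to full length) can be sketched as follows. This is a hedged reimplementation of the idea as I read it, not the authors' published code:

```python
import numpy as np

def baseline_correct(y, n_iter=100):
    """Drift-correction sketch following the abstract's three steps:
    (1) local minima as baseline anchors, (2) iterative suppression of
    anchors sitting on peaks, (3) linear interpolation to full length."""
    y = np.asarray(y, dtype=float)
    # Step 1: local minima (plus endpoints) form the initial baseline vector.
    idx = [0] + [i for i in range(1, len(y) - 1)
                 if y[i] <= y[i - 1] and y[i] <= y[i + 1]] + [len(y) - 1]
    idx = np.array(idx)
    base = y[idx].copy()
    # Step 2: an anchor above its neighbours' chord likely sits on a peak
    # flank ("outlier"); pull it down until the vector stops changing.
    for _ in range(n_iter):
        chord = 0.5 * (base[:-2] + base[2:])
        new = np.minimum(base[1:-1], chord)
        if np.allclose(new, base[1:-1]):
            break
        base[1:-1] = new
    # Step 3: expand the optimized baseline to every retention-time point.
    drift = np.interp(np.arange(len(y)), idx, base)
    return y - drift, drift
```

On a synthetic chromatogram (a linear drift plus one Gaussian peak) the estimated drift tracks the ramp in the peak-free regions while leaving the peak height essentially intact.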

  19. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types.

    PubMed

    Webb, Margaret E; Little, Daniel R; Cropper, Simon J

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions.

  20. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types

    PubMed Central

    Webb, Margaret E.; Little, Daniel R.; Cropper, Simon J.

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions. PMID:27725805

  1. Implementation of the WICS Wall Interference Correction System at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit; Everhart, Joel L.; Bir, Pamela J.; Ulbrich, Norbert

    2000-01-01

    The Wall Interference Correction System (WICS) is operational at the National Transonic Facility (NTF) of NASA Langley Research Center (NASA LaRC) for semispan and full span tests in the solid wall (slots covered) configuration. The method is based on the wall pressure signature method for computing corrections to the measured parameters. It is an adaptation of the WICS code operational at the 12 ft pressure wind tunnel (12ft PWT) of NASA Ames Research Center (NASA ARC). This paper discusses the details of implementation of WICS at the NTF including tunnel calibration, code modifications for tunnel and support geometry, changes made for the NTF wall orifices layout, details of interfacing with the tunnel data processing system, and post-processing of results. Example results of applying WICS to a semispan test and a full span test are presented. Comparison with classical correction results and an analysis of uncertainty in the corrections are also given. As a special application of the code, the Mach number calibration data from a centerline pipe test was computed by WICS. Finally, future work for expanding the applicability of the code including online implementation is discussed.

  2. Cost vs. Risk: Determining the Correct Liability Insurance Limit.

    ERIC Educational Resources Information Center

    Klinksiek, Glenn

    1996-01-01

    Presents a model for evaluating liability insurance limits and selecting the correct limit for an individual institution. Argues that many colleges and universities may be making overly conservative decisions that lead to the purchase of too much liability insurance. Also discusses the financial consequences of an uninsured large liability loss.…

  3. 76 FR 2573 - Technical Corrections: Matters Subject to Protest and Various Protest Time Limits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ... 174 [CBP Dec. 11-02] Technical Corrections: Matters Subject to Protest and Various Protest Time Limits..., in pertinent part, the types of matters subject to protest, the time required for allowing or denying an application for further review of a protest, and various other protest time limits. This document...

  4. 76 FR 27609 - Reduction of Foreign Tax Credit Limitation Categories Under Section 904(d); Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-12

    ... Reduction of Foreign Tax Credit Limitation Categories Under Section 904(d); Correction AGENCY: Internal... foreign tax credit limitation categories under section 904(d) of the Internal Revenue Code. DATES: This... in and Losses With Respect to the Pre-2007 Separate Category for High Withholding Tax Interest...

  5. Holographic Rényi entropy in AdS3/LCFT2 correspondence

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Song, Feng-yan; Zhang, Jia-ju

    2014-03-01

    The recent study in AdS3/CFT2 correspondence shows that the tree level contribution and 1-loop correction of holographic Rényi entanglement entropy (HRE) exactly match the direct CFT computation in the large central charge limit. This allows the Rényi entanglement entropy to be a new window to study the AdS/CFT correspondence. In this paper we generalize the study of Rényi entanglement entropy in pure AdS3 gravity to massive gravity theories at their critical points. For the cosmological topological massive gravity (CTMG), the dual conformal field theory (CFT) could be a chiral conformal field theory or a logarithmic conformal field theory (LCFT), depending on the asymptotic boundary conditions imposed. In both cases, by studying the short interval expansion of the Rényi entanglement entropy of two disjoint intervals with small cross ratio x, we find that the classical and 1-loop HRE are in exact match with the CFT results, up to order x^6. To this order, the difference between the massless graviton and the logarithmic mode can be seen clearly. Moreover, for the cosmological new massive gravity (CNMG) at its critical point, which could be dual to a logarithmic CFT as well, we find similar agreement in the CNMG/LCFT correspondence. Furthermore, we read off the 2-loop corrections of the graviton and logarithmic mode to the HRE from the CFT computation; these have distinctly different features from those in pure AdS3 gravity.

  6. Nothing in medicine makes sense, except in the light of evolution.

    PubMed

    Varki, Ajit

    2012-05-01

The practice of medicine is a fruitful marriage of classic diagnostic and healing arts with modern advancements in many relevant sciences. The scientific aspects of medicine are rooted in understanding the biology of our species and those of other organisms that interact with us in health and disease. Thus, it is reasonable to paraphrase Dobzhansky, stating that, "nothing in the biological aspects of medicine makes sense except in the light of evolution." However, the art and science of medicine are also rooted in the unusual cognitive abilities of humans and the cultural evolutionary processes arising from them. This explains the rather bold and inclusive title of this essay. The near complete absence of evolution in medical school curricula is a historical anomaly that needs correction. Otherwise, we will continue to train generations of physicians who lack understanding of some fundamental principles that should guide both medical practice and research. I here recount my attempts to correct this deficiency at my own medical school and the lessons learned. I also attempt to summarize what I teach in the limited amount of time allowed for the purpose. Particular attention is given to the value of comparing human physiology and disease with those of other closely related species. There is a long way to go before the teaching of evolution can be placed in its rightful context within the medical curriculum. However, the trend is in the right direction. Let us aim for a day when an essay like this will no longer be relevant.

  7. Modified Essex-Lopresti / Westheus reduction for displaced intra-articular fractures of the calcaneus. Description of surgical technique and early outcomes.

    PubMed

    Pillai, Anand; Basappa, Prabhudeva; Ehrendorfer, Stefan

    2007-02-01

We describe a modification of the classical Essex-Lopresti manoeuvre for the indirect reduction and stabilisation of displaced intra-articular fractures of the calcaneus. The radiological and functional results achieved using this technique in 15 patients are presented. Ten tongue-shaped and 8 joint-depression-type fractures were treated by the new method, which incorporates the use of an additional traction pin. The pre- and postoperative Böhler angles as well as the correction achieved were documented. Functional assessment was carried out using the Maryland Foot Score. The mean pre-operative Böhler angle was 5.5 degrees in the joint-depression group and 5 degrees in the tongue-shaped fracture group. The mean postoperative Böhler angle was 15.8 degrees in the joint-depression group and 23.25 degrees in the tongue-shaped group. At a mean follow-up of 28 months the joint-depression group scored 51/100 on the foot score, and the tongue-shaped fracture group 77/100. The mean correction achieved as well as the mean overall functional scores were significantly better in the tongue-shaped group. The technique described has much promise in the management of selected displaced intra-articular fractures of the calcaneus (true tongue-shaped / Sanders II), and may also have a limited role in other fracture types in patients with significant co-morbidities, soft tissue compromise and poor healing potential.

  8. HO + CO reaction rates and H/D kinetic isotope effects: master equation models with ab initio SCTST rate constants.

    PubMed

    Weston, Ralph E; Nguyen, Thanh Lam; Stanton, John F; Barker, John R

    2013-02-07

Ab initio microcanonical rate constants were computed using Semi-Classical Transition State Theory (SCTST) and used in two master equation formulations (1D, depending on active energy with centrifugal corrections, and 2D, depending on total energy and angular momentum) to compute temperature-dependent rate constants for the title reactions using a potential energy surface obtained by sophisticated ab initio calculations. The 2D master equation was used at the P = 0 and P = ∞ limits, while the 1D master equation with centrifugal corrections and an empirical energy transfer parameter could be used over the entire pressure range. Rate constants were computed for 75 K ≤ T ≤ 2500 K and 0 ≤ [He] ≤ 10^23 cm^-3. For all temperatures and pressures important for combustion and for the terrestrial atmosphere, the agreement with the experimental rate constants is very good, but at very high pressures and T ≤ 200 K, the theoretical rate constants are significantly smaller than the experimental values. This effect is possibly due to the presence in the experiments of dimers and prereactive complexes, which were not included in the model calculations. The computed H/D kinetic isotope effects are in acceptable agreement with experimental data, which show considerable scatter. Overall, the agreement between experimental and theoretical H/D kinetic isotope effects is much better than in previous work, and an assumption of non-RRKM behavior does not appear to be needed to reproduce experimental observations.

  9. Mesoscopic Physics of Electronic and Optical Systems

    NASA Astrophysics Data System (ADS)

    Hentschel, Martina

    2005-10-01

The progress in fabricating and controlling mesoscopic samples opens the possibility to investigate many-body phenomena on the nanoscopic scale, for example in quantum dots or nanoparticles. We recently studied the many-body signatures in the photoabsorption cross-section of those systems. Two counteracting many-body effects (Anderson's orthogonality catastrophe and Mahan's exciton) lead to deviations from the naively expected cross-section and to Fermi-edge singularities in the form of a peaked or rounded edge. We found that coherent mesoscopic systems can show a many-body response that differs considerably from that of macroscopic samples. The reason for this lies in the finite number of particles and the lack of rotational symmetry in generic mesoscopic systems. The properties of mesoscopic systems depend crucially on whether the corresponding classical systems possess chaotic or integrable dynamics; signatures of the underlying classical dynamics in quantum-mechanical behavior are sought in the field of quantum chaos. We study this in the context of optical microresonators: billiards in which reflection at hard walls is replaced by confinement due to total internal reflection. The relation between the simple ray model and the wave description (which must be used when the wavelength becomes comparable to the system size) is called ``ray-wave correspondence.'' It can be established in both real and phase space. For the latter, we generalized the concept of Husimi functions to dielectric boundaries. Although the ray model provides a qualitative understanding of the system properties even into the wave limit, semiclassical corrections to the ray picture are necessary in order to establish quantitative correspondence.

  10. Application of ply level analysis to flexural wave propagation

    NASA Astrophysics Data System (ADS)

    Valisetty, R. R.; Rehfield, L. W.

    1988-10-01

A brief survey is presented of the shear deformation theories of laminated plates. It indicates that there are certain non-classical influences that affect bending-related behavior in the same way as do the transverse shear stresses. They include bending- and stretching-related section warping, the concomitant non-classical surface-parallel stress contributions, and the transverse normal stress. A bending theory gives significantly improved performance if these non-classical effects are incorporated. The heterogeneous shear deformations that are characteristic of laminates with highly dissimilar materials, however, require that attention be paid to the modeling of local rotations. In this paper, it is shown that a ply level analysis can be used to model such disparate shear deformations. Here, equilibrium of each layer is analyzed separately. Earlier applications of this analysis include free-edge laminate stresses. It is now extended to the study of flexural wave propagation in laminates. A recently developed homogeneous plate theory is used as a ply level model. Due consideration is given to the non-classical influences, and no shear correction factors are introduced extraneously in this theory. The results for the lowest flexural mode of travelling planar harmonic waves indicate that this approach is competitive and yields better results for certain laminates.

  11. High-order corrections on the laser cooling limit in the Lamb-Dicke regime.

    PubMed

    Yi, Zhen; Gu, Wen-Ju

    2017-01-23

We investigate corrections to the cooling limit at high orders in the Lamb-Dicke (LD) parameter in the double electromagnetically induced transparency (EIT) cooling scheme. By utilizing quantum interference, the single-phonon heating mechanism vanishes and the system evolves to a double dark state, from which we obtain the mechanical occupation of the single-phonon excited state. In addition, the further correction induced by two-phonon heating transitions is included to achieve a more accurate cooling limit. There exist two pathways for two-phonon heating transitions: direct two-phonon excitation from the dark state, and further excitation from the single-phonon excited state. Adding up these two contributions, the analytical predictions agree well with numerical results. Moreover, we find that the two pathways can destructively interfere with each other, eliminating two-phonon heating transitions and achieving a lower cooling limit.

  12. Classic hallucinogens in the treatment of addictions.

    PubMed

    Bogenschutz, Michael P; Johnson, Matthew W

    2016-01-04

    Addictive disorders are very common and have devastating individual and social consequences. Currently available treatment is moderately effective at best. After many years of neglect, there is renewed interest in potential clinical uses for classic hallucinogens in the treatment of addictions and other behavioral health conditions. In this paper we provide a comprehensive review of both historical and recent clinical research on the use of classic hallucinogens in the treatment of addiction, selectively review other relevant research concerning hallucinogens, and suggest directions for future research. Clinical trial data are very limited except for the use of LSD in the treatment of alcoholism, where a meta-analysis of controlled trials has demonstrated a consistent and clinically significant beneficial effect of high-dose LSD. Recent pilot studies of psilocybin-assisted treatment of nicotine and alcohol dependence had strikingly positive outcomes, but controlled trials will be necessary to evaluate the efficacy of these treatments. Although plausible biological mechanisms have been proposed, currently the strongest evidence is for the role of mystical or other meaningful experiences as mediators of therapeutic effects. Classic hallucinogens have an excellent record of safety in the context of clinical research. Given our limited understanding of the clinically relevant effects of classic hallucinogens, there is a wealth of opportunities for research that could contribute important new knowledge and potentially lead to valuable new treatments for addiction. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Quantum vs Classical Mechanics for a 'Simple' Dissociation Reaction. Should They Give the Same Results?

    NASA Astrophysics Data System (ADS)

    Holloway, Stephen

    1997-03-01

When performing molecular dynamical simulations on light systems at low energies, there is always the risk of producing data that bear no similarity to experiment. Indeed, John Barker himself was particularly anxious about treating Ar scattering from surfaces using classical mechanics where it had been shown experimentally in his own lab that diffraction occurs. In such cases, the correct procedure is probably to play the trump card "... well of course, quantum effects will modify this so that....." and retire gracefully. For our particular interests, the tables are turned in that we are interested in gas-surface dynamical studies for highly quantized systems, but would like to know when it is possible to use classical mechanics so that a greater dimensionality might be treated. For molecular dissociation and scattering, it has been oft quoted that the greater the number of degrees of freedom, the more appropriate is classical mechanics, primarily because of the mass averaging over the quantized dimensions. Is this true? We have been investigating the dissociation of hydrogen molecules at surfaces, and in this talk I will present quantum results for dissociation and scattering, along with a novel method for their interpretation based upon adiabatic potential energy surfaces. Comparison with classical calculations will be made and conclusions drawn.

  14. Does loop quantum cosmology replace the big rip singularity by a non-singular bounce?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haro, Jaume de, E-mail: jaime.haro@upc.edu

It is often stated that holonomy corrections in loop quantum cosmology introduce a modification of Friedmann's equation which prevents the big rip singularity. Recently, in [1], it was shown that this modified Friedmann equation is obtained in an inconsistent way, which means that the results deduced from it, in particular the avoidance of the big rip singularity, are not justified. The problem is that holonomy corrections modify the gravitational part of the Hamiltonian of the system, leading, after a Legendre transformation, to a non-covariant Lagrangian, in contradiction with one of the main principles of General Relativity. A more consistent way to deal with the avoidance of the big rip singularity is to disregard modifications of the gravitational part of the Hamiltonian and consider only inverse-volume effects [2]. In this case we will see that, unlike the big bang singularity, the big rip singularity survives in loop quantum cosmology. Another way to deal with the big rip is to take into account geometric quantum effects given by the Wheeler-DeWitt equation. In that case, even though the wave packets spread, the expectation values satisfy the same equations as their classical analogues. Then, following the viewpoint adopted in loop quantum cosmology, one can conclude that the big rip singularity survives when these quantum effects are taken into account. However, the spreading of the wave packets prevents the recovery of the semiclassical time, and thus one might conclude that the classical evolution of the universe comes to an end before the big rip is reached. This is not conclusive because, as we will see, there always exist other external times that allow us to define the classical and quantum evolution of the universe up to the big rip singularity.

  15. The intuitive use of laryngeal airway tools by first year medical students.

    PubMed

    Bickenbach, Johannes; Schälte, Gereon; Beckers, Stefan; Fries, Michael; Derwall, Matthias; Rossaint, Rolf

    2009-09-22

Providing a secured airway is of paramount importance in cardiopulmonary resuscitation. Although tracheal intubation is still seen as the gold standard, this technique remains reserved to experienced healthcare professionals. Compared to bag-valve facemask ventilation, however, the insertion of a laryngeal mask airway offers the opportunity to ventilate the patient effectively and can also be placed easily by lay responders; it might even be inserted without detailed background knowledge. The purpose of the study was to investigate the intuitive use of airway devices by first-year medical students as well as the effect of a simple but well-directed training programme. Retention of skills was re-evaluated six months thereafter. The insertion of a LMA-Classic and a LMA-Fastrach performed by inexperienced medical students was compared in an airway model. The improvement in their performance after a training programme of two hours overall was examined afterwards. Prior to any instruction, mean time to correct placement was 55.5 +/- 29.6 s for the LMA-Classic and 38.1 +/- 24.9 s for the LMA-Fastrach. Following training, time to correct placement decreased significantly, to 22.9 +/- 13.5 s for the LMA-Classic and 22.9 +/- 19.0 s for the LMA-Fastrach, respectively (p < 0.05). After six months, the results before (55.6 +/- 29.9 vs 43.1 +/- 34.7 s) and after a further training period (23.5 +/- 13.2 vs 26.6 +/- 21.6 s, p < 0.05) were comparable. Untrained laypersons are able to use different airway devices in a manikin and may therefore provide a secured airway even without having any detailed background knowledge about the tool. Minimal theoretical instruction and practical skill training can improve their performance significantly. However, refreshment of knowledge seems justified after six months.

  16. A classical model for closed-loop diagrams of binary liquid mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnitzler, J.v.; Prausnitz, J.M.

    1994-03-01

A classical lattice model for closed-loop temperature-composition phase diagrams has been developed. It considers the effect of specific interactions, such as hydrogen bonding, between dissimilar components. This van Laar-type model includes a Flory-Huggins term for the excess entropy of mixing. It is applied to several liquid-liquid equilibria of nonelectrolytes, where the molecules of the two components differ in size. The model is able to represent the observed data semi-quantitatively, but in most cases it is not flexible enough to predict all parts of the closed loop quantitatively. The ability of the model to represent different binary systems is discussed. Finally, attention is given to a correction term concerning the effect of concentration fluctuations near the upper critical solution temperature.

  17. On the Formulation of Anisotropic-Polyaxial Failure Criteria: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Parisio, Francesco; Laloui, Lyesse

    2018-02-01

    The correct representation of the failure of geomaterials that feature strength anisotropy and polyaxiality is crucial for many applications. In this contribution, we propose and evaluate through a comparative study a generalized framework that covers both features. Polyaxiality of strength is modeled with a modified Van Eekelen approach, while the anisotropy is modeled using a fabric tensor approach of the Pietruszczak and Mroz type. Both approaches share the same philosophy as they can be applied to simpler failure surfaces, allowing great flexibility in model formulation. The new failure surface is tested against experimental data and its performance compared against classical failure criteria commonly used in geomechanics. Our study finds that the global error between predictions and data is generally smaller for the proposed framework compared to other classical approaches.

  18. Threshold quantum secret sharing based on single qubit

    NASA Astrophysics Data System (ADS)

    Lu, Changbin; Miao, Fuyou; Meng, Keju; Yu, Yue

    2018-03-01

Based on the unitary phase shift operation on a single qubit, in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack, and participant attacks such as the entanglement swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can be easily constructed using other classical (t, n) secret sharing schemes.
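The classical component the abstract builds on is Shamir's (t, n) secret sharing, where the secret is the constant term of a random degree-(t-1) polynomial over a prime field and any t shares recover it by Lagrange interpolation. A minimal sketch of that classical construction follows (the field size and all names are illustrative; the quantum layer of phase shifts and decoy photons is not reproduced here):

```python
# Minimal sketch of classical Shamir (t, n) secret sharing over GF(p).
# Illustrative only: the quantum scheme described in the abstract layers
# single-qubit phase-shift operations on top of this classical construction.
import random

P = 2_147_483_647  # a Mersenne prime; the field size is an arbitrary choice here

def make_shares(secret, t, n, p=P):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p  # modular inverse
    return secret
```

Any t of the n shares suffice, and fewer than t reveal nothing about the secret; this is the property the quantum scheme inherits for its threshold structure. (The three-argument `pow` with exponent -1 requires Python 3.8+.)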

  19. Lagrangian dynamics for classical, Brownian, and quantum mechanical particles

    NASA Astrophysics Data System (ADS)

    Pavon, Michele

    1996-07-01

    In the framework of Nelson's stochastic mechanics [E. Nelson, Dynamical Theories of Brownian Motion (Princeton University, Princeton, 1967); F. Guerra, Phys. Rep. 77, 263 (1981); E. Nelson, Quantum Fluctuations (Princeton University, Princeton, 1985)] we seek to develop the particle counterpart of the hydrodynamic results of M. Pavon [J. Math. Phys. 36, 6774 (1995); Phys. Lett. A 209, 143 (1995)]. In particular, a first form of Hamilton's principle is established. We show that this variational principle leads to the correct equations of motion for the classical particle, the Brownian particle in thermodynamical equilibrium, and the quantum particle. In the latter case, the critical process q satisfies a stochastic Newton law. We then introduce the momentum process p, and show that the pair (q,p) satisfies canonical-like equations.

  20. Right-left and the scrotum in Greek sculpture.

    PubMed

    McManus, I C

    2004-04-01

    The scrotum in humans is asymmetric, the right testicle being visibly higher than the left in most men. Paradoxically, it is also the case that the right testicle is somewhat larger, rather than smaller, as might be expected. Greek classical and pre-classical art, which took great care in its attention to anatomical detail, correctly portrayed the right testicle as the higher, but then incorrectly portrayed the left testicle as visibly larger. The implication is that the Greeks used a simple mechanical theory, the left testicle being thought to be lower because it was larger and hence more subject to the pull of gravity. The present study examines data on scrotal asymmetry in more detail, and puts them in the context of Greek theories of functional differences between the right side and the left side.

  1. The Euclidean model of measurement in Fechner's psychophysics.

    PubMed

    Zudini, Verena

    2011-01-01

    Historians acknowledge Euclid and Fechner, respectively, as the founders of classical geometry and classical psychophysics. At all times, their ideas have been reference points and have shared the same destiny of being criticized, corrected, and even radically rejected, in their theoretical and methodological aspects and in their epistemological value. According to a model of measurement of magnitudes which goes back to Euclid, Fechner (1860) developed a theory for psychical magnitudes that opened a lively debate among numerous scholars. Fechner's attempt to apply the model proposed by Euclid to subjective sensation magnitudes--and the debate that followed--generated ideas and concepts that were destined to have rich developments in the psychological and (more generally) scientific field of the twentieth century and that still animate current psychophysics. © 2011 Wiley Periodicals, Inc.
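The "psychical magnitudes" the abstract refers to are governed by Fechner's (1860) logarithmic law, under which equal stimulus ratios map to equal sensation increments; it is this additive structure that invites a Euclidean-style measurement of magnitudes. A minimal illustration (function names and constants are purely illustrative):

```python
# Illustrative sketch of Fechner's law, S = k * log(I / I0): sensation grows
# logarithmically in stimulus intensity relative to the absolute threshold I0,
# so equal stimulus *ratios* produce equal sensation *increments*.
import math

def fechner_sensation(intensity, threshold=1.0, k=1.0):
    """Sensation magnitude for a stimulus above the absolute threshold I0."""
    return k * math.log(intensity / threshold)

# Doubling the stimulus always adds the same constant (k * ln 2) to the sensation:
step_low = fechner_sensation(2.0) - fechner_sensation(1.0)
step_high = fechner_sensation(8.0) - fechner_sensation(4.0)
```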

  2. Interferogram conditioning for improved Fourier analysis and application to X-ray phase imaging by grating interferometry.

    PubMed

    Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme

    2015-11-02

    An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.
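The Fourier demodulation step the conditioning procedure feeds into is the classical carrier-fringe method: transform the fringe pattern, isolate one sideband around the carrier frequency, and read the phase off the inverse transform. A minimal one-dimensional sketch under idealized (periodic, noise-free) assumptions follows; all frequencies and window widths are illustrative, and the conditioning step itself is not reproduced:

```python
# Sketch of classical Fourier-transform fringe demodulation: isolate the
# positive-frequency sideband around the carrier and recover the slowly
# varying phase. Edge artifacts do not appear here only because this toy
# signal is exactly periodic; the conditioning in the paper targets the
# discontinuities real interferograms have at their borders.
import numpy as np

n = 256
x = np.arange(n)
f0 = 16 / n                               # carrier frequency (cycles per pixel)
phi = 0.5 * np.sin(2 * np.pi * x / n)     # slowly varying phase to recover
fringes = 1.0 + np.cos(2 * np.pi * f0 * x + phi)

spec = np.fft.fft(fringes)
mask = np.zeros(n)
k0 = 16                                   # carrier bin (f0 * n)
mask[k0 - 8 : k0 + 9] = 1.0               # keep only the positive sideband
sideband = np.fft.ifft(spec * mask)

# Demodulate: take the angle, unwrap, subtract the carrier ramp.
phase = np.unwrap(np.angle(sideband)) - 2 * np.pi * f0 * x
phase -= phase.mean()                     # remove the arbitrary constant offset
```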

  3. Conservative corrections to the innermost stable circular orbit (ISCO) of a Kerr black hole: A new gauge-invariant post-Newtonian ISCO condition, and the ISCO shift due to test-particle spin and the gravitational self-force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favata, Marc

    2011-01-15

The innermost stable circular orbit (ISCO) delimits the transition from circular orbits to those that plunge into a black hole. In the test-mass limit, well-defined ISCO conditions exist for the Kerr and Schwarzschild spacetimes. In the finite-mass case, there are a large variety of ways to define an ISCO in a post-Newtonian (PN) context. Here I generalize the gauge-invariant ISCO condition of Blanchet and Iyer [Classical Quantum Gravity 20, 755 (2003)] to the case of spinning (nonprecessing) binaries. The Blanchet-Iyer ISCO condition has two desirable and unexpected properties: (1) it exactly reproduces the Schwarzschild ISCO in the test-mass limit, and (2) it accurately approximates the recently calculated shift in the Schwarzschild ISCO frequency due to the conservative piece of the gravitational self-force [L. Barack and N. Sago, Phys. Rev. Lett. 102, 191101 (2009)]. The generalization of this ISCO condition to spinning binaries has the property that it also exactly reproduces the Kerr ISCO in the test-mass limit (up to the order at which PN spin corrections are currently known). The shift in the ISCO due to the spin of the test-particle is also calculated. Remarkably, the gauge-invariant PN ISCO condition exactly reproduces the ISCO shift predicted by the Papapetrou equations for a fully relativistic spinning particle. It is surprising that an analysis of the stability of the standard PN equations of motion is able (without any form of 'resummation') to accurately describe strong-field effects of the Kerr spacetime. The ISCO frequency shift due to the conservative self-force in Kerr is also calculated from this new ISCO condition, as well as from the effective-one-body Hamiltonian of Barausse and Buonanno [Phys. Rev. D 81, 084024 (2010)]. These results serve as a useful point of comparison for future gravitational self-force calculations in the Kerr spacetime.
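For orientation, the test-mass benchmarks the ISCO condition is required to reproduce are the standard closed-form results (quoted here for reference; they are not spelled out in the abstract):

```latex
% Schwarzschild test-mass ISCO (geometric units, G = c = 1):
r_{\rm isco} = 6M, \qquad M\Omega_{\rm isco} = 6^{-3/2} \approx 0.0680 .

% Kerr equatorial ISCO radius (Bardeen-Press-Teukolsky), with \hat a = a/M:
Z_1 = 1 + \bigl(1-\hat a^2\bigr)^{1/3}
          \bigl[(1+\hat a)^{1/3} + (1-\hat a)^{1/3}\bigr],
\qquad
Z_2 = \sqrt{3\hat a^2 + Z_1^2},

\frac{r_{\rm isco}}{M} = 3 + Z_2 \mp \sqrt{(3-Z_1)(3+Z_1+2Z_2)}
\quad (\text{upper sign: prograde orbits}).
```

Setting \(\hat a = 0\) gives \(Z_1 = Z_2 = 3\) and recovers \(r_{\rm isco} = 6M\), the Schwarzschild limit the PN condition reproduces exactly.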

  4. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
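Under the classical measurement error model the review centers on, the observed exposure is W = X + U with U independent of X, and the naive regression slope is attenuated by the factor lambda = var(X)/var(W); regression calibration divides the naive slope by that factor. A simulated sketch (all variable names and parameter values are illustrative, and lambda is taken from the simulated truth where a real analysis would estimate it from a calibration study):

```python
# Sketch of attenuation bias and regression calibration under the classical
# measurement error model W = X + U. All names/values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta_true = 0.5
x = rng.normal(0.0, 1.0, n)          # true exposure (unobserved in practice)
w = x + rng.normal(0.0, 1.0, n)      # error-prone measurement of x
y = beta_true * x + rng.normal(0.0, 1.0, n)

# Naive slope of y on w is attenuated by lambda = var(X)/(var(X)+var(U)) = 0.5 here.
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Regression calibration: rescale by the attenuation factor. In a real study
# lambda is estimated from replicate measurements or a calibration substudy;
# here we use the simulated truth for illustration.
lam = np.var(x) / np.var(w)
beta_calibrated = beta_naive / lam
```

The naive estimate lands near 0.25 (half the true effect), and the calibrated estimate recovers roughly 0.5, which is the bias the review's regression calibration methods are designed to remove.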

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E., E-mail: bunakov@VB13190.spb.edu

A critical analysis of the present-day concept of chaos in quantum systems as nothing but a “quantum signature” of chaos in classical mechanics is given. In contrast to the existing semi-intuitive guesses, a definition of classical and quantum chaos is proposed on the basis of the Liouville–Arnold theorem: a quantum chaotic system featuring N degrees of freedom should have M < N independent first integrals of motion (good quantum numbers) specified by the symmetry of the Hamiltonian of the system. Quantitative measures of quantum chaos that, in the classical limit, go over to the Lyapunov exponent and the classical stability parameter are proposed. The proposed criteria of quantum chaos are applied to solving standard problems of modern dynamical chaos theory.

  6. Quantum mechanics as classical statistical mechanics with an ontic extension and an epistemic restriction.

    PubMed

    Budiyono, Agung; Rohrlich, Daniel

    2017-11-03

    Where does quantum mechanics part ways with classical mechanics? How does quantum randomness differ fundamentally from classical randomness? We cannot fully explain how the theories differ until we can derive them within a single axiomatic framework, allowing an unambiguous account of how one theory is the limit of the other. Here we derive non-relativistic quantum mechanics and classical statistical mechanics within a common framework. The common axioms include conservation of average energy and conservation of probability current. But two axioms distinguish quantum mechanics from classical statistical mechanics: an "ontic extension" defines a nonseparable (global) random variable that generates physical correlations, and an "epistemic restriction" constrains allowed phase space distributions. The ontic extension and epistemic restriction, with strength on the order of Planck's constant, imply quantum entanglement and uncertainty relations. This framework suggests that the wave function is epistemic, yet it does not provide an ontic dynamics for individual systems.

  7. Coherent-state constellations and polar codes for thermal Gaussian channels

    NASA Astrophysics Data System (ADS)

    Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.

    2017-06-01

    Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.

  8. Quantum Neural Nets

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Williams, Colin P.

    1997-01-01

    The capacity of classical neurocomputers is limited by the number of classical degrees of freedom, which is roughly proportional to the size of the computer. By contrast, a hypothetical quantum neurocomputer can implement an exponentially large number of degrees of freedom within the same size. In this paper an attempt is made to reconcile the linear, reversible structure of quantum evolution with the nonlinear, irreversible dynamics of neural nets.
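    The scaling claim can be made concrete with a toy count: a classical net carries roughly one degree of freedom per neuron, while an n-qubit register is described by 2^n complex amplitudes.

```python
def classical_dof(n_neurons):
    """Degrees of freedom of a classical net: roughly one per neuron."""
    return n_neurons

def quantum_dof(n_qubits):
    """Complex amplitudes describing an n-qubit state: 2**n, exponential in size."""
    return 2 ** n_qubits

for n in (10, 20, 30):
    print(n, classical_dof(n), quantum_dof(n))
```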

  9. An Investigation on Changing Behaviours of University Students Switching from Using Classical Cell Phones to Smartphones

    ERIC Educational Resources Information Center

    Arslan, Yusuf

    2016-01-01

    This study examined whether any changes occur in the behaviours of university students switching from classical cell phones to smartphones. The investigation was carried out according to a quantitative research method, with a questionnaire employed as the data collection tool. The data of the study were limited to the information…

  10. Insights and possible resolution to the information loss paradox via the tunneling picture

    NASA Astrophysics Data System (ADS)

    Singleton, Douglas; Vagenas, Elias C.; Zhu, Tao; Ren, Ji-Rong

    2010-08-01

    This paper investigates the information loss paradox in the WKB/tunneling picture of Hawking radiation. In the tunneling picture one can obtain the tunneling amplitude to all orders in ℏ. However, all terms beyond the lowest, semi-classical term involve unknown constants. Despite this, we find that one can still arrive at interesting restrictions on Hawking radiation to all orders in ℏ: (i) Taking into account only quantum corrections, the spectrum remains thermal to all orders; thus quantum corrections by themselves will not resolve the information loss paradox. (ii) The quantum corrections do imply that the temperature of the radiation goes to zero as the mass of the black hole goes to zero, in contrast to the lowest order result, where the radiation temperature diverges as the mass of the black hole goes to zero. (iii) Finally, we show that by taking both quantum corrections and back reaction into account, it is possible under specific conditions to resolve the information paradox by having the black hole evaporate completely, with the information carried away by the correlations of the outgoing radiation.

  11. 42 CFR 493.1832 - Directed plan of correction and directed portion of a plan of correction.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... laboratory to take specific corrective action within specific time frames in order to achieve compliance; and... plan of correction continues in effect until the day suspension, limitation, or revocation of the...

  12. 76 FR 437 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model EMB-135BJ Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... corrective action is revising the Airworthiness Limitations Section (ALS) of the Instructions for Continued... corrective action is revising the Airworthiness Limitations Section (ALS) of the Instructions for Continued... 16, 2008, revise the ALS of the ICA to incorporate Section A2.5.2, Fuel System Limitation Items, of...

  13. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...

  14. Cumulative sum analysis score and phacoemulsification competency learning curve.

    PubMed

    Vedana, Gustavo; Cardoso, Filipe G; Marcon, Alexandre S; Araújo, Licio E K; Zanon, Matheus; Birriel, Daniella C; Watte, Guilherme; Jun, Albert S

    2017-01-01

    To objectively construct the learning curve of phacoemulsification competency using the cumulative sum analysis score (CUSUM). Three second-year residents and an experienced consultant were monitored for a series of 70 phacoemulsification cases each and had their series analysed by CUSUM regarding posterior capsule rupture (PCR) and best-corrected visual acuity. The acceptable rate for PCR was <5% (lower limit h) and the unacceptable rate was >10% (upper limit h). The acceptable rate for best-corrected visual acuity worse than 20/40 was <10% (lower limit h) and the unacceptable rate was >20% (upper limit h). The area between lower limit h and upper limit h is called the decision interval. There was no statistically significant difference in mean age, sex or cataract grades between groups. The first trainee achieved PCR CUSUM competency at his 22nd case. His best-corrected visual acuity CUSUM entered the decision interval at his third case and stayed there until the end, never reaching competency. The second trainee achieved PCR CUSUM competency at his 39th case and best-corrected visual acuity CUSUM competency at his 22nd case. The third trainee achieved PCR CUSUM competency at his 41st case and best-corrected visual acuity CUSUM competency at his 14th case. The learning curve of competency in phacoemulsification was constructed by CUSUM; on average, it took 38 cases for each trainee to achieve competency.
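    The CUSUM bookkeeping behind such learning curves can be sketched in a few lines. This is a minimal illustration assuming standard log-likelihood-ratio increments and the abstract's 5%/10% PCR rates; the study's exact scoring rule and decision limits h may differ.

```python
import math

def cusum_scores(outcomes, p0=0.05, p1=0.10):
    """CUSUM trajectory for a sequence of binary outcomes.

    outcomes: iterable of bools, True = failure (e.g. posterior capsule rupture).
    p0: acceptable failure rate, p1: unacceptable failure rate.
    Log-likelihood-ratio increments: the score drifts downward while
    performance is acceptable and jumps upward on each failure.
    """
    inc_fail = math.log(p1 / p0)            # positive step on failure
    inc_ok = math.log((1 - p1) / (1 - p0))  # small negative step on success
    s, trajectory = 0.0, []
    for failed in outcomes:
        s += inc_fail if failed else inc_ok
        trajectory.append(s)
    return trajectory

# 40 cases with a single failure: the score trends downward toward competency
traj = cusum_scores([i == 5 for i in range(40)])
print(round(traj[-1], 3))
```

Competency is declared once the trajectory crosses the lower decision limit; a score that lingers between the limits, as with the first trainee's visual acuity series, signals neither competency nor failure.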

  15. Pair correlation function and nonlinear kinetic equation for a spatially uniform polarizable nonideal plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belyi, V.V.; Kukharenko, Y.A.; Wallenborn, J.

    Taking into account the first non-Markovian correction to the Balescu-Lenard equation, we have derived an expression for the pair correlation function and a nonlinear kinetic equation valid for a nonideal polarized classical plasma. This last equation allows for the description of the correlational energy evolution and shows the global conservation of energy with dynamical polarization. © 1996 The American Physical Society.

  16. Estimation of Some Parameters from Morse-Morse-Spline-Van Der Waals Intermolecular Potential

    NASA Astrophysics Data System (ADS)

    Coroiu, I.

    2007-04-01

    Some parameters, such as transport cross-sections and the isotopic thermal diffusion factor, have been calculated from an improved intermolecular potential, the Morse-Morse-Spline-van der Waals (MMSV) potential proposed by R.A. Aziz et al. The treatment was completely classical, and no corrections for quantum effects were made. The results can be employed for isotope separation of different spherical and quasi-spherical molecules.
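    The Morse component of such a potential is a simple closed form. This sketch uses placeholder well depth, range and equilibrium-distance parameters, not the actual Aziz et al. MMSV values.

```python
import math

def morse(r, De=1.0, a=1.5, re=1.0):
    """Morse potential V(r) = De * ((1 - exp(-a*(r - re)))**2 - 1).

    De: well depth, a: range parameter, re: equilibrium separation.
    The minimum V(re) = -De sits at r = re; the potential is strongly
    repulsive for r < re and tends to 0 as r -> infinity.
    """
    return De * ((1.0 - math.exp(-a * (r - re))) ** 2 - 1.0)

print(morse(1.0))  # minimum of the well: -De
```

The full MMSV form splices two Morse branches to a spline and a van der Waals tail; only the Morse branch is shown here.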

  17. Challenges of Electronic Medical Surveillance Systems

    DTIC Science & Technology

    2004-06-01

    More sophisticated approaches, such as regression models and classical autoregressive integrated moving average (ARIMA) models that make estimates based on…with those predicted by a mathematical model. The primary benefit of ARIMA models is their ability to correct for local trends in the data so that…works well, for example, during a particularly severe flu season, where prolonged periods of high visit rates are adjusted to by the ARIMA model, thus
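    The trend-correcting behaviour described above can be sketched with the autoregressive core of such models. This is a hand-rolled AR(1) one-step forecast fitted by least squares, not a full ARIMA implementation: during a sustained period of elevated counts, the forecast tracks the elevated level instead of flagging every day as anomalous.

```python
def ar1_forecast(series):
    """One-step-ahead forecast from a fitted AR(1) model: x_t ≈ c + phi * x_{t-1}.

    Fits phi and c by ordinary least squares on lagged pairs, which is the
    autoregressive building block of ARIMA-style surveillance baselines.
    """
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c + phi * series[-1]

# steady visit counts near 100: the forecast stays near that level
counts = [98, 101, 100, 99, 102, 100, 101, 99, 100, 101]
print(round(ar1_forecast(counts), 1))
```

An anomaly detector would then compare the next observed count against this forecast and alarm only on large residuals.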

  18. Anomaly-free cosmological perturbations in effective canonical quantum gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrau, Aurelien; Bojowald, Martin; Kagan, Mikhail

    2015-05-01

    This article lays out a complete framework for an effective theory of cosmological perturbations with corrections from canonical quantum gravity. Since several examples exist for quantum-gravity effects that change the structure of space-time, the classical perturbative treatment must be rethought carefully. The present discussion provides a unified picture of several previous works, together with new treatments of higher-order perturbations and the specification of initial states.

  19. Role of tunnelling in complete and incomplete fusion induced by 9Be on 169Tm and 187Re targets at around barrier energies

    NASA Astrophysics Data System (ADS)

    Kharab, Rajesh; Chahal, Rajiv; Kumar, Rajiv

    2017-04-01

    We have analyzed the complete and incomplete fusion excitation functions for the 9Be + 169Tm, 187Re reactions at around-barrier energies using the code PLATYPUS, which is based on a classical dynamical model. A quantum mechanical tunnelling correction is incorporated at near- and sub-barrier energies, which significantly improves the agreement between the data and the predictions.
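    A standard way to add such a tunnelling correction is the Hill-Wheeler transmission probability for an inverted-parabola barrier. The barrier height and curvature below are illustrative placeholders, not values from the PLATYPUS analysis.

```python
import math

V_B = 30.0         # assumed barrier height (MeV); illustrative value
HBAR_OMEGA = 4.0   # assumed barrier curvature parameter (MeV); illustrative value

def hill_wheeler(E, Vb=V_B, hbar_omega=HBAR_OMEGA):
    """Quantum tunnelling probability through an inverted-parabola barrier.

    Classically the transmission is a step function (0 below the barrier,
    1 above); the Hill-Wheeler expression smooths this step, giving a small
    but nonzero fusion probability at sub-barrier energies.
    """
    return 1.0 / (1.0 + math.exp(2.0 * math.pi * (Vb - E) / hbar_omega))

print(hill_wheeler(30.0))  # at the barrier: exactly 0.5
print(hill_wheeler(28.0))  # sub-barrier: small but nonzero
```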

  20. Mass-conserving advection-diffusion Lattice Boltzmann model for multi-species reacting flows

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Darabiha, N.; Thévenin, D.

    2018-06-01

    Given the complex geometries usually found in practical applications, the Lattice Boltzmann (LB) method is becoming increasingly attractive. In addition to the simple treatment of intricate geometrical configurations, LB solvers can be implemented on very large parallel clusters with excellent scalability. However, reacting flows, and especially combustion, lead to additional challenges and have seldom been studied by LB methods. Indeed, overall mass conservation is a pressing issue in modeling multi-component flows. The classical advection-diffusion LB model recovers the species transport equations with the generalized Fick approximation under the assumption of an incompressible flow. However, for flows involving multiple species with different diffusion coefficients and density fluctuations (as is the case with weakly compressible solvers like Lattice Boltzmann), this approximation is known not to conserve overall mass. In classical CFD, since the Fick approximation does not satisfy the overall mass conservation constraint, a diffusion correction velocity is usually introduced. In the present work, a local expression is first derived for this correction velocity in a LB framework. In a second step, the error due to the incompressibility assumption is also accounted for through a modified equilibrium distribution function. Theoretical analyses and simulations show that the proposed scheme performs much better than the conventional advection-diffusion Lattice Boltzmann model in terms of overall mass conservation.
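    The correction-velocity construction can be sketched in a few lines. This is a generic 1-D illustration of the classical-CFD recipe the paper builds on, not the local LB expression derived in the paper.

```python
def corrected_fluxes(Y, gradY, D):
    """Fick diffusion fluxes with a mass-conserving correction velocity.

    Y: species mass fractions (summing to 1), gradY: their 1-D spatial
    gradients, D: per-species diffusion coefficients. The plain Fick fluxes
    -D_k * dY_k/dx need not sum to zero when the D_k differ; adding
    Y_k * Vc with Vc = sum_k D_k * dY_k/dx restores sum_k flux_k = 0,
    i.e. the overall mass conservation constraint.
    """
    fick = [-d * g for d, g in zip(D, gradY)]
    vc = sum(d * g for d, g in zip(D, gradY))   # diffusion correction velocity
    return [f + y * vc for f, y in zip(fick, Y)]

# three species with unequal diffusivities
Y = [0.2, 0.3, 0.5]
gradY = [0.1, -0.04, -0.06]
D = [2.0, 1.0, 0.5]
fluxes = corrected_fluxes(Y, gradY, D)
print(abs(sum(fluxes)) < 1e-12)  # corrected fluxes conserve overall mass
```

With equal diffusivities the correction vanishes, since the uncorrected Fick fluxes already sum to zero when the mass-fraction gradients do.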
