NASA Technical Reports Server (NTRS)
Rueda, A.
1985-01-01
That particles may be accelerated by vacuum effects in quantum field theory has been repeatedly proposed in the last few years. A natural upshot of this is a mechanism for the acceleration of cosmic-ray (CR) primaries. A mechanism for acceleration by the zero-point field (ZPF), when the ZPF is taken in a realistic sense (as opposed to a virtual field), was considered. Originally the idea was developed within a semiclassical context. The classical Einstein-Hopf model (EHM) was used to show that free, isolated, electromagnetically interacting particles perform a random walk in phase space, and more importantly in momentum space, when subjected to the perennial action of the so-called classical electromagnetic ZPF.
Driven topological systems in the classical limit
NASA Astrophysics Data System (ADS)
Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel
2017-03-01
Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this we simulate the dynamics of the classical system, which show that the interactions play the role of Markovian, or time-dependent disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.
Zero-point energy constraint in quasi-classical trajectory calculations.
Xie, Zhen; Bowman, Joel M
2006-04-27
A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.
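As a rough sketch of how such a constraint could be implemented (the tanh switching form, the threshold, and the width below are illustrative assumptions, not the authors' actual scheme), the cubic coupling of a Hénon-Heiles-type Hamiltonian can be scaled by a smooth factor that turns off whenever a mode's energy drops below the chosen value:

```python
import numpy as np

def switch(e_mode, e_min, width=0.05):
    """Smooth factor: ~1 when the mode energy is well above e_min, ~0 below it.
    The tanh form and the width are illustrative choices, not from the paper."""
    return 0.5 * (1.0 + np.tanh((e_mode - e_min) / width))

def henon_heiles_energy(x, y, px, py, e_min, lam=1.0):
    """Henon-Heiles Hamiltonian with the cubic coupling term scaled by the
    switching factors of both modes (hypothetical ZPE-constraint scheme)."""
    ex = 0.5 * (px**2 + x**2)                    # harmonic energy of mode x
    ey = 0.5 * (py**2 + y**2)                    # harmonic energy of mode y
    coupling = lam * (x**2 * y - y**3 / 3.0)     # standard cubic coupling
    return ex + ey + switch(ex, e_min) * switch(ey, e_min) * coupling

print(henon_heiles_energy(0.3, 0.2, 0.0, 0.1, e_min=0.05))
```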
Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels
NASA Astrophysics Data System (ADS)
Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis
2013-01-01
We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.
ERIC Educational Resources Information Center
Boyer, Timothy H.
1985-01-01
The classical vacuum of physics is not empty, but contains a distinctive pattern of electromagnetic fields. Discovery of the vacuum, thermal spectrum, classical electron theory, zero-point spectrum, and effects of acceleration are discussed. Connection between thermal radiation and the classical vacuum reveals unexpected unity in the laws of…
Critical scaling of the mutual information in two-dimensional disordered Ising models
NASA Astrophysics Data System (ADS)
Sriluckshmy, P. V.; Mandal, Ipsita
2018-04-01
Rényi mutual information, computed from second Rényi entropies, can identify classical phase transitions from their finite-size scaling at critical points. We apply this technique to examine the presence or absence of finite temperature phase transitions in various two-dimensional models on a square lattice, which are extensions of the conventional Ising model by adding a quenched disorder. When the quenched disorder causes the nearest neighbor bonds to be both ferromagnetic and antiferromagnetic, (a) a spin glass phase exists only at zero temperature, and (b) a ferromagnetic phase exists at a finite temperature when the antiferromagnetic bond distributions are sufficiently dilute. Furthermore, finite temperature paramagnetic-ferromagnetic transitions can also occur when the disordered bonds involve only ferromagnetic couplings of random strengths. In our numerical simulations, the ‘zero temperature only’ phase transitions are identified when there is no consistent finite-size scaling of the Rényi mutual information curves, while for finite temperature critical points, the curves can identify the critical temperature T_c by their crossings at T_c and 2T_c.
The motion near the L_4 equilibrium point under non-point-mass primaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huda, I. N., E-mail: ibnu.nurul@students.itb.ac.id; Utama, J. A.; Madley, D.
2015-09-30
The Circular Restricted Three-Body Problem (CRTBP) possesses five equilibrium points, comprising three collinear points (L_1, L_2, and L_3) and two triangular points (L_4 and L_5). The classical study (where the primaries are point masses) suggests that at the equilibrium points the velocity of the infinitesimal object relative to the primaries becomes zero, which reveals the zero-velocity curve. We study the motion of an infinitesimal object near the triangular equilibrium point L_4 and determine its zero-velocity curve. We extend the study by taking into account the effects of radiation of the bigger primary (q_1 ≠ 1, q_2 = 1) and oblateness of the smaller primary (A_1 = 0, A_2 ≠ 0). The location of L_4 is derived analytically, and then the stability of L_4 and its zero-velocity curves are studied numerically. Our study suggests that the oblateness and the radiation of the primaries may affect the stability and the zero-velocity curve around L_4.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Quantum and classical ripples in graphene
NASA Astrophysics Data System (ADS)
Hašík, Juraj; Tosatti, Erio; Martoňák, Roman
2018-04-01
Thermal ripples of graphene are well understood at room temperature, but their quantum counterparts at low temperatures are in need of a realistic quantitative description. Here we present atomistic path-integral Monte Carlo simulations of freestanding graphene, which show upon cooling a striking classical-quantum evolution of height and angular fluctuations. The crossover takes place at ever-decreasing temperatures for ever-increasing wavelengths so that a completely quantum regime is never attained. Zero-temperature quantum graphene is flatter and smoother than classical graphene at large scales yet rougher at short scales. The angular fluctuation distribution of the normals can be quantitatively described by coexistence of two Gaussians, one classical, strongly T-dependent, and one quantum, about 2° wide, of zero-point character. The quantum evolution of ripple-induced height and angular spread should be observable in electron diffraction in graphene and other two-dimensional materials, such as MoS2, bilayer graphene, boron nitride, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinson, Jake L.; Kathmann, Shawn M.; Ford, Ian J.
2014-01-14
The nucleation of particles from trace gases in the atmosphere is an important source of cloud condensation nuclei (CCN), and these are vital for the formation of clouds in view of the high supersaturations required for homogeneous water droplet nucleation. The methods of quantum chemistry have increasingly been employed to model nucleation due to their high accuracy and efficiency in calculating configurational energies, and nucleation rates can be obtained from the associated free energies of particle formation. However, even in such advanced approaches, it is typically assumed that the nuclei have a classical nature, which is questionable for some systems. The importance of zero-point motion (also known as quantum nuclear dynamics) in modelling small clusters of sulphuric acid and water is tested here using the path integral molecular dynamics (PIMD) method at the density functional theory (DFT) level of theory. We observe a small zero-point effect on the equilibrium structures of certain clusters. One configuration is found to display a bimodal behaviour at 300 K, in contrast to the stable ionised state suggested from a zero-temperature classical geometry optimisation. The general effect of zero-point motion is to promote the extent of proton transfer with respect to classical behaviour. We thank Prof. Angelos Michaelides and his group at University College London (UCL) for practical advice and helpful discussions. This work benefited from interactions with the Thomas Young Centre through seminars and discussions involving the PIMD method. SMK was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. JLS and IJF were supported by the IMPACT scheme at UCL and by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. We are grateful for use of the UCL Legion High Performance Computing Facility and the resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the U.S. Department of Energy, Office of Science under Contract No. DE-AC02-05CH11231.
Zero Thermal Noise in Resistors at Zero Temperature
NASA Astrophysics Data System (ADS)
Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran
2016-06-01
The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
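For reference, the competing expressions discussed here can be written in one standard form (R is the resistance, f the frequency; the last term is the temperature-independent zero-point contribution questioned in the paper):

$$S_V(f) \;=\; 4Rhf\left[\frac{1}{e^{hf/k_BT}-1}+\frac{1}{2}\right] \;\longrightarrow\; 4k_BTR \quad (hf \ll k_BT),$$

so the Callen-Welton spectrum reduces to the classical Nyquist result at low frequency or high temperature, while the zero-point term alone, 2Rhf, survives as T → 0.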
Zero-point energy effects in anion solvation shells.
Habershon, Scott
2014-05-21
By comparing classical and quantum-mechanical (path-integral-based) molecular simulations of solvated halide anions X(-) [X = F, Cl, Br and I], we identify an ion-specific quantum contribution to anion-water hydrogen-bond dynamics; this effect has not been identified in previous simulation studies. For anions such as fluoride, which strongly bind water molecules in the first solvation shell, quantum simulations exhibit hydrogen-bond dynamics nearly 40% faster than the corresponding classical results, whereas those anions which form a weakly bound solvation shell, such as iodide, exhibit a quantum effect of around 10%. This observation can be rationalized by considering the different zero-point energy (ZPE) of the water vibrational modes in the first solvation shell; for strongly binding anions, the ZPE of bound water molecules is larger, giving rise to faster dynamics in quantum simulations. These results are consistent with experimental investigations of anion-bound water vibrational and reorientational motion.
Rotational diffusion of a molecular cat
NASA Astrophysics Data System (ADS)
Katz-Saporta, Ori; Efrati, Efi
We show that a simple isolated system can perform a rotational random walk on account of internal excitations alone. We consider the classical dynamics of a "molecular cat": a triatomic molecule connected by three harmonic springs with non-zero rest lengths, suspended in free space. In this system, much like for falling cats, the angular momentum constraint is non-holonomic, allowing for rotations with zero overall angular momentum. The geometric nonlinearities arising from the non-zero rest lengths of the springs suffice to break integrability and lead to chaotic dynamics. The coupling of the non-integrability of the system and its non-holonomic nature results in an angular random walk of the molecule. We study the properties and dynamics of this angular motion analytically and numerically. For low energy excitations the system displays normal-mode-like motion, while for high enough excitation energy we observe a regular random walk. In between, at intermediate energies, we observe an angular Lévy-walk-type motion associated with a fractional diffusion coefficient interpolating between the two regimes.
Quantum-Classical Hybrid for Information Processing
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
Based upon quantum-inspired entanglement in quantum-classical hybrids, a simple algorithm for instantaneous transmission of non-intentional messages (chosen at random) to remote distances is proposed. The idea is to implement instantaneous transmission of conditional information over remote distances via a quantum-classical hybrid that preserves superposition of random solutions, while allowing one to measure its state variables using classical methods. Such a hybrid system reinforces the advantages, and minimizes the limitations, of both quantum and classical characteristics. Consider n observers, and assume that each of them gets a copy of the system and runs it separately. Although they run identical systems, the outcomes of even synchronized runs may be different because the solutions of these systems are random. However, the global constraint must be satisfied. Therefore, if observer #1 (the sender) made a measurement of the acceleration v_1 at t = T, then the receiver, by measuring the corresponding acceleration v_1 at t = T, may get a wrong value because the accelerations are random, and only their ratios are deterministic. Obviously, the transmission of this knowledge is instantaneous as soon as the measurements have been performed. In addition to that, the distance between the observers is irrelevant because the x-coordinate does not enter the governing equations. However, the Shannon information transmitted is zero. None of the senders can control the outcomes of their measurements because they are random. The senders cannot transmit intentional messages. Nevertheless, based on the transmitted knowledge, they can coordinate their actions based on conditional information. If observer #1 knows his own measurements, the measurements of the others can be fully determined. It is important to emphasize that the origin of entanglement of all the observers is the joint probability density that couples their actions. There is no centralized source, or a sender of the signal, because each receiver can become a sender as well. An observer receives a signal by performing certain measurements synchronized with the measurements of the others. This means that the signal is uniformly and simultaneously distributed over the observers in a decentralized way. The signals transmit no intentional information that would favor one agent over another. All the sequences of signals received by different observers are not only statistically equivalent, but are also point-by-point identical. It is important to assume that each agent knows that the other agents simultaneously receive the identical signals. The sequences of signals are truly random, so that no agent could predict the next step with a probability different from that described by the density. Under these quite general assumptions, the entangled observer-agents can perform non-trivial tasks that include transmission of conditional information from one agent to another, a simple paradigm of cooperation, etc. The problem of behavior of intelligent agents correlated by identical random messages in a decentralized way has its own significance: it simulates the evolutionary behavior of biological and social systems correlated only via simultaneous sensing of sequences of unexpected events.
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.
NASA Astrophysics Data System (ADS)
Stossel, Bryan Joseph
1995-01-01
Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
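As a minimal sketch of the object under study (assuming, for illustration only, that the multiple-point impulse response is a set of N unit impulses at uniformly random positions in the unit square), the transfer function at each spatial frequency is a sum of N unit phasors, i.e. the end point of a two-dimensional random walk:

```python
import numpy as np

rng = np.random.default_rng(0)
n_impulses = 50
xy = rng.random((n_impulses, 2))          # random impulse positions (assumed)

def transfer(u, v, xy=xy):
    """Random-walk transfer function: a sum of unit phasors, one per impulse."""
    phase = -2j * np.pi * (u * xy[:, 0] + v * xy[:, 1])
    return np.exp(phase).sum()

# Sample |H| on a grid of spatial frequencies; zeros of H appear as isolated
# deep minima, consistent with the isolated-zeros picture described above.
u = v = np.linspace(-20.0, 20.0, 201)
H = np.array([[transfer(ui, vj) for ui in u] for vj in v])
print("min |H| on the grid:", np.abs(H).min())
```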
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. In contrast to first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M is demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we find that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuations for incoherent light sources.
Zero-point energy conservation in classical trajectory simulations: Application to H2CO
NASA Astrophysics Data System (ADS)
Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.
2018-05-01
A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on if, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined with four, on average, giving values within 4 kJ/mol—chemical accuracy—for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff and classical simulations of kinetic energy release should therefore be viewed with caution.
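Schematically, the ZPE-corrected surface described above can be written as

$$V_{\mathrm{eff}}(q) \;=\; V(q) \;+\; E_{\mathrm{ZPE}}(q), \qquad E_{\mathrm{ZPE}}(q) \;\approx\; \tfrac{1}{2}\sum_{i}\hbar\,\omega_i(q),$$

where the sum runs over the local harmonic frequencies of the projected vibrational modes at configuration q (the notation here is only schematic; the paper interpolates this quantity globally with a Shepard scheme).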
The Birth and Death of Redundancy in Decoherence and Quantum Darwinism
NASA Astrophysics Data System (ADS)
Riedel, Charles; Zurek, Wojciech; Zwolak, Michael
2012-02-01
Understanding the quantum-classical transition and the identification of a preferred classical domain through quantum Darwinism is based on recognizing high-redundancy states as both ubiquitous and exceptional. They are produced ubiquitously during decoherence, as has been demonstrated by the recent identification of very general conditions under which high-redundancy states develop. They are exceptional in that high-redundancy states occupy a very narrow corner of the global Hilbert space; states selected at random are overwhelmingly likely to exhibit zero redundancy. In this letter, we examine the conditions and time scales for the transition from high-redundancy states to zero-redundancy states in many-body dynamics. We identify sufficient conditions for the development of redundancy from product states and show that the destruction of redundancy can be accomplished even with highly constrained interactions.
Paul, Amit K; Hase, William L
2016-01-28
A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
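A schematic of the constraint logic described above might look as follows; the trajectory object, its methods, and the cutoff time are placeholders introduced for illustration, not the authors' code:

```python
def lifetime_with_zpe_constraint(traj, zpe_ch3, t_max):
    """Propagate until the trajectory dissociates WITH at least the CH3 ZPE in
    the products; otherwise return it to the CH4* region and let it try again.
    `traj` is a hypothetical object exposing the three methods used below."""
    t = 0.0
    while t < t_max:
        t = traj.propagate_until_dissociation(t)      # hypothetical API
        if traj.ch3_vibrational_energy() >= zpe_ch3:  # ZPE check on the product
            return t    # lifetime: first dissociation event satisfying ZPE
        traj.return_to_complex_region()               # hypothetical API
    return None         # no ZPE-allowed dissociation within t_max
```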
"Simulated molecular evolution" or computer-generated artifacts?
Darius, F; Rojas, R
1994-11-01
1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. For problems of this kind, with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which, in the way they are structured, provide a model with an error margin as large as the numbers being computed. 7. Finally, even if someone provided us with a function that perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would not be better than random selection. Since a perfect fit would only produce exactly ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any kind of directional information for new iterations. We would just skip from one point to another in a typical random-walk manner.
Aspects of Geodesical Motion with Fisher-Rao Metric: Classical and Quantum
NASA Astrophysics Data System (ADS)
Ciaglia, Florio M.; Cosmo, Fabio Di; Felice, Domenico; Mancini, Stefano; Marmo, Giuseppe; Pérez-Pardo, Juan M.
The purpose of this paper is to exploit the geometric structure of quantum mechanics and of statistical manifolds to study the qualitative effect that the quantum properties have in the statistical description of a system. We show that the end points of geodesics in the classical setting coincide with the probability distributions that minimise Shannon’s entropy, i.e. with distributions of zero dispersion. In the quantum setting this happens only for particular initial conditions, which in turn correspond to classical submanifolds. This result can be interpreted as a geometric manifestation of the uncertainty principle.
NASA Astrophysics Data System (ADS)
García, Isaac A.; Llibre, Jaume; Maza, Susanna
2018-06-01
In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether or not the zeros belong to the analytic set defined by the real variety associated with the ideal generated by the averaged functions in the Noetherian ring of all the real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z_0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results, and they are compared with the classical theory, branching theory, and also in the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
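In the notation suggested by the abstract (the symbols below are chosen only to fix ideas), the displacement function of the λ-family is expanded in powers of ε,

$$d(z,\varepsilon,\lambda) \;=\; \sum_{i\ge 0} \varepsilon^{\,i} f_i(z,\lambda),$$

the coefficients f_i being the averaged functions; the results bound the number of branches of isolated zeros that can bifurcate from a multiple zero z_0 of the first non-vanishing f_i.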
Inertial Mass Viewed as Reaction of the Vacuum to Accelerated Motion
NASA Technical Reports Server (NTRS)
Rueda, Alfonso; Haisch, Bernhard
1999-01-01
Preliminary analysis of the momentum flux (or of the Poynting vector) of the classical electromagnetic version of the quantum vacuum, consisting of zero-point radiation impinging on accelerated objects as viewed by an inertial observer, suggests that the resistance to acceleration attributed to inertia may be a force of opposition originating in the vacuum. This analysis avoids the ad hoc modeling of particle-field interaction dynamics used previously by Haisch, Rueda and Puthoff (1994) to derive a similar result. The present approach does not depend upon what happens at the particle point but on how an external observer assesses the kinematical characteristics of the zero-point radiation impinging on the accelerated object. A relativistic form of the equation of motion results from the present analysis.
NASA Astrophysics Data System (ADS)
Wang, Wenji; Zhao, Yi
2017-07-01
Methane dissociation is a prototypical system for the study of surface reaction dynamics. The dissociation and recombination rates of CH4 through the Ni(111) surface are calculated by using the quantum instanton method with an analytical potential energy surface. The Ni(111) lattice is treated rigidly, classically, and quantum mechanically so as to reveal the effect of lattice motion. The results demonstrate that it is the lateral displacements, rather than the upward and downward movements, of the surface nickel atoms that strongly affect the rates. Compared with the rigid lattice, the classical relaxation of the lattice can increase the rates by lowering the free energy barriers. For instance, at 300 K, the dissociation and recombination rates with the classical lattice exceed the ones with the rigid lattice by 6 and 10 orders of magnitude, respectively. Compared with the classical lattice, the quantum delocalization rather than the zero-point energy of the Ni atoms further enhances the rates by widening the reaction path. For instance, the dissociation rate with the quantum lattice is about 10 times larger than that with the classical lattice at 300 K. On the rigid lattice, due to the zero-point energy difference between CH4 and CD4, the kinetic isotope effects are larger than 1 for the dissociation process, while they are smaller than 1 for the recombination process. The increasing kinetic isotope effect with decreasing temperature demonstrates that the quantum tunneling effect is remarkable for the dissociation process.
Random Matrix Theory and Elliptic Curves
2014-11-24
…points on that curve. Counting rational points on curves is a field with a rich … deficiency of zeros near the origin of the histograms in Figure 1. While as d becomes large this discretization becomes smaller and has less and less effect … (order of 30), the regular oscillations seen at the origin become dominated by fluctuations of an arithmetic origin, influenced by zeros of the Riemann zeta function.
NASA Astrophysics Data System (ADS)
Chernodub, M. N.
2013-01-01
Recently, we have demonstrated that for a certain class of Casimir-type systems (“devices”) the energy of zero-point vacuum fluctuations reaches its global minimum when the device rotates about a certain axis rather than remains static. This rotational vacuum effect may lead to the emergence of permanently rotating objects provided the negative rotational energy of zero-point fluctuations cancels the positive rotational energy of the device itself. In this paper, we show that for massless electrically charged particles the rotational vacuum effect should be drastically (astronomically) enhanced in the presence of a magnetic field. As an illustration, we show that in a background of experimentally available magnetic fields the zero-point energy of massless excitations in rotating torus-shaped doped carbon nanotubes may indeed overwhelm the classical energy of rotation for certain angular frequencies so that the permanently rotating state is energetically favored. The suggested “zero-point-driven” devices—which have no internally moving parts—correspond to a perpetuum mobile of a new, fourth kind: They do not produce any work despite the fact that their equilibrium (ground) state corresponds to a permanent rotation even in the presence of an external environment. We show that our proposal is consistent with the laws of thermodynamics.
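The energetic condition invoked above can be written schematically as

$$E_{\mathrm{tot}}(\Omega) \;=\; \tfrac{1}{2}\,I\,\Omega^{2} \;+\; E_{\mathrm{zp}}(\Omega),$$

with I the moment of inertia of the device and E_zp(Ω) the zero-point energy of the fluctuations; a permanently rotating ground state is favored whenever E_tot attains its minimum at some Ω ≠ 0.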
Activation of zero-error classical capacity in low-dimensional quantum systems
NASA Astrophysics Data System (ADS)
Park, Jeonghoon; Heo, Jun
2018-06-01
Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.
Zero field reversal probability in thermally assisted magnetization reversal
NASA Astrophysics Data System (ADS)
Prasetya, E. B.; Utari; Purnama, B.
2017-11-01
This paper discusses the zero-field reversal probability in thermally assisted magnetization reversal (TAMR). The appearance of a reversal probability at zero field is investigated through micromagnetic simulation by solving the stochastic Landau-Lifshitz-Gilbert (LLG) equation. A perpendicular-anisotropy magnetic dot of 50×50×20 nm³ is considered as a single-cell storage element of magnetic random access memory (MRAM). Thermally assisted magnetization reversal is performed by cooling during the writing process from near the Curie point to room temperature, over 20 runs with different randomly magnetized initial states. The results show that the reversal probability under zero magnetic field decreases as the energy barrier increases: a zero-field switching probability of 55% is attained for an energy barrier of 60 k_BT, which corresponds to a switching field of 150 Oe, and the reversal probability becomes zero at an energy barrier of 2348 k_BT.
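For context, the stochastic LLG equation solved in such micromagnetic simulations reads, in one common convention,

$$\frac{d\mathbf{M}}{dt} \;=\; -\gamma\,\mathbf{M}\times\left(\mathbf{H}_{\mathrm{eff}}+\mathbf{H}_{\mathrm{th}}\right) \;+\; \frac{\alpha}{M_s}\,\mathbf{M}\times\frac{d\mathbf{M}}{dt},$$

where γ is the gyromagnetic ratio, α the damping constant, and H_th a thermal fluctuation field whose variance is proportional to αk_BT by the fluctuation-dissipation theorem; the TAMR cooling protocol enters through the temperature dependence of H_th and of the anisotropy field.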
High resolution study of magnetic ordering at absolute zero.
Lee, M; Husmann, A; Rosenbaum, T F; Aeppli, G
2004-05-07
High resolution pressure measurements in the zero-temperature limit provide a unique opportunity to study the behavior of strongly interacting, itinerant electrons with coupled spin and charge degrees of freedom. Approaching the precision that has become the hallmark of experiments on classical critical phenomena, we characterize the quantum critical behavior of the model, elemental antiferromagnet chromium, lightly doped with vanadium. We resolve the sharp doubling of the Hall coefficient at the quantum critical point and trace the dominating effects of quantum fluctuations up to surprisingly high temperatures.
Semi-classical Reissner-Nordstrom model for the structure of charged leptons
NASA Technical Reports Server (NTRS)
Rosen, G.
1980-01-01
The lepton self-mass problem is examined within the framework of the quantum theory of electromagnetism and gravity. Consideration is given to the Reissner-Nordstrom solution to the Einstein-Maxwell classical field equations for an electrically charged mass point, and the WKB theory for a semiclassical system with total energy zero is used to obtain an expression for the Einstein-Maxwell action factor. The condition obtained is found to account for the observed mass values of the three charged leptons, and to be in agreement with the correspondence principle.
Explorations of the Gauss-Lucas Theorem
ERIC Educational Resources Information Center
Brilleslyper, Michael A.; Schaubroeck, Beth
2017-01-01
The Gauss-Lucas Theorem is a classical complex analysis result that states the critical points of a single-variable complex polynomial lie inside the closed convex hull of the zeros of the polynomial. Although the result is well-known, it is not typically presented in a first course in complex analysis. The ease with which modern technology allows…
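A quick numerical illustration of the theorem for one arbitrarily chosen polynomial (critical points lying exactly on the hull boundary would need a tolerance in the test below):

```python
import numpy as np
from scipy.spatial import Delaunay

# Choose four non-collinear zeros and build the polynomial from them.
zeros = np.array([0.0 + 0.0j, 4.0 + 0.0j, 2.0 + 3.0j, 1.0 - 2.0j])
coeffs = np.poly(zeros)                # coefficients of p(z), highest degree first
crit = np.roots(np.polyder(coeffs))    # critical points: zeros of p'(z)

# Point-in-convex-hull test in the plane (Re, Im) via a Delaunay triangulation.
zero_pts = np.column_stack([zeros.real, zeros.imag])
crit_pts = np.column_stack([crit.real, crit.imag])
inside = Delaunay(zero_pts).find_simplex(crit_pts) >= 0

print(crit)
print(inside)   # Gauss-Lucas: expected all True for this generic example
```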
Theory of point contact spectroscopy in correlated materials
Lee, Wei-Cheng; Park, Wan Kyu; Arham, Hamood Z.; ...
2015-01-05
Here, we develop a microscopic theory for the point-contact conductance between a metallic electrode and a strongly correlated material using the nonequilibrium Schwinger-Kadanoff-Baym-Keldysh formalism. We explicitly show that, in the classical limit, i.e., for a contact size shorter than the scattering length of the system, the microscopic model can be reduced to an effective model with transfer matrix elements that conserve in-plane momentum. We find that the conductance dI/dV is proportional to the effective density of states, that is, the integrated single-particle spectral function A(ω = eV) over the whole Brillouin zone. From this conclusion, we are able to establish the conditions under which a non-Fermi-liquid metal exhibits a zero-bias peak in the conductance. Lastly, this finding is discussed in the context of recent point-contact spectroscopy on the iron pnictides and chalcogenides, which has exhibited a zero-bias conductance peak.
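In symbols, the central statement quoted above is, schematically,

$$\frac{dI}{dV} \;\propto\; N_{\mathrm{eff}}(eV) \;=\; \int_{\mathrm{BZ}} \frac{d^{2}k}{(2\pi)^{2}}\; A(\mathbf{k},\,\omega = eV),$$

with A(k, ω) the single-particle spectral function and the integral taken over the whole Brillouin zone.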
NASA Astrophysics Data System (ADS)
Balog, Ivan; Tarjus, Gilles; Tissier, Matthieu
2018-03-01
We show that, contrary to previous suggestions based on computer simulations or erroneous theoretical treatments, the critical points of the random-field Ising model out of equilibrium, when quasistatically changing the applied source at zero temperature, and in equilibrium are not in the same universality class below some critical dimension d_DR ≈ 5.1. We demonstrate this by implementing a nonperturbative functional renormalization group for the associated dynamical field theory. Above d_DR, the avalanches, which characterize the evolution of the system at zero temperature, become irrelevant at large distance, and hysteresis and equilibrium critical points are then controlled by the same fixed point. We explain how to use computer simulation and finite-size scaling to check the correspondence between in- and out-of-equilibrium criticality in a far less ambiguous way than done so far.
Exploring the importance of quantum effects in nucleation: The archetypical Ne_n case
NASA Astrophysics Data System (ADS)
Unn-Toc, Wesley; Halberstadt, Nadine; Meier, Christoph; Mella, Massimo
2012-07-01
The effect of quantum mechanics (QM) on the details of the nucleation process is explored employing Ne clusters as test cases due to their semi-quantal nature. In particular, we investigate the impact of quantum mechanics on both condensation and dissociation rates in the framework of the microcanonical ensemble. Using both classical trajectories and two semi-quantal approaches (zero point averaged dynamics, ZPAD, and Gaussian-based time dependent Hartree, G-TDH) to model cluster and collision dynamics, we simulate the dissociation and monomer capture for Ne8 as a function of the cluster internal energy, impact parameter and collision speed. The results for the capture probability Ps(b) as a function of the impact parameter suggest that classical trajectories always underestimate capture probabilities with respect to ZPAD, albeit at most by 15%-20% in the cases we studied. They also do so in some important situations when using G-TDH. More interestingly, dissociation rates kdiss are grossly overestimated by classical mechanics, at least by one order of magnitude. We interpret both behaviours as mainly due to the reduced amount of kinetic energy available to a quantum cluster for a chosen total internal energy. We also find that the decrease in monomer dissociation energy due to zero point energy effects plays a key role in defining dissociation rates. In fact, semi-quantal and classical results for kdiss seem to follow a common "corresponding states" behaviour when the proper definition of internal and dissociation energies are used in a transition state model estimation of the evaporation rate constants.
NASA Astrophysics Data System (ADS)
Bonhommeau, David; Truhlar, Donald G.
2008-07-01
The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.
Signatures of bifurcation on quantum correlations: Case of the quantum kicked top
NASA Astrophysics Data System (ADS)
Bhosale, Udaysinh T.; Santhanam, M. S.
2017-01-01
Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.
The contrasting roles of Planck's constant in classical and quantum theories
NASA Astrophysics Data System (ADS)
Boyer, Timothy H.
2018-04-01
We trace the historical appearance of Planck's constant in physics, and we note that initially the constant did not appear in connection with quanta. Furthermore, we emphasize that Planck's constant can appear in both classical and quantum theories. In both theories, Planck's constant sets the scale of atomic phenomena. However, the roles played in the foundations of the theories are sharply different. In quantum theory, Planck's constant is crucial to the structure of the theory. On the other hand, in classical electrodynamics, Planck's constant is optional, since it appears only as the scale factor for the (homogeneous) source-free contribution to the general solution of Maxwell's equations. Since classical electrodynamics can be solved while taking the homogeneous source-free contribution in the solution as zero or non-zero, there are naturally two different theories of classical electrodynamics, one in which Planck's constant is taken as zero and one where it is taken as non-zero. The textbooks of classical electromagnetism present only the version in which Planck's constant is taken to vanish.
Surface Impact Simulations of Helium Nanodroplets
2015-06-30
…mechanical delocalization of the individual helium atoms in the droplet and the quantum statistical effects that accompany the interchange of identical … incorporates the effects of atomic delocalization by treating individual atoms as smeared-out probability distributions that move along classical … probability density distributions to give effective interatomic potential energy curves that have zero-point averaging effects built into them [25].
Cresti, Alessandro; Ortmann, Frank; Louvet, Thibaud; Van Tuan, Dinh; Roche, Stephan
2013-05-10
The role of defect-induced zero-energy modes on charge transport in graphene is investigated using Kubo and Landauer transport calculations. By tuning the density of random distributions of monovacancies either equally populating the two sublattices or exclusively located on a single sublattice, all conduction regimes are covered from direct tunneling through evanescent modes to mesoscopic transport in bulk disordered graphene. Depending on the transport measurement geometry, defect density, and broken sublattice symmetry, the Dirac-point conductivity is either exceptionally robust against disorder (supermetallic state) or suppressed through a gap opening or by algebraic localization of zero-energy modes, whereas weak localization and the Anderson insulating regime are obtained for higher energies. These findings clarify the contribution of zero-energy modes to transport at the Dirac point, hitherto controversial.
Khatua, Pradip; Bansal, Bhavtosh; Shahar, Dan
2014-01-10
In a "thought experiment," now a classic in physics pedagogy, Feynman visualizes Young's double-slit interference experiment with electrons in magnetic field. He shows that the addition of an Aharonov-Bohm phase is equivalent to shifting the zero-field wave interference pattern by an angle expected from the Lorentz force calculation for classical particles. We have performed this experiment with one slit, instead of two, where ballistic electrons within two-dimensional electron gas diffract through a small orifice formed by a quantum point contact (QPC). As the QPC width is comparable to the electron wavelength, the observed intensity profile is further modulated by the transverse waveguide modes present at the injector QPC. Our experiments open the way to realizing diffraction-based ideas in mesoscopic physics.
A random walk approach to quantum algorithms.
Kendon, Vivien M
2006-12-15
The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems.
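As a small illustration of the spreading difference mentioned above (a standard discrete-time Hadamard walk on a line, not tied to any particular algorithm in this review), the quantum walker's position spread grows linearly with the number of steps, whereas the classical spread grows only as its square root:

```python
import numpy as np

def hadamard_walk_spread(steps=100):
    """Discrete-time quantum walk on a line with a Hadamard coin, started at
    the origin in the symmetric coin state (|0> + i|1>)/sqrt(2).
    Returns the standard deviation of the final position distribution."""
    n = 2 * steps + 1                       # positions -steps .. +steps
    psi = np.zeros((n, 2), dtype=complex)   # amplitudes psi[position, coin]
    psi[steps, 0] = 1.0 / np.sqrt(2)
    psi[steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                     # coin toss
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]        # coin 0 steps right
        shifted[:-1, 1] = psi[1:, 1]        # coin 1 steps left
        psi = shifted
    prob = np.sum(np.abs(psi) ** 2, axis=1)
    x = np.arange(-steps, steps + 1)
    mean = np.sum(x * prob)
    return np.sqrt(np.sum((x - mean) ** 2 * prob))

steps = 100
print("quantum spread  ~", hadamard_walk_spread(steps))   # grows ~ steps
print("classical spread =", np.sqrt(steps))               # grows ~ sqrt(steps)
```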
Toward simulating complex systems with quantum effects
NASA Astrophysics Data System (ADS)
Kenion-Hanrath, Rachel Lynn
Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune'' between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 × 2 complex non-Hermitian random matrices.
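The central identity above is easy to check numerically. The sketch below (an illustrative check, not the authors' derivation) samples a homogeneous 2D Poisson process in a periodic box, rescales the nearest-neighbour distances to unit mean, and compares the second moment with the GOE Wigner surmise p(s) = (π/2) s exp(-π s²/4), for which ⟨s²⟩ = 4/π.

import numpy as np

rng = np.random.default_rng(0)
n, L = 1500, 35.0                      # ~1.2 points per unit area in an L x L periodic box
pts = rng.uniform(0, L, size=(n, 2))

diff = pts[:, None, :] - pts[None, :, :]
diff -= L * np.round(diff / L)         # minimum-image convention removes edge bias
dist = np.sqrt((diff ** 2).sum(axis=-1))
np.fill_diagonal(dist, np.inf)
nn = dist.min(axis=1)                  # nearest-neighbour distance of every point

s = nn / nn.mean()                     # rescale to unit mean spacing
print(f"empirical <s^2> = {np.mean(s ** 2):.3f}")
print(f"Wigner surmise 4/pi = {4 / np.pi:.3f}")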
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-07
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, reduces the proportion of the exchange mechanism. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, so an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H_5^+ complex, as is done in classical simulations of unimolecular processes to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix yields a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
NASA Astrophysics Data System (ADS)
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-01
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, reduces the proportion of the exchange mechanism. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, so an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H_5^+ complex, as is done in classical simulations of unimolecular processes to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix yields a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
NASA Astrophysics Data System (ADS)
Kimura, Kenji; Higuchi, Saburo
2017-11-01
We introduce a novel random walk model that emerges in the event-chain Monte Carlo (ECMC) of spin systems. In the ECMC, the lifting variable specifying the spin to be updated changes its value to one of its interacting neighbor spins. This movement can be regarded as a random walk in a random environment with a feedback. We investigate this random walk numerically in the case of the classical XY model in 1, 2, and 3 dimensions to find that it is superdiffusive near the critical point of the underlying spin system. It is suggested that the performance improvement of the ECMC is related to this anomalous behavior.
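The diagnostic behind the word "superdiffusive" above is the growth exponent of the mean-squared displacement. The short sketch below (a generic illustration with an ordinary lattice walk, not the ECMC lifting walk itself) shows how that exponent is estimated; the lifting-variable walk near criticality would return a value larger than 1.

import numpy as np

rng = np.random.default_rng(1)
walkers, steps = 5000, 1000
jumps = rng.choice([-1, 1], size=(walkers, steps))
x = np.cumsum(jumps, axis=1)                    # trajectory of every walker

t = np.arange(1, steps + 1)
msd = (x ** 2).mean(axis=0)                     # mean-squared displacement vs time

# fit log MSD = alpha * log t + const over the late-time window
late = t > steps // 10
alpha = np.polyfit(np.log(t[late]), np.log(msd[late]), 1)[0]
print(f"estimated exponent alpha = {alpha:.2f}  (1 = diffusive, >1 = superdiffusive)")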
Najafi, M N; Nezhadhaghighi, M Ghasemi
2017-03-01
We characterize the carrier density profile of the ground state of graphene in the presence of particle-particle interaction and random charged impurity at zero gate voltage. We provide a detailed analysis of the resulting spatially inhomogeneous electron gas, taking into account the particle-particle interaction and the remote Coulomb disorder on an equal footing within the Thomas-Fermi-Dirac theory. We present some general features of the carrier density probability measure of the graphene sheet. We also show that, when viewed as a random surface, the electron-hole puddles at zero chemical potential show peculiar self-similar statistical properties. Although the disorder potential is chosen to be Gaussian, we show that the charge field is non-Gaussian with unusual Kondev relations, which can be regarded as a new class of two-dimensional random-field surfaces. Using Schramm-Loewner evolution (SLE), we numerically demonstrate that the ungated graphene has conformal invariance and the random zero-charge density contours are SLE_{κ} with κ=1.8±0.2, consistent with c=-3 conformal field theory.
Classical and quantum filaments in the ground state of trapped dipolar Bose gases
NASA Astrophysics Data System (ADS)
Cinti, Fabio; Boninsegni, Massimo
2017-07-01
We study, by quantum Monte Carlo simulations, the ground state of a harmonically confined dipolar Bose gas with aligned dipole moments and with the inclusion of a repulsive two-body potential of varying range. Two different limits can clearly be identified, namely, a classical one in which the attractive part of the dipolar interaction dominates and the system forms an ordered array of parallel filaments and a quantum-mechanical one, wherein filaments are destabilized by zero-point motion, and eventually the ground state becomes a uniform cloud. The physical character of the system smoothly evolves from classical to quantum mechanical as the range of the repulsive two-body potential increases. An intermediate regime is observed in which ordered filaments are still present, albeit forming different structures from the ones predicted classically; quantum-mechanical exchanges of indistinguishable particles across different filaments allow phase coherence to be established, underlying a global superfluid response.
Empirically Calibrated Asteroseismic Masses and Radii for Red Giants in the Kepler Fields
NASA Astrophysics Data System (ADS)
Pinsonneault, Marc; Elsworth, Yvonne; Silva Aguirre, Victor; Chaplin, William J.; Garcia, Rafael A.; Hekker, Saskia; Holtzman, Jon; Huber, Daniel; Johnson, Jennifer; Kallinger, Thomas; Mosser, Benoit; Mathur, Savita; Serenelli, Aldo; Shetrone, Matthew; Stello, Dennis; Tayar, Jamie; Zinn, Joel; APOGEE Team, KASC Team, APOKASC Team
2018-01-01
We report on the joint asteroseismic and spectroscopic properties of a sample of 6048 evolved stars in the fields originally observed by the Kepler satellite. We use APOGEE spectroscopic data taken from Data Release 13 of the Sloan Digital Sky Survey, combined with asteroseismic data analyzed by members of the Kepler Asteroseismic Science Consortium. With high statistical significance, the relative zero points of the different pipelines differ from the solar values, and red clump stars do not have the same empirical relative zero points as red giants. We employ theoretically motivated corrections to the scaling relation for the large frequency spacing, and adjust the zero point of the frequency of maximum power scaling relation to be consistent with masses and radii for members of star clusters. The scatter in calibrator masses is consistent with our error estimation. Systematic and random mass errors are explicitly separated and identified. The measurement scatter and random uncertainties are three times larger for red giants where one or more techniques failed to return a value than for targets where all five methods could do so, and this is a substantial fraction of the sample (20% of red giants and 25% of red clump stars). Overall trends and future prospects are discussed.
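For readers unfamiliar with the scaling relations being calibrated above, the sketch below evaluates the standard asteroseismic relations for mass and radius, with a multiplicative correction factor on the large frequency spacing and an adjustable zero point on nu_max standing in for the calibrations discussed in the abstract. The solar reference values and the red-giant observables are common illustrative numbers, not values taken from the paper.

import numpy as np

NU_MAX_SUN = 3090.0   # muHz, illustrative solar reference
DNU_SUN = 135.1       # muHz, illustrative solar reference
TEFF_SUN = 5772.0     # K

def scaling_mass_radius(nu_max, dnu, teff, f_dnu=1.0, f_numax=1.0):
    """Mass and radius in solar units from the seismic scaling relations."""
    x_nu = nu_max / (f_numax * NU_MAX_SUN)
    x_dnu = dnu / (f_dnu * DNU_SUN)
    x_t = teff / TEFF_SUN
    radius = x_nu * x_dnu ** -2 * np.sqrt(x_t)
    mass = x_nu ** 3 * x_dnu ** -4 * x_t ** 1.5
    return mass, radius

# hypothetical red-giant observables with a 1% Delta-nu correction applied
m, r = scaling_mass_radius(nu_max=30.0, dnu=4.0, teff=4800.0, f_dnu=0.99)
print(f"M = {m:.2f} Msun, R = {r:.2f} Rsun")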
Indeterminism in Classical Dynamics of Particle Motion
NASA Astrophysics Data System (ADS)
Eyink, Gregory; Vishniac, Ethan; Lalescu, Cristian; Aluie, Hussein; Kanov, Kalin; Burns, Randal; Meneveau, Charles; Szalay, Alex
2013-03-01
We show that "God plays dice" not only in quantum mechanics but also in the classical dynamics of particles advected by turbulent fluids. With a fixed deterministic flow velocity and an exactly known initial position, the particle motion is nevertheless completely unpredictable! In analogy with spontaneous magnetization in ferromagnets, which persists as the external field is taken to zero, the particle trajectories in turbulent flow remain random as external noise vanishes. The necessary ingredient is a rough advecting field with a power-law energy spectrum extending to smaller scales as noise is taken to zero. The physical mechanism of "spontaneous stochasticity" is the explosive dispersion of particle pairs proposed by L. F. Richardson in 1926, so the phenomenon should be observable in laboratory and natural turbulent flows. We present here the first empirical corroboration of these effects in high Reynolds-number numerical simulations of hydrodynamic and magnetohydrodynamic fluid turbulence. Since power-law spectra are seen in many other systems in condensed matter, geophysics and astrophysics, the phenomenon should occur rather widely. Fast reconnection in solar flares and other astrophysical systems can be explained by spontaneous stochasticity of magnetic field-line motion.
Shear modulus of neutron star crust
NASA Astrophysics Data System (ADS)
Baiko, D. A.
2011-09-01
The shear modulus of solid neutron star crust is calculated by the thermodynamic perturbation theory, taking into account ion motion. At a given density, the crust is modelled as a body-centred cubic Coulomb crystal of fully ionized atomic nuclei of one type with a uniform charge-compensating electron background. Classic and quantum regimes of ion motion are considered. The calculations in the classic temperature range agree well with previous Monte Carlo simulations. At these temperatures, the shear modulus is given by the sum of a positive contribution due to the static lattice and a negative ∝ T contribution due to the ion motion. The quantum calculations are performed for the first time. The main result is that at low temperatures the contribution to the shear modulus due to the ion motion saturates at a constant value, associated with zero-point ion vibrations. Such behaviour is qualitatively similar to the zero-point ion motion contribution to the crystal energy. The quantum effects may be important for lighter elements at higher densities, where the ion plasma temperature is not entirely negligible compared to the typical Coulomb ion interaction energy. The results of numerical calculations are approximated by convenient fitting formulae. They should be used for precise neutron star oscillation modelling, a rapidly developing branch of stellar seismology.
Czakó, Gábor; Kaledin, Alexey L; Bowman, Joel M
2010-04-28
We report the implementation of a previously suggested method to constrain a molecular system to have mode-specific vibrational energy greater than or equal to the zero-point energy in quasiclassical trajectory calculations [J. M. Bowman et al., J. Chem. Phys. 91, 2859 (1989); W. H. Miller et al., J. Chem. Phys. 91, 2863 (1989)]. The implementation is made practical by using a technique described recently [G. Czako and J. M. Bowman, J. Chem. Phys. 131, 244302 (2009)], where a normal-mode analysis is performed during the course of a trajectory and which gives only real-valued frequencies. The method is applied to the water dimer, where its effectiveness is shown by computing mode energies as a function of integration time. Radial distribution functions are also calculated using constrained quasiclassical and standard classical molecular dynamics at low temperature and at 300 K and compared to rigorous quantum path integral calculations.
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the efficient markets hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests for financial data, which typically exhibit remote data points and additional types of deviations from normality. This study also discusses results of simulation power studies of these tests against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
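The "zero breakdown value" point above is easy to demonstrate: the sketch below (illustrative, using synthetic Gaussian returns rather than the paper's data) shows how a single remote data point moves the Jarque-Bera statistic from accepting normality to an extreme rejection.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=500)        # synthetic "daily returns", exactly Gaussian

jb_stat, jb_p = stats.jarque_bera(returns)
contaminated = np.append(returns, 0.15)          # add one remote data point
jb_stat_out, jb_p_out = stats.jarque_bera(contaminated)

print(f"clean sample:     JB = {jb_stat:8.2f}, p = {jb_p:.3f}")
print(f"with one outlier: JB = {jb_stat_out:8.2f}, p = {jb_p_out:.3g}")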
Periodic orbit spectrum in terms of Ruelle-Pollicott resonances
NASA Astrophysics Data System (ADS)
Leboeuf, P.
2004-02-01
Fully chaotic Hamiltonian systems possess an infinite number of classical solutions which are periodic, e.g., a trajectory “p” returns to its initial conditions after some fixed time τp. Our aim is to investigate the spectrum {τ1,τ2,…} of periods of the periodic orbits. An explicit formula for the density ρ(τ)=∑pδ(τ-τp) is derived in terms of the eigenvalues of the classical evolution operator. The density is naturally decomposed into a smooth part plus an interferent sum over oscillatory terms. The frequencies of the oscillatory terms are given by the imaginary part of the complex eigenvalues (Ruelle-Pollicott resonances). For large periods, corrections to the well-known exponential growth of the smooth part of the density are obtained. An alternative formula for ρ(τ) in terms of the zeros and poles of the Ruelle ζ function is also discussed. The results are illustrated with the geodesic motion in billiards of constant negative curvature. Connections with the statistical properties of the corresponding quantum eigenvalues, random-matrix theory, and discrete maps are also considered. In particular, a random-matrix conjecture is proposed for the eigenvalues of the classical evolution operator of chaotic billiards.
Random Walk Quantum Clustering Algorithm Based on Space
NASA Astrophysics Data System (ADS)
Xiao, Shufen; Dong, Yumin; Ma, Hongyang
2018-01-01
In the random quantum walk, which is a quantum simulation of the classical walk, data points interact when selecting the appropriate walk strategy by taking advantage of quantum-entanglement features; thus, the results obtained when the quantum walk is used are different from those when the classical walk is adopted. A new quantum walk clustering algorithm based on space is proposed by applying the quantum walk to clustering analysis. In this algorithm, data points are viewed as walking participants, and similar data points are clustered using the walk function in the pay-off matrix according to a certain rule. The walk process is simplified by implementing a space-combining rule. The proposed algorithm is validated by a simulation test and shown to be superior to existing clustering algorithms, namely, Kmeans, PCA + Kmeans, and LDA-Km. The effects of some of the parameters in the proposed algorithm on its performance are also analyzed and discussed. Specific suggestions are provided.
Wilson, Lorna R M; Hopcraft, Keith I
2017-12-01
The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
NASA Astrophysics Data System (ADS)
Wilson, Lorna R. M.; Hopcraft, Keith I.
2017-12-01
The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
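A minimal way to reproduce the setting described above is to synthesize a Gaussian process whose autocorrelation carries a periodic component and then tabulate its zero-crossing intervals. The sketch below does this spectrally; the correlation model exp(-τ²/2)cos(ω₀τ) and all parameter values are illustrative assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(3)
n, dt, omega0 = 2 ** 18, 0.05, 2.0

omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
# power spectrum of R(tau) = exp(-tau^2/2) cos(omega0*tau): two Gaussians at +/- omega0
spec = np.exp(-(omega - omega0) ** 2 / 2) + np.exp(-(omega + omega0) ** 2 / 2)
noise = rng.normal(size=omega.size) + 1j * rng.normal(size=omega.size)
x = np.fft.irfft(np.sqrt(spec) * noise, n)          # stationary, zero-mean realization

crossings = np.nonzero(np.diff(np.signbit(x)))[0]   # samples just before each sign change
intervals = np.diff(crossings) * dt

print(f"{intervals.size} zero-crossing intervals")
print(f"mean interval = {intervals.mean():.3f}, variance = {intervals.var():.4f}")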
A multi-assets artificial stock market with zero-intelligence traders
NASA Astrophysics Data System (ADS)
Ponta, L.; Raberto, M.; Cincotti, S.
2011-01-01
In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distributions of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends as the only consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-assets stock markets.
Antigravity and the big crunch/big bang transition
NASA Astrophysics Data System (ADS)
Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil
2012-08-01
We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.
Precise Determination of the Zero-Gravity Surface Figure of a Mirror without Gravity-Sag Modeling
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.; Lam, Jonathan C.; Feria, V. Alfonso; Chang, Zensheu
2007-01-01
The zero-gravity surface figure of optics used in spaceborne astronomical instruments must be known to high accuracy, but earthbound metrology is typically corrupted by gravity sag. Generally, inference of the zero-gravity surface figure from a measurement made under normal gravity requires finite-element analysis (FEA), and for accurate results the mount forces must be well characterized. We describe how to infer the zero-gravity surface figure very precisely using the alternative classical technique of averaging pairs of measurements made with the direction of gravity reversed. We show that mount forces as well as gravity must be reversed between the two measurements and discuss how the St. Venant principle determines when a reversed mount force may be considered to be applied at the same place in the two orientations. Our approach requires no finite-element modeling and no detailed knowledge of mount forces other than the fact that they reverse and are applied at the same point in each orientation. If mount schemes are suitably chosen, zero-gravity optical surfaces may be inferred much more simply and more accurately than with FEA.
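The pair-averaging idea above reduces to one line of arithmetic once the two maps are in hand. The toy sketch below (synthetic surface maps and a parabolic sag term, purely illustrative) shows the cancellation: with gravity and mount forces exactly reversed, the sag contributes with opposite signs and the average recovers the zero-gravity figure.

import numpy as np

rng = np.random.default_rng(0)
n = 64
true_figure = rng.normal(0.0, 5e-9, size=(n, n))            # zero-gravity surface map [m]
gravity_sag = 2e-7 * np.linspace(-1, 1, n)[:, None] ** 2    # deterministic sag term [m]

meas_face_up = true_figure + gravity_sag      # first orientation
meas_face_down = true_figure - gravity_sag    # gravity (and mount forces) reversed

recovered = 0.5 * (meas_face_up + meas_face_down)
print(f"max residual vs zero-gravity figure: {np.abs(recovered - true_figure).max():.2e} m")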
NASA Astrophysics Data System (ADS)
Fujitani, Y.; Sumino, Y.
2018-04-01
A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute contributions missing from previous studies for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, which incorporate the one-loop corrections between zero external momenta and their physical values. The discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.
A random wave model for the Aharonov-Bohm effect
NASA Astrophysics Data System (ADS)
Houston, Alexander J. H.; Gradhand, Martin; Dennis, Mark R.
2017-05-01
We study an ensemble of random waves subject to the Aharonov-Bohm effect. The introduction of a point with a magnetic flux of arbitrary strength into a random wave ensemble gives a family of wavefunctions whose distribution of vortices (complex zeros) is responsible for the topological phase associated with the Aharonov-Bohm effect. Analytical expressions are found for the vortex number and topological charge densities as functions of distance from the flux point. Comparison is made with the distribution of vortices in the isotropic random wave model. The results indicate that as the flux approaches half-integer values, a vortex with the same sign as the fractional part of the flux is attracted to the flux point, merging with it in the limit of half-integer flux. We construct a statistical model of the neighbourhood of the flux point to study how this vortex-flux merger occurs in more detail. Other features of the Aharonov-Bohm vortex distribution are also explored.
Thermodynamics in the vicinity of a relativistic quantum critical point in 2+1 dimensions.
Rançon, A; Kodio, O; Dupuis, N; Lecheminant, P
2013-07-01
We study the thermodynamics of the relativistic quantum O(N) model in two space dimensions. In the vicinity of the zero-temperature quantum critical point (QCP), the pressure can be written in the scaling form P(T) = P(0) + N (T^3/c^2) F_N(Δ/T), where c is the velocity of the excitations at the QCP and |Δ| a characteristic zero-temperature energy scale. Using both a large-N approach to leading order and the nonperturbative renormalization group, we compute the universal scaling function F_N. For small values of N (N ≲ 10) we find that F_N(x) is nonmonotonic in the quantum critical regime (|x| ≲ 1) with a maximum near x = 0. The large-N approach, if properly interpreted, is a good approximation both in the renormalized classical (x ≲ -1) and quantum disordered (x ≳ 1) regimes, but fails to describe the nonmonotonic behavior of F_N in the quantum critical regime. We discuss the renormalization-group flows in the various regimes near the QCP and make the connection with the quantum nonlinear sigma model in the renormalized classical regime. We compute the Berezinskii-Kosterlitz-Thouless transition temperature T_BKT in the quantum O(2) model and find that in the vicinity of the QCP the universal ratio T_BKT/ρ_s(0) is very close to π/2, implying that the stiffness ρ_s(T_BKT^-) at the transition is only slightly reduced with respect to the zero-temperature stiffness ρ_s(0). Finally, we briefly discuss the experimental determination of the universal function F_2 from the pressure of a Bose gas in an optical lattice near the superfluid-Mott-insulator transition.
Wang, Xiaohong; Bowman, Joel M
2013-02-12
We calculate the probabilities for the association reactions H+HCN→H2CN* and cis/trans-HCNH*, using quasiclassical trajectory (QCT) and classical trajectory (CT) calculations, on a new global ab initio potential energy surface (PES) for H2CN including the reaction channels. The surface is a linear least-squares fit of roughly 60 000 CCSD(T)-F12b/aug-cc-pVDZ electronic energies, using a permutationally invariant basis with Morse-type variables. The reaction probabilities are obtained at a variety of collision energies and impact parameters. Large differences in the threshold energies in the two types of dynamics calculations are traced to the absence of zero-point energy in the CT calculations. We argue that the QCT threshold energy is the realistic one. In addition, trajectories find a direct pathway to trans-HCNH, even though there is no obvious transition state (TS) for this pathway. Instead the saddle point (SP) for the addition to cis-HCNH is evidently also the TS for direct formation of trans-HCNH.
Quantum chaos for nonstandard symmetry classes in the Feingold-Peres model of coupled tops
NASA Astrophysics Data System (ADS)
Fan, Yiyun; Gnutzmann, Sven; Liang, Yuqi
2017-12-01
We consider two coupled quantum tops with angular momentum vectors L and M. The coupling Hamiltonian defines the Feingold-Peres model, which is a known paradigm of quantum chaos. We show that this model has a nonstandard symmetry with respect to the Altland-Zirnbauer tenfold symmetry classification of quantum systems, which extends the well-known threefold way of Wigner and Dyson (referred to as "standard" symmetry classes here). We identify the nonstandard symmetry classes BDI0 (chiral orthogonal class with no zero modes), BDI1 (chiral orthogonal class with one zero mode), and CI (antichiral orthogonal class) as well as the standard symmetry class AI (orthogonal class). We numerically analyze the specific spectral quantum signatures of chaos related to the nonstandard symmetries. In the microscopic density of states and in the distribution of the lowest positive energy eigenvalue, we show that the Feingold-Peres model follows the predictions of the Gaussian ensembles of random-matrix theory in the appropriate symmetry class if the corresponding classical dynamics is chaotic. In a crossover to mixed and near-integrable classical dynamics, we show that these signatures disappear or strongly change.
Quantum chaos for nonstandard symmetry classes in the Feingold-Peres model of coupled tops.
Fan, Yiyun; Gnutzmann, Sven; Liang, Yuqi
2017-12-01
We consider two coupled quantum tops with angular momentum vectors L and M. The coupling Hamiltonian defines the Feingold-Peres model, which is a known paradigm of quantum chaos. We show that this model has a nonstandard symmetry with respect to the Altland-Zirnbauer tenfold symmetry classification of quantum systems, which extends the well-known threefold way of Wigner and Dyson (referred to as "standard" symmetry classes here). We identify the nonstandard symmetry classes BDI_{0} (chiral orthogonal class with no zero modes), BDI_{1} (chiral orthogonal class with one zero mode), and CI (antichiral orthogonal class) as well as the standard symmetry class AI (orthogonal class). We numerically analyze the specific spectral quantum signatures of chaos related to the nonstandard symmetries. In the microscopic density of states and in the distribution of the lowest positive energy eigenvalue, we show that the Feingold-Peres model follows the predictions of the Gaussian ensembles of random-matrix theory in the appropriate symmetry class if the corresponding classical dynamics is chaotic. In a crossover to mixed and near-integrable classical dynamics, we show that these signatures disappear or strongly change.
Physics of singularities in pressure-impulse theory
NASA Astrophysics Data System (ADS)
Krechetnikov, R.
2018-05-01
The classical solution in the pressure-impulse theory for the inviscid, incompressible, and zero-surface-tension water impact of a flat plate at zero dead-rise angle exhibits both a singular-in-time initial fluid acceleration, ∂v/∂t|_{t=0} ~ δ(t), and a near-plate-edge spatial singularity in the velocity distribution, v ~ r^{-1/2}, where r is the distance from the plate edge. The latter velocity divergence also leads to the interface being stretched infinitely right after the impact, which is another nonphysical artifact. From the point of view of matched asymptotic analysis, this classical solution is a singular limit when three physical quantities achieve limiting values: sound speed c_0 → ∞, fluid kinematic viscosity ν → 0, and surface tension σ → 0. This leaves open the question of how to resolve these singularities mathematically by including the neglected physical effects (compressibility, viscosity, and surface tension), first one by one and then culminating in the local compressible viscous solution valid for t → 0 and r → 0, demonstrating a nontrivial flow structure that changes with the degree of the bulk compressibility. In the course of this study, by starting with the general physically relevant formulation of compressible viscous flow, we clarify the parameter range(s) of validity of the key analytical solutions including classical ones (inviscid incompressible and compressible, etc.) and understand the solution structure, its intermediate asymptotics nature, characteristics influencing physical processes, and the role of potential and rotational flow components. In particular, it is pointed out that sufficiently close to the plate edge surface tension must be taken into account. Overall, the idea is to highlight the interesting physics behind the singularities in the pressure-impulse theory.
Entangled trajectories Hamiltonian dynamics for treating quantum nuclear effects
NASA Astrophysics Data System (ADS)
Smith, Brendan; Akimov, Alexey V.
2018-04-01
A simple and robust methodology, dubbed Entangled Trajectories Hamiltonian Dynamics (ETHD), is developed to capture quantum nuclear effects such as tunneling and zero-point energy through the coupling of multiple classical trajectories. The approach reformulates the classically mapped second-order Quantized Hamiltonian Dynamics (QHD-2) in terms of coupled classical trajectories. The method partially enforces the uncertainty principle and facilitates tunneling. The applicability of the method is demonstrated by studying the dynamics in symmetric double well and cubic metastable state potentials. The methodology is validated using exact quantum simulations and is compared to QHD-2. We illustrate its relationship to the rigorous Bohmian quantum potential approach, from which ETHD can be derived. Our simulations show a remarkable agreement of the ETHD calculation with the quantum results, suggesting that ETHD may be a simple and inexpensive way of including quantum nuclear effects in molecular dynamics simulations.
NASA Technical Reports Server (NTRS)
Lindh, Roland; Rice, Julia E.; Lee, Timothy J.
1991-01-01
The energy separation between the classical and nonclassical forms of protonated acetylene has been reinvestigated in light of the recent experimentally deduced lower bound to this value of 6.0 kcal/mol. The objective of the present study is to use state-of-the-art ab initio quantum mechanical methods to establish this energy difference to within chemical accuracy (i.e., about 1 kcal/mol). The one-particle basis sets include up to g-type functions and the electron correlation methods include single and double excitation coupled-cluster (CCSD), the CCSD(T) extension, multireference configuration interaction, and the averaged coupled-pair functional methods. A correction for zero-point vibrational energies has also been included, yielding a best estimate for the energy difference between the classical and nonclassical forms of 3.7 ± 1.3 kcal/mol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunakov, V. E.; Kadmensky, S. G., E-mail: kadmensky@phys.vsu.ru; Lyubashevsky, D. E.
2016-05-15
It is shown that A. Bohr's classic theory of angular distributions of fragments originating from low-energy fission should be supplemented with quantum corrections based on the involvement of a superposition of a very large number of angular momenta L_m in the description of the relative motion of fragments flying apart along the straight line coincident with the symmetry axis. It is revealed that quantum zero-point wriggling-type vibrations of the fissile system in the vicinity of its scission point are a source of these angular momenta and of high fragment spins observed experimentally.
Qu, Chen; Bowman, Joel M
2016-07-14
Semiclassical quantization of vibrational energies, using adiabatic switching (AS), is applied to CH4 using a recent ab initio potential energy surface, for which exact quantum calculations of vibrational energies are available. Details of the present calculations, which employ a harmonic normal-mode zeroth-order Hamiltonian, emphasize the importance of transforming to the Eckart frame during the propagation of the adiabatically switched Hamiltonian. The AS energies for the zero-point level and for the fundamental excitations of two modes are in good agreement with the quantum ones. The use of AS in the context of quasi-classical trajectory calculations is revisited, following previous work reported in 1995, which did not recommend the procedure. We come to a different conclusion here.
Quantum propagation in single mode fiber
NASA Technical Reports Server (NTRS)
Joneckis, Lance G.; Shapiro, Jeffrey H.
1994-01-01
This paper presents a theory for quantum light propagation in a single-mode fiber which includes the effects of the Kerr nonlinearity, group-velocity dispersion, and linear loss. The theory reproduces the results of classical self-phase modulation, quantum four-wave mixing, and classical soliton physics, within their respective regions of validity. It demonstrates the crucial role played by the Kerr-effect material time constant in limiting the quantum phase shifts caused by the broadband zero-point fluctuations that accompany any quantized input field. Operator moment equations - approximated, numerically, via a terminated cumulant expansion - are used to obtain results for homodyne-measurement noise spectra when dispersion is negligible. More complicated forms of these equations can be used to incorporate dispersion into the noise calculations.
Effect of chiral symmetry on chaotic scattering from Majorana zero modes.
Schomerus, H; Marciani, M; Beenakker, C W J
2015-04-24
In many of the experimental systems that may host Majorana zero modes, a so-called chiral symmetry exists that protects overlapping zero modes from splitting up. This symmetry is operative in a superconducting nanowire that is narrower than the spin-orbit scattering length, and at the Dirac point of a superconductor-topological insulator heterostructure. Here we show that chiral symmetry strongly modifies the dynamical and spectral properties of a chaotic scatterer, even if it binds only a single zero mode. These properties are quantified by the Wigner-Smith time-delay matrix Q=-iℏS^{†}dS/dE, the Hermitian energy derivative of the scattering matrix, related to the density of states by ρ=(2πℏ)^{-1}TrQ. We compute the probability distribution of Q and ρ, dependent on the number ν of Majorana zero modes, in the chiral ensembles of random-matrix theory. Chiral symmetry is essential for a significant ν dependence.
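The defining relation Q = -iħ S†dS/dE and ρ = (2πħ)⁻¹ Tr Q can be checked on the simplest possible scatterer. The sketch below (a single-channel Breit-Wigner resonance with ħ = 1, not the chiral random-matrix ensembles of the paper) evaluates Q by finite differences and confirms that Tr Q/2π reproduces the Lorentzian density of states.

import numpy as np

E0, Gamma, dE = 0.0, 0.2, 1e-6

def S(E):
    # single-channel Breit-Wigner scattering amplitude
    return (E - E0 - 1j * Gamma / 2) / (E - E0 + 1j * Gamma / 2)

def time_delay(E):
    dS = (S(E + dE) - S(E - dE)) / (2 * dE)         # finite-difference dS/dE
    return (-1j * np.conj(S(E)) * dS).real          # Q reduces to a real number for one channel

for E in (0.0, 0.1, 0.5):
    rho = time_delay(E) / (2 * np.pi)
    lorentzian = (Gamma / (2 * np.pi)) / ((E - E0) ** 2 + Gamma ** 2 / 4)
    print(f"E = {E:4.1f}:  Tr Q / 2pi = {rho:.4f},  Lorentzian DOS = {lorentzian:.4f}")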
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
Generation of a non-zero discord bipartite state with classical second-order interference.
Choi, Yujun; Hong, Kang-Hee; Lim, Hyang-Tag; Yune, Jiwon; Kwon, Osung; Han, Sang-Wook; Oh, Kyunghwan; Kim, Yoon-Ho; Kim, Yong-Su; Moon, Sung
2017-02-06
We report an investigation of quantum discord in classical second-order interference. In particular, we theoretically show that a bipartite state with discord D = 0.311 can be generated via classical second-order interference. We also experimentally verify the theory by obtaining a non-zero-discord state with D = 0.197 ± 0.060. Together with the fact that the nonclassicalities originating from physical constraints and from information-theoretic perspectives are not equivalent, this result provides insight into the nature of quantum discord.
Jin, Dafei; Lu, Ling; Wang, Zhong; Fang, Chen; Joannopoulos, John D.; Soljačić, Marin; Fu, Liang; Fang, Nicholas X.
2016-01-01
Classical wave fields are real-valued, ensuring the wave states at opposite frequencies and momenta to be inherently identical. Such a particle–hole symmetry can open up new possibilities for topological phenomena in classical systems. Here we show that the historically studied two-dimensional (2D) magnetoplasmon, which bears gapped bulk states and gapless one-way edge states near-zero frequency, is topologically analogous to the 2D topological p+ip superconductor with chiral Majorana edge states and zero modes. We further predict a new type of one-way edge magnetoplasmon at the interface of opposite magnetic domains, and demonstrate the existence of zero-frequency modes bounded at the peripheries of a hollow disk. These findings can be readily verified in experiment, and can greatly enrich the topological phases in bosonic and classical systems. PMID:27892453
Jin, Dafei; Lu, Ling; Wang, Zhong; ...
2016-11-28
Classical wave fields are real-valued, ensuring the wave states at opposite frequencies and momenta to be inherently identical. Such a particle–hole symmetry can open up new possibilities for topological phenomena in classical systems. Here we show that the historically studied two-dimensional (2D) magnetoplasmon, which bears gapped bulk states and gapless one-way edge states near-zero frequency, is topologically analogous to the 2D topological p+ip superconductor with chiral Majorana edge states and zero modes. We further predict a new type of one-way edge magnetoplasmon at the interface of opposite magnetic domains, and demonstrate the existence of zero-frequency modes bounded at the peripheries of a hollow disk. Finally, these findings can be readily verified in experiment, and can greatly enrich the topological phases in bosonic and classical systems.
Free Fermions and the Classical Compact Groups
NASA Astrophysics Data System (ADS)
Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil
2018-06-01
There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.
2018-05-01
The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.
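For orientation, the following bare-bones Metropolis sketch sets up the model class discussed above: a 2D ±J nearest-neighbour spin glass with a ferromagnetic next-nearest-neighbour coupling of relative strength lam. It is a single-temperature toy (the lattice size, coupling ratio and temperature are illustrative), not the large-scale parallel-tempering simulations of the paper.

import numpy as np

rng = np.random.default_rng(7)
L, lam, T, sweeps = 16, 0.5, 1.0, 500

spins = rng.choice([-1, 1], size=(L, L))
Jx = rng.choice([-1, 1], size=(L, L))      # random bond to the right neighbour
Jy = rng.choice([-1, 1], size=(L, L))      # random bond to the lower neighbour

def local_field(s, i, j):
    """Field on spin (i, j): random NN bonds plus ferromagnetic NNN (diagonal) bonds."""
    up, down = (i - 1) % L, (i + 1) % L
    left, right = (j - 1) % L, (j + 1) % L
    h = (Jx[i, j] * s[i, right] + Jx[i, left] * s[i, left]
         + Jy[i, j] * s[down, j] + Jy[up, j] * s[up, j])
    h += lam * (s[up, left] + s[up, right] + s[down, left] + s[down, right])
    return h

for _ in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        dE = 2.0 * spins[i, j] * local_field(spins, i, j)   # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

energy = -0.5 * np.mean([spins[i, j] * local_field(spins, i, j)
                         for i in range(L) for j in range(L)])
print(f"energy per spin after {sweeps} sweeps: {energy:.3f}")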
Quantum power source: putting in order of a Brownian motion without Maxwell's demon
NASA Astrophysics Data System (ADS)
Aristov, Vitaly V.; Nikulov, A. V.
2003-07-01
The problem of a possible violation of the second law of thermodynamics is discussed. It is noted that the task of the well-known challenge to the second law called Maxwell's demon is to put in order a chaotic perpetual motion, and if any ordered Brownian motion exists then the second law can be broken without this hypothetical intelligent entity. The postulate of absolute randomness of any Brownian motion saved the second law at the beginning of the 20th century when it was realized as perpetual motion. This postulate can be proven within the limits of classical mechanics but is not correct according to quantum mechanics. Moreover, some well-known quantum phenomena, such as the persistent current at non-zero resistance, are experimental evidence of non-chaotic Brownian motion with non-zero average velocity. An experimental observation of a dc quantum power source is interpreted as evidence of violation of the second law.
Schinke, Reinhard; Fleurat-Lessard, Paul
2005-03-01
The effect of zero-point energy differences (ΔZPE) between the possible fragmentation channels of highly excited O3 complexes on the isotope dependence of the formation of ozone is investigated by means of classical trajectory calculations and a strong-collision model. ΔZPE is incorporated in the calculations in a phenomenological way by adjusting the potential energy surface in the product channels so that the correct exothermicities and endothermicities are matched. The model contains two parameters, the frequency of stabilizing collisions ω and an energy-dependent parameter Δ_damp, which favors the lower energies in the Maxwell-Boltzmann distribution. The stabilization frequency is used to adjust the pressure dependence of the absolute formation rate while Δ_damp is utilized to control its isotope dependence. The calculations for several isotope combinations of oxygen atoms show a clear dependence of relative formation rates on ΔZPE. The results are similar to those of Gao and Marcus [J. Chem. Phys. 116, 137 (2002)] obtained within a statistical model. In particular, like in the statistical approach an ad hoc parameter η ≈ 1.14, which effectively reduces the formation rates of the symmetric ABA ozone molecules, has to be introduced in order to obtain good agreement with the measured relative rates of Janssen et al. [Phys. Chem. Chem. Phys. 3, 4718 (2001)]. The temperature dependence of the recombination rate is also addressed.
Quantum Hall Effect near the Charge Neutrality Point in a Two-Dimensional Electron-Hole System
NASA Astrophysics Data System (ADS)
Gusev, G. M.; Olshanetsky, E. B.; Kvon, Z. D.; Mikhailov, N. N.; Dvoretsky, S. A.; Portal, J. C.
2010-04-01
We study the transport properties of HgTe-based quantum wells containing simultaneously electrons and holes in a magnetic field B. At the charge neutrality point (CNP) with nearly equal electron and hole densities, the resistance is found to increase very strongly with B while the Hall resistivity turns to zero. This behavior results in a wide plateau in the Hall conductivity σxy≈0 and in a minimum of diagonal conductivity σxx at ν=νp-νn=0, where νn and νp are the electron and hole Landau level filling factors. We suggest that the transport at the CNP point is determined by electron-hole “snake states” propagating along the ν=0 lines. Our observations are qualitatively similar to the quantum Hall effect in graphene as well as to the transport in a random magnetic field with a zero mean value.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
The loading factor at the design point is calculated by one or another empirical formula in classical design methods. Performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function - the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
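As a concrete illustration of the two-parameter description above, the sketch below builds the linear loading-factor performance from the value at zero flow rate and the inclination angle to the ordinate axis, and evaluates it at a design flow coefficient. All numbers, and the geometric convention converting the angle into a slope, are illustrative assumptions rather than the method's calibrated coefficients.

import numpy as np

psi_zero_flow = 0.72   # loading factor at zero flow rate (illustrative)
angle_deg = 62.0       # inclination of the line to the ordinate (psi) axis (illustrative)

# With the flow coefficient phi_2 on the abscissa and psi_T on the ordinate,
# a line inclined by angle_deg to the ordinate has slope -1/tan(angle_deg) (assumed convention).
slope = -1.0 / np.tan(np.radians(angle_deg))

def loading_factor(phi2):
    return psi_zero_flow + slope * phi2

phi_design = 0.30
print(f"loading factor at the design point: {loading_factor(phi_design):.3f}")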
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking.
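For readers new to zero-inflated count models, the sketch below simulates zero-inflated Poisson data and recovers the inflation probability and the Poisson mean by maximum likelihood. It is a deliberately minimal frequentist illustration (all parameter values are invented), not the full Bayes hierarchical random effects models compared in the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(0)
n, pi_true, lam_true = 5000, 0.30, 2.5
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def neg_loglik(params):
    # unconstrained parametrization: pi = expit(a), lambda = exp(b)
    pi, lam = expit(params[0]), np.exp(params[1])
    logp_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))                 # P(y = 0)
    logp_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)  # P(y = k), k > 0
    return -np.where(y == 0, logp_zero, logp_pos).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"pi = {pi_hat:.3f} (true {pi_true}), lambda = {lam_hat:.3f} (true {lam_true})")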
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
Renormalized Energy Concentration in Random Matrices
NASA Astrophysics Data System (ADS)
Borodin, Alexei; Serfaty, Sylvia
2013-05-01
We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line ( β = 1,2,4), and Ginibre point process and zeros of Gaussian analytic functions process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.
NASA Astrophysics Data System (ADS)
Pikulin, D. I.; Franz, M.
2017-07-01
A system of Majorana zero modes with random infinite-range interactions—the Sachdev-Ye-Kitaev (SYK) model—is thought to exhibit an intriguing relation to the horizons of extremal black holes in two-dimensional anti-de Sitter space. This connection provides a rare example of holographic duality between a solvable quantum-mechanical model and dilaton gravity. Here, we propose a physical realization of the SYK model in a solid-state system. The proposed setup employs the Fu-Kane superconductor realized at the interface between a three-dimensional topological insulator and an ordinary superconductor. The requisite N Majorana zero modes are bound to a nanoscale hole fabricated in the superconductor that is threaded by N quanta of magnetic flux. We show that when the system is tuned to the surface neutrality point (i.e., chemical potential coincident with the Dirac point of the topological insulator surface state) and the hole has sufficiently irregular shape, the Majorana zero modes are described by the SYK Hamiltonian. We perform extensive numerical simulations to demonstrate that the system indeed exhibits physical properties expected of the SYK model, including thermodynamic quantities and two-point as well as four-point correlators, and discuss ways in which these can be observed experimentally.
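A small-N numerical version of the Hamiltonian described above can be put together in a few lines: Majorana operators built by a Jordan-Wigner construction and Gaussian all-to-all four-body couplings. The sketch below (an exact-diagonalization toy with N = 8 and a common variance convention, not a model of the proposed Fu-Kane device) produces the many-body spectrum.

import itertools
import numpy as np

N, J = 8, 1.0                      # number of Majorana modes (N/2 qubits), coupling scale
rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

K = N // 2
gammas = []
for j in range(K):                 # gamma_{2j}, gamma_{2j+1} live on qubit j (Jordan-Wigner)
    prefix, suffix = [Z] * j, [I2] * (K - j - 1)
    gammas.append(kron_chain(prefix + [X] + suffix))
    gammas.append(kron_chain(prefix + [Y] + suffix))

# sanity check of the Clifford algebra: distinct Majoranas anticommute
assert np.allclose(gammas[0] @ gammas[1] + gammas[1] @ gammas[0], 0)

H = np.zeros((2 ** K, 2 ** K), dtype=complex)
sigma = np.sqrt(6.0) * J / N ** 1.5          # assumed variance convention <J_ijkl^2> = 3! J^2 / N^3
for i, j, k, l in itertools.combinations(range(N), 4):
    H += rng.normal(0.0, sigma) * gammas[i] @ gammas[j] @ gammas[k] @ gammas[l]

evals = np.linalg.eigvalsh(H)
print(f"{evals.size} levels, ground-state energy = {evals[0]:.4f}")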
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altintas, Ferdi, E-mail: ferdialtintas@ibu.edu.tr; Eryigit, Resul, E-mail: resul@ibu.edu.tr
2012-12-15
We have investigated the quantum phase transitions in the ground states of several critical systems, including transverse field Ising and XY models as well as XY with multiple spin interactions, XXZ and the collective system Lipkin-Meshkov-Glick models, by using different quantumness measures, such as entanglement of formation, quantum discord, as well as its classical counterpart, measurement-induced disturbance and the Clauser-Horne-Shimony-Holt-Bell function. Measurement-induced disturbance is found to detect the first and second order phase transitions present in these critical systems, while, surprisingly, it is found to fail to signal the infinite-order phase transition present in the XXZ model. Remarkably, the Clauser-Horne-Shimony-Holt-Bell function is found to detect all the phase transitions, even when quantum and classical correlations are zero for the relevant ground state. Highlights: The ability of correlation measures to detect quantum phase transitions has been studied. Measurement-induced disturbance fails to detect the infinite-order phase transition. The CHSH-Bell function detects all phase transitions even when the bipartite density matrix is uncorrelated.
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
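As an illustration of the data structure addressed here, the following Python sketch (hypothetical parameter values, not the authors' estimation code) simulates longitudinal zero-inflated Poisson counts with a subject-level random intercept whose variance depends on the intervention covariate.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subj, n_obs = 200, 6
    treat = rng.integers(0, 2, n_subj)            # subject-level intervention indicator
    sigma_b = np.exp(-0.5 + 0.8*treat)            # heterogeneous random-effect SD, a function of the covariate
    b = rng.normal(0.0, sigma_b)                  # subject-specific random intercepts

    counts = np.empty((n_subj, n_obs), dtype=int)
    for i in range(n_subj):
        lam = np.exp(0.5 - 0.7*treat[i] + b[i])   # Poisson mean of the count component
        structural_zero = rng.random(n_obs) < 0.3 # zero-inflation component
        counts[i] = np.where(structural_zero, 0, rng.poisson(lam, n_obs))

    print("observed zero fraction:", np.mean(counts == 0))

Fitting such simulated data with and without a covariate-dependent random-effect variance is a simple way to see the bias discussed in the abstract.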
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
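A crude numerical check of such counting statistics, without the Coulomb-gas machinery, is sketched below for a GUE-like β = 2 ensemble (matrix size and interval are arbitrary choices).

    import numpy as np

    rng = np.random.default_rng(1)
    N, n_samples, L = 60, 400, 1.0                 # matrix size, number of samples, half-width of I = [-L, L]

    counts = []
    for _ in range(n_samples):
        A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
        H = (A + A.conj().T)/2                     # GUE-like Hermitian matrix
        ev = np.linalg.eigvalsh(H)/np.sqrt(N)      # rescale so the spectrum stays of order one
        counts.append(np.sum(np.abs(ev) <= L))

    counts = np.array(counts)
    print("mean N_I:", counts.mean(), "  var N_I:", counts.var())

Repeating this for a range of L would expose the nonmonotonic number variance discussed in the abstract.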
Coupling the Gaussian Free Fields with Free and with Zero Boundary Conditions via Common Level Lines
NASA Astrophysics Data System (ADS)
Qian, Wei; Werner, Wendelin
2018-06-01
We point out a new simple way to couple the Gaussian Free Field (GFF) with free boundary conditions in a two-dimensional domain with the GFF with zero boundary conditions in the same domain: Starting from the latter, one just has to sample at random all the signs of the height gaps on its boundary-touching zero-level lines (these signs are alternating for the zero-boundary GFF) in order to obtain a free boundary GFF. Constructions and couplings of the free boundary GFF and its level lines via soups of reflected Brownian loops and their clusters are also discussed. Such considerations show for instance that in a domain with an axis of symmetry, if one looks at the overlay of a single usual Conformal Loop Ensemble CLE3 with its own symmetric image, one obtains the CLE4-type collection of level lines of a GFF with mixed zero/free boundary conditions in the half-domain.
Behavior of an aeroelastic system beyond critical point of instability
NASA Astrophysics Data System (ADS)
Sekar, T. Chandra; Agarwal, Ravindra; Mandal, Alakesh Chandra; Kushari, Abhijit
2017-11-01
Understanding the behavior of an aeroelastic system beyond the critical point is essential for effective implementation of any active control scheme since the control system design depends on the type of instability (bifurcation) the system encounters. Previous studies had found the aeroelastic system to enter into chaos beyond the point of instability. In the present work, an attempt has been made to carry out an experimental study on an aeroelastic model placed in a wind tunnel, to understand the behavior of the aerodynamics around a wing section undergoing classical flutter. Wind speed was increased from zero until the model encountered flutter. Pressure at various locations along the surface of the wing and acceleration at multiple points on the wing were measured in real time for the entire duration of the experiment. A Leading Edge Separation Bubble (LSB) was observed beyond the critical point. The growing strength of the LSB with increasing wind speed was found to alter the aerodynamic moment acting on the system, which forced the system to enter into a second bifurcation. Based on the nature of the response, the system appears to undergo a period-doubling bifurcation rather than a Hopf bifurcation, resulting in chaotic motion. Eliminating the LSB can help in preventing the system from entering chaos. Any active flow control scheme that can avoid or counter the formation of the leading edge separation bubble can be a potential solution to control the classical flutter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breznay, Nicholas P.; Tendulkar, Mihir; Zhang, Li
Here, we study the two-dimensional superconductor-insulator transition (SIT) in thin films of tantalum nitride. At zero magnetic field, films can be disorder-tuned across the SIT by adjusting thickness and film stoichiometry; insulating films exhibit classical hopping transport. Superconducting films exhibit a magnetic-field-tuned SIT, whose insulating ground state at high field appears to be a quantum-corrected metal. Scaling behavior at the field-tuned SIT shows classical percolation critical exponents zν ≈ 1.3, with a corresponding critical field Hc ≪ Hc2, the upper critical field. The Hall effect exhibits a crossing point near Hc, but with a nonuniversal critical value ρxy^c comparable to the normal-state Hall resistivity. We propose that high-carrier-density metals will always exhibit this pattern of behavior at the boundary between superconducting and (trivially) insulating ground states.
Superconductor to weak-insulator transitions in disordered tantalum nitride films
NASA Astrophysics Data System (ADS)
Breznay, Nicholas P.; Tendulkar, Mihir; Zhang, Li; Lee, Sang-Chul; Kapitulnik, Aharon
2017-10-01
We study the two-dimensional superconductor-insulator transition (SIT) in thin films of tantalum nitride. At zero magnetic field, films can be disorder-tuned across the SIT by adjusting thickness and film stoichiometry; insulating films exhibit classical hopping transport. Superconducting films exhibit a magnetic-field-tuned SIT, whose insulating ground state at high field appears to be a quantum-corrected metal. Scaling behavior at the field-tuned SIT shows classical percolation critical exponents zν ≈ 1.3, with a corresponding critical field Hc ≪ Hc2, the upper critical field. The Hall effect exhibits a crossing point near Hc, but with a nonuniversal critical value ρxy^c comparable to the normal-state Hall resistivity. We propose that high-carrier-density metals will always exhibit this pattern of behavior at the boundary between superconducting and (trivially) insulating ground states.
Can Indian classical instrumental music reduce pain felt during venepuncture?
Balan, Rajiv; Bavdekar, S B; Jadhav, Sandhya
2009-05-01
Local anesthetic agent is not usually used to reduce pain experienced by children undergoing venepuncture. This study was undertaken to determine the comparative efficacy of local anesthetic cream, Indian classical instrumental music and placebo in reducing pain due to venepuncture in children. Children aged 5-12 yr requiring venepuncture were enrolled in a prospective randomized clinical trial conducted at a tertiary care center. They were randomly assigned to 3 groups: local anesthetic (LA), music or placebo (control) group. Eutectic mixture of local anesthetic agents (EMLA) and Indian classical instrumental music (raaga-Todi) were used in the first 2 groups, respectively. Pain was assessed independently by parent, patient, investigator and an independent observer at the time of insertion of the cannula (0 min) and at 1- and 5 min after the insertion using a Visual Analog Scale (VAS). Kruskal-Wallis and Mann-Whitney U tests were used to assess the difference amongst the VAS scores. Fifty subjects were enrolled in each group. Significantly higher VAS scores were noted in the control (placebo) group by all the categories of observers (parent, patient, investigator, independent observer) at all time points. The VAS scores obtained in the LA group were lowest at all time points. However, the VAS scores in the LA group were significantly lower than those in the music group only at some time points and with some categories of observers (parent: 1 min; investigator: 0-, 1-, 5 min and independent observer: 5 min). Pain experienced during venepuncture can be significantly reduced by using EMLA or Indian classical instrumental music. The difference between VAS scores with LA and music is not always significant. Hence, the choice between EMLA and music could be dictated by logistical factors.
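For readers who want to reproduce this kind of nonparametric comparison, the following sketch (with simulated, hypothetical VAS data, not the trial's) applies the Kruskal-Wallis and Mann-Whitney U tests with scipy.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # hypothetical 0-100 mm VAS scores for the three groups at one time point
    vas_la = np.clip(rng.normal(20, 10, 50), 0, 100)        # local anesthetic (EMLA)
    vas_music = np.clip(rng.normal(30, 12, 50), 0, 100)     # Indian classical instrumental music
    vas_placebo = np.clip(rng.normal(45, 15, 50), 0, 100)   # control

    h, p_kw = stats.kruskal(vas_la, vas_music, vas_placebo) # omnibus comparison across the three groups
    u, p_mw = stats.mannwhitneyu(vas_la, vas_music)         # pairwise follow-up: LA vs music
    print(f"Kruskal-Wallis p = {p_kw:.3g}, Mann-Whitney (LA vs music) p = {p_mw:.3g}")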
Interuniversal entanglement in a cyclic multiverse
NASA Astrophysics Data System (ADS)
Robles-Pérez, Salvador; Balcerzak, Adam; Dąbrowski, Mariusz P.; Krämer, Manuel
2017-04-01
We study scenarios of parallel cyclic multiverses which allow for a different evolution of the physical constants, while having the same geometry. These universes are classically disconnected, but quantum-mechanically entangled. Applying the thermodynamics of entanglement, we calculate the temperature and the entropy of entanglement. It emerges that the entropy of entanglement is large at big bang and big crunch singularities of the parallel universes as well as at the maxima of the expansion of these universes. The latter seems to confirm earlier studies, performed in the context of the timeless nature of the Wheeler-DeWitt equation and decoherence, that quantum effects are strong at turning points of the evolution of the universe. On the other hand, the entropy of entanglement at big rip singularities goes to zero despite their presumably quantum nature. This may be an effect of total dissociation of the universe structures into infinitely separated patches violating the null energy condition. However, the temperature of entanglement is large/infinite at every classically singular point and at maximum expansion and seems to be a better measure of quantumness.
40 CFR 86.1333 - Transient test cycle generation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... zero percent speeds, zero percent torque points, but may be engaged up to two points preceding a non-zero point, and may be engaged for time segments with zero percent speed and torque points of durations...
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
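The classical benchmark invoked here is the symmetric random walk, whose rescaled limit distribution is Gaussian by the central limit theorem; a minimal simulation of that benchmark (step count and walker number arbitrary) is sketched below.

    import numpy as np

    rng = np.random.default_rng(3)
    n_steps, n_walkers = 200, 100_000
    steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
    positions = steps.sum(axis=1)              # endpoints of the classical random walks

    # central-limit check: the endpoint distribution approaches N(0, n_steps)
    print("empirical variance:", positions.var(), "  expected:", n_steps)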
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In recent decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics and model selection. Statistical computations are implemented in the public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
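The bivariate random-effects structure underlying such models can be sketched as a data-generating example (Python, made-up hyperparameters; the actual Bayesian fitting in the paper is done in WinBUGS/R).

    import numpy as np

    rng = np.random.default_rng(4)
    n_studies = 30

    mu = np.array([1.5, 2.0])                   # mean logit(sensitivity) and logit(specificity)
    tau = np.array([0.4, 0.5])                  # between-study standard deviations
    rho = -0.3                                  # correlation between the two random effects
    cov = np.array([[tau[0]**2, rho*tau[0]*tau[1]],
                    [rho*tau[0]*tau[1], tau[1]**2]])

    logits = rng.multivariate_normal(mu, cov, size=n_studies)
    sens = 1/(1 + np.exp(-logits[:, 0]))
    spec = 1/(1 + np.exp(-logits[:, 1]))

    tp = rng.binomial(50, sens)                 # study-level true positives out of 50 diseased
    tn = rng.binomial(50, spec)                 # study-level true negatives out of 50 non-diseased
    print("mean sensitivity:", sens.mean(), "  mean specificity:", spec.mean())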
Trajectory-based understanding of the quantum-classical transition for barrier scattering
NASA Astrophysics Data System (ADS)
Chou, Chia-Chun
2018-06-01
The quantum-classical transition of wave packet barrier scattering is investigated using a hydrodynamic description in the framework of a nonlinear Schrödinger equation. The nonlinear equation provides a continuous description for the quantum-classical transition of physical systems by introducing a degree of quantumness. Based on the transition equation, the transition trajectory formalism is developed to establish the connection between classical and quantum trajectories. The quantum-classical transition is then analyzed for the scattering of a Gaussian wave packet from an Eckart barrier and the decay of a metastable state. Computational results for the evolution of the wave packet and the transmission probabilities indicate that classical results are recovered when the degree of quantumness tends to zero. Classical trajectories are in excellent agreement with the transition trajectories in the classical limit, except in some regions where transition trajectories cannot cross because of the single-valuedness of the transition wave function. As the computational results demonstrate, the process that the Planck constant tends to zero is equivalent to the gradual removal of quantum effects originating from the quantum potential. This study provides an insightful trajectory interpretation for the quantum-classical transition of wave packet barrier scattering.
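The fully quantum end point of the transition studied here (degree of quantumness equal to one) can be reproduced with a standard split-step Fourier propagation of a Gaussian wave packet over an Eckart-type barrier; the sketch below uses arbitrary units and is not the authors' nonlinear transition equation.

    import numpy as np

    hbar, m = 1.0, 1.0
    x = np.linspace(-60, 60, 2048)
    dx = x[1] - x[0]
    k = 2*np.pi*np.fft.fftfreq(x.size, d=dx)

    V0, a = 0.5, 1.0
    V = V0/np.cosh(x/a)**2                        # Eckart-type barrier

    x0, sigma, k0 = -20.0, 2.0, 1.2               # incident Gaussian wave packet
    psi = np.exp(-(x - x0)**2/(4*sigma**2) + 1j*k0*x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

    dt, n_steps = 0.05, 800
    expV = np.exp(-0.5j*V*dt/hbar)                # half-step potential propagator
    expK = np.exp(-0.5j*hbar*k**2*dt/m)           # full-step kinetic propagator
    for _ in range(n_steps):
        psi = expV*np.fft.ifft(expK*np.fft.fft(expV*psi))

    transmission = np.sum(np.abs(psi[x > 5*a])**2)*dx
    print("transmission probability ~", transmission)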
Quantum fluctuations increase the self-diffusive motion of para-hydrogen in narrow carbon nanotubes.
Kowalczyk, Piotr; Gauden, Piotr A; Terzyk, Artur P; Furmaniak, Sylwester
2011-05-28
Quantum fluctuations significantly increase the self-diffusive motion of para-hydrogen adsorbed in narrow carbon nanotubes at 30 K compared to its classical counterpart. Rigorous Feynman path integral calculations reveal that the self-diffusive motion of para-hydrogen in a narrow (6,6) carbon nanotube at 30 K and pore densities below ∼29 mmol cm(-3) is one order of magnitude faster than the classical counterpart. We find that zero-point energy and tunneling significantly smooth out the free energy landscape of para-hydrogen molecules adsorbed in a narrow (6,6) carbon nanotube. This promotes a delocalization of the confined para-hydrogen at 30 K (i.e., population of unclassical paths due to quantum effects). By contrast, the self-diffusive motion of classical para-hydrogen molecules in a narrow (6,6) carbon nanotube at 30 K is very slow. This is because classical para-hydrogen molecules undergo highly correlated movement when their collision diameter approaches the carbon nanotube size (i.e., anomalous diffusion in quasi-one-dimensional pores). On the basis of the current results we predict that narrow single-walled carbon nanotubes are promising nanoporous molecular sieves, able to separate para-hydrogen molecules from mixtures of classical particles at cryogenic temperatures. This journal is © the Owner Societies 2011
Entanglement of Distillation for Lattice Gauge Theories.
Van Acoleyen, Karel; Bultinck, Nick; Haegeman, Jutho; Marien, Michael; Scholz, Volkher B; Verstraete, Frank
2016-09-23
We study the entanglement structure of lattice gauge theories from the local operational point of view, and, similar to Soni and Trivedi [J. High Energy Phys. 1 (2016) 1], we show that the usual entanglement entropy for a spatial bipartition can be written as the sum of an undistillable gauge part and of another part corresponding to the local operations and classical communication distillable entanglement, which is obtained by depolarizing the local superselection sectors. We demonstrate that the distillable entanglement is zero for pure Abelian gauge theories at zero gauge coupling, while it is in general nonzero for the non-Abelian case. We also consider gauge theories with matter, and show in a perturbative approach how area laws, including a topological correction, emerge for the distillable entanglement. Finally, we also discuss the entanglement entropy of gauge fixed states and show that it has no relation to the physical distillable entropy.
NASA Astrophysics Data System (ADS)
Sarfatti, Jack; Levit, Creon
2009-06-01
We present a model for the origin of gravity, dark energy and dark matter: Dark energy and dark matter are residual pre-inflation false vacuum random zero point energy (w = -1) of large-scale negative, and short-scale positive pressure, respectively, corresponding to the "zero point" (incoherent) component of a superfluid (supersolid) ground state. Gravity, in contrast, arises from the 2nd order topological defects in the post-inflation virtual "condensate" (coherent) component. We predict, as a consequence, that the LHC will never detect exotic real on-mass-shell particles that can explain dark matter ΩDM ≈ 0.23. We also point out that the future holographic dark energy de Sitter horizon is a total absorber (in the sense of retro-causal Wheeler-Feynman action-at-a-distance electrodynamics) because it is an infinite redshift surface for static detectors. Therefore, the advanced Hawking-Unruh thermal radiation from the future de Sitter horizon is a candidate for the negative pressure dark vacuum energy.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in the commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
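The zero-crossing idea used for calibration can be illustrated with the classical first-derivative approach alone: in the sketch below (synthetic Gaussian bands, hypothetical wavelengths and concentrations), the derivative amplitude of a two-component mixture read at the zero-crossing of one component's derivative spectrum tracks only the other component.

    import numpy as np

    wl = np.linspace(200, 400, 2001)                       # wavelength grid in nm (hypothetical)
    bandA = lambda c: c*np.exp(-((wl - 280)/15.0)**2)      # absorption band of compound A
    bandB = lambda c: c*np.exp(-((wl - 300)/20.0)**2)      # overlapping band of compound B

    dB = np.gradient(bandB(1.0), wl)
    zc = wl[np.argmin(np.abs(dB))]                         # zero-crossing of B's first derivative

    for cA in (0.2, 0.5, 1.0):                             # mixtures with varying A and fixed B
        dmix = np.gradient(bandA(cA) + bandB(0.8), wl)
        amp = np.interp(zc, wl, dmix)                      # derivative amplitude at B's zero-crossing
        print(f"c_A = {cA:.1f}   first-derivative amplitude at {zc:.0f} nm = {amp:.4f}")

The printed amplitudes scale linearly with c_A, which is the calibration function exploited by both the CDS and CWT variants.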
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
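To see the qualitative difference emphasized in the last sentences, one can compare the spectral multipliers of the non-local and classical diffusion operators in one dimension; the sketch below (box-shaped response kernel, arbitrary units, my own normalization choice) evolves the same Gauss-type source under both.

    import numpy as np

    L_dom, n = 200.0, 4096
    x = np.linspace(-L_dom/2, L_dom/2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)

    D, delta = 1.0, 2.0                              # diffusivity and non-local horizon (hypothetical)
    C = 3.0*D/delta**3                               # box-kernel amplitude so that M(k) -> D k^2 as delta -> 0
    # spectral multiplier of the non-local operator: M(k) = C * int_{-delta}^{delta} (1 - cos(k xi)) d xi
    M_nl = C*(2*delta - 2*np.sin(k*delta)/np.where(k == 0, 1.0, k))
    M_nl[k == 0] = 0.0
    M_cl = D*k**2                                    # classical (local) multiplier

    T0 = np.exp(-x**2/4.0)                           # Gauss source as the initial temperature field
    t = 5.0
    T_nl = np.fft.ifft(np.exp(-M_nl*t)*np.fft.fft(T0)).real
    T_cl = np.fft.ifft(np.exp(-M_cl*t)*np.fft.fft(T0)).real
    print("peak after t = 5:   non-local =", T_nl.max(), "   classical =", T_cl.max())

With these parameters the non-local profile keeps a higher peak at the same time, i.e. the field varies more slowly than in the classical model, and letting delta tend to zero makes the two results coincide.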
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
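A minimal Robbins-Monro style version of the first technique (a parametric exponential correlation model fitted recursively to point estimates from successive records; all parameter values are illustrative, not from the report) could look like this.

    import numpy as np

    rng = np.random.default_rng(5)
    a_true, sigma2, n, dt = 0.5, 1.0, 2000, 0.1
    lags = np.arange(0, 20)*dt

    def record():
        # one record of a zero-mean process with R(tau) = sigma2 * exp(-a_true*|tau|) (AR(1) sampling)
        x = np.empty(n)
        x[0] = rng.normal(0.0, np.sqrt(sigma2))
        r = np.exp(-a_true*dt)
        for i in range(1, n):
            x[i] = r*x[i-1] + np.sqrt(sigma2*(1 - r**2))*rng.normal()
        return x

    a_hat = 1.0                                          # initial guess for the decay parameter
    for m in range(1, 201):                              # successive records
        x = record()
        r_hat = np.array([np.mean(x[:n-j]*x[j:]) for j in range(lags.size)])  # point estimates of R
        model = sigma2*np.exp(-a_hat*lags)
        grad = np.sum((model - r_hat)*model*(-lags))     # gradient of the squared error w.r.t. a_hat
        a_hat -= (0.2/m)*grad                            # stochastic-approximation step with decreasing gain
    print("estimated a =", a_hat, "   true a =", a_true)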
Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J
2018-02-20
We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate, and the completely active pattern means that the proportion of activity is exactly one, and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordinal-valued activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in handling the ordinal and proportional variables are addressed using a quasi-likelihood type approximation. We develop an efficient algorithm to fit the model that also involves the selection of the number of principal components. The method is applied to physical activity data and is evaluated empirically by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
Prendergast, Michael L.; Pearson, Frank S.; Podus, Deborah; Hamilton, Zachary K.; Greenwell, Lisa
2013-01-01
Objectives The purpose of the present meta-analysis was to answer the question: Can the Andrews principles of risk, needs, and responsivity, originally developed for programs that treat offenders, be extended to programs that treat drug abusers? Methods Drawing from a dataset that included 243 independent comparisons, we conducted random-effects meta-regression and ANOVA-analog meta-analyses to test the Andrews principles by averaging crime and drug use outcomes over a diverse set of programs for drug abuse problems. Results For crime outcomes, in the meta-regressions the point estimates for each of the principles were substantial, consistent with previous studies of the Andrews principles. There was also a substantial point estimate for programs exhibiting a greater number of the principles. However, almost all of the 95% confidence intervals included the zero point. For drug use outcomes, in the meta-regressions the point estimates for each of the principles were approximately zero; however, the point estimate for programs exhibiting a greater number of the principles was somewhat positive. All of the estimates for the drug use principles had confidence intervals that included the zero point. Conclusions This study supports previous findings from primary research studies targeting the Andrews principles that those principles are effective in reducing crime outcomes, here in meta-analytic research focused on drug treatment programs. By contrast, programs that follow the principles appear to have very little effect on drug use outcomes. Primary research studies that experimentally test the Andrews principles in drug treatment programs are recommended. PMID:24058325
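For orientation, the kind of random-effects pooling behind such point estimates and confidence intervals can be sketched in a few lines (DerSimonian-Laird estimator on made-up effect sizes, not the study's data).

    import numpy as np

    y = np.array([0.30, 0.10, 0.45, -0.05, 0.25, 0.15])   # hypothetical per-study effect sizes
    v = np.array([0.04, 0.06, 0.05, 0.08, 0.03, 0.07])    # their sampling variances

    w = 1.0/v
    mu_fe = np.sum(w*y)/np.sum(w)                          # fixed-effect estimate
    Q = np.sum(w*(y - mu_fe)**2)                           # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (Q - (len(y) - 1))/(np.sum(w) - np.sum(w**2)/np.sum(w)))  # DerSimonian-Laird tau^2

    w_re = 1.0/(v + tau2)
    mu = np.sum(w_re*y)/np.sum(w_re)                       # pooled random-effects estimate
    se = 1.0/np.sqrt(np.sum(w_re))
    print(f"pooled effect = {mu:.3f}, 95% CI = ({mu - 1.96*se:.3f}, {mu + 1.96*se:.3f})")

A confidence interval that straddles zero corresponds to the "includes the zero point" language used above.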
Magnetic properties of graphene quantum dots
NASA Astrophysics Data System (ADS)
Espinosa-Ortega, T.; Luk'yanchuk, I. A.; Rubo, Y. G.
2013-05-01
Using the tight-binding approximation we calculated the diamagnetic susceptibility of graphene quantum dots (GQDs) of different geometrical shapes and characteristic sizes of 2-10 nm, when the magnetic properties are governed by the electron edge states. Two types of edge states can be discerned: the zero-energy states (ZESs), located exactly at the zero-energy Dirac point, and the dispersed edge states (DESs), with the energy close but not exactly equal to zero. DESs are responsible for a temperature-independent diamagnetic response, while ZESs provide a temperature-dependent spin paramagnetism. Hexagonal, circular, and randomly shaped GQDs contain mainly DESs, and, as a result, they are diamagnetic. The edge states of the triangular GQDs are of ZES type. These dots reveal the crossover between spin paramagnetism, dominating for small dots and at low temperatures, and orbital diamagnetism, dominating for large dots and at high temperatures.
Theory of the interface between a classical plasma and a hard wall
NASA Astrophysics Data System (ADS)
Ballone, P.; Pastore, G.; Tosi, M. P.
1983-09-01
The interfacial density profile of a classical one-component plasma confined by a hard wall is studied in planar and spherical geometries. The approach adapts to interfacial problems a modified hypernetted-chain approximation developed by Lado and by Rosenfeld and Ashcroft for the bulk structure of simple liquids. The specific new aim is to embody self-consistently into the theory a contact theorem, fixing the plasma density at the wall through an equilibrium condition which involves the electrical potential drop across the interface and the bulk pressure. The theory is brought into fully quantitative contact with computer simulation data for a plasma confined in a spherical cavity of large but finite radius. The interfacial potential at the point of zero charge is accurately reproduced by suitably combining the contact theorem with relevant bulk properties in a simple, approximate representation of the interfacial charge density profile.
Theory of the interface between a classical plasma and a hard wall
NASA Astrophysics Data System (ADS)
Ballone, P.; Pastore, G.; Tosi, M. P.
1984-12-01
The interfacial density profile of a classical one-component plasma confined by a hard wall is studied in planar and spherical geometries. The approach adapts to interfacial problems a modified hypernetted-chain approximation developed by Lado and by Rosenfeld and Ashcroft for the bulk structure of simple liquids. The specific new aim is to embody self-consistently into the theory a “contact theorem”, fixing the plasma density at the wall through an equilibrium condition which involves the electrical potential drop across the interface and the bulk pressure. The theory is brought into fully quantitative contact with computer simulation data for a plasma confined in a spherical cavity of large but finite radius. It is also shown that the interfacial potential at the point of zero charge is accurately reproduced by suitably combining the contact theorem with relevant bulk properties in a simple, approximate representation of the interfacial charge density profile.
Superconductor to weak-insulator transitions in disordered tantalum nitride films
Breznay, Nicholas P.; Tendulkar, Mihir; Zhang, Li; ...
2017-10-31
Here, we study the two-dimensional superconductor-insulator transition (SIT) in thin films of tantalum nitride. At zero magnetic field, films can be disorder-tuned across the SIT by adjusting thickness and film stoichiometry; insulating films exhibit classical hopping transport. Superconducting films exhibit a magnetic-field-tuned SIT, whose insulating ground state at high field appears to be a quantum-corrected metal. Scaling behavior at the field-tuned SIT shows classical percolation critical exponents zν ≈ 1.3, with a corresponding critical field Hc ≪ Hc2, the upper critical field. The Hall effect exhibits a crossing point near Hc, but with a nonuniversal critical value ρxy^c comparable to the normal-state Hall resistivity. We propose that high-carrier-density metals will always exhibit this pattern of behavior at the boundary between superconducting and (trivially) insulating ground states.
Quantum Locality in Game Strategy
NASA Astrophysics Data System (ADS)
Melo-Luna, Carlos A.; Susa, Cristian E.; Ducuara, Andrés F.; Barreiro, Astrid; Reina, John H.
2017-03-01
Game theory is a well established branch of mathematics whose formalism has a vast range of applications from the social sciences, biology, to economics. Motivated by quantum information science, there has been a leap in the formulation of novel game strategies that lead to new (quantum Nash) equilibrium points whereby players in some classical games are always outperformed if sharing and processing joint information ruled by the laws of quantum physics is allowed. We show that, for a bipartite non zero-sum game, input local quantum correlations, and separable states in particular, suffice to achieve an advantage over any strategy that uses classical resources, thus dispensing with quantum nonlocality, entanglement, or even discord between the players’ input states. This highlights the remarkable key role played by pure quantum coherence at powering some protocols. Finally, we propose an experiment that uses separable states and basic photon interferometry to demonstrate the locally-correlated quantum advantage.
Quantum Locality in Game Strategy
Melo-Luna, Carlos A.; Susa, Cristian E.; Ducuara, Andrés F.; Barreiro, Astrid; Reina, John H.
2017-01-01
Game theory is a well established branch of mathematics whose formalism has a vast range of applications from the social sciences, biology, to economics. Motivated by quantum information science, there has been a leap in the formulation of novel game strategies that lead to new (quantum Nash) equilibrium points whereby players in some classical games are always outperformed if sharing and processing joint information ruled by the laws of quantum physics is allowed. We show that, for a bipartite non zero-sum game, input local quantum correlations, and separable states in particular, suffice to achieve an advantage over any strategy that uses classical resources, thus dispensing with quantum nonlocality, entanglement, or even discord between the players’ input states. This highlights the remarkable key role played by pure quantum coherence at powering some protocols. Finally, we propose an experiment that uses separable states and basic photon interferometry to demonstrate the locally-correlated quantum advantage. PMID:28327567
Melting of Boltzmann particles in different 2D trapping potential
NASA Astrophysics Data System (ADS)
Bhattacharya, Dyuti; Filinov, Alexei; Ghosal, Amit; Bonitz, Michael
2015-03-01
We analyze the quantum melting of a two-dimensional Wigner solid in several confined geometries and compare it with the corresponding thermal melting in a purely classical system. Our results show that the geometry plays little role in deciding the crossover quantum parameter nX, as the effects from the boundary are well screened by the quantum zero-point motion. The unique phase diagram in the plane of thermal and quantum fluctuations, determined from independent melting criteria, separates out the Wigner molecule ``phase'' from the classical and quantum ``liquids''. An intriguing signature of weakening liquidity with increasing temperature T has been found in the extreme quantum regime. This crossover is associated with the production of defects, just as in the case of thermal melting, though their role in determining the mechanism of the crossover appears different. Our study will help in comprehending melting in a variety of experimental realizations of confined systems, from quantum dots to complex plasmas.
Continuous time quantum random walks in free space
NASA Astrophysics Data System (ADS)
Eichelkraut, Toni; Vetter, Christian; Perez-Leija, Armando; Christodoulides, Demetrios; Szameit, Alexander
2014-05-01
We show theoretically and experimentally that two-dimensional continuous time coherent random walks are possible in free space, that is, in the absence of any external potential, by properly tailoring the associated initial wave function. These effects are experimentally demonstrated using classical paraxial light. Evidently, the usage of classical beams to explore the dynamics of point-like quantum particles is possible since both phenomena are mathematically equivalent. This in turn makes our approach suitable for the realization of random walks using different quantum particles, including electrons and photons. To study the spatial evolution of a wavefunction theoretically, we consider the one-dimensional paraxial wave equation (i∂z + (1/2)∂x²)Ψ = 0. Starting with the initially localized wavefunction Ψ(x, 0) = exp[-x²/2σ²] J0(αx), one can show that the evolution of such Gaussian-apodized Bessel envelopes within a region of validity resembles the probability pattern of a quantum walker traversing a uniform lattice. In order to generate the desired input field in our experimental setting we shape the amplitude and phase of a collimated light beam originating from a classical HeNe laser (633 nm) utilizing a spatial light modulator.
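A short numerical sketch of that free-space evolution (Fourier-space propagation of the stated initial condition; width, Bessel frequency and propagation distances are arbitrary choices) is given below.

    import numpy as np
    from scipy.special import j0

    x = np.linspace(-200, 200, 8192)
    dx = x[1] - x[0]
    k = 2*np.pi*np.fft.fftfreq(x.size, d=dx)

    sigma, alpha = 40.0, 2.0                          # apodization width and Bessel frequency (hypothetical)
    psi0 = np.exp(-x**2/(2*sigma**2))*j0(alpha*x)     # Gaussian-apodized Bessel input field

    def propagate(psi, z):
        # exact solution of (i d/dz + 1/2 d^2/dx^2) psi = 0 via the Fourier transform
        return np.fft.ifft(np.exp(-0.5j*k**2*z)*np.fft.fft(psi))

    for z in (0.0, 10.0, 20.0):
        I = np.abs(propagate(psi0, z))**2
        width = np.sqrt(np.sum(x**2*I)/np.sum(I))
        print(f"z = {z:5.1f}   peak intensity = {I.max():.4f}   rms width = {width:.2f}")

The intensity splits into two ballistically separating lobes, the continuum analogue of the quantum-walk probability pattern mentioned in the abstract.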
Resonant paramagnetic enhancement of the thermal and zero-point Nyquist noise
NASA Astrophysics Data System (ADS)
França, H. M.; Santos, R. B. B.
1999-01-01
The interaction between a very thin macroscopic solenoid and a single magnetic particle precessing in an external magnetic field B0 is described by taking into account the thermal and the zero-point fluctuations of stochastic electrodynamics. The inductor belongs to an RLC circuit without batteries, and the random motion of the magnetic dipole generates in the solenoid a fluctuating current Idip(t) and a fluctuating voltage εdip(t), with spectral distribution quite different from the Nyquist noise. We show that the mean square value ⟨Idip²⟩ presents an enormous variation when the frequency of precession approaches the frequency of the circuit, but it is still much smaller than the Nyquist current in the circuit. However, we also show that ⟨Idip²⟩ can reach measurable values if the inductor is interacting with a macroscopic sample of magnetic particles (atoms or nuclei) which are close enough to its coils.
NASA Astrophysics Data System (ADS)
Ljungberg, Mathias P.
2017-12-01
A method is presented for describing vibrational effects in x-ray absorption spectroscopy and resonant inelastic x-ray scattering (RIXS) using a combination of the classical Franck-Condon (FC) approximation and classical trajectories run on the core-excited state. The formulation of RIXS is an extension of the semiclassical Kramers-Heisenberg formalism of Ljungberg et al. [Phys. Rev. B 82, 245115 (2010), 10.1103/PhysRevB.82.245115] to the resonant case, retaining approximately the same computational cost. To overcome difficulties with connecting the absorption and emission processes in RIXS, the classical FC approximation is used for the absorption, which is seen to work well provided that a zero-point-energy correction is included. In the case of core-excited states with dissociative character, the method is capable of closely reproducing the main features for one-dimensional test systems, compared to the quantum-mechanical formulation. Due to the good accuracy combined with the relatively low computational cost, the method has great potential of being used for complex systems with many degrees of freedom, such as liquids and surface adsorbates.
Mixed quantum-classical electrodynamics: Understanding spontaneous decay and zero-point energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tao E.; Nitzan, Abraham; Sukharev, Maxim
The dynamics of an electronic two-level system coupled to an electromagnetic field are simulated explicitly for one- and three-dimensional systems through semiclassical propagation of the Maxwell-Liouville equations. Here, we consider three flavors of mixed quantum-classical dynamics: (i) the classical path approximation (CPA), (ii) Ehrenfest dynamics, and (iii) symmetrical quasiclassical (SQC) dynamics. Our findings are as follows: (i) The CPA fails to recover a consistent description of spontaneous emission, (ii) a consistent “spontaneous” emission can be obtained from Ehrenfest dynamics, provided that one starts in an electronic superposition state, and (iii) spontaneous emission is always obtained using SQC dynamics. Using the SQC and Ehrenfest frameworks, we further calculate the dynamics following an incoming pulse, but here we find very different responses: SQC and Ehrenfest dynamics deviate sometimes strongly in the calculated rate of decay of the transient excited state. Nevertheless, our work confirms the earlier observations by Miller [J. Chem. Phys. 69, 2188 (1978)] that Ehrenfest dynamics can effectively describe some aspects of spontaneous emission and highlights interesting possibilities for studying light-matter interactions with semiclassical mechanics.
Mixed quantum-classical electrodynamics: Understanding spontaneous decay and zero-point energy
NASA Astrophysics Data System (ADS)
Li, Tao E.; Nitzan, Abraham; Sukharev, Maxim; Martinez, Todd; Chen, Hsing-Ta; Subotnik, Joseph E.
2018-03-01
The dynamics of an electronic two-level system coupled to an electromagnetic field are simulated explicitly for one- and three-dimensional systems through semiclassical propagation of the Maxwell-Liouville equations. We consider three flavors of mixed quantum-classical dynamics: (i) the classical path approximation (CPA), (ii) Ehrenfest dynamics, and (iii) symmetrical quasiclassical (SQC) dynamics. Our findings are as follows: (i) The CPA fails to recover a consistent description of spontaneous emission, (ii) a consistent "spontaneous" emission can be obtained from Ehrenfest dynamics, provided that one starts in an electronic superposition state, and (iii) spontaneous emission is always obtained using SQC dynamics. Using the SQC and Ehrenfest frameworks, we further calculate the dynamics following an incoming pulse, but here we find very different responses: SQC and Ehrenfest dynamics deviate sometimes strongly in the calculated rate of decay of the transient excited state. Nevertheless, our work confirms the earlier observations by Miller [J. Chem. Phys. 69, 2188 (1978), 10.1063/1.436793] that Ehrenfest dynamics can effectively describe some aspects of spontaneous emission and highlights interesting possibilities for studying light-matter interactions with semiclassical mechanics.
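As a toy illustration of the Ehrenfest (mean-field) propagation scheme named here, the sketch below couples a two-level system to a single classical oscillator mode; this is my own simplified stand-in, not the Maxwell-Liouville field propagation of the paper, and a single mode cannot by itself produce irreversible spontaneous decay.

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    w0, wc, g = 1.0, 1.0, 0.05                      # TLS gap, mode frequency, coupling (hypothetical units)

    psi = np.array([1.0, 1.0], dtype=complex)/np.sqrt(2)   # electronic superposition initial state
    q, p = 0.0, 0.0                                         # classical mode initially at rest

    dt, n_steps = 0.01, 5000
    for _ in range(n_steps):
        H = 0.5*w0*sz + g*q*sx                      # electronic Hamiltonian at the current mode displacement
        psi = expm(-1j*H*dt) @ psi                  # short-time propagation of the quantum subsystem
        sx_mean = (psi.conj() @ (sx @ psi)).real
        p += (-wc**2*q - g*sx_mean)*dt              # mean-field back-reaction driving the classical mode
        q += p*dt
    print("final excited-state population:", abs(psi[0])**2)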
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mastromatteo, Michael; Jackson, Bret, E-mail: jackson@chem.umass.edu
Electronic structure methods based on density functional theory are used to construct a reaction path Hamiltonian for CH4 dissociation on the Ni(100) and Ni(111) surfaces. Both quantum and quasi-classical trajectory approaches are used to compute dissociative sticking probabilities, including all molecular degrees of freedom and the effects of lattice motion. Both approaches show a large enhancement in sticking when the incident molecule is vibrationally excited, and both can reproduce the mode specificity observed in experiments. However, the quasi-classical calculations significantly overestimate the ground state dissociative sticking at all energies, and the magnitude of the enhancement in sticking with vibrational excitation is much smaller than that computed using the quantum approach or observed in the experiments. The origin of this behavior is an unphysical flow of zero point energy from the nine normal vibrational modes into the reaction coordinate, giving large values for reaction at energies below the activation energy. Perturbative assumptions made in the quantum studies are shown to be accurate at all energies studied.
Mixed quantum-classical electrodynamics: Understanding spontaneous decay and zero-point energy
Li, Tao E.; Nitzan, Abraham; Sukharev, Maxim; ...
2018-03-12
The dynamics of an electronic two-level system coupled to an electromagnetic field are simulated explicitly for one- and three-dimensional systems through semiclassical propagation of the Maxwell-Liouville equations. Here, we consider three flavors of mixed quantum-classical dynamics: (i) the classical path approximation (CPA), (ii) Ehrenfest dynamics, and (iii) symmetrical quasiclassical (SQC) dynamics. Our findings are as follows: (i) The CPA fails to recover a consistent description of spontaneous emission, (ii) a consistent “spontaneous” emission can be obtained from Ehrenfest dynamics, provided that one starts in an electronic superposition state, and (iii) spontaneous emission is always obtained using SQC dynamics. Using the SQC and Ehrenfest frameworks, we further calculate the dynamics following an incoming pulse, but here we find very different responses: SQC and Ehrenfest dynamics deviate sometimes strongly in the calculated rate of decay of the transient excited state. Nevertheless, our work confirms the earlier observations by Miller [J. Chem. Phys. 69, 2188 (1978)] that Ehrenfest dynamics can effectively describe some aspects of spontaneous emission and highlights interesting possibilities for studying light-matter interactions with semiclassical mechanics.
NASA Astrophysics Data System (ADS)
Montina, Alberto; Wolf, Stefan
2014-07-01
We consider the process consisting of preparation, transmission through a quantum channel, and subsequent measurement of quantum states. The communication complexity of the channel is the minimal amount of classical communication required for classically simulating it. Recently, we reduced the computation of this quantity to a convex minimization problem with linear constraints. Every solution of the constraints provides an upper bound on the communication complexity. In this paper, we derive the dual maximization problem of the original one. The feasible points of the dual constraints, which are inequalities, give lower bounds on the communication complexity, as illustrated with an example. The optimal values of the two problems turn out to be equal (zero duality gap). By this property, we provide necessary and sufficient conditions for optimality in terms of a set of equalities and inequalities. We use these conditions and two reasonable but unproven hypotheses to derive the lower bound n ×2n -1 for a noiseless quantum channel with capacity equal to n qubits. This lower bound can have interesting consequences in the context of the recent debate on the reality of the quantum state.
Marques, J M C; Martínez-Núñez, E; Fernandez-Ramos, A; Vazquez, S A
2005-06-23
Large-scale classical trajectory calculations have been performed to study the reaction Ar + CH4 → CH3 + H + Ar in the temperature range 2500 ≤ T/K ≤ 4500. The potential energy surface used for ArCH4 is the sum of the nonbonding pairwise potentials of Hase and collaborators (J. Chem. Phys. 2001, 114, 535) that models the intermolecular interaction and the CH4 intramolecular potential of Duchovic et al. (J. Phys. Chem. 1984, 88, 1339), which has been modified to account for the H-H repulsion at small bending angles. The thermal rate coefficient has been calculated, and the zero-point energy (ZPE) of the CH3 product molecule has been taken into account in the analysis of the results; also, two approaches have been applied for discarding predissociative trajectories. In both cases, good agreement is observed between the experimental and trajectory results after imposing the ZPE of CH3. The energy-transfer parameters have also been obtained from trajectory calculations and compared with available values estimated from experiment using the master equation formalism; in general, the agreement is good.
Criticality of the mean-field spin-boson model: boson state truncation and its scaling analysis
NASA Astrophysics Data System (ADS)
Hou, Y.-H.; Tong, N.-H.
2010-11-01
The spin-boson model has nontrivial quantum phase transitions at zero temperature induced by the spin-boson coupling. The bosonic numerical renormalization group (BNRG) study of the critical exponents β and δ of this model is hampered by the effects of boson Hilbert space truncation. Here we analyze the mean-field spin-boson model to figure out the scaling behavior of the magnetization under the cutoff of boson states N_b. We find that the truncation is a strongly relevant operator with respect to the Gaussian fixed point in 0 < s < 1/2 and incurs the deviation of the exponents from the classical values. The magnetization at zero bias near the critical point is described by a generalized homogeneous function (GHF) of two variables τ = α - α_c and x = 1/N_b. The universal function has a double-power form and the powers are obtained analytically as well as numerically. Similarly, m(α = α_c) is found to be a GHF of γ and x. In the regime s > 1/2, the truncation produces no effect. Implications of these findings for the BNRG study are discussed.
NASA Astrophysics Data System (ADS)
Perugini, G.; Ricci-Tersenghi, F.
2018-01-01
We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of extremal solutions for the BP equations, and we use them to fix a fraction of spins in their ground state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions we design a new and very easy to implement BP scheme which is able to output a large number of stable fixed points. On the one hand, this new algorithm is able to provide the minimum energy configuration with high probability in a competitive time. On the other hand, we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new relevant questions about the physics of this class of models.
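For context, a generic zero-temperature (min-sum) BP iteration for the random field Ising model on a random regular graph looks as follows; this is a plain warm-up version, not the authors' extremal-solution scheme, and it assumes the networkx package for the graph construction.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(6)
    n, d, J, h_R = 500, 3, 1.0, 1.0                 # spins, degree, coupling, random-field strength
    G = nx.random_regular_graph(d, n, seed=6)
    h = h_R*rng.normal(size=n)                      # Gaussian random fields

    # one cavity field u[(i, j)] per directed edge i -> j
    u = {(i, j): 0.0 for i in G for j in G[i]}
    for _ in range(200):                            # message-passing sweeps
        for (i, j) in u:
            H_cav = h[i] + sum(u[(k, i)] for k in G[i] if k != j)
            u[(i, j)] = np.sign(H_cav)*min(J, abs(H_cav))   # zero-temperature message update

    H_loc = np.array([h[i] + sum(u[(k, i)] for k in G[i]) for i in range(n)])
    print("fraction of unconstrained spins (zero local field):", np.mean(np.isclose(H_loc, 0.0)))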
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
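The ensemble-of-point-sets picture can be probed numerically: the sketch below (smooth test integrand, scrambled Halton sets from scipy; all choices illustrative) compares the spread of the integration error for pseudo-random and quasi-random points.

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(7)
    f = lambda u: np.prod(1.0 + 0.5*(u - 0.5), axis=1)   # smooth integrand on [0,1]^2 with exact integral 1
    n_points, n_sets = 1024, 500

    err_mc, err_qmc = [], []
    for s in range(n_sets):
        u_mc = rng.random((n_points, 2))                 # plain Monte Carlo point set
        u_qmc = qmc.Halton(d=2, scramble=True, seed=s).random(n_points)   # randomized quasi-random set
        err_mc.append(f(u_mc).mean() - 1.0)
        err_qmc.append(f(u_qmc).mean() - 1.0)

    print("std of MC  error:", np.std(err_mc))
    print("std of QMC error:", np.std(err_qmc))

The Monte Carlo errors follow the usual central-limit Gaussian, while the quasi-random errors are typically far narrower, which is the kind of error distribution studied analytically in the paper.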
Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert
2014-11-10
Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is studied in the second example. Sub-models, which result from omitting zero-inflation and/or overdispersion features, are also considered for comparison's purpose. Analysis of the two datasets showed that accounting for the correlation, overdispersion, and excess zeros simultaneously resulted in a better fit to the data and, more importantly, that omission of any of them leads to incorrect marginal inference and erroneous conclusions about covariate effects. Copyright © 2014 John Wiley & Sons, Ltd.
Saddle point localization of molecular wavefunctions.
Mellau, Georg Ch; Kyuberis, Alexandra A; Polyansky, Oleg L; Zobov, Nikolai; Field, Robert W
2016-09-15
The quantum mechanical description of isomerization is based on bound eigenstates of the molecular potential energy surface. For the near-minimum regions there is a textbook-based relationship between the potential and eigenenergies. Here we show how the saddle point region that connects the two minima is encoded in the eigenstates of the model quartic potential and in the energy levels of the [H, C, N] potential energy surface. We model the spacing of the eigenenergies with the energy dependent classical oscillation frequency decreasing to zero at the saddle point. The eigenstates with the smallest spacing are localized at the saddle point. The analysis of the HCN ↔ HNC isomerization states shows that the eigenstates with small energy spacing relative to the effective (v1, v3, ℓ) bending potentials are highly localized in the bending coordinate at the transition state. These spectroscopically detectable states represent a chemical marker of the transition state in the eigenenergy spectrum. The method developed here provides a basis for modeling characteristic patterns in the eigenenergy spectrum of bound states.
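The spacing dip at the saddle can be reproduced with a textbook one-dimensional model: the sketch below diagonalizes a quartic double well by finite differences (my own parameter choices) and prints the level spacings within one parity sector, which reach a minimum near the barrier-top energy.

    import numpy as np

    x = np.linspace(-8.0, 8.0, 2001)
    dx = x[1] - x[0]
    V = 0.05*x**4 - 1.0*x**2                       # two minima at x = +/- sqrt(10); saddle (barrier top) at V(0) = 0

    H = np.diag(1.0/dx**2 + V) \
        + np.diag(-0.5/dx**2*np.ones(x.size - 1), 1) \
        + np.diag(-0.5/dx**2*np.ones(x.size - 1), -1)
    E, psi = np.linalg.eigh(H)

    # keep one parity sector so that tunneling doublets do not mask the saddle-point spacing dip
    even = [n for n in range(30) if np.dot(psi[:, n], psi[::-1, n]) > 0]
    E_even = E[even]
    for e_mid, gap in zip(0.5*(E_even[1:] + E_even[:-1]), np.diff(E_even)):
        print(f"E ~ {e_mid:7.2f}   spacing = {gap:.3f}")  # spacings dip near E = 0, the saddle energy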
Tunneling splitting in double-proton transfer: direct diagonalization results for porphycene.
Smedarchina, Zorka; Siebrand, Willem; Fernández-Ramos, Antonio
2014-11-07
Zero-point and excited level splittings due to double-proton tunneling are calculated for porphycene and the results are compared with experiment. The calculation makes use of a multidimensional imaginary-mode Hamiltonian, diagonalized directly by an effective reduction of its dimensionality. Porphycene has a complex potential energy surface with nine stationary configurations that allow a variety of tunneling paths, many of which include classically accessible regions. A symmetry-based approach is used to show that the zero-point level, although located above the cis minimum, corresponds to concerted tunneling along a direct trans - trans path; a corresponding cis - cis path is predicted at higher energy. This supports the conclusion of a previous paper [Z. Smedarchina, W. Siebrand, and A. Fernández-Ramos, J. Chem. Phys. 127, 174513 (2007)] based on the instanton approach to a model Hamiltonian of correlated double-proton transfer. A multidimensional tunneling Hamiltonian is then generated, based on a double-minimum potential along the coordinate of concerted proton motion, which is newly evaluated at the RI-CC2/cc-pVTZ level of theory. To make it suitable for diagonalization, its dimensionality is reduced by treating fast weakly coupled modes in the adiabatic approximation. This results in a coordinate-dependent mass of tunneling, which is included in a unique Hermitian form into the kinetic energy operator. The reduced Hamiltonian contains three symmetric and one antisymmetric mode coupled to the tunneling mode and is diagonalized by a modified Jacobi-Davidson algorithm implemented in the Jadamilu software for sparse matrices. The results are in satisfactory agreement with the observed splitting of the zero-point level and several vibrational fundamentals after a partial reassignment, imposed by recently derived selection rules. They also agree well with instanton calculations based on the same Hamiltonian.
A randomized controlled trial of single point acupuncture in primary dysmenorrhea.
Liu, Cun-Zhi; Xie, Jie-Ping; Wang, Lin-Peng; Liu, Yu-Qi; Song, Jia-Shan; Chen, Yin-Ying; Shi, Guang-Xia; Zhou, Wei; Gao, Shu-Zhong; Li, Shi-Liang; Xing, Jian-Min; Ma, Liang-Xiao; Wang, Yan-Xia; Zhu, Jiang; Liu, Jian-Ping
2014-06-01
Acupuncture is often used for primary dysmenorrhea, but convincing evidence is lacking because of the low methodological quality of existing studies. We aimed to assess the immediate effect of acupuncture at a specific acupoint, compared with an unrelated acupoint and a nonacupoint, on primary dysmenorrhea. The Acupuncture Analgesia Effect in Primary Dysmenorrhoea-II study is a multicenter controlled trial conducted in six large hospitals in China. Patients who met the inclusion criteria were randomly assigned to the classic acupoint (N = 167), unrelated acupoint (N = 167), or nonacupoint (N = 167) group on a 1:1:1 basis. They received three sessions of electro-acupuncture at a classic acupoint (Sanyinjiao, SP6), an unrelated acupoint (Xuanzhong, GB39), or a nonacupoint location, respectively. The primary outcome was subjective pain as measured by a 100-mm visual analog scale (VAS). Measurements were obtained at 0, 5, 10, 30, and 60 minutes following the first intervention. In addition, patients scored changes in general complaints using the Cox retrospective symptom scales (RSS-Cox) and a 7-point verbal rating scale (VRS) during three menstrual cycles. Secondary outcomes included the VAS score for average pain, total pain time, additional in-bed time, and the proportion of participants using analgesics during three menstrual cycles. Five hundred and one people underwent random assignment. The primary comparison of VAS scores following the first intervention demonstrated that the classic acupoint group was more effective than both the unrelated acupoint (-4.0 mm, 95% CI -7.1 to -0.9, P = 0.010) and nonacupoint (-4.0 mm, 95% CI -7.0 to -0.9, P = 0.012) groups. However, no significant differences were detected among the three acupuncture groups for the RSS-Cox or VRS outcomes. The per-protocol analysis showed a similar pattern. No serious adverse events were noted. Specific acupoint acupuncture produced a statistically, but not clinically, significant effect compared with unrelated acupoint and nonacupoint acupuncture in primary dysmenorrhea patients. Future studies should focus on the effects of multiple-point acupuncture on primary dysmenorrhea. Wiley Periodicals, Inc.
Electron scattering intensities and Patterson functions of Skyrmions
NASA Astrophysics Data System (ADS)
Karliner, M.; King, C.; Manton, N. S.
2016-06-01
The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.
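For the first method, the Born-approximation form factor of a spherically averaged density reduces to a radial integral, F(q) = (4π/Z) ∫ ρ(r) [sin(qr)/(qr)] r² dr. The sketch below evaluates it for a uniform sphere, an illustrative stand-in for the Skyrmion charge density (not the model of the paper), so that the first zero can be checked against the known value qR ≈ 4.493:

```python
import numpy as np

# Born-approximation form factor of a spherically averaged charge density:
#   F(q) = (4*pi / Z) * Integral[ rho(r) * sin(q r)/(q r) * r^2 , dr ]
# A uniform sphere of radius R stands in for the Skyrmion density here.
R, Z = 1.0, 1.0
r = np.linspace(1e-6, R, 4000)
dr = r[1] - r[0]
rho = np.full_like(r, Z / (4.0 / 3.0 * np.pi * R**3))

def form_factor(q):
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r)
    return 4.0 * np.pi / Z * np.sum(rho * np.sinc(q * r / np.pi) * r**2) * dr

q = np.linspace(0.05, 10.0, 2000)
F = np.array([form_factor(qi) for qi in q])

first_zero = q[np.argmax(np.diff(np.sign(F)) != 0)]
print("first zero of F(q) near q =", round(float(first_zero), 3), "(analytic ~4.493/R)")
```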
Isoelectric points and points of zero charge of metal (hydr)oxides: 50 years after Parks' review.
Kosmulski, Marek
2016-12-01
The pH-dependent surface charging of metal (hydr)oxides is reviewed on the occasion of the 50th anniversary of the publication by G.A. Parks: "Isoelectric points of solid oxides, solid hydroxides, and aqueous hydroxo complex systems" in Chemical Reviews. The point of zero charge (PZC) and isoelectric point (IEP) became standard parameters used to characterize metal oxides in aqueous dispersions, and they determine the adsorption (surface excess) of ions, the stability against coagulation, the rheological properties of dispersions, etc. They are commonly used in many branches of science, including mineral processing, soil science, materials science, geochemistry, environmental engineering, and corrosion science. Parks established the standard procedures and experimental conditions that are required to obtain reliable and reproducible values of PZC and IEP. The field is very active, with the number of related papers exceeding 300 a year, and the standards established by Parks remain valid. The relevant experimental techniques have improved over the years; in particular, measurements of electrophoretic mobility have become easier and more reliable, and the numerical values of PZC and IEP compiled by Parks have been confirmed by contemporary publications, with a few exceptions. The present paper is an up-to-date compilation of the values of PZC and IEP of metal oxides. Unlike former reviews by the same author, which were more comprehensive, only a limited number of selected results are presented and discussed here. In addition to the results obtained by means of classical methods (titration and electrokinetic methods), new methods and correlations found over the recent 50 years are presented. Copyright © 2016 Elsevier B.V. All rights reserved.
Tüzün, Emİne Handan; Gıldır, Sıla; Angın, Ender; Tecer, Büşra Hande; Dana, Kezban Öztürk; Malkoç, Mehtap
2017-09-01
[Purpose] We compared the effectiveness of dry needling with a classical physiotherapy program in patients with chronic low-back pain caused by lumbar disc hernia (LHNP). [Subjects and Methods] In total, 34 subjects were allocated randomly to the study (n=18) and control groups (n=16). In the study group, dry needling was applied using acupuncture needles. The control group performed a home exercise program in addition to hot pack, TENS, and ultrasound applications. Pain was assessed with the short form of the McGill Pain Questionnaire. The number of trigger points and their pressure sensitivity were evaluated with a physical examination (palpation). The Beck Depression Inventory was used to assess depression. The Tampa Kinesiophobia Scale was used to assess fear of movement. [Results] In the study group, the calculated Cohen's effect sizes were bigger than those in the control group in terms of pain, trigger point-related variables, and fear of movement. Effect sizes for reducing depressive symptoms were similar in both groups. [Conclusion] These results suggest that dry needling can be an effective treatment for reducing pain, number of trigger points, sensitivity, and kinesiophobia in patients with chronic low-back pain caused by lumbar disc hernia.
All about Eve: Secret Sharing using Quantum Effects
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
2005-01-01
This document discusses the nature of light (including classical light and photons), encryption, quantum key distribution (QKD), light polarization, and beamsplitters, and their application to information communication. A quantum of light, called a photon, represents the smallest possible subdivision of radiant energy (light). The QKD key generation sequence is outlined: the receiver broadcasts an initial signal indicating reception availability; timing pulses from the sender provide a reference for gated detection of photons; the sender generates photons with random polarization while the receiver detects photons with random polarization; and the two parties communicate via a data link to mutually establish random keys. The QKD network vision includes inter-SATCOM, point-to-point ground fiber, and SATCOM-fiber nodes. QKD offers an unconditionally secure method of exchanging encryption keys. Ongoing research will focus on how to increase the key generation rate.
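As a toy illustration of the key-generation sequence described here (a generic BB84-style sifting simulation with an ideal channel and no eavesdropper, not the specific hardware or protocol of the system discussed), sender and receiver choose random polarization bases and keep only the positions where the bases match:

```python
import numpy as np

# Toy BB84-style sifting: the sender encodes random bits in randomly chosen
# bases; the receiver measures in its own random bases; both then compare
# bases over a public channel and keep only the matching positions.
rng = np.random.default_rng(5)
n = 32

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)          # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

# Ideal channel: Bob reads Alice's bit when bases match, a random bit otherwise.
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

key_alice = alice_bits[match]
key_bob = bob_bits[match]
print("sifted key length:", match.sum(), "of", n)
print("keys agree       :", np.array_equal(key_alice, key_bob))
```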
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inno, L.; Bono, G.; Buonanno, R.
2013-02-10
We present the largest near-infrared (NIR) data sets, JHKs, ever collected for classical Cepheids in the Magellanic Clouds (MCs). We selected fundamental (FU) and first-overtone (FO) pulsators, and found 4150 (2571 FU, 1579 FO) Cepheids for the Small Magellanic Cloud (SMC) and 3042 (1840 FU, 1202 FO) for the Large Magellanic Cloud (LMC). The current sample is 2-3 times larger than any sample used in previous investigations with NIR photometry. We also discuss optical VI photometry from OGLE-III. NIR and optical-NIR Period-Wesenheit (PW) relations are linear over the entire period range (0.0 < log P_FU ≤ 1.65) and their slopes are, within the intrinsic dispersions, common between the MCs. These are consistent with recent results from pulsation models and observations suggesting that the PW relations are minimally affected by the metal content. The new FU and FO PW relations were calibrated using a sample of Galactic Cepheids with distances based on trigonometric parallaxes and Cepheid pulsation models. Using FU Cepheids we found true distance moduli of 18.45 ± 0.02 (random) ± 0.10 (systematic) mag (LMC) and 18.93 ± 0.02 (random) ± 0.10 (systematic) mag (SMC). These estimates are the weighted mean over 10 PW relations, and the systematic errors account for uncertainties in the zero point and in the reddening law. We found similar distances using FO Cepheids: 18.60 ± 0.03 (random) ± 0.10 (systematic) mag (LMC) and 19.12 ± 0.03 (random) ± 0.10 (systematic) mag (SMC). These new MC distances lead to the relative distance Δμ = 0.48 ± 0.03 mag (FU, log P = 1) and Δμ = 0.52 ± 0.03 mag (FO, log P = 0.5), which agrees quite well with previous estimates based on robust distance indicators.
Disease Mapping of Zero-excessive Mesothelioma Data in Flanders
Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel
2016-01-01
Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID: 27908590
Stochastic gradient ascent outperforms gamers in the Quantum Moves game
NASA Astrophysics Data System (ADS)
Sels, Dries
2018-04-01
In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016), 10.1038/nature17620] explore the possibility of using video games to help design quantum control protocols. The authors present a game called "Quantum Moves" (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that, "players succeed where purely numerical optimization fails." Moreover, by harnessing the player strategies, they can "outperform the most prominent established numerical methods." The aim of this Rapid Communication is to analyze the problem in detail and show that those claims are untenable. In fact, without any prior knowledge and starting from a random initial seed, a simple stochastic local optimization method finds near-optimal solutions which outperform all players. Counterdiabatic driving can even be used to generate protocols without resorting to numeric optimization. The analysis results in an accurate analytic estimate of the quantum speed limit which, apart from zero-point motion, is shown to be entirely classical in nature. The latter might explain why gamers are reasonably good at the game. A simple modification of the BringHomeWater challenge is proposed to test this hypothesis.
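A minimal sketch of the kind of optimizer referred to, generic stochastic hill climbing from several random seeds on a toy objective (not the actual quantum control problem or code of the Rapid Communication):

```python
import numpy as np

# Stochastic local optimization from random initial seeds: propose a small
# random perturbation of the current parameter vector and keep it only if the
# score improves. The "fidelity" here is a toy stand-in for a transfer fidelity.
rng = np.random.default_rng(6)

def fidelity(u):
    return np.exp(-np.sum((u - 0.7)**2))

best = -np.inf
for seed in range(20):                 # several independent random seeds
    u = rng.uniform(0, 1, size=16)
    f = fidelity(u)
    for _ in range(2000):
        trial = u + rng.normal(0, 0.05, size=16)
        ft = fidelity(trial)
        if ft > f:                     # greedy: accept only improvements
            u, f = trial, ft
    best = max(best, f)

print("best fidelity found:", round(best, 6))   # approaches 1.0
```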
Classical many-particle systems with unique disordered ground states
NASA Astrophysics Data System (ADS)
Zhang, G.; Stillinger, F. H.; Torquato, S.
2017-10-01
Classical ground states (global energy-minimizing configurations) of many-particle systems are typically unique crystalline structures, implying zero enumeration entropy of distinct patterns (aside from trivial symmetry operations). By contrast, the few previously known disordered classical ground states of many-particle systems are all high-entropy (highly degenerate) states. Here we show computationally that our recently proposed "perfect-glass" many-particle model [Sci. Rep. 6, 36963 (2016), 10.1038/srep36963] possesses disordered classical ground states with zero entropy: a highly counterintuitive situation. For all of the system sizes, parameters, and space dimensions that we have numerically investigated, the disordered ground states are unique in the sense that they can always be superposed onto each other or their mirror image. At low energies, the density of states obtained from simulations matches that calculated from the harmonic approximation near a single ground state, further confirming ground-state uniqueness. Our discovery provides singular examples in which entropy and disorder are at odds with one another. The zero-entropy ground states provide a unique perspective on the celebrated Kauzmann entropy crisis, in which the extrapolated entropy of a supercooled liquid drops below that of the crystal. We expect our disordered unique patterns to be of value in fields beyond glass physics, including applications in cryptography as pseudorandom functions with tunable computational complexity.
Kamali, Fahimeh; Mirkhani, Hossein; Nematollahi, Ahmadreza; Heidari, Saeed; Moosavi, Elahesadat; Mohamadi, Marzieh
2017-04-01
Transcutaneous electrical nerve stimulation (TENS) is a widely-practiced method to increase blood flow in clinical practice. The best location for stimulation to achieve optimal blood flow has not yet been determined. We compared the effect of TENS application at sympathetic ganglions and acupuncture points on blood flow in the foot of healthy individuals. Seventy-five healthy individuals were randomly assigned to three groups. The first group received cutaneous electrical stimulation at the thoracolumbar sympathetic ganglions. The second group received stimulation at acupuncture points. The third group received stimulation in the mid-calf area as a control group. Blood flow was recorded at time zero as baseline and every 3 minutes after baseline during stimulation, with a laser Doppler flow-meter. Individuals who received sympathetic ganglion stimulation showed significantly greater blood flow than those receiving acupuncture point stimulation or those in the control group (p<0.001). Data analysis revealed that blood flow at different times during stimulation increased significantly from time zero in each group. Therefore, the application of low-frequency TENS at the thoracolumbar sympathetic ganglions was more effective in increasing peripheral blood circulation than stimulation at acupuncture points. Copyright © 2017 Medical Association of Pharmacopuncture Institute. Published by Elsevier B.V. All rights reserved.
Control of mechanical systems by the mixed "time and expenditure" criterion
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
The optimal controlled motion of a mechanical system, governed by a linear system of ODEs with constant coefficients and piecewise constant control components, is considered. The number of control switching points and the heights of the control steps are taken as preset. The optimized functional is a combination of the classical time criterion and an "expenditure criterion", which equals the total area of all steps of all control components. In the absence of control, the solution of the system is equal to the sum of components (frequency components) corresponding to different eigenvalues of the matrix of the ODE system. Admissible controls are those that drive to zero, at a time moment that is not fixed in advance, the previously chosen frequency components of the solution. An algorithm for finding the control switching points, based on the necessary minimum conditions for the mixed criterion, is proposed.
Exact, E = 0, classical and quantum solutions for general power-law oscillators
NASA Technical Reports Server (NTRS)
Nieto, Michael Martin; Daboul, Jamil
1995-01-01
For zero energy, E = 0, we derive exact, classical and quantum solutions for all power-law oscillators with potentials V(r) = -γ/r^ν, γ > 0 and -∞ < ν < ∞. When the angular momentum is non-zero, these solutions lead to the classical orbits r(t) = [cos μ(φ(t) - φ₀)]^(1/μ), with μ = ν/2 - 1 ≠ 0. For ν > 2, the orbits are bound and go through the origin. We calculate the periods and precessions of these bound orbits, and graph a number of specific examples. The unbound orbits are also discussed in detail. Quantum mechanically, this system is also exactly solvable. We find that when ν > 2 the solutions are normalizable (bound), as in the classical case. Further, there are normalizable discrete, yet unbound, states. They correspond to unbound classical particles which reach infinity in a finite time. Finally, the number of space dimensions of the system can determine whether or not an E = 0 state is bound. These and other interesting comparisons to the classical system will be discussed.
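As a quick consistency check of the quoted orbit (a sketch under the stated assumptions E = 0 and nonzero angular momentum L, not reproduced from the paper), energy conservation in the plane gives

\[
\frac{m\dot r^{2}}{2}+\frac{L^{2}}{2mr^{2}}-\frac{\gamma}{r^{\nu}}=0,
\qquad L=mr^{2}\dot\phi ,
\]

so with $u = 1/r$ the orbit equation becomes

\[
\left(\frac{du}{d\phi}\right)^{2}+u^{2}=\frac{2m\gamma}{L^{2}}\,u^{\nu}.
\]

The ansatz $r^{\mu}=A\cos\mu(\phi-\phi_{0})$ with $\mu=\nu/2-1$ solves this equation provided $A^{2}=2m\gamma/L^{2}$, reproducing the quoted orbit up to the amplitude $A$ (which equals one in units where $2m\gamma/L^{2}=1$).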
The effect of surface grain reversal on the AC losses of sintered Nd-Fe-B permanent magnets
NASA Astrophysics Data System (ADS)
Moore, Martina; Roth, Stefan; Gebert, Annett; Schultz, Ludwig; Gutfleisch, Oliver
2015-02-01
Sintered Nd-Fe-B magnets are exposed to AC magnetic fields in many applications, e.g. in permanent magnet electric motors. We have measured the AC losses of sintered Nd-Fe-B magnets in a closed-circuit arrangement using AC fields with root-mean-square values up to 80 mT (peak amplitude 113 mT) over the frequency range 50 to 1000 Hz. Two magnet grades with different dysprosium content were investigated. Around the remanence point the low-grade material (1.7 wt% Dy) showed significant hysteresis losses, whereas the losses in the high-grade material (8.9 wt% Dy) were dominated by classical eddy currents. Kerr microscopy images revealed that the hysteresis losses measured for the low-grade magnet can be mainly ascribed to grains at the sample surface with multiple domains. This was further confirmed when the high-grade material was subsequently exposed to DC and AC magnetic fields. Here a larger number of surface grains with multiple domains are also present once the step in the demagnetization curve attributed to surface grain reversal is reached, and a rise in the measured hysteresis losses is evident. If, in the low-grade material, the operating point is slightly offset from the remanence point such that zero field is not bypassed, its AC losses can also be fairly well described with classical eddy current theory.
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
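The brute-force Monte Carlo baseline against which such approximations can be checked is straightforward; the sketch below (with hypothetical per-axis sigma values, not taken from the paper) samples the three Gaussian components directly and reads off the magnitude statistics:

```python
import numpy as np

# Hypothetical per-axis 1-sigma values (km/s); the three Delta-v components are
# independent, zero-mean Gaussians with possibly unequal standard deviations.
sigmas = np.array([0.010, 0.004, 0.002])

rng = np.random.default_rng(0)
n = 200_000
dv = rng.normal(0.0, sigmas, size=(n, 3))      # sample Cartesian components
mag = np.linalg.norm(dv, axis=1)               # |Delta v| for each sample

print("mean |dv|:", mag.mean())
print("std  |dv|:", mag.std(ddof=1))
# Points of the cumulative distribution, e.g. the 99th percentile is a
# propellant-sizing quantity of interest.
for p in (0.50, 0.90, 0.99):
    print(f"{p:.0%} quantile:", np.quantile(mag, p))
```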
Trajectory description of the quantum–classical transition for wave packet interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
2016-08-15
The quantum–classical transition for wave packet interference is investigated using a hydrodynamic description. A nonlinear quantum–classical transition equation is obtained by introducing a degree of quantumness, ranging from zero to one, into the classical time-dependent Schrödinger equation. This equation provides a continuous description for the transition process of physical systems from purely quantum to purely classical regimes. In this study, the transition trajectory formalism is developed to provide a hydrodynamic description for the quantum–classical transition. The flow momentum of transition trajectories is defined by the gradient of the action function in the transition wave function, and these trajectories follow the main features of the evolving probability density. Then, the transition trajectory formalism is employed to analyze the quantum–classical transition of wave packet interference. For the collision-like wave packet interference, where the propagation velocity is faster than the spreading speed of the wave packet, the interference process remains collision-like for all degrees of quantumness. However, the interference features demonstrated by transition trajectories gradually disappear when the degree of quantumness approaches zero. For the diffraction-like wave packet interference, the interference process changes continuously from a diffraction-like to a collision-like case when the degree of quantumness gradually decreases. This study provides an insightful trajectory interpretation for the quantum–classical transition of wave packet interference.
Cola soft drinks for evaluating the bioaccessibility of uranium in contaminated mine soils.
Lottermoser, Bernd G; Schnug, Ewald; Haneklaus, Silvia
2011-08-15
There is a rising need for scientifically sound and quantitative as well as simple, rapid, cheap and readily available soil testing procedures. The purpose of this study was to explore selected soft drinks (Coca-Cola Classic®, Diet Coke®, Coke Zero®) as indicators of bioaccessible uranium and other trace elements (As, Ce, Cu, La, Mn, Ni, Pb, Th, Y, Zn) in contaminated soils of the Mary Kathleen uranium mine site, Australia. Data of single extraction tests using Coca-Cola Classic®, Diet Coke® and Coke Zero® demonstrate that extractable arsenic, copper, lanthanum, manganese, nickel, yttrium and zinc concentrations correlate significantly with DTPA- and CaCl₂-extractable metals. Moreover, the correlation between DTPA-extractable uranium and that extracted using Coca-Cola Classic® is close to unity (+0.98), with reduced correlations for Diet Coke® (+0.66) and Coke Zero® (+0.55). Also, Coca-Cola Classic® extracts uranium concentrations near identical to DTPA, whereas distinctly higher uranium fractions were extracted using Diet Coke® and Coke Zero®. Results of this study demonstrate that the use of Coca-Cola Classic® in single extraction tests provided an excellent indication of bioaccessible uranium in the analysed soils and of uranium uptake into leaves and stems of the Sodom apple (Calotropis procera). Moreover, the unconventional reagent is superior in terms of availability, costs, preparation and disposal compared to traditional chemicals. Contaminated site assessments and rehabilitation of uranium mine sites require a solid understanding of the chemical speciation of environmentally significant elements for estimating their translocation in soils and plant uptake. Therefore, Cola soft drinks have potential applications in single extraction tests of uranium contaminated soils and may be used for environmental impact assessments of uranium mine sites, nuclear fuel processing plants and waste storage and disposal facilities. Copyright © 2011 Elsevier B.V. All rights reserved.
Computing diffusivities from particle models out of equilibrium
NASA Astrophysics Data System (ADS)
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
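For the simplest of the three models, independent random walkers, the diffusivity can also be recovered directly from the mean-squared displacement; the sketch below does this (it is not the fluctuation-based, out-of-equilibrium estimator proposed in the paper, only a familiar baseline):

```python
import numpy as np

# Recover the diffusivity of independent 1-D random walkers from the growth of
# their mean-squared displacement, MSD(t) = 2*D*t.
rng = np.random.default_rng(1)

n_walkers, n_steps, dt, step = 5000, 400, 1.0, 1.0
jumps = rng.choice([-step, step], size=(n_walkers, n_steps))
paths = np.cumsum(jumps, axis=1)                     # positions after each step

t = dt * np.arange(1, n_steps + 1)
msd = (paths**2).mean(axis=0)                        # average over walkers
D_est = np.polyfit(t, msd, 1)[0] / 2.0               # slope / 2

print("estimated D:", D_est)                         # ~ step**2 / (2*dt) = 0.5
```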
Cendagorta, Joseph R; Powers, Anna; Hele, Timothy J H; Marsalek, Ondrej; Bačić, Zlatko; Tuckerman, Mark E
2016-11-30
Clathrate hydrates hold considerable promise as safe and economical materials for hydrogen storage. Here we present a quantum mechanical study of H2 and D2 diffusion through a hexagonal face shared by two large cages of clathrate hydrates over a wide range of temperatures. Path integral molecular dynamics simulations are used to compute the free-energy profiles for the diffusion of H2 and D2 as a function of temperature. Ring polymer molecular dynamics rate theory, incorporating both exact quantum statistics and approximate quantum dynamical effects, is utilized in the calculations of the H2 and D2 diffusion rates in a broad temperature interval. We find that the shape of the quantum free-energy profiles and their height relative to the classical free energy barriers at a given temperature, as well as the rate of diffusion, are strongly affected by competing quantum effects: above 25 K, zero-point energy (ZPE) perpendicular to the reaction path for diffusion between cavities decreases the quantum rate compared to the classical rate, whereas at lower temperatures tunneling outcompetes the ZPE and as a result the quantum rate is greater than the classical rate.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1980-01-01
Population model coefficients were chosen to simulate a saturated 2^4 fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F distribution, on an order statistics distribution of Cochran's, and on a combination of the two. Results of the comparisons and a recommended strategy are given.
Minimizing Statistical Bias with Queries.
1995-09-14
method for optimally selecting these points would offer enormous savings in time and money. An active learning system will typically attempt to select data... research in active learning assumes that the second term of Equation 2 is approximately zero, that is, that the learner is unbiased. If this is the case... outperforms the variance-minimizing algorithm and random exploration. ...and effective strategy for active learning. I have given empirical evidence that, with
NASA Astrophysics Data System (ADS)
Habershon, Scott; Manolopoulos, David E.
2009-12-01
The approximate quantum mechanical ring polymer molecular dynamics (RPMD) and linearized semiclassical initial value representation (LSC-IVR) methods are compared and contrasted in a study of the dynamics of the flexible q-TIP4P/F water model at room temperature. For this water model, a RPMD simulation gives a diffusion coefficient that is only a few percent larger than the classical diffusion coefficient, whereas a LSC-IVR simulation gives a diffusion coefficient that is three times larger. We attribute this discrepancy to the unphysical leakage of initially quantized zero point energy (ZPE) from the intramolecular to the intermolecular modes of the liquid as the LSC-IVR simulation progresses. In spite of this problem, which is avoided by construction in RPMD, the LSC-IVR may still provide a useful approximation to certain short-time dynamical properties which are not so strongly affected by the ZPE leakage. We illustrate this with an application to the liquid water dipole absorption spectrum, for which the RPMD approximation breaks down at frequencies in the O-H stretching region owing to contamination from the internal modes of the ring polymer. The LSC-IVR does not suffer from this difficulty and it appears to provide quite a promising way to calculate condensed phase vibrational spectra.
Fluctuating Selection in the Moran
Dean, Antony M.; Lehman, Clarence; Yi, Xiao
2017-01-01
Contrary to classical population genetics theory, experiments demonstrate that fluctuating selection can protect a haploid polymorphism in the absence of frequency-dependent effects on fitness. Using forward simulations with the Moran model, we confirm our analytical results showing that a fluctuating selection regime, with a mean selection coefficient of zero, promotes polymorphism. We find that increases in heterozygosity over neutral expectations are especially pronounced when fluctuations are rapid, mutation is weak, the population size is large, and the variance in selection is large. Lowering the frequency of fluctuations makes selection more directional, and so heterozygosity declines. We also show that fluctuating selection raises dn/ds ratios for polymorphism, not only by sweeping selected alleles into the population, but also by purging the neutral variants of selected alleles as they undergo repeated bottlenecks. Our analysis shows that randomly fluctuating selection increases the rate of evolution by increasing the probability of fixation. The impact is especially noticeable when selection is strong and mutation is weak. Simulations show the increase in the rate of evolution declines as the rate of new mutations entering the population increases, an effect attributable to clonal interference. Intriguingly, fluctuating selection increases the dn/ds ratios for divergence more than for polymorphism, a pattern commonly seen in comparative genomics. Our model, which extends the classical neutral model of molecular evolution by incorporating random fluctuations in selection, accommodates a wide variety of observations, both neutral and selected, with economy. PMID: 28108586
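A minimal sketch of a Moran process with temporally fluctuating, mean-zero selection (illustrative only; not the authors' simulation code, and without mutation):

```python
import numpy as np

# Toy Moran process for a haploid two-allele population under a selection
# coefficient that fluctuates in time with mean zero.
rng = np.random.default_rng(2)

N = 200                          # population size
epochs = 400                     # number of selection epochs
events_per_epoch = N             # ~1 generation of birth-death events per epoch
sigma_s = 0.2                    # std of the fluctuating selection coefficient
i = N // 2                       # current count of allele A

for _ in range(epochs):
    s = rng.normal(0.0, sigma_s)         # this epoch's selection coefficient
    for _ in range(events_per_epoch):
        if i == 0 or i == N:
            break                        # fixation or loss: polymorphism ends
        x = i / N
        # birth chosen proportional to fitness (1 + s for A), death uniformly
        p_birth_A = (1 + s) * x / ((1 + s) * x + (1 - x))
        i += int(rng.random() < p_birth_A) - int(rng.random() < x)

print("final frequency of allele A:", i / N)
```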
Energetics and solvation structure of a dihalogen dopant (I2) in (4)He clusters.
Pérez de Tudela, Ricardo; Barragán, Patricia; Valdés, Álvaro; Prosmiti, Rita
2014-08-21
The energetics and structure of small He_N-I2 clusters are analyzed as the size of the system changes, with N up to 38. The full interaction between the I2 molecule and the He atoms is based on analytical ab initio He-I2 potentials plus the He-He interaction, obtained from first-principles calculations. The most stable structures, as a function of the number of solvent He atoms, are obtained by employing an evolutionary algorithm and compared with CCSD(T) and MP2 ab initio computations. Further, the classical description is completed by explicitly including thermal corrections and quantum features, such as zero-point-energy values and spatial delocalization. From quantum PIMC calculations, the binding energies and radial/angular probability density distributions of the thermal equilibrium state for selected-size clusters are computed at a low temperature. The sequential formation of regular shell structures is analyzed and discussed for both classical and quantum treatments.
Classical dimer model with anisotropic interactions on the square lattice
NASA Astrophysics Data System (ADS)
Otsuka, Hiromi
2009-07-01
We discuss phase transitions and the phase diagram of a classical dimer model with anisotropic interactions defined on a square lattice. For the attractive region, the perturbation of the orientational order parameter introduced by the anisotropy causes the Berezinskii-Kosterlitz-Thouless transitions from a dimer-liquid to columnar phases. According to the discussion by Nomura and Okamoto for a quantum-spin chain system [J. Phys. A 27, 5773 (1994)], we proffer criteria to determine transition points and also universal level-splitting conditions. Subsequently, we perform numerical diagonalization calculations of the nonsymmetric real transfer matrices up to linear dimension specified by L=20 and determine the global phase diagram. For the repulsive region, we find the boundary between the dimer-liquid and the strong repulsion phases. Based on the dispersion relation of the one-string motion, which exhibits a twofold “zero-energy flat band” in the strong repulsion limit, we give an intuitive account for the property of the strong repulsion phase.
Using CAS to Solve Classical Mathematics Problems
ERIC Educational Resources Information Center
Burke, Maurice J.; Burroughs, Elizabeth A.
2009-01-01
Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…
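For reference, the classical Newton iteration that the article relates to can be sketched in a few lines (a generic implementation, not the CAS-based algebraic method the article develops):

```python
import numpy as np

# Newton's method for a zero of a polynomial.
def newton_poly_zero(coeffs, x0, tol=1e-12, max_iter=50):
    """coeffs are highest-degree-first, as in numpy.polyval."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    x = x0
    for _ in range(max_iter):
        step = p(x) / dp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: x^3 - 2x - 5 = 0 (Newton's own 1669 example), root near 2.0945515
print(newton_poly_zero([1, 0, -2, -5], x0=2.0))
```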
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Atsushi; Kojima, Hidekazu; Okazaki, Susumu, E-mail: okazaki@apchem.nagoya-u.ac.jp
2014-08-28
In order to investigate proton transfer reactions in solution, mixed quantum-classical molecular dynamics calculations have been carried out based on our previously proposed quantum equation of motion for the reacting system [A. Yamada and S. Okazaki, J. Chem. Phys. 128, 044507 (2008)]. The surface hopping method was applied to describe forces acting on the solvent classical degrees of freedom. In a series of our studies, quantum and solvent effects on the reaction dynamics in solutions have been analysed in detail. Here, we report our mixed quantum-classical molecular dynamics calculations for the intramolecular proton transfer of malonaldehyde in water. The thermally activated proton transfer process, i.e., vibrational excitation in the reactant state followed by transition to the product state and vibrational relaxation in the product state, as well as the tunneling reaction, can be described by solving the equation of motion. Zero-point energy is, of course, included, too. The quantum simulation in water has been compared with the fully classical one and with the wave packet calculation in vacuum. The calculated quantum reaction rate in water was 0.70 ps⁻¹, which is about 2.5 times faster than that in vacuum, 0.27 ps⁻¹. This indicates that the solvent water accelerates the reaction. Further, the quantum calculation resulted in a reaction rate about 2 times faster than the fully classical calculation, which indicates that quantum effects enhance the reaction rate, too. The contribution from the three reaction mechanisms, i.e., tunneling, thermal activation, and barrier-vanishing reactions, is 33:46:21 in the mixed quantum-classical calculations. This clearly shows that the tunneling effect is important in the reaction.
NASA Astrophysics Data System (ADS)
Liu, Cheng-Wei
Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibit collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are less well understood and are currently a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system whose transition point is approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical phase transitions and the other for quantum phase transitions, both within the framework of KZ scaling. For classical phase transitions, I developed a non-equilibrium quench (NEQ) simulation that can completely avoid the critical slowing-down problem. For quantum phase transitions, I developed a new algorithm, named the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin glasses and random graphs. The techniques allow us to investigate these problems efficiently. From the classical perspective, using the NEQ approach I verify the universality class of 3D Ising spin glasses. I also investigate random 3-regular graphs in terms of both classical and quantum phase transitions. I demonstrate that under this simulation scheme, one can extract information associated with the classical and quantum spin-glass transitions without any knowledge prior to the simulation.
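A minimal classical illustration of a Kibble-Zurek-style quench (a plain Metropolis Ising model with a linear temperature ramp through Tc; not the NEQ or QAQMC algorithms of this work) is sketched below; slower ramps leave fewer domain walls:

```python
import numpy as np

# 2-D Ising model with single-spin Metropolis updates and a linear temperature
# ramp through the critical point Tc ~ 2.269 (J = kB = 1). Slower ramps are
# closer to adiabatic and end with fewer defects (domain walls).
rng = np.random.default_rng(3)
L = 24

def sweep(spins, T):
    for _ in range(spins.size):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def domain_wall_density(spins):
    return 0.5 * np.mean(spins != np.roll(spins, 1, 0)) + \
           0.5 * np.mean(spins != np.roll(spins, 1, 1))

for ramp_sweeps in (50, 200, 800):          # slower ramp = more nearly adiabatic
    spins = rng.choice([-1, 1], size=(L, L))
    for T in np.linspace(3.5, 1.0, ramp_sweeps):
        sweep(spins, T)
    print(ramp_sweeps, "sweeps ->", round(domain_wall_density(spins), 3))
```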
Gravitational instability of slowly rotating isothermal spheres
NASA Astrophysics Data System (ADS)
Chavanis, P. H.
2002-12-01
We discuss the statistical mechanics of rotating self-gravitating systems by allowing properly for the conservation of angular momentum. We study analytically the case of slowly rotating isothermal spheres by expanding the solutions of the Boltzmann-Poisson equation in a series of Legendre polynomials, adapting the procedure introduced by Chandrasekhar (1933) for distorted polytropes. We show how the classical spiral of Lynden-Bell & Wood (1967) in the temperature-energy plane is deformed by rotation. We find that gravitational instability occurs sooner in the microcanonical ensemble and later in the canonical ensemble. According to standard turning point arguments, the onset of the collapse coincides with the minimum energy or minimum temperature state in the series of equilibria. Interestingly, it happens to be close to the point of maximum flattening. We generalize the singular isothermal solution to the case of a slowly rotating configuration. We also consider slowly rotating configurations of the self-gravitating Fermi gas at non-zero temperature.
Laplace-Runge-Lenz vector in quantum mechanics in noncommutative space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gáliková, Veronika; Kováčik, Samuel; Prešnajder, Peter
2013-12-15
The main point of this paper is to examine a "hidden" dynamical symmetry connected with the conservation of Laplace-Runge-Lenz vector (LRL) in the hydrogen atom problem solved by means of non-commutative quantum mechanics (NCQM). The basic features of NCQM will be introduced to the reader, the key one being the fact that the notion of a point, or a zero distance in the considered configuration space, is abandoned and replaced with a "fuzzy" structure in such a way that the rotational invariance is preserved. The main facts about the conservation of LRL vector in both classical and quantum theory will be reviewed. Finally, we will search for an analogy in the NCQM, provide our results and their comparison with the QM predictions. The key notions we are going to deal with are non-commutative space, Coulomb-Kepler problem, and symmetry.
Crits-Christoph, Paul; Gallop, Robert; Sadicario, Jaclyn S; Markell, Hannah M; Calsyn, Donald A; Tang, Wan; He, Hua; Tu, Xin; Woody, George
2014-01-16
The objective of the current study was to examine predictors and moderators of response to two HIV sexual risk interventions of different content and duration for individuals in substance abuse treatment programs. Participants were recruited from community drug treatment programs participating in the National Institute on Drug Abuse Clinical Trials Network (CTN). Data were pooled from two parallel randomized controlled CTN studies (one with men and one with women) each examining the impact of a multi-session motivational and skills training program, in comparison to a single-session HIV education intervention, on the degree of reduction in unprotected sex from baseline to 3- and 6- month follow-ups. The findings were analyzed using a zero-inflated negative binomial (ZINB) model. Severity of drug use (p < .01), gender (p < .001), and age (p < .001) were significant main effect predictors of number of unprotected sexual occasions (USOs) at follow-up in the non-zero portion of the ZINB model (men, younger participants, and those with greater severity of drug/alcohol abuse have more USOs). Monogamous relationship status (p < .001) and race/ethnicity (p < .001) were significant predictors of having at least one USO vs. none (monogamous individuals and African Americans were more likely to have at least one USO). Significant moderators of intervention effectiveness included recent sex under the influence of drugs/alcohol (p < .01 in non-zero portion of model), duration of abuse of primary drug (p < .05 in non-zero portion of model), and Hispanic ethnicity (p < .01 in the zero portion, p < .05 in the non-zero portion of model). These predictor and moderator findings point to ways in which patients may be selected for the different HIV sexual risk reduction interventions and suggest potential avenues for further development of the interventions for increasing their effectiveness within certain subgroups.
Entanglement entropy of dispersive media from thermodynamic entropy in one higher dimension.
Maghrebi, M F; Reid, M T H
2015-04-17
A dispersive medium becomes entangled with zero-point fluctuations in the vacuum. We consider an arbitrary array of material bodies weakly interacting with a quantum field and compute the quantum mutual information between them. It is shown that the mutual information in D dimensions can be mapped to classical thermodynamic entropy in D+1 dimensions. As a specific example, we compute the mutual information both analytically and numerically for a range of separation distances between two bodies in D=2 dimensions and find a logarithmic correction to the area law at short separations. A key advantage of our method is that it allows the strong subadditivity property to be easily verified.
Bounded energy exchange as an alternative to the third law of thermodynamics
NASA Astrophysics Data System (ADS)
Heidrich, Matthias
2016-10-01
This paper introduces a postulate explicitly forbidding the extraction of an infinite amount of energy from a thermodynamic system. It also introduces the assumption that no measuring equipment is capable of detecting arbitrarily small energy exchanges. The Kelvin formulation of the second law is reinterpreted accordingly. Then statements related to both the unattainability version and the entropic version of the third law are derived. The value of any common thermodynamic potential of a one-component system at absolute zero of temperature is ascertained if some assumptions with regard to the state space can be made. The point of view is the phenomenological, macroscopic and non-statistical one of classical thermodynamics.
A superparticle on the super Riemann surface
NASA Astrophysics Data System (ADS)
Matsumoto, Shuji; Uehara, Shozo; Yasui, Yukinori
1990-02-01
The free motion of a nonrelativistic superparticle on the super Riemann surface (SRS) of genus h≥2 is investigated. Geodesics or classical paths are given explicitly on the super Poincaré upper half-plane SH, a universal covering space of the SRS, and the paths with some suitable initial conditions yield periodic orbits on the SRS. The periodic orbits are unstable and the system is chaotic. Quantum mechanics is solved on the universal covering space SH and the heat kernel is given on the SRS. This leads to a superanalog of the Selberg trace formula. The Selberg super zeta function is introduced whose zero points and poles determine the energy spectrum on the SRS.
Normal forms for Hopf-Zero singularities with nonconservative nonlinear part
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh; Sanders, Jan A.
In this paper we are concerned with the simplest normal form computation of the systems ẋ = 2x f(x, y² + z²), ẏ = z + y f(x, y² + z²), ż = -y + z f(x, y² + z²), where f is a formal function with real coefficients and without any constant term. These are the classical normal forms of a larger family of systems with Hopf-Zero singularity. Indeed, these are defined such that this family would be a Lie subalgebra for the space of all classical normal form vector fields with Hopf-Zero singularity. The simplest normal forms and simplest orbital normal forms of this family with nonzero quadratic part are computed. We also obtain the simplest parametric normal form of any non-degenerate perturbation of this family within the Lie subalgebra. The symmetry group of the simplest normal forms is also discussed. This is a part of our results in decomposing the normal forms of Hopf-Zero singular systems into systems with a first integral and nonconservative systems.
Solving the patient zero inverse problem by using generalized simulated annealing
NASA Astrophysics Data System (ADS)
Menin, Olavo H.; Bauch, Chris T.
2018-01-01
Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing algorithm (GSA) to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) on a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm's performance with respect to the generalization parameter q̃_v and the fraction ρ of infected nodes for which infection was ascertained is assessed. Numerical experiments show the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence to support that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.
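For orientation, the accept/reject machinery of simulated annealing in its classical form (the baseline against which GSA is compared) can be sketched on a toy one-dimensional objective; this is not the epidemic source-finding problem of the paper, and it uses the Boltzmann rather than the generalized Tsallis visiting distribution:

```python
import numpy as np

# Classical simulated annealing on a toy multimodal function: accept any
# improving move, and accept a worsening move with probability exp(-dE/T),
# where the temperature T is slowly lowered.
rng = np.random.default_rng(4)

def energy(x):
    return 0.1 * x**2 + np.sin(3.0 * x)        # many local minima

x = rng.uniform(-10, 10)
e = energy(x)
best_x, best_e = x, e

for step in range(20_000):
    T = 2.0 * 0.999**step                      # geometric cooling schedule
    x_new = x + rng.normal(0.0, 0.5)           # local Gaussian proposal
    e_new = energy(x_new)
    if e_new < e or rng.random() < np.exp(-(e_new - e) / T):
        x, e = x_new, e_new
        if e < best_e:
            best_x, best_e = x, e

print("best x:", round(best_x, 4), "energy:", round(best_e, 4))
```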
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2018-06-05
The change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and shortest Hamiltonian path (SHP), with successful applications in the determination of authorship of a classic novel, the detection of change in a network over time, the detection of cell divisions, etc. However, these Euclidean graph-based tests may fail if a dataset contains random interferences. To solve this problem, we present a powerful non-Euclidean SHP-based test, which is consistent and distribution-free. The simulation shows that the test is more powerful than both Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability in detecting both landing and departure times in video data of bees' flower visits is illustrated.
Chaos in matrix models and black hole evaporation
Berkowitz, Evan; Hanada, Masanori; Maltz, Jonathan
2016-12-19
Is the evaporation of a black hole described by a unitary theory? In order to shed light on this question—especially aspects of this question such as a black hole's negative specific heat—we consider the real-time dynamics of a solitonic object in matrix quantum mechanics, which can be interpreted as a black hole (black zero-brane) via holography. We point out that the chaotic nature of the system combined with the flat directions of its potential naturally leads to the emission of D0-branes from the black brane, which is suppressed in the large N limit. Simple arguments show that the black zero-brane, like the Schwarzschild black hole, has negative specific heat, in the sense that the temperature goes up when it evaporates by emitting D0-branes. While the largest Lyapunov exponent grows during the evaporation, the Kolmogorov-Sinai entropy decreases. These are consequences of the generic properties of matrix models and gauge theory. Based on these results, we give a possible geometric interpretation of the eigenvalue distribution of matrices in terms of gravity. Applying the same argument in the M-theory parameter region, we provide a scenario to derive the Hawking radiation of massless particles from the Schwarzschild black hole. In conclusion, we suggest that by adding a fraction of the quantum effects to the classical theory, we can obtain a matrix model whose classical time evolution mimics the entire life of the black brane, from its formation to the evaporation.
Heterogeneity in the Strehler-Mildvan general theory of mortality and aging.
Zheng, Hui; Yang, Yang; Land, Kenneth C
2011-02-01
This study examines and further develops the classic Strehler-Mildvan (SM) general theory of mortality and aging. Three predictions from the SM theory are tested by examining the age dependence of mortality patterns for 42 countries (including developed and developing countries) over the period 1955-2003. By applying finite mixture regression models, principal component analysis, and random-effects panel regression models, we find that (1) the negative correlation between the initial adulthood mortality rate and the rate of increase in mortality with age derived in the SM theory exists but is not constant; (2) within the SM framework, the implied age of expected zero vitality (expected maximum survival age) also is variable over time; (3) longevity trajectories are not homogeneous among the countries; (4) Central American and Southeast Asian countries have higher expected age of zero vitality than other countries in spite of relatively disadvantageous national ecological systems; (5) within the group of Central American and Southeast Asian countries, a more disadvantageous national ecological system is associated with a higher expected age of zero vitality; and (6) larger agricultural and food productivities, higher labor participation rates, higher percentages of population living in urban areas, and larger GDP per capita and GDP per unit of energy use are important beneficial national ecological system factors that can promote survival. These findings indicate that the SM theory needs to be generalized to incorporate heterogeneity among human populations.
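For orientation, the Strehler-Mildvan framework the study tests can be summarized by two textbook relations (standard notation, not the authors'): the Gompertz law for adult mortality and the SM correlation between its two parameters.

```latex
\mu(x) = \mu_0\, e^{\gamma x},
\qquad
\ln \mu_0 = \ln K - B\,\gamma .
```

Together these imply μ(B) = K for every population, so all Gompertz lines meet at a common age B, interpreted as the implied age of expected zero vitality; findings (1) and (2) above amount to saying that K and B are not in fact constants.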
de Tudela, Ricardo Pérez; Barragán, Patricia; Prosmiti, Rita; Villarreal, Pablo; Delgado-Barrio, Gerardo
2011-03-31
Classical and path integral Monte Carlo (CMC, PIMC) "on the fly" calculations are carried out to investigate anharmonic quantum effects on the thermal equilibrium structure of the H5(+) cluster. Our computational approach combines these nuclear classical and quantum statistical methods with first-principles density functional theory (DFT) electronic structure calculations. The interaction energies are computed within the DFT framework using the B3(H) hybrid functional, specially designed for hydrogen-only systems. The global minimum of the potential is predicted to be a nonplanar configuration of C(2v) symmetry, while the next three low-lying stationary points on the surface correspond to extremely low-energy barriers for the internal proton transfer and to the rotation of the H2 molecules, around the C2 axis of H5(+), connecting the symmetric C(2v) minima in the planar and nonplanar orientations. On the basis of full-dimensional converged PIMC calculations, results on the quantum vibrational zero-point energy (ZPE) and state of H5(+) are reported at a low temperature of 10 K, and the influence of the above-mentioned topological features of the surface on its probability distributions is clearly demonstrated.
Dynamical conductivity at the dirty superconductor-metal quantum phase transition.
Del Maestro, Adrian; Rosenow, Bernd; Hoyos, José A; Vojta, Thomas
2010-10-01
We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments.
Two-point correlation function for Dirichlet L-functions
NASA Astrophysics Data System (ADS)
Bogomolny, E.; Keating, J. P.
2013-03-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
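The "conjectured random-matrix form" referred to here is the GUE two-point correlation function of Montgomery's conjecture; it is quoted below (in unfolded variables) only for context and is not the finite-E result derived in the paper.

```latex
R_2(x) \;=\; 1 - \left( \frac{\sin \pi x}{\pi x} \right)^{2},
```

where x is the separation of zeros measured in units of the mean spacing; the paper's expression reduces to this as E → ∞.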
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
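As a concrete illustration of the chapter's central algorithm, here is a minimal Metropolis sampler for a one-dimensional Boltzmann distribution; it is a generic sketch (NumPy assumed), not code from the book.

```python
import numpy as np

def metropolis(energy, x0, beta, n_steps, step=0.5, rng=None):
    """Minimal Metropolis sampler for a Boltzmann distribution ~ exp(-beta * E(x))."""
    rng = rng or np.random.default_rng()
    x, E = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)      # symmetric trial move
        E_new = energy(x_new)
        if rng.random() < np.exp(-beta * (E_new - E)):   # accept/reject
            x, E = x_new, E_new
        samples.append(x)
    return np.array(samples)

# classical harmonic oscillator E = x^2/2 at beta = 1: expect <x^2> = 1/beta = 1
xs = metropolis(lambda x: 0.5 * x**2, 0.0, beta=1.0, n_steps=100_000)
print(xs[10_000:].var())   # ~1.0 after discarding burn-in
```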
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2016-06-14
A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α. Copyright © 2016 Elsevier Ltd. All rights reserved.
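The effect is easy to reproduce numerically. The following sketch, with illustrative smoothness and sample-size parameters rather than the paper's datasets, estimates how often a pointwise 0D t-test at α = 0.05 declares significance somewhere along smooth 1D Gaussian noise.

```python
import numpy as np
from scipy import stats
from scipy.ndimage import gaussian_filter1d

def false_positive_rate(n_sim=2000, n_per_group=10, n_nodes=101, fwhm=20, alpha=0.05, seed=0):
    """Estimate how often a pointwise 0D t-test 'finds' an effect in smooth 1D noise."""
    rng = np.random.default_rng(seed)
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))          # FWHM -> Gaussian sigma
    t_crit = stats.t.isf(alpha / 2, df=2 * n_per_group - 2)
    hits = 0
    for _ in range(n_sim):
        a = gaussian_filter1d(rng.standard_normal((n_per_group, n_nodes)), sigma, axis=1)
        b = gaussian_filter1d(rng.standard_normal((n_per_group, n_nodes)), sigma, axis=1)
        t = stats.ttest_ind(a, b, axis=0).statistic       # pointwise two-sample t
        hits += np.any(np.abs(t) > t_crit)                # any node crossing the 0D threshold
    return hits / n_sim

print(false_positive_rate())   # far above the nominal 0.05
```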
Practical quantum random number generator based on measuring the shot noise of vacuum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Yong; Zou Hongxin; Tian Liang
2010-06-15
The shot noise of vacuum states is a kind of quantum noise and is totally random. In this paper a nondeterministic random number generation scheme based on measuring the shot noise of vacuum states is presented and experimentally demonstrated. We use a homodyne detector to measure the shot noise of vacuum states. Considering that the frequency bandwidth of our detector is limited, we derive the optimal sampling rate so that sampling points have the least correlation with each other. We also choose a method to extract random numbers from sampling values, and prove that the influence of classical noise can be avoided with this method so that the detector does not have to be shot-noise limited. The random numbers generated with this scheme have passed ent and diehard tests.
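The bit-extraction step can be illustrated with simulated Gaussian samples standing in for the homodyne output. The sketch below is only a toy (median thresholding plus von Neumann debiasing); it is not the extraction method or sampling-rate optimization described in the paper.

```python
import numpy as np

def bits_from_samples(samples):
    """Toy extraction: threshold at the median, then von Neumann debiasing."""
    raw = (samples > np.median(samples)).astype(np.uint8)
    pairs = raw[: len(raw) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]          # von Neumann: keep only 01 / 10 pairs
    return pairs[keep, 0]                      # first bit of each kept pair

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 1_000_000)        # stand-in for homodyne shot-noise samples
bits = bits_from_samples(noise)
print(len(bits), bits.mean())                  # roughly half the pairs survive, mean ~0.5
```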
Modeling the rate of HIV testing from repeated binary data amidst potential never-testers.
Rice, John D; Johnson, Brent A; Strawderman, Robert L
2018-01-04
Many longitudinal studies with a binary outcome measure involve a fraction of subjects with a homogeneous response profile. In our motivating data set, a study on the rate of human immunodeficiency virus (HIV) self-testing in a population of men who have sex with men (MSM), a substantial proportion of the subjects did not self-test during the follow-up study. The observed data in this context consist of a binary sequence for each subject indicating whether or not that subject experienced any events between consecutive observation time points, so subjects who never self-tested were observed to have a response vector consisting entirely of zeros. Conventional longitudinal analysis is not equipped to handle questions regarding the rate of events (as opposed to the odds, as in the classical logistic regression model). With the exception of discrete mixture models, such methods are also not equipped to handle settings in which there may exist a group of subjects for whom no events will ever occur, i.e. a so-called "never-responder" group. In this article, we model the observed data assuming that events occur according to some unobserved continuous-time stochastic process. In particular, we consider the underlying subject-specific processes to be Poisson conditional on some unobserved frailty, leading to a natural focus on modeling event rates. Specifically, we propose to use the power variance function (PVF) family of frailty distributions, which contains both the gamma and inverse Gaussian distributions as special cases and allows for the existence of a class of subjects having zero frailty. We generalize a computational algorithm developed for a log-gamma random intercept model (Conaway, 1990. A random effects model for binary data. Biometrics46, 317-328) to compute the exact marginal likelihood, which is then maximized to obtain estimates of model parameters. We conduct simulation studies, exploring the performance of the proposed method in comparison with competitors. Applying the PVF as well as a Gaussian random intercept model and a corresponding discrete mixture model to our motivating data set, we conclude that the group assigned to receive follow-up messages via SMS was self-testing at a significantly lower rate than the control group, but that there is no evidence to support the existence of a group of never-testers. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Dynamical conductivity at the dirty superconductor-metal quantum phase transition
NASA Astrophysics Data System (ADS)
Hoyos, J. A.; Del Maestro, Adrian; Rosenow, Bernd; Vojta, Thomas
2011-03-01
We study the transport properties of ultrathin disordered nanowires in the neighborhood of the superconductor-metal quantum phase transition. To this end we combine numerical calculations with analytical strong-disorder renormalization group results. The quantum critical conductivity at zero temperature diverges logarithmically as a function of frequency. In the metallic phase, it obeys activated scaling associated with an infinite-randomness quantum critical point. We extend the scaling theory to higher dimensions and discuss implications for experiments. Financial support: Fapesp, CNPq, NSF, and Research Corporation.
The zero-action hypothesis and high-temperature thermodynamics in the heterotic superstring theory
NASA Astrophysics Data System (ADS)
Pollock, M. D.
2005-07-01
The effective action S for the Einstein theory of gravity coupled to massless scalar fields φ, spinor fields ψ and gauge vector fields F_ij describing radiation, so that F_ij F^ij = 0, vanishes identically after substitution from the classical equations of motion, thus allowing a perfect fluid for which the energy density ρ and pressure p = (γ - 1)ρ are related by values of the adiabatic index throughout the range 4/3 ≤ γ ≤ 2. In the heterotic superstring theory, four-point gravitational interactions generate a tree-level quadratic, higher-derivative contribution to the Lagrangian, after reduction to four dimensions, whose form, unchanged at one-loop level, is ℛ² = B(R² − R_ij R^ij) = (1/6) B (γ − 2)(γ − 1) κ⁴ ρ², where the constant B ≈ 1 for a three-generation Calabi-Yau manifold, and which thus constitutes a type of anomaly. The zero-action hypothesis requires the theory to be free of such anomalies, and thus predicts that the Universe started off in the state p = ρ discussed by Zel'dovich, characterized by the maximum value γ = 2 consistent with causality. Applying classical thermodynamics to a perfect fluid, we find that ρ, p and hence also the Helmholtz free-energy density f ≡ −p scale with temperature as T^(γ/(γ−1)), leading to the prediction that f ~ T², which is exactly verified by the calculation of Atick and Witten, valid at genus one in the high-temperature limit T ≫ T_H, after Euclideanizing the time coordinate, where T_H is the Hagedorn temperature. The response of the action to the operators T, C and P is also discussed, T-invariance requiring γ = 2 and hence S = 0, and P-invariance requiring S = 0, showing that the zero-action hypothesis can be understood in terms of these discrete symmetries.
NASA Astrophysics Data System (ADS)
Ngastiti, P. T. B.; Surarso, Bayu; Sutimin
2018-05-01
The transportation problem in distribution, such as moving a commodity or goods from supply to demand, is to minimize the transportation costs. A fuzzy transportation problem is one in which the transport costs, supply and demand are fuzzy quantities. In a case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets that has more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use the robust ranking technique for the defuzzification process. The results show that the zero suffix method requires fewer iterations than the zero point method.
Extremal entanglement witnesses
NASA Astrophysics Data System (ADS)
Hansen, Leif Ove; Hauge, Andreas; Myrheim, Jan; Sollid, Per Øyvind
2015-02-01
We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus TrΩρ≥0 when Ω is a witness and ρ is a pure product state, ρ=ψψ† with ψ=ϕ⊗χ. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value f(ϕ,χ)=TrΩρ as a function of vectors ϕ and χ is a positive semidefinite biquadratic form. Every zero of f(ϕ,χ) imposes strong real-linear constraints on f and Ω. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of f(ϕ,χ) is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the witnesses on a real Hilbert space.
NASA Astrophysics Data System (ADS)
Fukushima, Kimichika; Sato, Hikaru
2018-04-01
Ultraviolet self-interaction energies in field theory sometimes contain meaningful physical quantities. The self-energies in, for example, classical electrodynamics are usually subtracted from the rest mass. For the consistent treatment of energies as sources of curvature in the Einstein field equations, this study includes these subtracted self-energies into the vacuum energy expressed by the constant Lambda (as used, for example, in Lambda-CDM). In this study, the self-energies in electrodynamics and in the macroscopic classical Einstein field equations are examined using formalisms with an ultraviolet cut-off scheme. One of the cut-off formalisms is a field theory in terms of step-function-type basis functions, developed by the present authors. The other is a continuum theory of a fundamental particle with the same cut-off length. Based on the effectiveness of the continuum theory with the cut-off length shown in this examination, the dominant self-energy is the quadratic term of the Higgs field at the quantum level (classical self-energies are reduced to logarithmic forms by quantum corrections). The cut-off length is then determined to reproduce today's tiny value of Lambda for vacuum energy. Additionally, a field with nonperiodic vanishing boundary conditions is treated, showing that the field has no zero-point energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loubenets, Elena R.
We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].
Uncertainties in scaling factors for ab initio vibrational zero-point energies
NASA Astrophysics Data System (ADS)
Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger
2009-03-01
Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
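Applying a scaling factor and propagating its uncertainty is a one-line calculation. The sketch below uses the B3LYP/6-31G(d) factor quoted above with illustrative (not paper-supplied) harmonic frequencies, and neglects the uncertainty of the computed frequencies themselves.

```python
import numpy as np

# Illustrative harmonic frequencies (cm^-1) from a hypothetical B3LYP/6-31G(d) calculation
freqs_cm = np.array([1713.0, 3727.0, 3849.0])

c, u_c = 0.9757, 0.0224                      # scaling factor and its standard uncertainty
zpe_harm = 0.5 * freqs_cm.sum()              # harmonic ZPE in cm^-1
zpe = c * zpe_harm                           # scaled ZPE
u_zpe = u_c * zpe_harm                       # uncertainty contributed by the scaling factor alone
print(f"ZPE = {zpe:.0f} +/- {u_zpe:.0f} cm^-1")
```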
16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION.... 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A of Part 1209—Zero Reference Point Related to Detecting...
16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION.... 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A of Part 1209—Zero Reference Point Related to Detecting...
16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION.... 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A of Part 1209—Zero Reference Point Related to Detecting...
Stone, William J.
1986-01-01
A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.
Stone, W.J.
1983-10-31
A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.
Fluctuating Selection in the Moran.
Dean, Antony M; Lehman, Clarence; Yi, Xiao
2017-03-01
Contrary to classical population genetics theory, experiments demonstrate that fluctuating selection can protect a haploid polymorphism in the absence of frequency dependent effects on fitness. Using forward simulations with the Moran model, we confirm our analytical results showing that a fluctuating selection regime, with a mean selection coefficient of zero, promotes polymorphism. We find that increases in heterozygosity over neutral expectations are especially pronounced when fluctuations are rapid, mutation is weak, the population size is large, and the variance in selection is big. Lowering the frequency of fluctuations makes selection more directional, and so heterozygosity declines. We also show that fluctuating selection raises dN/dS ratios for polymorphism, not only by sweeping selected alleles into the population, but also by purging the neutral variants of selected alleles as they undergo repeated bottlenecks. Our analysis shows that randomly fluctuating selection increases the rate of evolution by increasing the probability of fixation. The impact is especially noticeable when the selection is strong and mutation is weak. Simulations show the increase in the rate of evolution declines as the rate of new mutations entering the population increases, an effect attributable to clonal interference. Intriguingly, fluctuating selection increases the dN/dS ratios for divergence more than for polymorphism, a pattern commonly seen in comparative genomics. Our model, which extends the classical neutral model of molecular evolution by incorporating random fluctuations in selection, accommodates a wide variety of observations, both neutral and selected, with economy. Copyright © 2017 by the Genetics Society of America.
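A forward Moran simulation with a randomly redrawn, mean-zero selection coefficient is straightforward to set up. The following is a minimal sketch with illustrative parameters, not the authors' simulation code; comparing the returned time-averaged heterozygosity against a neutral run (s_sigma = 0) shows the protective effect.

```python
import numpy as np

def moran_fluctuating(N=200, s_sigma=0.2, switch_every=10, mu=1e-4, n_events=400_000, seed=0):
    """Haploid two-allele Moran model with symmetric mutation and a selection
    coefficient that is redrawn (mean zero) every `switch_every` generations."""
    rng = np.random.default_rng(seed)
    i = N // 2                                   # copies of allele A
    s = rng.normal(0.0, s_sigma)
    het = 0.0
    for step in range(n_events):
        if step % (switch_every * N) == 0:       # N birth-death events ~ one generation
            s = rng.normal(0.0, s_sigma)         # fluctuating selection, mean zero
        p = i / N
        w_birth = p * (1 + s) / (p * (1 + s) + (1 - p))  # parent is A, fitness-weighted
        birth_A = rng.random() < w_birth
        if rng.random() < mu:                    # symmetric mutation of the offspring
            birth_A = not birth_A
        death_A = rng.random() < p               # uniformly chosen individual dies
        i += int(birth_A) - int(death_A)
        het += 2 * p * (1 - p)
    return het / n_events                        # time-averaged heterozygosity

print(moran_fluctuating())                       # compare with a neutral run: moran_fluctuating(s_sigma=0.0)
```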
Hamiltonian flows with random-walk behaviour originating from zero-sum games and fictitious play
NASA Astrophysics Data System (ADS)
van Strien, Sebastian
2011-06-01
In this paper we introduce Hamiltonian dynamics, inspired by zero-sum games (best response and fictitious play dynamics). The Hamiltonian functions we consider are continuous and piecewise affine (and of a very simple form). It follows that the corresponding Hamiltonian vector fields are discontinuous and multi-valued. Differential equations with discontinuities along a hyperplane are often called 'Filippov systems', and there is a large literature on such systems, see for example (di Bernardo et al 2008 Piecewise-Smooth Dynamical Systems: Theory and Applications (Applied Mathematical Sciences vol 163) (London: Springer); Kunze 2000 Non-Smooth Dynamical Systems (Lecture Notes in Mathematics vol 1744) (Berlin: Springer); Leine and Nijmeijer 2004 Dynamics and Bifurcations of Non-smooth Mechanical Systems (Lecture Notes in Applied and Computational Mechanics vol 18) (Berlin: Springer)). The special feature of the systems we consider here is that they have discontinuities along a large number of intersecting hyperplanes. Nevertheless, somewhat surprisingly, the flow corresponding to such a vector field exists, is unique and continuous. We believe that these vector fields deserve attention, because it turns out that the resulting dynamics are rather different from those found in more classically defined Hamiltonian dynamics. The vector field is extremely simple: outside codimension-one hyperplanes it is piecewise constant, and so the flow φ_t is piecewise a translation (without stationary points). Even so, the dynamics can be rather rich and complicated, as a detailed study of specific examples shows (see for example theorems 7.1 and 7.2 and also (Ostrovski and van Strien 2011 Regul. Chaotic Dyn. 16 129-54)). In the last two sections of the paper we give some applications to game theory, and finish with posing a version of the Palis conjecture in the context of the class of non-smooth systems studied in this paper. To Jacob Palis on his 70th birthday.
Evanescent radiation, quantum mechanics and the Casimir effect
NASA Technical Reports Server (NTRS)
Schatten, Kenneth H.
1989-01-01
An attempt to bridge the gap between classical and quantum mechanics and to explain the Casimir effect is presented. The general nature of chaotic motion is discussed from two points of view: the first uses catastrophe theory and strange attractors to describe the deterministic view of this motion; the underlying framework for chaos in these classical dynamic systems is their extreme sensitivity to initial conditions. The second interpretation refers to randomness associated with probabilistic dynamics, as for Brownian motion. The present approach to understanding evanescent radiation and its relation to the Casimir effect corresponds to the first interpretation, whereas stochastic electrodynamics corresponds to the second viewpoint. The nonlinear behavior of the electromagnetic field is also studied. This well-understood behavior is utilized to examine the motions of two orbiting charges and shows a closeness between the classical behavior and the quantum uncertainty principle. The evanescent radiation is used to help explain the Casimir effect.
Characterizing pixel and point patterns with a hyperuniformity disorder length
NASA Astrophysics Data System (ADS)
Chieco, A. T.; Dreyfus, R.; Durian, D. J.
2017-09-01
We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns—where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h = L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h = (L/2)(f/d) for small f, and h = L/2 for f → 1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L, h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".
Finite-temperature effects in helical quantum turbulence
NASA Astrophysics Data System (ADS)
Clark Di Leoni, Patricio; Mininni, Pablo D.; Brachet, Marc E.
2018-04-01
We perform a study of the evolution of helical quantum turbulence at different temperatures by solving numerically the Gross-Pitaevskii and the stochastic Ginzburg-Landau equations, using up to 40963 grid points with a pseudospectral method. We show that for temperatures close to the critical one, the fluid described by these equations can act as a classical viscous flow, with the decay of the incompressible kinetic energy and the helicity becoming exponential. The transition from this behavior to the one observed at zero temperature is smooth as a function of temperature. Moreover, the presence of strong thermal effects can inhibit the development of a proper turbulent cascade. We provide Ansätze for the effective viscosity and friction as a function of the temperature.
Order by disorder and gaugelike degeneracy in a quantum pyrochlore antiferromagnet.
Henley, Christopher L
2006-02-03
The (three-dimensional) pyrochlore lattice antiferromagnet with Heisenberg spins of large spin length S is a highly frustrated model with a macroscopic degeneracy of classical ground states. The zero-point energy of (harmonic-order) spin-wave fluctuations distinguishes a subset of these states. I derive an approximate but illuminating effective Hamiltonian, acting within the subspace of Ising spin configurations representing the collinear ground states. It consists of products of Ising spins around loops, i.e., has the form of a Z2 lattice gauge theory. The remaining ground-state entropy is still infinite but not extensive, being O(L) for system size O(L3). All these ground states have unit cells bigger than those considered previously.
Quantized vortices in the ideal bose gas: a physical realization of random polynomials.
Castin, Yvan; Hadzibabic, Zoran; Stock, Sabine; Dalibard, Jean; Stringari, Sandro
2006-02-03
We propose a physical system allowing one to experimentally observe the distribution of the complex zeros of a random polynomial. We consider a degenerate, rotating, quasi-ideal atomic Bose gas prepared in the lowest Landau level. Thermal fluctuations provide the randomness of the bosonic field and of the locations of the vortex cores. These vortices can be mapped to zeros of random polynomials, and observed in the density profile of the gas.
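The mapping can be mimicked numerically by drawing random coefficients and locating the polynomial's complex roots. The sketch below uses plain i.i.d. complex Gaussian coefficients for illustration; the lowest-Landau-level field of the paper corresponds to a particular weighted Gaussian ensemble rather than this simplest choice.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 30                                            # polynomial degree ~ number of vortices
# random polynomial with independent complex Gaussian coefficients
coeffs = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
zeros = np.roots(coeffs)                          # complex zeros = toy vortex positions

plt.scatter(zeros.real, zeros.imag, s=10)
plt.gca().set_aspect("equal")
plt.title("Zeros of a random polynomial (toy model of vortex positions)")
plt.show()
```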
Nonadiabatic Molecular Dynamics and Orthogonality Constrained Density Functional Theory
NASA Astrophysics Data System (ADS)
Shushkov, Philip Georgiev
The exact quantum dynamics of realistic, multidimensional systems remains a formidable computational challenge. In many chemical processes, however, quantum effects such as tunneling, zero-point energy quantization, and nonadiabatic transitions play an important role. Therefore, approximate approaches that improve on the classical mechanical framework are of special practical interest. We propose a novel ring polymer surface hopping method for the calculation of chemical rate constants. The method blends two approaches, namely ring polymer molecular dynamics that accounts for tunneling and zero-point energy quantization, and surface hopping that incorporates nonadiabatic transitions. We test the method against exact quantum mechanical calculations for a one-dimensional, two-state model system. The method reproduces quite accurately the tunneling contribution to the rate and the distribution of reactants between the electronic states for this model system. Semiclassical instanton theory, an approach related to ring polymer molecular dynamics, accounts for tunneling by the use of periodic classical trajectories on the inverted potential energy surface. We study a model of electron transfer in solution, a chemical process where nonadiabatic events are prominent. By representing the tunneling electron with a ring polymer, we derive Marcus theory of electron transfer from semiclassical instanton theory after a careful analysis of the tunneling mode. We demonstrate that semiclassical instanton theory can recover the limit of Fermi's Golden Rule rate in a low-temperature, deep-tunneling regime. Mixed quantum-classical dynamics treats a few important degrees of freedom quantum mechanically, while classical mechanics describes affordably the rest of the system. But the interface of quantum and classical description is a challenging theoretical problem, especially for low-energy chemical processes. We therefore focus on the semiclassical limit of the coupled nuclear-electronic dynamics. We show that the time-dependent Schrodinger equation for the electrons employed in the widely used fewest switches surface hopping method is applicable only in the limit of nearly identical classical trajectories on the different potential energy surfaces. We propose a short-time decoupling algorithm that restricts the use of the Schrodinger equation only to the interaction regions. We test the short-time approximation on three model systems against exact quantum-mechanical calculations. The approximation improves the performance of the surface hopping approach. Nonadiabatic molecular dynamics simulations require the efficient and accurate computation of ground and excited state potential energy surfaces. Unlike the ground state calculations where standard methods exist, the computation of excited state properties is a challenging task. We employ time-independent density functional theory, in which the excited state energy is represented as a functional of the total density. We suggest an adiabatic-like approximation that simplifies the excited state exchange-correlation functional. We also derive a set of minimal conditions to impose exactly the orthogonality of the excited state Kohn-Sham determinant to the ground state determinant. This leads to an efficient, variational algorithm for the self-consistent optimization of the excited state energy. Finally, we assess the quality of the excitation energies obtained by the new method on a set of 28 organic molecules. 
The new approach provides results of similar accuracy to time-dependent density functional theory.
Precision Linear Actuator for Space Interferometry Mission (SIM) Siderostat Pointing
NASA Technical Reports Server (NTRS)
Cook, Brant; Braun, David; Hankins, Steve; Koenig, John; Moore, Don
2008-01-01
'SIM PlanetQuest will exploit the classical measuring tool of astrometry (interferometry) with unprecedented precision to make dramatic advances in many areas of astronomy and astrophysics'(1). In order to obtain interferometric data, two large steerable mirrors, or siderostats, are used to direct starlight into the interferometer. A gimbaled mechanism actuated by linear actuators is chosen to meet the unprecedented pointing and angle tracking requirements of SIM. A group of JPL engineers designed, built, and tested a linear ballscrew actuator capable of performing submicron incremental steps for 10 years of continuous operation. Precise, zero-backlash, closed-loop pointing control requirements led the team to implement a ballscrew actuator with a direct drive DC motor and a precision piezo brake. Motor control commutation using feedback from a precision linear encoder on the ballscrew output produced an unexpected incremental step size of 20 nm over a range of 120 mm, yielding a dynamic range of 6,000,000:1. The results prove that linear nanometer positioning requires no gears, levers, or hydraulic converters. Along the way many lessons have been learned and will subsequently be shared.
Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe
2013-08-14
Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly too overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.
Precise Stabilization of the Optical Frequency of WGMRs
NASA Technical Reports Server (NTRS)
Savchenkov, Anatoliy; Matsko, Andrey; Matsko, Andrey; Yu, Nan; Maleki, Lute; Iltchenko, Vladimir
2009-01-01
Crystalline whispering gallery mode resonators (CWGMRs) made of crystals with axial symmetry have ordinary and extraordinary families of optical modes. These modes have substantially different thermo-refractive constants. This results in a very sharp dependence of the differential detuning of optical frequency on effective temperature. This frequency difference, compared with a clock, gives an error signal for precise compensation of the random fluctuations of optical frequency. Certain crystals, like MgF2, have turnover points where the thermo-refractive effect is completely nullified. An advantage for applications using WGMRs for frequency stabilization is the possibility of manufacturing resonators out of practically any optically transparent crystal. It is known that there are crystals with negative and zero thermal expansion at some specific temperatures. Doping changes the properties of the crystals, and it is possible to create an optically transparent crystal with zero thermal expansion at room temperature. With this innovation's stabilization technique, the resultant WGMR will have absolute frequency stability. The expansion of the resonator's body can be completely compensated for by nonlinear elements. This results in compensation of linear thermal expansion (see figure). In three-mode operation, the MgF2 resonator, if tuned at the turnover thermal point, can compensate for all types of random thermal-related frequency drift. A simplified dual-mode method is also available. This creates miniature optical resonators with good short- and long-term stability for a passive secondary frequency etalon and an active resonator for an active secondary frequency standard (a narrowband laser with long-term stability).
Distribution law of the Dirac eigenmodes in QCD
NASA Astrophysics Data System (ADS)
Catillo, Marco; Glozman, Leonid Ya.
2018-04-01
The near-zero modes of the Dirac operator are connected to spontaneous breaking of chiral symmetry in QCD (SBCS) via the Banks-Casher relation. At the same time, the distribution of the near-zero modes is well described by the Random Matrix Theory (RMT) with the Gaussian Unitary Ensemble (GUE). Then, it has become a standard lore that a randomness, as observed through distributions of the near-zero modes of the Dirac operator, is a consequence of SBCS. The higher-lying modes of the Dirac operator are not affected by SBCS and are sensitive to confinement physics and related SU(2)CS and SU(2NF) symmetries. We study the distribution of the near-zero and higher-lying eigenmodes of the overlap Dirac operator within NF = 2 dynamical simulations. We find that both the distributions of the near-zero and higher-lying modes are perfectly described by GUE of RMT. This means that randomness, while consistent with SBCS, is not a consequence of SBCS and is linked to the confining chromo-electric field.
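For context, the GUE prediction usually compared against the unfolded level-spacing distribution is the Wigner surmise (a standard formula, not a result of the paper):

```latex
P_{\mathrm{GUE}}(s) \;=\; \frac{32}{\pi^{2}}\, s^{2} \exp\!\left( -\frac{4 s^{2}}{\pi} \right),
```

where s is the spacing between consecutive eigenvalues in units of the local mean spacing.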
Computer Analysis of 400 HZ Aircraft Electrical Generator Test Data.
1980-06-01
[Figure-list residue omitted: Data Acquisition System; Voltage Waveform with Data Points; Zero Crossover Interpolation; Numerical ...] ... difference between successive positive-sloped zero crossovers of the waveform. However, the exact time of zero crossover is not known. This is because data sampling and the generator output are not synchronized. This unsynchronization means that data points which correspond with an exact zero crossover ...
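The zero-crossover interpolation described here amounts to locating, between the two samples that bracket a sign change, the time at which the waveform passes through zero. A minimal sketch (linear interpolation; the report's actual scheme may differ) is:

```python
import numpy as np

def positive_going_zero_crossings(t, v):
    """Estimate the times of positive-sloped zero crossings by linear interpolation
    between the samples that bracket each sign change."""
    s = np.signbit(v)
    idx = np.where(s[:-1] & ~s[1:])[0]            # v[i] < 0 <= v[i+1]
    frac = -v[idx] / (v[idx + 1] - v[idx])        # linear interpolation fraction
    return t[idx] + frac * (t[idx + 1] - t[idx])

# toy example: a 400 Hz waveform sampled asynchronously at 5 kHz
fs, f0 = 5000.0, 400.0
t = np.arange(0, 0.05, 1 / fs)
v = np.sin(2 * np.pi * f0 * t + 0.3)
crossings = positive_going_zero_crossings(t, v)
print(1.0 / np.mean(np.diff(crossings)))          # recovered frequency, close to 400 Hz
```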
Zero-point term and quantum effects in the Johnson noise of resistors: a critical appraisal
NASA Astrophysics Data System (ADS)
Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes G.
2016-05-01
There is a longstanding debate about the zero-point term in the Johnson noise voltage of a resistor. This term originates from a quantum-theoretical treatment of the fluctuation-dissipation theorem (FDT). Is the zero-point term really there, or is it only an experimental artifact, due to the uncertainty principle, for phase-sensitive amplifiers? Could it be removed by renormalization of theories? We discuss some historical measurement schemes that do not lead to the effect predicted by the FDT, and we analyse new features that emerge when the consequences of the zero-point term are measured via the mean energy and force in a capacitor shunting the resistor. If these measurements verify the existence of a zero-point term in the noise, then two types of perpetual motion machines can be constructed. Further investigation with the same approach shows that, in the quantum limit, the Johnson-Nyquist formula is also invalid under general conditions even though it is valid for a resistor-antenna system. Therefore we conclude that in a satisfactory quantum theory of the Johnson noise, the FDT must, as a minimum, include also the measurement system used to evaluate the observed quantities. Issues concerning the zero-point term may also have implications for phenomena in advanced nanotechnology.
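The zero-point term under discussion appears in the quantum (FDT) generalization of the Johnson-Nyquist formula; the standard textbook expression, in the one-sided spectral-density convention, is quoted here for context rather than taken from the paper:

```latex
S_V(f) \;=\; 4 R\, h f \left[ \frac{1}{e^{h f / k_{\mathrm{B}} T} - 1} + \frac{1}{2} \right]
\;=\; 2 R\, h f\, \coth\!\left( \frac{h f}{2 k_{\mathrm{B}} T} \right),
```

where the 1/2 in the bracket is the disputed zero-point contribution; dropping it leaves the Planck form, and the classical limit hf ≪ k_B T recovers S_V = 4 k_B T R.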
Superslow relaxation in identical phase oscillators with random and frustrated interactions
NASA Astrophysics Data System (ADS)
Daido, H.
2018-04-01
This paper is concerned with the relaxation dynamics of a large population of identical phase oscillators, each of which interacts with all the others through random couplings whose parameters obey the same Gaussian distribution with the average equal to zero and are mutually independent. The results obtained by numerical simulation suggest that for the infinite-size system, the absolute value of Kuramoto's order parameter exhibits superslow relaxation, i.e., 1/ln t as time t increases. Moreover, the statistics on both the transient time T for the system to reach a fixed point and the absolute value of Kuramoto's order parameter at t = T are also presented together with their distribution densities over many realizations of the coupling parameters.
Observable signatures of a classical transition
NASA Astrophysics Data System (ADS)
Johnson, Matthew C.; Lin, Wei
2016-03-01
Eternal inflation arising from a potential landscape predicts that our universe is one realization of many possible cosmological histories. One way to access different cosmological histories is via the nucleation of bubble universes from a metastable false vacuum. Another way to sample different cosmological histories is via classical transitions, the creation of pocket universes through the collision between bubbles. Using relativistic numerical simulations, we examine the possibility of observationally determining if our observable universe resulted from a classical transition. We find that classical transitions produce spatially infinite, approximately open Friedman-Robertson-Walker universes. The leading set of observables in the aftermath of a classical transition are negative spatial curvature and a contribution to the Cosmic Microwave Background temperature quadrupole. The level of curvature and magnitude of the quadrupole are dependent on the position of the observer, and we determine the possible range of observables for two classes of single-scalar field models. For the first class, where the inflationary phase has a lower energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generally falls to zero with distance from the collision while the spatial curvature grows to a constant. For the second class, where the inflationary phase has a higher energy than the vacuum preceding the classical transition, the magnitude of the observed quadrupole generically falls to zero with distance from the collision while the spatial curvature grows without bound. We find that the magnitude of the quadrupole and curvature grow with increasing centre of mass energy of the collision, and explore variations of the parameters in the scalar field lagrangian.
NASA Astrophysics Data System (ADS)
Judson, Richard S.; Rabitz, Herschel
1987-04-01
The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components are derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points with the singularities going as η-3/2, with η being the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine regions of the highest potential surface sensitivities. In the scattering problem (which is impulsive, so that ``sticky'' collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross. The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.
Liquid-gas phase transitions and C K symmetry in quantum field theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiromichi; Ogilvie, Michael C.; Pangeni, Kamal
2017-04-04
A general field-theoretic framework for the treatment of liquid-gas phase transitions is developed. Starting from a fundamental four-dimensional field theory at nonzero temperature and density, an effective three-dimensional field theory is derived. The effective field theory has a sign problem at finite density. Although finite density explicitly breaks charge conjugation C, there remains a symmetry under C K, where K is complex conjugation. Here, we consider four models: relativistic fermions, nonrelativistic fermions, static fermions and classical particles. The interactions are via an attractive potential due to scalar field exchange and a repulsive potential due to massive vector exchange. The field-theoretic representation of the partition function is closely related to the equivalence of the sine-Gordon field theory with a classical gas. The thermodynamic behavior is extracted from C K-symmetric complex saddle points of the effective field theory at tree level. In the cases of nonrelativistic fermions and classical particles, we find complex saddle point solutions but no first-order transitions, and neither model has a ground state at tree level. The relativistic and static fermions show a liquid-gas transition at tree level in the effective field theory. The liquid-gas transition, when it occurs, manifests as a first-order line at low temperature and high density, terminated by a critical end point. The mass matrix controlling the behavior of correlation functions is obtained from fluctuations around the saddle points. Due to the C K symmetry of the models, the eigenvalues of the mass matrix are not always real but can be complex. This then leads to the existence of disorder lines, which mark the boundaries where the eigenvalues go from purely real to complex. The regions where the mass matrix eigenvalues are complex are associated with the critical line. In the case of static fermions, a powerful duality between particles and holes allows for the analytic determination of both the critical line and the disorder lines. Depending on the values of the parameters, either zero, one, or two disorder lines are found. Our numerical results for relativistic fermions give a very similar picture.
NASA Astrophysics Data System (ADS)
Boche, Holger; Cai, Minglai; Deppe, Christian; Nötzel, Janis
2017-10-01
We analyze arbitrarily varying classical-quantum wiretap channels. These channels are subject to two attacks at the same time: one passive (eavesdropping) and one active (jamming). We elaborate on our previous studies [H. Boche et al., Quantum Inf. Process. 15(11), 4853-4895 (2016) and H. Boche et al., Quantum Inf. Process. 16(1), 1-48 (2016)] by introducing a reduced class of allowable codes that fulfills a more stringent secrecy requirement than earlier definitions. In addition, we prove that non-symmetrizability of the legal link is sufficient for equality of the deterministic and the common randomness assisted secrecy capacities. Finally, we focus on analytic properties of both secrecy capacities: We completely characterize their discontinuity points and their super-activation properties.
Inverse Jacobi multiplier as a link between conservative systems and Poisson structures
NASA Astrophysics Data System (ADS)
García, Isaac A.; Hernández-Bermejo, Benito
2017-08-01
Some aspects of the relationship between conservativeness of a dynamical system (namely the preservation of a finite measure) and the existence of a Poisson structure for that system are analyzed. From the local point of view, due to the flow-box theorem we restrict ourselves to neighborhoods of singularities. In this sense, we characterize Poisson structures around the typical zero-Hopf singularity in dimension 3 under the assumption of having a local analytic first integral with non-vanishing first jet by connecting with the classical Poincaré center problem. From the global point of view, we connect the property of being strictly conservative (the invariant measure must be positive) with the existence of a Poisson structure depending on the phase space dimension. Finally, weak conservativeness in dimension two is introduced by the extension of inverse Jacobi multipliers as weak solutions of its defining partial differential equation and some of its applications are developed. Examples including Lotka-Volterra systems, quadratic isochronous centers, and non-smooth oscillators are provided.
Quantum random walks on congested lattices and the effect of dephasing.
Motes, Keith R; Gilchrist, Alexei; Rohde, Peter P
2016-01-27
We consider quantum random walks on congested lattices and contrast them to classical random walks. Congestion is modelled on lattices that contain static defects which reverse the walker's direction. We implement a dephasing process after each step which allows us to smoothly interpolate between classical and quantum random walks as well as study the effect of dephasing on the quantum walk. Our key results show that a quantum walker escapes a finite boundary dramatically faster than a classical walker and that this advantage remains in the presence of heavily congested lattices.
Quantum walks with tuneable self-avoidance in one dimension
Camilleri, Elizabeth; Rohde, Peter P.; Twamley, Jason
2014-01-01
Quantum walks exhibit many unique characteristics compared to classical random walks. In the classical setting, self-avoiding random walks have been studied as a variation on the usual classical random walk. Here the walker has memory of its previous locations and preferentially avoids stepping back to locations where it has previously resided. Classical self-avoiding random walks have found numerous algorithmic applications, most notably in the modelling of protein folding. We consider the analogous problem in the quantum setting – a quantum walk in one dimension with tunable levels of self-avoidance. We complement a quantum walk with a memory register that records where the walker has previously resided. The walker is then able to avoid returning back to previously visited sites or apply more general memory conditioned operations to control the walk. We characterise this walk by examining the variance of the walker's distribution against time, the standard metric for quantifying how quantum or classical a walk is. We parameterise the strength of the memory recording and the strength of the memory back-action on the walker, and investigate their effect on the dynamics of the walk. We find that by manipulating these parameters, which dictate the degree of self-avoidance, the walk can be made to reproduce ideal quantum or classical random walk statistics, or a plethora of more elaborate diffusive phenomena. In some parameter regimes we observe a close correspondence between classical self-avoiding random walks and the quantum self-avoiding walk. PMID:24762398
Algebraic Riccati equations in zero-sum differential games
NASA Technical Reports Server (NTRS)
Johnson, T. L.; Chao, A.
1974-01-01
The procedure for finding the closed-loop Nash equilibrium solution of two-player zero-sum linear time-invariant differential games with quadratic performance criteria and classical information pattern may be reduced in most cases to the solution of an algebraic Riccati equation. Based on the results obtained by Willems, necessary and sufficient conditions for existence of solutions to these equations are derived, and explicit conditions for a scalar example are given.
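As a concrete illustration of the reduction described above, the sketch below solves the scalar algebraic Riccati equation of a zero-sum linear-quadratic game with plant x' = a*x + b1*u + b2*w and quadratic weights q, r1, r2. The sign conventions, parameter values and function name are illustrative assumptions, not taken from the report.

```python
import math

def scalar_game_are(a, b1, b2, q, r1, r2):
    """Solve 2*a*P + q - P**2*(b1**2/r1 - b2**2/r2) = 0 for the
    stabilizing (positive) root of the scalar zero-sum game Riccati
    equation; assumes the minimizer's control authority dominates."""
    s = b1**2 / r1 - b2**2 / r2          # sign-indefinite quadratic coefficient
    if s <= 0:
        raise ValueError("saddle-point condition b1^2/r1 > b2^2/r2 violated")
    disc = a**2 + s * q                  # discriminant of s*P^2 - 2*a*P - q = 0
    if disc < 0:
        raise ValueError("no real solution: the game value does not exist")
    return (a + math.sqrt(disc)) / s     # stabilizing root

# Scalar example: unstable plant, strong minimizer, weaker disturbance.
P = scalar_game_are(a=1.0, b1=1.0, b2=0.5, q=1.0, r1=1.0, r2=1.0)
print("P =", P, " closed-loop pole =", 1.0 - (1.0**2 / 1.0 - 0.5**2 / 1.0) * P)
```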
Pal, Anirban; Acharya, Amita; Pal, Nidhi Dawar; Dawn, Satrajit; Biswas, Jhuma
2011-01-01
Postdural puncture headache (PDPH) is a distressing complication of the subarachnoid block. The previous studies conducted, including the recent ones, do not conclusively prove that pencil-point spinal needles decrease the incidence of PDPH. In this study, we have tried to find out whether a pencil-point Whitacre needle is a better alternative than the classic cutting beveled, commonly used, Quincke spinal needle, in patients at risk of PDPH. Three hundred and twenty obstetric patients, 20-36 years of age, ASA I and II, posted for Cesarean section under subarachnoid block, were randomly assigned into two groups W and Q, where 25G Whitacre and 25G Quincke spinal needles were used, respectively. The primary objective of the study was to find out the difference in incidence of PDPH, if any, between the two groups, by using the t test and Chi square test. The incidence of PDPH was 5% in group W and 28.12% in group Q, and the difference in incidence was statistically significant (P<0.001). The pencil-point 25G Whitacre spinal needle causes less incidence of PDPH compared to the classic 25G Quincke needle, and is recommended for use in patients at risk of PDPH.
Pal, Anirban; Acharya, Amita; Pal, Nidhi Dawar; Dawn, Satrajit; Biswas, Jhuma
2011-01-01
Background: Postdural puncture headache (PDPH) is a distressing complication of the subarachnoid block. The previous studies conducted, including the recent ones, do not conclusively prove that pencil-point spinal needles decrease the incidence of PDPH. In this study, we have tried to find out whether a pencil-point Whitacre needle is a better alternative than the classic cutting beveled, commonly used, Quincke spinal needle, in patients at risk of PDPH. Materials and Methods: Three hundred and twenty obstetric patients, 20-36 years of age, ASA I and II, posted for Cesarean section under subarachnoid block, were randomly assigned into two groups W and Q, where 25G Whitacre and 25G Quincke spinal needles were used, respectively. The primary objective of the study was to find out the difference in incidence of PDPH, if any, between the two groups, by using the t test and Chi square test. Results: The incidence of PDPH was 5% in group W and 28.12% in group Q, and the difference in incidence was statistically significant (P<0.001). Conclusion: The pencil-point 25G Whitacre spinal needle causes less incidence of PDPH compared to the classic 25G Quincke needle, and is recommended for use in patients at risk of PDPH. PMID:25885381
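For readers who want to reproduce the headline comparison, the sketch below reconstructs the 2x2 table implied by the reported incidences (5% of 160 = 8 in group W and 28.12% of 160 = 45 in group Q) and runs a chi-square test with scipy; the reconstruction of the table from the stated group sizes and percentages is an assumption of this illustration.

```python
from scipy.stats import chi2_contingency

# Rows: Whitacre (group W), Quincke (group Q); columns: PDPH, no PDPH.
# 160 patients per group; 5% of 160 = 8 and 28.12% of 160 = 45 cases of PDPH.
table = [[8, 152],
         [45, 115]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")  # p well below 0.001
```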
Joe, Yong S; Lee, Sun H; Hedin, Eric R; Kim, Young D
2013-06-01
We utilize a two-dimensional four-channel DNA model, with a tight-binding (TB) Hamiltonian, and investigate the temperature and the magnetic field dependence of the transport behavior of a short DNA molecule. Random variation of the hopping integrals due to the thermal structural disorder, which partially destroy phase coherence of electrons and reduce quantum interference, leads to a reduction of the localization length and causes suppressed overall transmission. We also incorporate a variation of magnetic field flux density into the hopping integrals as a phase factor and observe Aharonov-Bohm (AB) oscillations in the transmission. It is shown that for non-zero magnetic flux, the transmission zero leaves the real-energy axis and moves up into the complex-energy plane. We also point out that the hydrogen bonds between the base pair with flux variations play a role to determine the periodicity of AB oscillations in the transmission.
Do semiclassical zero temperature black holes exist?
Anderson, P R; Hiscock, W A; Taylor, B E
2000-09-18
The semiclassical Einstein equations are solved to first order in ε = ℏ/M² for the case of a Reissner-Nordström black hole perturbed by the vacuum stress energy of quantized free fields. Massless and massive fields of spin 0, 1/2, and 1 are considered. We show that in all physically realistic cases, macroscopic zero temperature black hole solutions do not exist. Any static zero temperature semiclassical black hole solutions must then be microscopic and isolated in the space of solutions; they do not join smoothly onto the classical extreme Reissner-Nordström solution as ε → 0.
Cooling in reduced period optical lattices: Non-zero Raman detuning
NASA Astrophysics Data System (ADS)
Malinovsky, V. S.; Berman, P. R.
2006-08-01
In a previous paper [Phys. Rev. A 72 (2005) 033415], it was shown that sub-Doppler cooling occurs in a standing-wave Raman scheme (SWRS) that can lead to reduced period optical lattices. These calculations are extended to allow for non-zero detuning of the Raman transitions. New physical phenomena are encountered, including cooling to non-zero velocities, combinations of Sisyphus and "corkscrew" polarization cooling, and somewhat unusual origins of the friction force. The calculations are carried out in a semi-classical approximation and a dressed state picture is introduced to aid in the interpretation of the results.
Kronberg, J.W.
1993-04-20
An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
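A software analogue of the selection logic can make the behaviour concrete: a free-running counter cycles through N states, is stopped after an environment-dependent (here, random) delay, and the item is selected only if the counter reads zero, so that on average one item in N is chosen. The sketch below is an illustrative reading of the abstract, not a model of the patented circuit.

```python
import random

def select_items(n_items):
    """Simulate the counter-based selector: for each item, a fast counter
    runs modulo n_items for an environment-dependent (here random) number
    of ticks, and the item is selected only if the counter stops at zero."""
    selected = []
    for item in range(n_items):
        ticks = random.randrange(10_000, 1_000_000)   # random stop time
        if ticks % n_items == 0:                      # counter reads zero
            selected.append(item)
    return selected

random.seed(0)
counts = [len(select_items(20)) for _ in range(10_000)]
print("average number selected out of 20 items:", sum(counts) / len(counts))  # ~1.0
```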
Optical interferometry and Gaia parallaxes for a robust calibration of the Cepheid distance scale
NASA Astrophysics Data System (ADS)
Kervella, Pierre; Mérand, Antoine; Gallenne, Alexandre; Trahin, Boris; Borgniet, Simon; Pietrzynski, Grzegorz; Nardetto, Nicolas; Gieren, Wolfgang
2018-04-01
We present the modeling tool we developed to incorporate multi-technique observations of Cepheids in a single pulsation model: the Spectro-Photo-Interferometry of Pulsating Stars (SPIPS). The combination of angular diameters from optical interferometry, radial velocities and photometry with the coming Gaia DR2 parallaxes of nearby Galactic Cepheids will soon enable us to calibrate the projection factor of the classical Parallax-of-Pulsation method. This will extend its applicability to Cepheids too distant for accurate Gaia parallax measurements, and allow us to precisely calibrate the Leavitt law's zero point. As an example application, we present the SPIPS model of the long-period Cepheid RS Pup that provides a measurement of its projection factor, using the independent distance estimated from its light echoes.
Alexander, Helen K.; Mayer, Stephanie I.; Bonhoeffer, Sebastian
2017-01-01
Abstract Mutation rate is a crucial evolutionary parameter that has typically been treated as a constant in population genetic analyses. However, the propensity to mutate is likely to vary among co-existing individuals within a population, due to genetic polymorphisms, heterogeneous environmental influences, and random physiological fluctuations. We review the evidence for mutation rate heterogeneity and explore its consequences by extending classic population genetic models to allow an arbitrary distribution of mutation rate among individuals, either with or without inheritance. With this general new framework, we rigorously establish the effects of heterogeneity at various evolutionary timescales. In a single generation, variation of mutation rate about the mean increases the probability of producing zero or many simultaneous mutations on a genome. Over multiple generations of mutation and selection, heterogeneity accelerates the appearance of both deleterious and beneficial multi-point mutants. At mutation-selection balance, higher-order mutant frequencies are likewise boosted, while lower-order mutants exhibit subtler effects; nonetheless, population mean fitness is always enhanced. We quantify the dependencies on moments of the mutation rate distribution and selection coefficients, and clarify the role of mutation rate inheritance. While typical methods of estimating mutation rate will recover only the population mean, analyses assuming mutation rate is fixed to this mean could underestimate the potential for multi-locus adaptation, including medically relevant evolution in pathogenic and cancerous populations. We discuss the potential to empirically parameterize mutation rate distributions, which have to date hardly been quantified. PMID:27836985
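The single-generation effect described above, namely that variance in the mutation rate inflates both the zero-mutation and the multi-mutation probabilities, is easy to check numerically. The sketch below compares a fixed rate with a gamma-distributed rate of the same mean; the specific parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mean_rate = 1_000_000, 0.3        # genomes per generation, mean mutations/genome

# Homogeneous population: every genome mutates at the same rate.
fixed = rng.poisson(mean_rate, size=n)

# Heterogeneous population: rates drawn from a gamma with the same mean
# (shape k < 1 gives large variance about the mean).
rates = rng.gamma(shape=0.5, scale=mean_rate / 0.5, size=n)
hetero = rng.poisson(rates)

for name, x in [("fixed rate", fixed), ("heterogeneous rate", hetero)]:
    print(f"{name}: P(0) = {np.mean(x == 0):.3f}, P(>=2) = {np.mean(x >= 2):.3f}")
# Heterogeneity raises both the zero-mutation and the multi-mutation probabilities.
```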
Role of confinements on the melting of Wigner molecules in quantum dots
NASA Astrophysics Data System (ADS)
Bhattacharya, Dyuti; Filinov, Alexei V.; Ghosal, Amit; Bonitz, Michael
2016-03-01
We explore the stability of a Wigner molecule (WM) formed in confinements with different geometries emulating the role of disorder and analyze the melting (or crossover) of such a system. Building on a recent calculation [D. Bhattacharya, A. Ghosal, Eur. Phys. J. B 86, 499 (2013)] that discussed the effects of irregularities on the thermal crossover in classical systems, we expand our studies into untested territory by including both the effects of quantum fluctuations and of disorder. Our results, using classical and quantum (path integral) Monte Carlo techniques, unfold complementary mechanisms that drive the quantum and thermal crossovers in a WM and show that the symmetry of the confinement plays no significant role in determining the quantum crossover scale n_X. This is because the zero-point motion screens the boundary effects within short distances. The phase diagram as a function of thermal and quantum fluctuations determined from independent criteria is unique, and shows "melting" from the WM to both the classical and quantum "liquids". An intriguing signature of weakening liquidity with increasing temperature, T, is found in the extreme quantum regime. The crossover is associated with production of defects. However, these defects appear to play distinct roles in driving the quantum and thermal "melting". Our analyses carry serious implications for a variety of experiments on many-particle systems - semiconductor heterostructure quantum dots, trapped ions, nanoclusters, colloids and complex plasma.
Zero-Point Energy Leakage in Quantum Thermal Bath Molecular Dynamics Simulations.
Brieuc, Fabien; Bronstein, Yael; Dammak, Hichem; Depondt, Philippe; Finocchi, Fabio; Hayoun, Marc
2016-12-13
The quantum thermal bath (QTB) has been presented as an alternative to path-integral-based methods to introduce nuclear quantum effects in molecular dynamics simulations. The method has proved to be efficient, yielding accurate results for various systems. However, the QTB method is prone to zero-point energy leakage (ZPEL) in highly anharmonic systems. This is a well-known problem in methods based on classical trajectories where part of the energy of the high-frequency modes is transferred to the low-frequency modes leading to a wrong energy distribution. In some cases, the ZPEL can have dramatic consequences on the properties of the system. Thus, we investigate the ZPEL by testing the QTB method on selected systems with increasing complexity in order to study the conditions and the parameters that influence the leakage. We also analyze the consequences of the ZPEL on the structural and vibrational properties of the system. We find that the leakage is particularly dependent on the damping coefficient and that increasing its value can reduce and, in some cases, completely remove the ZPEL. When using sufficiently high values for the damping coefficient, the expected energy distribution among the vibrational modes is ensured. In this case, the QTB method gives very encouraging results. In particular, the structural properties are well-reproduced. The dynamical properties should be regarded with caution although valuable information can still be extracted from the vibrational spectrum, even for large values of the damping term.
Turbulent mixing of a critical fluid: The non-perturbative renormalization
NASA Astrophysics Data System (ADS)
Hnatič, M.; Kalagov, G.; Nalimov, M.
2018-01-01
The non-perturbative renormalization group (NPRG) technique is applied to a stochastic model of a non-conserved scalar order parameter near its critical point, subject to turbulent advection. The compressible advecting flow is modeled by a random Gaussian velocity field with zero mean and correlation function ⟨υ_j υ_i⟩ ∼ (P⊥_ji + α P∥_ji) / k^(d+ζ). Depending on the relations between the parameters ζ, α and the space dimensionality d, the model reveals several types of scaling regimes. Some of them are well known (model A of equilibrium critical dynamics and linear passive scalar field advected by a random turbulent flow), but there is a new nonequilibrium regime (universality class) associated with new nontrivial fixed points of the renormalization group equations. We have obtained the phase diagram (d, ζ) of possible scaling regimes in the system. The physical point d = 3, ζ = 4/3, corresponding to three-dimensional fully developed Kolmogorov turbulence, where critical fluctuations are irrelevant, is stable for α ≲ 2.26. Otherwise, in the case of "strong compressibility" α ≳ 2.26, the critical fluctuations of the order parameter become relevant for three-dimensional turbulence. Estimations of critical exponents for each scaling regime are presented.
Sudden emergence of q-regular subgraphs in random graphs
NASA Astrophysics Data System (ADS)
Pretti, M.; Weigt, M.
2006-07-01
We investigate the computationally hard problem of whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_{3-reg} ≈ 3.3546 and contain immediately about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_{3-core} ≈ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.
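The closeness of the q-regular-subgraph threshold to the q-core threshold quoted above can be probed numerically for the (much easier) q-core part. The sketch below, assuming networkx is available, measures the size of the 3-core of Erdős-Rényi graphs around average degree c ≈ 3.35 and shows the characteristic jump; it does not attempt the hard q-regular-subgraph problem itself.

```python
import networkx as nx

n = 20_000
for c in (3.30, 3.34, 3.36, 3.40):              # average vertex degree
    G = nx.fast_gnp_random_graph(n, c / (n - 1), seed=7)
    core = nx.k_core(G, k=3)                    # maximal subgraph with min degree >= 3
    print(f"c = {c:.2f}: 3-core contains {core.number_of_nodes() / n:.2%} of vertices")
# Below c_{3-core} ~ 3.3509 the 3-core is essentially empty; just above it,
# it jumps discontinuously to a finite fraction of the graph.
```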
What is quantum in quantum randomness?
Grangier, P; Auffèves, A
2018-07-13
It is often said that quantum and classical randomness are of different nature, the former being ontological and the latter epistemological. However, so far the question of 'What is quantum in quantum randomness?', i.e. what is the impact of quantization and discreteness on the nature of randomness, remains to be answered. In a first part, we make explicit the differences between quantum and classical randomness within a recently proposed ontology for quantum mechanics based on contextual objectivity. In this view, quantum randomness is the result of contextuality and quantization. We show that this approach strongly impacts the purposes of quantum theory as well as its areas of application. In particular, it challenges current programmes inspired by classical reductionism, aiming at the emergence of the classical world from a large number of quantum systems. In a second part, we analyse quantum physics and thermodynamics as theories of randomness, unveiling their mutual influences. We finally consider new technological applications of quantum randomness that have opened up in the emerging field of quantum thermodynamics. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).
Bhattacharya, Rupak; Mondal, Richarj; Khatua, Pradip; Rudra, Alok; Kapon, Eli; Malzer, Stefan; Döhler, Gottfried; Pal, Bipul; Bansal, Bhavtosh
2015-01-30
We study a specific type of lifetime broadening resulting in the well-known exponential "Urbach tail" density of states within the energy gap of an insulator. After establishing the frequency and temperature dependence of the Urbach edge in GaAs quantum wells, we show that the broadening due to the zero-point optical phonons is the fundamental limit to the Urbach slope in high-quality samples. In rough analogy with Welton's heuristic interpretation of the Lamb shift, the zero-temperature contribution to the Urbach slope can be thought of as arising from the electric field of the zero-point longitudinal-optical phonons. The value of this electric field is experimentally measured to be 3 kV cm-1, in excellent agreement with the theoretical estimate.
Quantum friction on monoatomic layers and its classical analog
NASA Astrophysics Data System (ADS)
Maslovski, Stanislav I.; Silveirinha, Mário G.
2013-07-01
We consider the effect of quantum friction at zero absolute temperature resulting from polaritonic interactions in closely positioned two-dimensional arrays of polarizable atoms (e.g., graphene sheets) or thin dielectric sheets modeled as such arrays. The arrays move one with respect to another with a nonrelativistic velocity v≪c. We confirm that quantum friction is inevitably related to material dispersion, and that such friction vanishes in nondispersive media. In addition, we consider a classical analog of the quantum friction which allows us to establish a link between the phenomena of quantum friction and classical parametric generation. In particular, we demonstrate how the quasiparticle generation rate typically obtained from the quantum Fermi golden rule can be calculated classically.
Rota-Baxter operators on sl (2,C) and solutions of the classical Yang-Baxter equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Jun, E-mail: peitsun@163.com; Bai, Chengming, E-mail: baicm@nankai.edu.cn; Guo, Li, E-mail: liguo@rutgers.edu
2014-02-15
We explicitly determine all Rota-Baxter operators (of weight zero) on sl(2,C) under the Cartan-Weyl basis. For the skew-symmetric operators, we give the corresponding skew-symmetric solutions of the classical Yang-Baxter equation in sl(2,C), confirming the related study by Semenov-Tian-Shansky. In general, these Rota-Baxter operators give a family of solutions of the classical Yang-Baxter equation in the six-dimensional Lie algebra sl(2,C) ⋉_{ad*} sl(2,C)*. They also give rise to three-dimensional pre-Lie algebras which in turn yield solutions of the classical Yang-Baxter equation in other six-dimensional Lie algebras.
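For reference, a Rota-Baxter operator of weight zero on a Lie algebra g, as used above, is a linear map R: g → g satisfying the identity below, which is also known as the operator form of the classical Yang-Baxter equation:

\[
[R(x),\,R(y)] \;=\; R\big([R(x),\,y] + [x,\,R(y)]\big) \qquad \text{for all } x, y \in \mathfrak{g}.
\]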
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-06-01
For random interacting Majorana models where the only symmetries are the parity P and the time-reversal-symmetry T, various approaches are compared to construct exact even and odd normalized zero modes Γ in finite size, i.e. Hermitian operators that commute with the Hamiltonian, that square to the identity, and that commute (even) or anticommute (odd) with the parity P. Even normalized zero-modes are well known under the name of ‘pseudo-spins’ in the field of many-body-localization or more precisely ‘local integrals of motion’ (LIOMs) in the many-body-localized-phase where the pseudo-spins happens to be spatially localized. Odd normalized zero-modes are popular under the name of ‘Majorana zero modes’ or ‘strong zero modes’. Explicit examples for small systems are described in detail. Applications to real-space renormalization procedures based on blocks containing an odd number of Majorana fermions are also discussed.
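In equations, the normalized zero modes Γ discussed above are Hermitian operators satisfying (with H the Hamiltonian and P the fermion parity)

\[
[\Gamma, H] = 0, \qquad \Gamma^\dagger = \Gamma, \qquad \Gamma^2 = \mathbb{1}, \qquad P\,\Gamma = \pm\,\Gamma\,P ,
\]

with the plus sign defining an even zero mode (pseudo-spin, or LIOM) and the minus sign an odd (Majorana, or "strong") zero mode.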
Exact results for the O( N ) model with quenched disorder
NASA Astrophysics Data System (ADS)
Delfino, Gesualdo; Lamsen, Noel
2018-04-01
We use scale invariant scattering theory to exactly determine the lines of renormalization group fixed points for O(N)-symmetric models with quenched disorder in two dimensions. Random fixed points are characterized by two disorder parameters: a modulus that vanishes when approaching the pure case, and a phase angle. The critical lines fall into three classes depending on the values of the disorder modulus. Besides the class corresponding to the pure case, a second class has maximal value of the disorder modulus and includes Nishimori-like multicritical points as well as zero temperature fixed points. The third class contains critical lines that interpolate, as N varies, between the first two classes. For positive N, it contains a single line of infrared fixed points spanning the values of N from √2 − 1 to 1. The symmetry sector of the energy density operator is superuniversal (i.e. N-independent) along this line. For N = 2 a line of fixed points exists only in the pure case, but accounts also for the Berezinskii-Kosterlitz-Thouless phase observed in presence of disorder.
The serpentine optical waveguide: engineering the dispersion relations and the stopped light points.
Scheuer, Jacob; Weiss, Ori
2011-06-06
We present a study of a new type of optical slow-light structure comprising a serpentine-shaped waveguide where the loops are coupled. The dispersion relation, group velocity and GVD are studied analytically using a transfer matrix method and numerically using finite difference time domain simulations. The structure exhibits zero group velocity points at the ends of the Brillouin zone, but also within the zone. The position of the mid-zone zero group velocity point can be tuned by modifying the coupling coefficient between adjacent loops. Closed-form analytic expressions for the dispersion relations, group velocity and the mid-zone zero-v_g points are found and presented.
Kosmulski, Marek; Maczka, Edward; Jartych, Elzbieta; Rosenholm, Jarl B
2003-03-19
Aging of synthetic goethite at 140 degrees C overnight leads to a composite material in which hematite is detectable by Mössbauer spectroscopy, but X-ray diffraction does not reveal any hematite peaks. The pristine point of zero charge (PZC) of synthetic goethite was found at pH 9.4 as the common intersection point of potentiometric titration curves at different ionic strengths and the isoelectric point (IEP). For the goethite-hematite composite, the common intersection point (pH 9.4), and the IEP (pH 8.8) do not match. The electrokinetic potential of goethite at ionic strengths up to 1 mol dm(-3) was determined. Unlike metal oxides, for which the electrokinetic potential is reversed to positive over the entire pH range at sufficiently high ionic strength, the IEP of goethite is rather insensitive to the ionic strength. A literature survey of published PZC/IEP values of iron oxides and hydroxides indicated that the average PZC/IEP does not depend on the degree of hydration (oxide or hydroxide). Our material showed a higher PZC and IEP than most published results. The present results confirm the allegation that electroacoustic measurements produce a higher IEP than the average IEP obtained by means of classical electrokinetic methods.
Quantum versus classical hyperfine-induced dynamics in a quantum dot
NASA Astrophysics Data System (ADS)
Coish, W. A.; Loss, Daniel; Yuzbashyan, E. A.; Altshuler, B. L.
2007-04-01
In this article we analyze spin dynamics for electrons confined to semiconductor quantum dots due to the contact hyperfine interaction. We compare mean-field (classical) evolution of an electron spin in the presence of a nuclear field with the exact quantum evolution for the special case of uniform hyperfine coupling constants. We find that (in this special case) the zero-magnetic-field dynamics due to the mean-field approximation and quantum evolution are similar. However, in a finite magnetic field, the quantum and classical solutions agree only up to a certain time scale t < τ_c, after which they differ markedly.
Turning Points in the Development of Classical Musicians
ERIC Educational Resources Information Center
Gabor, Elena
2011-01-01
This qualitative study investigated the vocational socialization turning points in families of classical musicians. I sampled and interviewed 20 parent-child dyads, for a total of 46 interviews. Data analysis revealed that classical musicians' experiences were marked by 11 turning points that affected their identification with the occupation:…
Butlitsky, M A; Zelener, B B; Zelener, B V
2014-07-14
A two-component plasma model, which we called a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model, where charges of zero size interact via the classical Coulomb law, with an important difference in the interaction of opposite charges: electrons and protons interact via the Coulomb law at large separations, while the interaction potential is cut off at small distances. The cut-off distance is defined by an arbitrary ε parameter, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/k_B T, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem holds in this model. All the calculations were carried out over a wide range of the dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume is v* = 1/γ³ and the reduced temperature is T* = ε⁻¹.
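One plausible reading of the "shelf Coulomb" interaction described above is sketched below in reduced units: unlike charges follow the Coulomb law beyond a cut-off distance set by the ε parameter and sit on a flat "shelf" inside it, while like charges remain purely Coulombic. The functional form inside the cut-off and the variable names are assumptions for illustration, not the authors' exact definition.

```python
def shelf_coulomb(r, same_sign, eps_cut):
    """Pair potential in reduced units (charges of magnitude 1).

    same_sign : True for electron-electron / proton-proton pairs,
                False for electron-proton pairs.
    eps_cut   : cut-off distance (related to the temperature-dependent
                epsilon parameter of the model).
    """
    if same_sign:
        return 1.0 / r            # plain repulsive Coulomb
    if r >= eps_cut:
        return -1.0 / r           # attractive Coulomb at large separations
    return -1.0 / eps_cut         # flat "shelf" inside the cut-off

for r in (0.05, 0.1, 0.5, 1.0):
    print(r, shelf_coulomb(r, same_sign=False, eps_cut=0.1))
```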
Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel
2011-05-23
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using basically two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (of course if there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice for a particular implementation may largely depend on the desired flexibility, and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
Thermodynamics of finite systems: a key issues review
NASA Astrophysics Data System (ADS)
Swendsen, Robert H.
2018-07-01
A little over ten years ago, Campisi, and Dunkel and Hilbert, published papers claiming that the Gibbs (volume) entropy of a classical system was correct, and that the Boltzmann (surface) entropy was not. They claimed further that the quantum version of the Gibbs entropy was also correct, and that the phenomenon of negative temperatures was thermodynamically inconsistent. Their work began a vigorous debate of exactly how the entropy, both classical and quantum, should be defined. The debate has called into question the basis of thermodynamics, along with fundamental ideas such as whether heat always flows from hot to cold. The purpose of this paper is to sum up the present status—admittedly from my point of view. I will show that standard thermodynamics, with some minor generalizations, is correct, and the alternative thermodynamics suggested by Hilbert, Hänggi, and Dunkel is not. Heat does not flow from cold to hot. Negative temperatures are thermodynamically consistent. The small ‘errors’ in the Boltzmann entropy that started the whole debate are shown to be a consequence of the micro-canonical assumption of an energy distribution of zero width. Improved expressions for the entropy are found when this assumption is abandoned.
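For orientation, the two competing entropy definitions at the centre of this debate are, for a system with Hamiltonian H and a small energy width ΔE (the notation here is the standard one, not copied from the paper),

\[
S_B(E) = k_B \ln\!\big[\omega(E)\,\Delta E\big], \qquad
S_G(E) = k_B \ln \Omega(E), \qquad
\Omega(E) = \operatorname{Tr}\,\theta(E - H), \quad
\omega(E) = \frac{\partial \Omega}{\partial E},
\]

with the temperature in either case defined through \(1/T = \partial S/\partial E\).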
Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan
2014-01-01
Head-on crashes are among the most severe collision types and of great concern to road safety authorities. This justifies greater efforts to reduce both the frequency and severity of this collision type. To this end, it is necessary to first identify the factors associated with crash occurrence. This can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study intends to identify the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was fitted by developing and comparing seven count-data models including Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used, given that a head-on crash had occurred. With respect to the crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness of fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were found to be positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to the crash severity, the results of REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures were proposed to minimize the risk of head-on crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.
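As a minimal illustration of the count-data modelling compared above, the sketch below fits Poisson and negative binomial regressions to simulated, overdispersed segment-level crash counts with statsmodels; the covariates and data are synthetic stand-ins, and the random-effect and zero-inflated variants used in the study are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 448                                        # number of road segments
curvature = rng.uniform(0, 1, n)               # synthetic covariates
speed_limit = rng.uniform(50, 90, n)

mu = np.exp(-1.0 + 1.5 * curvature - 0.02 * speed_limit)
crashes = rng.negative_binomial(1.0, 1.0 / (1.0 + mu))   # overdispersed counts

X = sm.add_constant(np.column_stack([curvature, speed_limit]))
poisson_fit = sm.Poisson(crashes, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(crashes, X).fit(disp=False)

print("Poisson AIC:", poisson_fit.aic)
print("Negative binomial AIC:", negbin_fit.aic)   # usually lower when overdispersed
```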
The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory. Revision.
1985-06-10
The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem. The inverse problem is not thought to present much of a challenge at zero cavitation number; in this case, the classical method of Levi-Civita [7] can be applied.
Absolute metrology for space interferometers
NASA Astrophysics Data System (ADS)
Salvadé, Yves; Courteville, Alain; Dändliker, René
2017-11-01
The crucial issue for space-based interferometers is the laser interferometric metrology system that monitors optical path differences with very high accuracy. Although classical high-resolution laser interferometers using a single wavelength are well developed, this type of incremental interferometer has a severe drawback: any interruption of the interferometer signal results in the loss of the zero reference, which requires a new calibration starting at zero optical path difference. We propose in this paper an absolute metrology system based on multiple-wavelength interferometry.
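Multiple-wavelength interferometry is usually understood through the synthetic wavelength generated by two closely spaced optical wavelengths, which sets the non-ambiguity range of an absolute distance measurement. The short sketch below computes this quantity for illustrative wavelength values that are assumptions, not figures from the paper.

```python
# Synthetic wavelength: Lambda = lambda1 * lambda2 / |lambda2 - lambda1|
lambda1 = 1550.0e-9        # m, illustrative value
lambda2 = 1550.8e-9        # m, illustrative value

synthetic = lambda1 * lambda2 / abs(lambda2 - lambda1)
print(f"synthetic wavelength = {synthetic * 1e3:.2f} mm")       # ~3 mm
print(f"non-ambiguity range  = {synthetic / 2 * 1e3:.2f} mm")   # ~1.5 mm
```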
Quantum Griffiths singularity of superconductor-metal transition in Ga thin films.
Xing, Ying; Zhang, Hui-Min; Fu, Hai-Long; Liu, Haiwen; Sun, Yi; Peng, Jun-Ping; Wang, Fa; Lin, Xi; Ma, Xu-Cun; Xue, Qi-Kun; Wang, Jian; Xie, X C
2015-10-30
The Griffiths singularity in a phase transition, caused by disorder effects, was predicted more than 40 years ago. Its signature, the divergence of the dynamical critical exponent, is challenging to observe experimentally. We report the experimental observation of the quantum Griffiths singularity in a two-dimensional superconducting system. We measured the transport properties of atomically thin gallium films and found that the films undergo superconductor-metal transitions with increasing magnetic field. Approaching the zero-temperature quantum critical point, we observed divergence of the dynamical critical exponent, which is consistent with the Griffiths singularity behavior. We interpret the observed superconductor-metal quantum phase transition as the infinite-randomness critical point, where the properties of the system are controlled by rare large superconducting regions. Copyright © 2015, American Association for the Advancement of Science.
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are—unavoidably—corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
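For readers unfamiliar with the estimators being compared, the sketch below computes the classical H1 and H2 FRF estimates from a simulated single-input, single-output measurement using scipy's spectral routines; the simulated system, the noise levels, and the exact conjugation convention of the cross-spectra are illustrative assumptions.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 1024.0, 2**16
x = rng.standard_normal(n)                          # broadband excitation

# Simulated structure: a simple low-pass filter standing in for a modal FRF,
# plus additive output measurement noise.
b, a = signal.butter(2, 0.1)
y = signal.lfilter(b, a, x) + 0.05 * rng.standard_normal(n)

f, Pxx = signal.welch(x, fs=fs, nperseg=1024)       # input auto-spectrum
_, Pyy = signal.welch(y, fs=fs, nperseg=1024)       # output auto-spectrum
_, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)      # cross-spectrum

H1 = Pxy / Pxx             # classical H1: least affected by output noise
H2 = Pyy / np.conj(Pxy)    # classical H2: least affected by input noise
coherence = np.abs(Pxy) ** 2 / (Pxx * Pyy)
print(np.abs(H1[:5]), np.abs(H2[:5]), coherence[:5])
```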
Classical and quantum stability in putative landscapes
Dine, Michael
2017-01-18
Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.
Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.
Zhenwei Miao; Xudong Jiang; Kim-Hui Yap
2016-01-01
The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by the high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter, and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes this filter invariant to the image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with the contrast dependent detectors, such as the popular scale invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt variations of images. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points as well as the image recognition rate under different conditions.
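To make the baseline concrete, the sketch below implements a plain (contrast-dependent) LoG interest point detector with scipy, i.e. the kind of detector whose contrast sensitivity the zero-norm variant is designed to remove; it does not implement the proposed zero-norm filter itself, and the threshold and sigma values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def log_interest_points(image, sigma=2.0, threshold=0.3):
    """Classical LoG detector: filter with a Laplacian of Gaussian and keep
    local maxima of the absolute response above a contrast threshold."""
    response = np.abs(ndimage.gaussian_laplace(image.astype(float), sigma=sigma))
    local_max = ndimage.maximum_filter(response, size=5)
    peaks = (response == local_max) & (response > threshold * response.max())
    return np.argwhere(peaks)                 # (row, col) coordinates

# Toy image: two blobs of very different contrast; the weaker one is missed
# once the threshold is tied to the strongest response in the image.
img = np.zeros((64, 64))
img[16, 16] = 1.0      # bright structure
img[48, 48] = 0.1      # low-contrast structure
img = ndimage.gaussian_filter(img, 2.0)
print(log_interest_points(img))
```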
Yu, Pei; Li, Zi-Yuan; Xu, Hong-Ya; Huang, Liang; Dietz, Barbara; Grebogi, Celso; Lai, Ying-Cheng
2016-12-01
A crucial result in quantum chaos, which has been established for a long time, is that the spectral properties of classically integrable systems generically are described by Poisson statistics, whereas those of time-reversal symmetric, classically chaotic systems coincide with those of random matrices from the Gaussian orthogonal ensemble (GOE). Does this result hold for two-dimensional Dirac material systems? To address this fundamental question, we investigate the spectral properties in a representative class of graphene billiards with shapes of classically integrable circular-sector billiards. Naively one may expect to observe Poisson statistics, which is indeed true for energies close to the band edges where the quasiparticle obeys the Schrödinger equation. However, for energies near the Dirac point, where the quasiparticles behave like massless Dirac fermions, Poisson statistics is extremely rare in the sense that it emerges only under quite strict symmetry constraints on the straight boundary parts of the sector. An arbitrarily small amount of imperfection of the boundary results in GOE statistics. This implies that, for circular-sector confinements with arbitrary angle, the spectral properties will generically be GOE. These results are corroborated by extensive numerical computation. Furthermore, we provide a physical understanding for our results.
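The Poisson-versus-GOE diagnostic used above boils down to the nearest-neighbour spacing distribution of the unfolded spectrum. A minimal numpy sketch of that analysis is given below, with random matrices standing in for the billiard spectra; the unfolding is the simplest possible (normalization by the mean spacing) and is an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def normalized_spacings(levels):
    """Nearest-neighbour spacings of a sorted spectrum, in units of the mean."""
    s = np.diff(np.sort(levels))
    return s / s.mean()                 # crude unfolding by the mean spacing

# Poisson-like spectrum: independent, uncorrelated levels.
poisson_levels = rng.uniform(0, 1, 2000)

# GOE-like spectrum: eigenvalues of a real symmetric random matrix
# (only the bulk is kept so the local mean spacing is roughly constant).
A = rng.standard_normal((2000, 2000))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2)[500:1500]

for name, levels in [("Poisson", poisson_levels), ("GOE", goe_levels)]:
    s = normalized_spacings(levels)
    print(f"{name}: fraction of spacings with s < 0.1 -> {np.mean(s < 0.1):.3f}")
# Level repulsion: the GOE spectrum has far fewer very small spacings
# (P(s) ~ s for small s) than the Poisson spectrum (P(s) ~ const).
```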
Very highly excited vibrational states of LiCN using a discrete variable representation
NASA Astrophysics Data System (ADS)
Henderson, James R.; Tennyson, Jonathan
Calculations are presented for the lowest 900 vibrational (J = 0) states of the LiCN floppy system for a two-dimensional potential energy surface (r_CN frozen). Most of these states lie well above the barrier separating the two linear isomers of the molecule and the point where the classical dynamics of the system becomes chaotic. Analysis of the wavefunctions of individual states in the high energy region shows that while most have an irregular nodal structure, a significant number of states appear regular - corresponding to solutions of standard, 'mode localized' Hamiltonians. Motions corresponding in zero-order to Li-CN and Li-NC normal modes as well as free rotor states are identified. The distribution of level spacings is also studied and yields results in good agreement with those obtained by analysing nodal structures.
Critical Exponents, Scaling Law, Universality and Renormalization Group Flow in Strong Coupling QED
NASA Astrophysics Data System (ADS)
Kondo, Kei-Ichi
The critical behavior of strongly coupled QED with a chiral-invariant four-fermion interaction (gauged Nambu-Jona-Lasinio model) is investigated through the unquenched Schwinger-Dyson equation including the fermion loop effect at the one-loop level. It is shown that the critical exponents satisfy the (hyper)scaling relations as in the quenched case. However, the respective critical exponent takes the classical mean-field value, and consequently unquenched QED belongs to the same universality class as the zero-charge model. On the other hand, it is pointed out that quenched QED violates not only universality but also weak universality, due to continuously varying critical exponents. Furthermore, the renormalization group flow of constant renormalized charge is given. All the results are consistent with triviality of QED and the gauged Nambu-Jona-Lasinio model in the unquenched case.
Effective dynamics of a classical point charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polonyi, Janos, E-mail: polonyi@iphc.cnrs.fr
2014-03-15
The effective Lagrangian of a point charge is derived by eliminating the electromagnetic field within the framework of the classical closed time path formalism. The short distance singularity of the electromagnetic field is regulated by a UV cutoff. The Abraham–Lorentz force is recovered and its similarity to quantum anomalies is underlined. The full cutoff-dependent linearized equation of motion is obtained, no runaway trajectories are found but the effective dynamics shows acausality if the cutoff is beyond the classical charge radius. The strength of the radiation reaction force displays a pole in its cutoff-dependence in a manner reminiscent of the Landau pole of perturbative QED. Similarity between the dynamical breakdown of the time reversal invariance and dynamical symmetry breaking is pointed out. -- Highlights: •Extension of the classical action principle for dissipative systems. •New derivation of the Abraham–Lorentz force for a point charge. •Absence of a runaway solution of the Abraham–Lorentz force. •Acausality in classical electrodynamics. •Renormalization of classical electrodynamics of point charges.
NASA Astrophysics Data System (ADS)
Prarokijjak, Worasak; Soodchomshom, Bumned
2018-04-01
Spin-valley transport and magnetoresistance are investigated in a silicene-based N/TB/N/TB/N junction, where N and TB denote normal silicene and topological barriers. The topological phase transitions in the TB's are controlled by electric and exchange fields and by circularly polarized light. As a result, we find that by applying electric and exchange fields, four groups of spin-valley currents are perfectly filtered, directly induced by topological phase transitions. Control of currents carried by single, double and triple channels of spin-valley electrons in the silicene junction may be achievable by adjusting the magnitudes of the electric field, exchange field and circularly polarized light. We identify the key factor behind the spin-valley current filtering at the transition points as the distinction between zero and non-zero Chern numbers. Electrons that are allowed to transport at the transition points must carry zero Chern number, which is equivalent to zero mass and zero Berry curvature, while electrons with non-zero Chern number are perfectly suppressed. Very large magnetoresistance dips are found, directly induced by the topological phase transition points. Our study also discusses the effect of the spin-valley dependent Hall conductivity at the transition points on ballistic transport and reveals the potential of silicene as a topological material for spin-valleytronics.
Budiyono, Agung; Rohrlich, Daniel
2017-11-03
Where does quantum mechanics part ways with classical mechanics? How does quantum randomness differ fundamentally from classical randomness? We cannot fully explain how the theories differ until we can derive them within a single axiomatic framework, allowing an unambiguous account of how one theory is the limit of the other. Here we derive non-relativistic quantum mechanics and classical statistical mechanics within a common framework. The common axioms include conservation of average energy and conservation of probability current. But two axioms distinguish quantum mechanics from classical statistical mechanics: an "ontic extension" defines a nonseparable (global) random variable that generates physical correlations, and an "epistemic restriction" constrains allowed phase space distributions. The ontic extension and epistemic restriction, with strength on the order of Planck's constant, imply quantum entanglement and uncertainty relations. This framework suggests that the wave function is epistemic, yet it does not provide an ontic dynamics for individual systems.
Decoherence can relax cosmic acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markkanen, Tommi
In this work we investigate the semi-classical backreaction for a quantised conformal scalar field and classical vacuum energy. In contrast to the usual approximation of a closed system, our analysis includes an environmental sector such that a quantum-to-classical transition can take place. We show that when the system decoheres into a mixed state with particle number as the classical observable, de Sitter space is destabilized, which is observable as a gradually decreasing Hubble rate. In particular we show that at late times this mechanism can drive the curvature of the Universe to zero and has an interpretation as the decay of the vacuum energy, demonstrating that quantum effects can be relevant for the fate of the Universe.
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
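As a rough illustration of the hurdle structure discussed above (without the random effects or the h-likelihood machinery of the paper), the sketch below fits a two-part hurdle Poisson model by maximum likelihood: a logistic part for zero versus non-zero counts and a zero-truncated Poisson part for the positive counts. The data, covariate and coefficient values are simulated and purely hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(0)

# Simulate covariate and hurdle-Poisson counts: a logistic "zero" part
# and a zero-truncated Poisson part for the positive counts.
n = 2000
x = rng.normal(size=n)
p_pos = expit(-0.5 + 1.0 * x)          # probability of a positive count
lam = np.exp(0.2 + 0.6 * x)            # Poisson rate for the positive part
y = np.zeros(n, dtype=int)
for i in np.where(rng.random(n) < p_pos)[0]:
    draw = 0
    while draw == 0:                    # zero-truncated Poisson by rejection
        draw = rng.poisson(lam[i])
    y[i] = draw

X = np.column_stack([np.ones(n), x])

def nll_logit(beta):
    eta = X @ beta
    z = (y > 0).astype(float)
    return -np.sum(z * eta - np.log1p(np.exp(eta)))

def nll_trunc_poisson(beta):
    mask = y > 0
    eta = X[mask] @ beta
    lam_ = np.exp(eta)
    yy = y[mask]
    # log pmf of the zero-truncated Poisson: y*log(lam) - lam - log(y!) - log(1 - e^{-lam})
    ll = yy * eta - lam_ - gammaln(yy + 1) - np.log(-np.expm1(-lam_))
    return -np.sum(ll)

fit_zero = minimize(nll_logit, np.zeros(2), method="BFGS")
fit_pos = minimize(nll_trunc_poisson, np.zeros(2), method="BFGS")
print("logit part:", fit_zero.x)              # should recover ~[-0.5, 1.0]
print("truncated Poisson part:", fit_pos.x)   # should recover ~[0.2, 0.6]
```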
Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana
2016-01-01
Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handled them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725
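For readers unfamiliar with the Peto method referenced above, the sketch below pools log odds ratios as the ratio of summed (O - E) to summed hypergeometric variances V. It also makes the paper's point concrete: a both-armed zero-event trial contributes O - E = 0 and V = 0, so it drops out of the pooled estimate automatically. The trial counts are made-up toy numbers, not data from the study.

```python
import numpy as np

def peto_pooled_or(studies):
    """Pooled Peto odds ratio.

    studies: list of (events_treat, n_treat, events_ctrl, n_ctrl).
    Both-armed zero-event studies contribute O - E = 0 and V = 0,
    so they do not affect the estimate.
    """
    num, den = 0.0, 0.0
    for a, n1, c, n2 in studies:
        N = n1 + n2
        m1 = a + c                     # total events in the trial
        if m1 == 0 or m1 == N:         # no information in this trial
            continue
        E = n1 * m1 / N                # expected events in the treatment arm
        V = n1 * n2 * m1 * (N - m1) / (N ** 2 * (N - 1))
        num += a - E
        den += V
    log_or = num / den
    se = 1.0 / np.sqrt(den)
    ci = (np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se))
    return np.exp(log_or), ci

# Three toy trials; the last is a both-armed zero-event (BA0E) study.
trials = [(2, 100, 6, 100), (1, 200, 4, 195), (0, 150, 0, 150)]
print(peto_pooled_or(trials))
```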
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
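A minimal numerical sketch of the idea, under simplifying assumptions not taken from the paper (unit exposures, zero recovery, a fixed default threshold): correlation structures are drawn from a Wishart-type ensemble, correlated asset returns are sampled from the corresponding factor representation, and the portfolio loss distribution is accumulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, k_factors, n_sims = 50, 5, 2000
threshold = -2.0                          # structural default threshold (in std. dev.)

losses = []
for _ in range(n_sims):
    # Random correlation structure C = A A^T / k, rescaled so diag(C) = 1.
    A = rng.normal(size=(n_assets, k_factors))
    norm = np.sqrt(np.sum(A ** 2, axis=1) / k_factors)
    A = A / norm[:, None]
    # Correlated standard-normal asset returns from the factor representation.
    returns = A @ rng.normal(size=k_factors) / np.sqrt(k_factors)
    defaults = returns < threshold
    losses.append(defaults.mean())        # fraction of the portfolio defaulting

losses = np.array(losses)
print("mean loss:", losses.mean())
print("99% quantile:", np.quantile(losses, 0.99))  # heavy tail, although correlations average to zero
```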
Sarkar, Biplab; Ghosh, Bhaswar; Sriramprasath; Mahendramohan, Sukumaran; Basu, Ayan; Goswami, Jyotirup; Ray, Amitabh
2010-01-01
The study aimed to compare the accuracy of monitor unit verification in intensity modulated radiation therapy (IMRT) using 6 MV photons by three different methodologies with different detector-phantom combinations. Sixty patients were randomly chosen. Zero-degree couch and gantry angle plans were generated in a plastic universal IMRT verification phantom and a 30×30×30 cc water phantom and measured using 0.125 cc and 0.6 cc chambers, respectively. Actual gantry and couch angle plans were also measured in the water phantom using the 0.6 cc chamber. A suitable point of measurement was chosen from the beam profile for each field. When the zero-degree gantry and couch angle plans and the actual gantry and couch angle plans were measured by the 0.6 cc chamber in the water phantom, the percentage mean difference (MD) was 1.35% and 2.94%, and the standard deviation (SD) was 2.99% and 5.22%, respectively. The plastic phantom measurements with the 0.125 cc Semiflex ionisation chamber (SIC) showed MD=4.21% and SD=2.73%, but when corrected for chamber-medium response, they showed an improvement, with MD=3.38% and SD=2.59%. It was found that measurements with the water phantom and 0.6 cc chamber at zero-degree gantry angle showed better conformity than the other medium-detector combinations. Correction of the plastic phantom measurement improved the result only marginally, and actual gantry angle measurement in a flat water phantom showed higher deviation. PMID:20927221
The Role of Eigensolutions in Nonlinear Inverse Cavity-Flow-Theory.
1983-01-25
The method of Levi-Civita is applied to an isolated fully cavitating body at zero cavitation number and adapted to the solution of the inverse problem, in which one... In this case, the classical method of Levi-Civita [7] can be applied to an isolated...
Luigi Gatteschi's work on asymptotics of special functions and their zeros
NASA Astrophysics Data System (ADS)
Gautschi, Walter; Giordano, Carla
2008-12-01
A good portion of Gatteschi's research publications, about 65%, is devoted to asymptotics of special functions and their zeros. Most prominent among the special functions studied are classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and, by implication, Hermite polynomials. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's form. This work is reviewed here and organized along methodological lines.
Offdiagonal complexity: A computationally quick complexity measure for graphs and networks
NASA Astrophysics Data System (ADS)
Claussen, Jens Christian
2007-02-01
A vast variety of biological, social, and economical networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated from the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power-law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This offdiagonal complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures as scale-free networks and hierarchical trees. The OdC approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
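The following sketch implements one plausible reading of the OdC recipe described above (the entropy of the normalized off-diagonal sums of the degree-degree link cross-distribution) using networkx; the graph choices and normalization details are illustrative assumptions rather than a faithful reproduction of the paper's definition.

```python
import numpy as np
import networkx as nx

def offdiagonal_complexity(G):
    """Offdiagonal complexity (OdC), under one reading of the definition:
    entropy of the normalized off-diagonal sums of the degree-degree
    link cross-distribution matrix."""
    degs = dict(G.degree())
    kmax = max(degs.values())
    m = np.zeros((kmax + 1, kmax + 1))
    for i, j in G.edges():
        k, l = sorted((degs[i], degs[j]))
        m[k, l] += 1.0
    # sum the matrix along its off-diagonals (d = l - k >= 0)
    a = np.array([np.trace(m, offset=d) for d in range(kmax + 1)])
    a = a[a > 0]
    a = a / a.sum()
    return -np.sum(a * np.log(a))

print("near-regular lattice:", offdiagonal_complexity(nx.grid_2d_graph(10, 10)))
print("random graph        :", offdiagonal_complexity(nx.gnp_random_graph(100, 0.05, seed=2)))
print("scale-free graph    :", offdiagonal_complexity(nx.barabasi_albert_graph(100, 2, seed=2)))
```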
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
On axiomatizations of the Shapley value for bi-cooperative games
NASA Astrophysics Data System (ADS)
Meirong, Wu; Shaochen, Cao; Huazhen, Zhu
2016-06-01
In bi-cooperative games, which can depict real-life situations more accurately, each participant has three choices available. This paper studies the Shapley value of bi-cooperative games and gives a unique axiomatic characterization. Axioms similar to those of classical cooperative games can be used to characterize the Shapley value of bi-cooperative games as well. In addition, a structural axiom and a zero-excluded axiom are introduced in place of the efficiency axiom of classical cooperative games.
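For orientation, the sketch below computes the classical (one-choice-per-player) Shapley value by enumerating player orderings; the bi-cooperative extension discussed in the paper is not implemented, and the worth function is a hypothetical toy example.

```python
from itertools import permutations

def shapley_value(players, v):
    """Classical Shapley value by enumerating player orderings.
    `v` maps a coalition (frozenset) to its worth; v(empty set) should be 0."""
    players = list(players)
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)   # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Toy 3-player game: the coalition is worth 1 only if both a and b join.
def worth(S):
    return 1.0 if {"a", "b"} <= S else 0.0

print(shapley_value(["a", "b", "c"], worth))   # a and b get 1/2 each; c gets 0
```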
A General Theory of Unsteady Compressible Potential Aerodynamics
NASA Technical Reports Server (NTRS)
Morino, L.
1974-01-01
The general theory of potential aerodynamic flow around a lifting body having arbitrary shape and motion is presented. By using the Green function method, an integral representation for the potential is obtained for both supersonic and subsonic flow. Under the small perturbation assumption, the potential at any point, P, in the field depends only upon the values of the potential and its normal derivative on the surface, sigma, of the body. Hence, if the point P approaches the surface of the body, the representation reduces to an integro-differential equation relating the potential and its normal derivative (which is known from the boundary conditions) on the surface sigma. For the important practical case of small harmonic oscillations around a rest position, the equation reduces to a two-dimensional Fredholm integral equation of the second kind. It is shown that this equation reduces properly to the lifting surface theories as well as other classical mathematical formulations. The question of uniqueness is examined and it is shown that, for thin wings, the operator becomes singular as the thickness approaches zero. This fact may yield numerical problems for very thin wings.
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that the zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
Citation classics in periodontology: a controlled study.
Nieri, Michele; Saletta, Daniele; Guidi, Luisa; Buti, Jacopo; Franceschi, Debora; Mauro, Saverio; Pini-Prato, Giovanpaolo
2007-04-01
The aims of this study were to identify the most cited articles in periodontology published from January 1990 to March 2005, and to analyse the differences between citation Classics and less cited articles. The search was carried out in four international periodontal journals: Journal of Periodontology, Journal of Clinical Periodontology, International Journal of Periodontics and Restorative Dentistry and Journal of Periodontal Research. The Classics, that is, articles cited at least 100 times, were identified using the Science Citation Index database. From every issue of the journals that contained a Classic, another article was randomly selected and used as a Control. Fifty-five Classics and 55 Controls were identified. Classic articles were longer, used more images, had more authors, and contained more self-references than Controls. Moreover, Classics had on average a bigger sample size, often dealt with etiopathogenesis and prognosis, but were rarely controlled or randomized studies. Classic articles play an instructive role, but are often non-controlled studies.
Mancini, John S; Bowman, Joel M
2013-03-28
We report a global, full-dimensional, ab initio potential energy surface describing the HCl-H2O dimer. The potential is constructed from a permutationally invariant fit, using Morse-like variables, to over 44,000 CCSD(T)-F12b∕aug-cc-pVTZ energies. The surface describes the complex and dissociated monomers with a total RMS fitting error of 24 cm(-1). The normal modes of the minima, low-energy saddle point and separated monomers, the double minimum isomerization pathway and electronic dissociation energy are accurately described by the surface. Rigorous quantum mechanical diffusion Monte Carlo (DMC) calculations are performed to determine the zero-point energy and wavefunction of the complex and the separated fragments. The calculated zero-point energies together with a De value calculated from CCSD(T) with a complete basis set extrapolation gives a D0 value of 1348 ± 3 cm(-1), in good agreement with the recent experimentally reported value of 1334 ± 10 cm(-1) [B. E. Casterline, A. K. Mollner, L. C. Ch'ng, and H. Reisler, J. Phys. Chem. A 114, 9774 (2010)]. Examination of the DMC wavefunction allows for confident characterization of the zero-point geometry to be dominant at the C(2v) double-well saddle point and not the C(s) global minimum. Additional support for the delocalized zero-point geometry is given by numerical solutions to the 1D Schrödinger equation along the imaginary-frequency out-of-plane bending mode, where the zero-point energy is calculated to be 52 cm(-1) above the isomerization barrier. The D0 of the fully deuterated isotopologue is calculated to be 1476 ± 3 cm(-1), which we hope will stand as a benchmark for future experimental work.
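As a toy illustration of how diffusion Monte Carlo yields zero-point energies, the sketch below runs unguided DMC (no importance sampling, unlike the production calculations above) on a one-dimensional harmonic oscillator with hbar = m = omega = 1, whose exact zero-point energy is 0.5; all parameters are illustrative.

```python
import numpy as np

# Plain diffusion Monte Carlo for a 1D harmonic oscillator; exact ZPE = 0.5.
rng = np.random.default_rng(3)
n_target, dt, n_steps, n_equil = 2000, 0.01, 4000, 1000

def V(x):
    return 0.5 * x ** 2

x = rng.normal(size=n_target)                            # initial walker positions
E_ref = V(x).mean()
samples = []
for step in range(n_steps):
    x = x + np.sqrt(dt) * rng.normal(size=x.size)        # free diffusion step
    w = np.exp(-dt * (V(x) - E_ref))                     # birth/death weights
    n_copies = (w + rng.random(x.size)).astype(int)      # stochastic rounding
    x = np.repeat(x, n_copies)
    if x.size == 0:
        raise RuntimeError("walker population died out; increase n_target")
    # growth estimator with weak feedback that keeps the population near n_target
    E_ref = V(x).mean() + (1.0 - x.size / n_target) / dt * 0.1
    if step >= n_equil:
        samples.append(V(x).mean())

print("DMC zero-point energy ~", np.mean(samples), "(exact 0.5)")
```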
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-04-01
A theory for the thermodynamic properties of steps on faceted crystalline surfaces is presented. The formalism leads to the definition of step excess quantities, including an excess step stress that is the step analogy of surface stress. The approach is used to develop a relationship between the temperature dependence of the step free energy (γst) and step excess quantities for energy and stress that can be readily calculated by atomistic simulations. We demonstrate the application of this formalism in thermodynamic-integration (TI) calculations of the step free energy, based on molecular-dynamics simulations, considering <110 > steps on the {111 } surface of a classical potential model for elemental Cu. In this application we employ the Frenkel-Ladd approach to compute the reference value of γst for the TI calculations. Calculated results for excess energy and stress show relatively weak temperature dependencies up to a homologous temperature of approximately 0.6, above which these quantities increase strongly and the step stress becomes more isotropic. From the calculated excess quantities we compute γst over the temperature range from zero up to the melting point (Tm). We find that γst remains finite up to Tm, indicating the absence of a roughening temperature for this {111 } surface facet, but decreases by roughly fifty percent from the zero-temperature value. The strongest temperature dependence occurs above homologous temperatures of approximately 0.6, where the step becomes configurationally disordered due to the formation of point defects and appreciable capillary fluctuations.
NASA Astrophysics Data System (ADS)
Kojima, H.; Yamada, A.; Okazaki, S.
2015-05-01
The intramolecular proton transfer reaction of malonaldehyde in neon solvent has been investigated by mixed quantum-classical molecular dynamics (QCMD) calculations and fully classical molecular dynamics (FCMD) calculations. Comparing these calculated results with those for malonaldehyde in water reported in Part I [A. Yamada, H. Kojima, and S. Okazaki, J. Chem. Phys. 141, 084509 (2014)], the solvent dependence of the reaction rate, the reaction mechanism involved, and the quantum effect therein have been investigated. With FCMD, the reaction rate in weakly interacting neon is lower than that in strongly interacting water. However, with QCMD, the order of the reaction rates is reversed. To investigate the mechanisms in detail, the reactions were categorized into three mechanisms: tunneling, thermal activation, and barrier vanishing. Then, the quantum and solvent effects were analyzed from the viewpoint of the reaction mechanism focusing on the shape of potential energy curve and its fluctuations. The higher reaction rate that was found for neon in QCMD compared with that found for water solvent arises from the tunneling reactions because of the nearly symmetric double-well shape of the potential curve in neon. The thermal activation and barrier vanishing reactions were also accelerated by the zero-point energy. The number of reactions based on these two mechanisms in water was greater than that in neon in both QCMD and FCMD because these reactions are dominated by the strength of solute-solvent interactions.
Effects of an acidic beverage (Coca-Cola) on absorption of ketoconazole.
Chin, T W; Loeb, M; Fong, I W
1995-01-01
Absorption of ketoconazole is impaired in patients with achlorhydria. The purpose of this study was to determine the effectiveness of a palatable acidic beverage (Coca-Cola Classic, pH 2.5) in improving the absorption of ketoconazole in the presence of drug-induced achlorhydria. A prospective, randomized, three-way crossover design with a 1-week wash-out period between each treatment was employed. Nine healthy nonsmoking, nonobese volunteers between 22 and 41 years old were studied. Each subject was randomized to receive three treatments: (A) ketoconazole 200-mg tablet with water (control), (B) omeprazole (60 mg) followed by ketoconazole (200 mg) taken with water, and (C) omeprazole (60 mg) followed by ketoconazole (200 mg) taken with 240 ml of Coca-Cola Classic. The pH values of gastric aspirates were checked after omeprazole was administered to confirm attainment of a pH of > 6. Multiple serum samples were obtained for measurements of ketoconazole concentrations by high-pressure liquid chromatography. The mean area under the ketoconazole concentration-time curve from zero to infinity for the control treatment (17.9 +/- 13.1 mg.h/liter) was significantly greater than that for treatment B (3.5 +/- 5.1 mg.h/liter; 16.6% +/- 15.0% of control). The mean peak concentration was highest for the control treatment (4.1 +/- 1.9 micrograms/ml), for which the mean peak concentration showed a significant increase over that for treatment B. The absorption of ketoconazole was reduced in the presence of omeprazole-induced achlorhydria. However, drug absorption was significantly increased, to approximately 65% of the mean for the control treatment, when the drug was taken with an acidic beverage, such as Coca-Cola. PMID:7486898
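For reference, the area under the concentration-time curve from zero to infinity quoted above is conventionally computed by the trapezoidal rule plus a log-linear tail extrapolation (C_last divided by the terminal rate constant). The sketch below does exactly that on hypothetical sampling times and concentrations, not on data from this study.

```python
import numpy as np

def auc_zero_to_inf(t, c):
    """AUC(0-inf): linear trapezoidal rule over the observed samples plus
    log-linear tail extrapolation C_last / lambda_z (lambda_z from the last 3 points).
    The small area before the first sample is ignored in this sketch."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc_last = np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0)
    lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]    # terminal elimination rate
    return auc_last + c[-1] / lam_z

# Hypothetical sampling times (h) and concentrations (mg/L), for illustration only.
times = [0.5, 1, 2, 4, 6, 8, 12]
conc  = [1.2, 3.8, 4.1, 2.6, 1.5, 0.9, 0.3]
print("AUC(0-inf) =", auc_zero_to_inf(times, conc), "mg*h/L")
```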
Classical and quantum theories of proton disorder in hexagonal water ice
NASA Astrophysics Data System (ADS)
Benton, Owen; Sikora, Olga; Shannon, Nic
2016-03-01
It has been known since the pioneering work of Bernal, Fowler, and Pauling that common, hexagonal (Ih) water ice is the archetype of a frustrated material: a proton-bonded network in which protons satisfy strong local constraints (the "ice rules") but do not order. While this proton disorder is well established, there is now a growing body of evidence that quantum effects may also have a role to play in the physics of ice at low temperatures. In this paper, we use a combination of numerical and analytic techniques to explore the nature of proton correlations in both classical and quantum models of ice Ih. In the case of classical ice Ih, we find that the ice rules have two, distinct, consequences for scattering experiments: singular "pinch points," reflecting a zero-divergence condition on the uniform polarization of the crystal, and broad, asymmetric features, coming from its staggered polarization. In the case of the quantum model, we find that the collective quantum tunneling of groups of protons can convert states obeying the ice rules into a quantum liquid, whose excitations are birefringent, emergent photons. We make explicit predictions for scattering experiments on both classical and quantum ice Ih, and show how the quantum theory can explain the "wings" of incoherent inelastic scattering observed in recent neutron scattering experiments [Bove et al., Phys. Rev. Lett. 103, 165901 (2009), 10.1103/PhysRevLett.103.165901]. These results raise the intriguing possibility that the protons in ice Ih could form a quantum liquid at low temperatures, in which protons are not merely disordered, but continually fluctuate between different configurations obeying the ice rules.
A review on models for count data with extra zeros
NASA Astrophysics Data System (ADS)
Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah
2017-04-01
Zero-inflated models are typically used in modelling count data with excess zeros. The extra zeros can be structural zeros or random zeros that occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and health sciences. As found in the literature, the most popular zero-inflated models are the zero-inflated Poisson and the zero-inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity. In addition, more extended distributions are also considered in modelling data with this feature. In this paper, we review the related literature and provide a recent development and summary on models for count data with extra zeros.
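A minimal sketch of the zero-inflated Poisson mixture mentioned in the review, fitted by maximum likelihood on simulated data and compared with a plain Poisson fit via AIC; the parameter values are arbitrary illustrations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(4)
n = 3000
pi_true, lam_true = 0.35, 2.5                 # structural-zero probability, Poisson mean
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, size=n))

def zip_nll(params):
    """Negative log-likelihood of the ZIP mixture:
    P(0) = pi + (1-pi) e^{-lam};  P(y>0) = (1-pi) e^{-lam} lam^y / y!"""
    pi = expit(params[0])                     # keeps pi in (0, 1)
    lam = np.exp(params[1])                   # keeps lam > 0
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def poisson_nll(params):
    lam = np.exp(params[0])
    return -np.sum(-lam + y * np.log(lam) - gammaln(y + 1))

fit_zip = minimize(zip_nll, np.array([0.0, 0.0]), method="BFGS")
fit_poi = minimize(poisson_nll, np.array([0.0]), method="BFGS")
print("estimated pi, lambda:", expit(fit_zip.x[0]), np.exp(fit_zip.x[1]))
print("AIC ZIP:", 2 * 2 + 2 * fit_zip.fun, " AIC Poisson:", 2 * 1 + 2 * fit_poi.fun)
```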
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
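The upper/lower value iteration described above can be illustrated, in a much simpler setting than the nonlinear ADP of the paper, by tabular value iteration on a small finite zero-sum dynamic game; the random costs and transitions below are purely illustrative.

```python
import numpy as np

# Tabular analogue of upper/lower value iteration for a discounted
# two-player zero-sum game on a small random finite game.
rng = np.random.default_rng(5)
nS, nU, nW, gamma = 6, 3, 3, 0.9
cost = rng.normal(size=(nS, nU, nW))                # stage cost g(x, u, w)
nxt = rng.integers(0, nS, size=(nS, nU, nW))        # deterministic transitions

V_up = np.zeros(nS)      # upper value: minimizer commits first
V_lo = np.zeros(nS)      # lower value: maximizer commits first
for _ in range(500):
    Q_up = cost + gamma * V_up[nxt]                 # shape (nS, nU, nW)
    Q_lo = cost + gamma * V_lo[nxt]
    V_up = Q_up.max(axis=2).min(axis=1)             # min over u of max over w
    V_lo = Q_lo.min(axis=1).max(axis=1)             # max over w of min over u

print("upper values:", np.round(V_up, 3))
print("lower values:", np.round(V_lo, 3))
print("saddle point in pure strategies iff the two coincide:", np.allclose(V_up, V_lo))
```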
The Photon Shell Game and the Quantum von Neumann Architecture with Superconducting Circuits
NASA Astrophysics Data System (ADS)
Mariantoni, Matteo
2012-02-01
Superconducting quantum circuits have made significant advances over the past decade, allowing more complex and integrated circuits that perform with good fidelity. We have recently implemented a machine comprising seven quantum channels, with three superconducting resonators, two phase qubits, and two zeroing registers. I will explain the design and operation of this machine, first showing how a single microwave photon | 1 > can be prepared in one resonator and coherently transferred between the three resonators. I will also show how more exotic states such as double photon states | 2 > and superposition states | 0 >+ | 1 > can be shuffled among the resonators as well [1]. I will then demonstrate how this machine can be used as the quantum-mechanical analog of the von Neumann computer architecture, which for a classical computer comprises a central processing unit and a memory holding both instructions and data. The quantum version comprises a quantum central processing unit (quCPU) that exchanges data with a quantum random-access memory (quRAM) integrated on one chip, with instructions stored on a classical computer. I will also present a proof-of-concept demonstration of a code that involves all seven quantum elements: (1) preparing an entangled state in the quCPU, (2) writing it to the quRAM, (3) preparing a second state in the quCPU, (4) zeroing it, and (5) reading out the first state stored in the quRAM [2]. Finally, I will demonstrate that the quantum von Neumann machine provides one unit cell of a two-dimensional qubit-resonator array that can be used for surface code quantum computing. This will allow the realization of a scalable, fault-tolerant quantum processor with the most forgiving error rates to date. [1] M. Mariantoni et al., Nature Physics 7, 287-293 (2011). [2] M. Mariantoni et al., Science 334, 61-65 (2011).
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In this recipe, the probability of observing the particle on the direct product graph is obtained as the product of the probabilities on the corresponding sub-graphs; this method is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct products of Cayley graphs (complete cycle, complete graph Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
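A small numerical sketch of the two walks on a single graph (a cycle), assuming the common conventions of a Laplacian generator for the classical walk and an adjacency-matrix Hamiltonian for the quantum walk; it is not the direct-product construction of the paper, but it shows the contrast in limiting behaviour noted above.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time classical vs quantum walk on a cycle C_N, started at vertex 0.
N, t = 8, 3.0
A = np.zeros((N, N))
for j in range(N):
    A[j, (j + 1) % N] = A[(j + 1) % N, j] = 1.0
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian

p0 = np.zeros(N); p0[0] = 1.0
p_classical = expm(-L * t) @ p0                # heat-kernel probabilities
psi = expm(-1j * A * t) @ p0.astype(complex)   # quantum amplitude evolution
p_quantum = np.abs(psi) ** 2

print("classical:", np.round(p_classical, 3))  # relaxes toward the uniform 1/N
print("quantum  :", np.round(p_quantum, 3))    # oscillates, no stationary limit
```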
NASA Astrophysics Data System (ADS)
Dao, Vu Hung; Frésard, Raymond
2017-10-01
The charge dynamical response function of the t-t'-U Hubbard model is investigated on the square lattice in the thermodynamical limit. The correlation function is calculated from Gaussian fluctuations around the paramagnetic saddle-point within the Kotliar and Ruckenstein slave-boson representation. The next-nearest-neighbor hopping only slightly affects the renormalization of the quasiparticle mass. In contrast a negative t'/t notably decreases (increases) their velocity, and hence the zero-sound velocity, at positive (negative) doping. For low (high) density n ≲ 0.5 (n ≳ 1.5) we find that it enhances (reduces) the damping of the zero-sound mode. Furthermore it softens (hardens) the upper-Hubbard-band collective mode at positive (negative) doping. It is also shown that our results differ markedly from the random-phase approximation in the strong-coupling limit, even at high doping, while they compare favorably with existing quantum Monte Carlo numerical simulations.
40 CFR 86.540-90 - Exhaust sample analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., if appropriate, NOX: (1) Zero the analyzers and obtain a stable zero reading. Recheck after tests. (2... actual concentrations on chart. (3) Check zeros; repeat the procedure in paragraphs (a) (1) and (2) of... appropriate, NOX. concentrations of samples. (6) Check zero and span points. If difference is greater than 2...
The pH-dependent surface charging and points of zero charge: V. Update.
Kosmulski, Marek
2011-01-01
The points of zero charge (PZC) and isoelectric points (IEP) from the recent literature are discussed. This study is an update of the previous compilation [M. Kosmulski, Surface Charging and Points of Zero Charge, CRC, Boca Raton, FL, 2009] and of its previous update [J. Colloid Interface Sci. 337 (2009) 439]. In several recent publications, the terms PZC/IEP have been used outside their usual meaning. Only the PZC/IEP obtained according to the methods recommended by the present author are reported in this paper, and the other results are ignored. PZC/IEP of albite, sepiolite, and sericite, which have not been studied before, became available over the past 2 years. Copyright © 2010 Elsevier Inc. All rights reserved.
Comment on ‘The paradoxical zero reflection at zero energy’
NASA Astrophysics Data System (ADS)
van Dijk, W.; Nogami, Y.
2017-05-01
We point out that the anomalous threshold effect in one dimension occurs when the reflection probability at zero energy R(0) has some value other than unity, rather than R(0) = 0 or R(0) ≪ 1 as implied by Ahmed et al in their paper entitled ‘The paradoxical zero reflection at zero energy’ (2017 Eur. J. Phys. 38 025401).
Intelligent control of PV system on the basis of the fuzzy recurrent neuronet*
NASA Astrophysics Data System (ADS)
Engel, E. A.; Kovalev, I. V.; Engel, N. E.
2016-04-01
This paper presents a fuzzy recurrent neural network for PV system control. Based on the PV system's state, the fuzzy recurrent neural net tracks the maximum power point under random perturbations. The validity and advantages of the proposed intelligent control of the PV system are demonstrated by numerical simulations. The simulation results show that the proposed intelligent control of the PV system achieves real-time control speed and competitive performance, as compared to a classical control scheme based on the perturb and observe algorithm.
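The classical baseline mentioned above, the perturb and observe (P&O) maximum-power-point tracker, can be sketched in a few lines; the PV power curve used here is a crude hypothetical stand-in, not a model of the paper's PV system.

```python
import numpy as np

def pv_power(v, irradiance=1.0):
    """Toy PV curve (hypothetical): power peaks somewhere below open-circuit voltage."""
    i = irradiance * (1.0 - np.exp((v - 40.0) / 3.0))   # crude diode-like I-V relation
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, dv=0.5, steps=60):
    """Classical perturb & observe MPPT: keep stepping the operating voltage in the
    direction that increased the measured power; reverse when power drops."""
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:
            direction = -direction            # power dropped: reverse the perturbation
        p_prev = p
    return v, p_prev

print(perturb_and_observe())    # settles into oscillation around the maximum power point
```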
Time-dependent reflection at the localization transition
NASA Astrophysics Data System (ADS)
Skipetrov, Sergey E.; Sinha, Aritra
2018-03-01
A short quasimonochromatic wave packet incident on a semi-infinite disordered medium gives rise to a reflected wave. The intensity of the latter decays as a power law, 1 /tα , in the long-time limit. Using the one-dimensional Aubry-André model, we show that in the vicinity of the critical point of Anderson localization transition, the decay slows down, and the power-law exponent α becomes smaller than both α =2 found in the Anderson localization regime and α =3 /2 expected for a one-dimensional random walk of classical particles.
Wave-packet formation at the zero-dispersion point in the Gardner-Ostrovsky equation.
Whitfield, A J; Johnson, E R
2015-05-01
The long-time effect of weak rotation on an internal solitary wave is the decay into inertia-gravity waves and the eventual emergence of a coherent, steadily propagating, nonlinear wave packet. There is currently no entirely satisfactory explanation as to why these wave packets form. Here the initial value problem is considered within the context of the Gardner-Ostrovsky, or rotation-modified extended Korteweg-de Vries, equation. The linear Gardner-Ostrovsky equation has maximum group velocity at a critical wave number, often called the zero-dispersion point. It is found here that a nonlinear splitting of the wave-number spectrum at the zero-dispersion point, where energy is shifted into the modulationally unstable regime of the Gardner-Ostrovsky equation, is responsible for the wave-packet formation. Numerical comparisons of the decay of a solitary wave in the Gardner-Ostrovsky equation and a derived nonlinear Schrödinger equation at the zero-dispersion point are used to confirm the spectral splitting.
Generating and controlling homogeneous air turbulence using random jet arrays
NASA Astrophysics Data System (ADS)
Carter, Douglas; Petersen, Alec; Amili, Omid; Coletti, Filippo
2016-12-01
The use of random jet arrays, already employed in water tank facilities to generate zero-mean-flow homogeneous turbulence, is extended to air as a working fluid. A novel facility is introduced that uses two facing arrays of individually controlled jets (256 in total) to force steady homogeneous turbulence with negligible mean flow, shear, and strain. Quasi-synthetic jet pumps are created by expanding pressurized air through small straight nozzles and are actuated by fast-response low-voltage solenoid valves. Velocity fields, two-point correlations, energy spectra, and second-order structure functions are obtained from 2D PIV and are used to characterize the turbulence from the integral to the Kolmogorov scales. Several metrics are defined to quantify how well zero-mean-flow homogeneous turbulence is approximated for a wide range of forcing and geometric parameters. With increasing jet firing time duration, both the velocity fluctuations and the integral length scales are augmented and therefore the Reynolds number is increased. We reach a Taylor-microscale Reynolds number of 470, a large-scale Reynolds number of 74,000, and an integral-to-Kolmogorov length scale ratio of 680. The volume of the present homogeneous turbulence, the largest reported to date in a zero-mean-flow facility, is much larger than the integral length scale, allowing for the natural development of the energy cascade. The turbulence is found to be anisotropic irrespective of the distance between the jet arrays. Fine grids placed in front of the jets are effective at modulating the turbulence, reducing both velocity fluctuations and integral scales. Varying the jet-to-jet spacing within each array has no effect on the integral length scale, suggesting that this is dictated by the length scale of the jets.
Compound simulator IR radiation characteristics test and calibration
NASA Astrophysics Data System (ADS)
Li, Yanhong; Zhang, Li; Li, Fan; Tian, Yi; Yang, Yang; Li, Zhuo; Shi, Rui
2015-10-01
Hardware-in-the-loop simulation can reproduce, in the laboratory, the physical radiation of targets and interference sources and the interception phase of a product's flight. Simulating the environment is particularly difficult when the radiation energy is high and the interference model is complicated. Here, a development in IR scene generation based on a fiber-array imaging transducer with circumferential lamp spot sources is introduced. The IR simulation capability includes effective simulation of aircraft signatures and point-source IR countermeasures. Two interference point sources can move in random two-dimensional directions. To simulate the interference release process, the radiation and motion characteristics were tested. After zero calibration of the simulator's optical axis, the radiation can be projected accurately onto the product's detector. The test and calibration results show that the new compound simulator can be used in hardware-in-the-loop simulation trials.
NASA Astrophysics Data System (ADS)
Konno, Rikio; Hatayama, Nobukuni; Takahashi, Yoshinori
2018-05-01
We have investigated the temperature dependence of the magnetic susceptibility of itinerant nearly ferromagnetic compounds based on the spin fluctuation theory. It is based on the conservation of the local spin amplitude that consists of both the thermal and the zero-point components. The linear dependence of the zero-point spin fluctuation amplitude on the inverse of magnetic susceptibility is usually assumed. The purpose of our present study is to include its higher order terms and to see their effects on the magnetic susceptibility. For the thermal amplitude, it shows T2-linear temperature dependence at low temperatures.
Normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2015-01-01
The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties of each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This leads to the conclusion that the local dynamics of formal Hopf-zero singularities is well understood through the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.
NASA Astrophysics Data System (ADS)
Brokešová, Johana; Málek, Jiří
2018-07-01
A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
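The core step of locating zero-crossing times to sub-sample precision can be sketched as follows (the decomposition into quasi-harmonic components is omitted); the test signal is an arbitrary illustration.

```python
import numpy as np

def zero_crossing_times(t, x):
    """Sub-sample zero-crossing times of a sampled signal, using linear
    interpolation between the two samples that bracket each sign change."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    idx = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
    return t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

# Quasi-harmonic test signal: a 1.5 Hz component with a slowly varying amplitude.
t = np.linspace(0.0, 4.0, 4001)
x = np.sin(2 * np.pi * 1.5 * t + 0.3) * (1.0 + 0.1 * t)
print(zero_crossing_times(t, x))
```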
Stress reduction through listening to Indian classical music during gastroscopy.
Kotwal, M R; Rinchhen, C Z; Ringe, V V
1998-01-01
The purpose of this study was to examine the effects of music on an elevated state of anxiety, as many patients become stressed and anxious during diagnostic procedures. The study was conducted on 104 consecutive patients undergoing GI endoscopy for various reasons. Patients were randomly assigned to two groups regardless of sex, age and underlying disease. One group of 54 patients listened to recorded Indian classical instrumental music before and during the procedure, while the other group of 50 patients did not. Blood pressure, heart rate and respiratory rate were recorded at the beginning of the consultation and at the end of the procedure. Perception of the procedure was assessed using a three-point attitude scale. Our results indicate that background Indian classical music is efficacious in reducing psychological distress during a gastroscopic examination. We suggest that music could be applied to other medical situations as well, which tend to generate undue psychological stress and anxiety. Music, as a familiar personal and cultural medium, can be used to ease anxiety, to act as a distractor, and to raise discomfort and pain thresholds.
Two-Way Communication with a Single Quantum Particle.
Del Santo, Flavio; Dakić, Borivoje
2018-02-09
In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and we show that the probability of success is asymptotically decreasing to zero as the number of parties grows, for all classical strategies. In contrast, quantum strategy allows players to win the game with certainty.
Biological monitoring of environmental quality: The use of developmental instability
Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.
1994-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.
2011-01-01
Background Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), R package MCMCglmm and SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using basically two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (of course if there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility, and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
40 CFR 89.324 - Calibration of other equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...
2010-12-01
Air Force Research Laboratory, Hanscom AFB, MA, 2010 December. © 2010, The American Astronomical Society. The absolutely calibrated... the visible and Sirius (α CMa) in the infrared. The resulting zero-point SED tests well against solar analog data presented by Rieke et al. while also maintaining an unambiguous link to specific...
NASA Astrophysics Data System (ADS)
Buffoni, Boris; Groves, Mark D.; Wahlén, Erik
2017-12-01
Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β greater than 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.
NASA Astrophysics Data System (ADS)
Gavrilov, S. N.; Krivtsov, A. M.; Tsvetkov, D. V.
2018-05-01
We consider unsteady heat transfer in a one-dimensional harmonic crystal surrounded by a viscous environment and subjected to an external heat supply. The basic equations for the crystal particles are stated in the form of a system of stochastic differential equations. We perform a continualization procedure and derive an infinite set of linear partial differential equations for covariance variables. An exact analytic solution describing unsteady ballistic heat transfer in the crystal is obtained. It is shown that the stationary spatial profile of the kinetic temperature caused by a point source of heat supply of constant intensity is described by the Macdonald function of zero order. A comparison with the results obtained in the framework of the classical heat equation is presented. We expect that the results obtained in the paper can be verified by experiments with laser excitation of low-dimensional nanostructures.
Laminated beams: deflection and stress as a function of epoxy shear modulus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bialek, J.
1976-01-01
The large toroidal field coil deflections observed during the PLT power test are due to the poor shear behavior of the insulation material used between layers of copper. Standard techniques for analyzing such laminated structures do not account for this effect. This paper presents an analysis of laminated beams that corrects this deficiency. The analysis explicitly models the mechanical behavior of each layer in a laminated beam and hence avoids the pitfalls involved in any averaging technique. In particular, the shear modulus of the epoxy in a laminated beam (consisting of alternate layers of metal and epoxy) may span the entire range of values from zero to classical. Solution of the governing differential equations defines the stress, strain, and deflection for any point within a laminated beam. The paper summarizes these governing equations and also includes a parametric study of a simple laminated beam.
Xiao, Cong; Li, Dingping
2016-06-15
Semiclassical magnetoelectric and magnetothermoelectric transport in strongly spin-orbit coupled Rashba two-dimensional electron systems is investigated. In the presence of a perpendicular classically weak magnetic field and short-range impurity scattering, we solve the linearized Boltzmann equation self-consistently. Using the solution, it is found that when Fermi energy E F locates below the band crossing point (BCP), the Hall coefficient is a nonmonotonic function of electron density n e and not inversely proportional to n e. While the magnetoresistance (MR) and Nernst coefficient vanish when E F locates above the BCP, non-zero MR and enhanced Nernst coefficient emerge when E F decreases below the BCP. Both of them are nonmonotonic functions of E F below the BCP. The different semiclassical magnetotransport behaviors between the two sides of the BCP can be helpful to experimental identifications of the band valley regime and topological change of Fermi surface in considered systems.
NASA Astrophysics Data System (ADS)
Alexandrov, A. N.; Zhdanov, V. I.; Koval, S. M.
We derive approximate formulas for the coordinates and magnification of critical images of a point source in the vicinity of a cusp caustic arising in the gravitational lens mapping. In the lowest (zero-order) approximation, these formulas were obtained in the classical work by Schneider & Weiss (1992) and then studied by a number of authors; first-order corrections in powers of the proximity parameter were treated by Congdon, Keeton and Nordgren. We have shown that the first-order corrections are solely due to the asymmetry of the cusp. We found expressions for the second-order corrections in the case of a general lens potential and for an arbitrary position of the source near a symmetric cusp. Applications to a lensing galaxy model represented by a singular isothermal sphere with an external shear γ are studied and the role of the second-order corrections is demonstrated.
Zero-temperature quantum annealing bottlenecks in the spin-glass phase.
Knysh, Sergey
2016-08-05
A promising approach to solving hard binary optimization problems is quantum adiabatic annealing in a transverse magnetic field. An instantaneous ground state-initially a symmetric superposition of all possible assignments of N qubits-is closely tracked as it becomes more and more localized near the global minimum of the classical energy. Regions where the energy gap to excited states is small (for instance at the phase transition) are the algorithm's bottlenecks. Here I show how for large problems the complexity becomes dominated by O(log N) bottlenecks inside the spin-glass phase, where the gap scales as a stretched exponential. For smaller N, only the gap at the critical point is relevant, where it scales polynomially, as long as the phase transition is second order. This phenomenon is demonstrated rigorously for the two-pattern Gaussian Hopfield model. Qualitative comparison with the Sherrington-Kirkpatrick model leads to similar conclusions.
An integral equation formulation for rigid bodies in Stokes flow in three dimensions
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan
2017-03-01
We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O (n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.
Squeezed cooling of mechanical motion beyond the resolved-sideband limit
NASA Astrophysics Data System (ADS)
Yang, Cheng; Zhang, Lin; Zhang, Weiping
2018-04-01
Cavity optomechanics provides a unique platform for controlling micromechanical systems by means of optical fields that cross the classical-quantum boundary to achieve solid foundations for quantum technologies. Currently, optomechanical resonators have become promising candidates for the development of precisely controlled nano-motors, ultrasensitive sensors and robust quantum information processors. For all these applications, a crucial requirement is to cool the mechanical resonators down to their quantum ground states. In this paper, we present a novel cooling scheme to further cool a micromechanical resonator via the noise squeezing effect. One quadrature in such a resonator can be squeezed to induce enhanced fluctuations in the other, “heated” quadrature, which can then be used to cool the mechanical motion via conventional optomechanical coupling. Our theoretical analysis and numerical calculations demonstrate that this squeeze-and-cool mechanism offers a quick technique for deeply cooling a macroscopic mechanical resonator to an unprecedented temperature region below the zero-point fluctuations.
Preisser, John S; Long, D Leann; Stamm, John W
2017-01-01
Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.
Zero-Point Calibration for AGN Black-Hole Mass Estimates
NASA Technical Reports Server (NTRS)
Peterson, B. M.; Onken, C. A.
2004-01-01
We discuss the measurement and associated uncertainties of AGN reverberation-based black-hole masses, since these provide the zero-point calibration for scaling relationships that allow black-hole mass estimates for quasars. We find that reverberation-based mass estimates appear to be accurate to within a factor of about 3.
Sampling command generator corrects for noise and dropouts in recorded data
NASA Technical Reports Server (NTRS)
Anderson, T. O.
1973-01-01
Generator measures period between zero crossings of reference signal and accepts as correct timing points only those zero crossings which occur acceptably close to nominal time predicted from last accepted command. Unidirectional crossover points are used exclusively so errors from analog nonsymmetry of crossover detector are avoided.
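A minimal sketch of the acceptance logic just described, assuming a hypothetical tolerance of 5% of the nominal period; the function name and tolerance are illustrative, not taken from the report.

    def accept_crossings(crossing_times, nominal_period, tol=0.05):
        # Accept only those unidirectional zero crossings that occur close to the
        # timing point predicted from the last accepted command; others are treated
        # as noise. Missed crossings (dropouts) simply advance the prediction.
        accepted, expected = [], None
        for t in sorted(crossing_times):
            if expected is None:
                accepted.append(t)
                expected = t + nominal_period
                continue
            while t > expected + tol * nominal_period:   # step over dropouts
                expected += nominal_period
            if abs(t - expected) <= tol * nominal_period:
                accepted.append(t)
                expected = t + nominal_period
        return accepted

    print(accept_crossings([0.0, 1.02, 1.7, 2.98, 5.01], nominal_period=1.0))
    # -> [0.0, 1.02, 2.98, 5.01]; the crossing at 1.7 is rejected as noise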
Small massless excitations against a nontrivial background
NASA Astrophysics Data System (ADS)
Khariton, N. G.; Svetovoy, V. B.
1994-03-01
We propose a systematic approach for finding bosonic zero modes of nontrivial classical solutions in a gauge theory. The method allows us to find all the modes connected with the broken space-time and gauge symmetries. The ground state is supposed to be dependent on some space coordinates y_α and independent of the rest of the coordinates x_i. The main problem which is solved is how to construct the zero modes corresponding to the broken x_i-y_α rotations in vacuum and which boundary conditions specify them. It is found that the rotational modes are typically singular at the origin or at infinity, but their energy remains finite. They behave as massless vector fields in x space. We analyze local and global symmetries affecting the zero modes. An algorithm for constructing the zero mode excitations is formulated. The main results are illustrated in the Abelian Higgs model with the string background.
Random walk in generalized quantum theory
NASA Astrophysics Data System (ADS)
Martin, Xavier; O'Connor, Denjoe; Sorkin, Rafael D.
2005-01-01
One can view quantum mechanics as a generalization of classical probability theory that provides for pairwise interference among alternatives. Adopting this perspective, we “quantize” the classical random walk by finding, subject to a certain condition of “strong positivity”, the most general Markovian, translationally invariant “decoherence functional” with nearest neighbor transitions.
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
NASA Astrophysics Data System (ADS)
Williams, Robert W.; Schlücker, Sebastian; Hudson, Bruce S.
2008-01-01
A scaled quantum mechanical harmonic force field (SQMFF) corrected for anharmonicity is obtained for the 23 K L-alanine crystal structure using van der Waals corrected periodic boundary condition density functional theory (DFT) calculations with the PBE functional. Scale factors are obtained with comparisons to inelastic neutron scattering (INS), Raman, and FT-IR spectra of polycrystalline L-alanine at 15-23 K. Calculated frequencies for all 153 normal modes differ from observed frequencies with a standard deviation of 6 wavenumbers. Non-bonded external k = 0 lattice modes are included, but assignments to these modes are presently ambiguous. The extension of SQMFF methodology to lattice modes is new, as are the procedures used here for providing corrections for anharmonicity and van der Waals interactions in DFT calculations on crystals. First principles Born-Oppenheimer molecular dynamics (BOMD) calculations are performed on the L-alanine crystal structure at a series of classical temperatures ranging from 23 K to 600 K. Corrections for zero-point energy (ZPE) are estimated by finding the classical temperature that reproduces the mean square displacements (MSDs) measured from the diffraction data at 23 K. External k = 0 lattice motions are weakly coupled to bonded internal modes.
Liu, Jian; Miller, William H
2008-09-28
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Quantum-like model of brain's functioning: decision making from decoherence.
Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu; Basieva, Irina; Khrennikov, Andrei
2011-07-21
We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. By this model the brain processes information by using representation of mental states in a complex Hilbert space. Driven by the master equation the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices (representing mental states). This equilibrium state determines Alice's mixed (i.e., probabilistic) strategy. We use a master equation in which quantum physics describes the process of decoherence as the result of interaction with environment. Thus our model is a model of thinking through decoherence of the initially pure mental state. Decoherence is induced by the interaction with memory and the external mental environment. We study (numerically) the dynamics of quantum entropy of Alice's mental state in the process of decision making. We also consider classical entropy corresponding to Alice's choices. We introduce a measure of Alice's diffidence as the difference between classical and quantum entropies of Alice's mental state. We see that (at least in our model example) diffidence decreases (approaching zero) in the process of decision making. Finally, we discuss the problem of neuronal realization of quantum-like dynamics in the brain, especially the roles played by the lateral prefrontal cortex and/or the orbitofrontal cortex. Copyright © 2011 Elsevier Ltd. All rights reserved.
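As a numerical illustration of the diffidence measure defined in the abstract above (the gap between the classical entropy of the choice probabilities and the quantum von Neumann entropy of the state), here is a minimal sketch for a single partially decohered qubit; the density matrix is an arbitrary example, not one from the paper.

    import numpy as np

    def von_neumann_entropy(rho):
        # Quantum entropy from the eigenvalues of the density matrix.
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-(evals * np.log2(evals)).sum())

    def shannon_entropy(p):
        # Classical entropy of the choice probabilities (diagonal of rho).
        p = np.asarray(p, dtype=float)
        p = p[p > 1e-12]
        return float(-(p * np.log2(p)).sum())

    rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # illustrative mental state
    diffidence = shannon_entropy(np.diag(rho).real) - von_neumann_entropy(rho)
    print(diffidence)   # shrinks toward zero as the off-diagonal terms decohere away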
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
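A minimal sketch of this kind of model comparison, assuming statsmodels' zero-inflated count classes are available; the synthetic overdispersed, zero-heavy counts and the intercept-only design are placeholders, not the Upper Mississippi mayfly records.

    import numpy as np
    from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
    from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                  ZeroInflatedNegativeBinomialP)

    rng = np.random.default_rng(0)
    n = 959
    lam = rng.gamma(shape=0.5, scale=10.0, size=n)     # heterogeneous means -> overdispersion
    y = rng.poisson(lam)
    y[rng.random(n) < 0.2] = 0                         # extra (structural) zeros
    X = np.ones((n, 1))                                # intercept-only design

    candidates = {
        "Poisson": Poisson(y, X),
        "NegBin": NegativeBinomial(y, X),
        "ZIP": ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit"),
        "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, inflation="logit"),
    }
    for name, model in candidates.items():
        res = model.fit(disp=0)
        print(f"{name:7s} AIC = {res.aic:10.1f}")      # smaller AIC indicates a better fit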
20 CFR 345.303 - Computation of rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... a percentage rate, and then round such rate to the nearest 100th of one percent. If the rate so computed is zero or less than zero, the percentage rate will be deemed zero at this point; (5) Step 5. Add...
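A toy illustration of the rounding and zero-floor rule quoted above (the function and variable names are illustrative only):

    def computed_rate(raw_rate_percent):
        # Round to the nearest 100th of one percent; a result of zero or less
        # is deemed zero at this point.
        rate = round(raw_rate_percent, 2)
        return max(rate, 0.0)

    print(computed_rate(3.14159), computed_rate(-0.4))   # -> 3.14 0.0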
Zero Gravity Cryogenic Vent System Concepts for Upper Stages
NASA Technical Reports Server (NTRS)
Flachbart, Robin H.; Holt, James B.; Hastings, Leon J.
1999-01-01
The capability to vent in zero gravity without resettling is a technology need that involves practically all uses of sub-critical cryogenics in space. Venting without resettling would extend cryogenic orbital transfer vehicle capabilities. However, the lack of definition regarding liquid/ullage orientation coupled with the somewhat random nature of the thermal stratification and resulting pressure rise rates, lead to significant technical challenges. Typically a zero gravity vent concept, termed a thermodynamic vent system (TVS), consists of a tank mixer to destratify the propellant, combined with a Joule-Thomson (J-T) valve to extract thermal energy from the propellant. Marshall Space Flight Center's (MSFC's) Multipurpose Hydrogen Test Bed (MHTB) was used to test both spray bar and axial jet TVS concepts. The axial jet system consists of a recirculation pump heat exchanger unit. The spray bar system consists of a recirculation pump, a parallel flow concentric tube heat exchanger, and a spray bar positioned close to the longitudinal axis of the tank. The operation of both concepts is similar. In the mixing mode, the recirculation pump withdraws liquid from the tank and sprays it into the tank liquid, ullage, and exposed tank surfaces. When energy is required, a small portion of the recirculated liquid is passed sequentially through the J-T expansion valve, the heat exchanger, and is vented overboard. The vented vapor cools the circulated bulk fluid, thereby removing thermal energy and reducing tank pressure. The pump operates alone, cycling on and off, to destratify the tank liquid and ullage until the liquid vapor pressure reaches the lower set point. At that point, the J-T valve begins to cycle on and off with the pump. Thus, for short duration missions, only the mixer may operate, thus minimizing, or even eliminating, boil-off losses.
Zero Gravity Cryogenic Vent System Concepts for Upper Stages
NASA Technical Reports Server (NTRS)
Flachbart, Robin H.; Holt, James B.; Hastings, Leon J.
2001-01-01
The capability to vent in zero gravity without resettling is a technology need that involves practically all uses of sub-critical cryogenics in space, and would extend cryogenic orbital transfer vehicle capabilities. However, the lack of definition regarding liquid/ullage orientation coupled with the somewhat random nature of the thermal stratification and resulting pressure rise rates, lead to significant technical challenges. Typically a zero gravity vent concept, termed a thermodynamic vent system (TVS), consists of a tank mixer to destratify the propellant, combined with a Joule-Thomson (J-T) valve to extract thermal energy from the propellant. Marshall Space Flight Center's (MSFC's) Multipurpose Hydrogen Test Bed (MHTB) was used to test both spray-bar and axial jet TVS concepts. The axial jet system consists of a recirculation pump heat exchanger unit. The spray-bar system consists of a recirculation pump, a parallel flow concentric tube heat exchanger, and a spray-bar positioned close to the longitudinal axis of the tank. The operation of both concepts is similar. In the mixing mode, the recirculation pump withdraws liquid from the tank and sprays it into the tank liquid, ullage, and exposed tank surfaces. When energy extraction is required, a small portion of the recirculated liquid is passed sequentially through the J-T expansion valve, the heat exchanger, and is vented overboard. The vented vapor cools the circulated bulk fluid, thereby removing thermal energy and reducing tank pressure. The pump operates alone, cycling on and off, to destratify the tank liquid and ullage until the liquid vapor pressure reaches the lower set point. At that point, the J-T valve begins to cycle on and off with the pump. Thus, for short duration missions, only the mixer may operate, thus minimizing or even eliminating boil-off losses.
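A minimal sketch of the mixer/J-T cycling logic common to both entries above; the set points, units, and function name are illustrative placeholders, not MHTB values.

    def tvs_controller(tank_pressure, liquid_vapor_pressure, pump_on,
                       p_lower=125.0, p_upper=135.0):
        # One control decision per call (pressures in kPa, hypothetical values).
        if tank_pressure >= p_upper:
            pump_on = True                  # start mixing to destratify liquid and ullage
        elif tank_pressure <= p_lower:
            pump_on = False                 # mixing (and venting) pauses at the lower set point
        # Once the liquid vapor pressure reaches the lower set point, mixing alone can
        # no longer reduce tank pressure to it, so the J-T valve cycles with the pump.
        jt_valve_on = pump_on and liquid_vapor_pressure >= p_lower
        return pump_on, jt_valve_on

    print(tvs_controller(136.0, 120.0, pump_on=False))   # -> (True, False): mixing only
    print(tvs_controller(136.0, 126.0, pump_on=False))   # -> (True, True): vent with the pump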
Interacting charges and the classical electron radius
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Faella, Orazio; Naddeo, Adele
2018-03-01
The equation of the motion of a point charge q repelled by a fixed point-like charge Q is derived and studied. In solving this problem useful concepts in classical and relativistic kinematics, in Newtonian mechanics and in non-linear ordinary differential equations are revised. The validity of the approximations is discussed from the physical point of view. In particular the classical electron radius emerges naturally from the requirement that the initial distance is large enough for the non-relativistic approximation to be valid. The relevance of this topic for undergraduate physics teaching is pointed out.
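For reference, the classical electron radius mentioned above follows from r_e = e^2/(4*pi*eps_0*m_e*c^2); a quick numerical check using scipy's physical constants:

    from scipy.constants import e, epsilon_0, m_e, c, pi

    r_e = e**2 / (4 * pi * epsilon_0 * m_e * c**2)
    print(f"classical electron radius ~ {r_e:.3e} m")   # about 2.82e-15 m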
Reinelt, Sebastian; Steinke, Daniel
2014-01-01
In this work we report the synthesis of thermo-, oxidation- and cyclodextrin- (CD) responsive end-group-functionalized polymers, based on N,N-diethylacrylamide (DEAAm). In a classical free-radical chain transfer polymerization, using thiol-functionalized 4-alkylphenols, namely 3-(4-(1,1-dimethylethan-1-yl)phenoxy)propane-1-thiol and 3-(4-(2,4,4-trimethylpentan-2-yl)phenoxy)propane-1-thiol, poly(N,N-diethylacrylamide) (PDEAAm) with well-defined hydrophobic end-groups is obtained. These end-group-functionalized polymers show different cloud point values, depending on the degree of polymerization and the presence of randomly methylated β-cyclodextrin (RAMEB-CD). Additionally, the influence of the oxidation of the incorporated thioether linkages on the cloud point is investigated. The resulting hydrophilic sulfoxides show higher cloud point values for the lower critical solution temperature (LCST). A high degree of functionalization is supported by 1H NMR, SEC, FTIR and MALDI-TOF measurements.
Landau damping of quantum plasmons in metal nanostructures
Li, Xiaoguang; Xiao, Di; Zhang, Zhenyu
2013-02-06
Using the random phase approximation with both real space and discrete electron–hole (e–h) pair basis sets, we study the broadening of surface plasmons in metal structures of reduced dimensionality, where Landau damping is the dominant dissipation channel and presents an intrinsic limitation to plasmonics technology. We show that for every prototypical class of systems considered, including zero-dimensional nanoshells, one-dimensional coaxial nanotubes and two-dimensional ultrathin films, Landau damping can be drastically tuned due to energy quantization of the individual electron levels and e–h pairs. Both the generic trend and oscillatory nature of the tunability are in stark contrast with the expectations of the semiclassical surface scattering picture. Our approach also allows us to vividly depict the evolution of the plasmons from the quantum to the classical regime, and to elucidate the underlying physical origin of hybridization broadening of nearly degenerate plasmon modes. Lastly, these findings may serve as a guide in the future design of plasmonic nanostructures of desirable functionalities.
Quantum one-way permutation over the finite field of two elements
NASA Astrophysics Data System (ADS)
de Castro, Alexandre
2017-06-01
In quantum cryptography, a one-way permutation is a bounded unitary operator U: H → H on a Hilbert space H that is easy to compute on every input, but hard to invert given the image of a random input. Levin (Probl Inf Transm 39(1):92-103, 2003) has conjectured that the unitary transformation g(a,x) = (a, f(x)+ax), where f is any length-preserving function and a, x ∈ GF(2^||x||), is an information-theoretically secure operator within a polynomial factor. Here, we show that Levin's one-way permutation is provably secure because its output values are four maximally entangled two-qubit states, whose probability of being factored approaches zero faster than the multiplicative inverse of any positive polynomial poly(x) over the Boolean ring of all subsets of x. Our results demonstrate through well-known theorems that existence of classical one-way functions implies existence of a universal quantum one-way permutation that cannot be inverted in subexponential time in the worst case.
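A classical toy implementation of the quoted map g(a, x) = (a, f(x) + ax) over GF(2^8) is sketched below; the reduction polynomial, the rotation chosen for f, and the test bytes are illustrative assumptions, and the sketch says nothing about the quantum or security claims of the paper.

    IRRED = 0x11B   # x^8 + x^4 + x^3 + x + 1, an irreducible polynomial over GF(2)

    def gf_mul(a, b, irred=IRRED, nbits=8):
        # Carry-less multiplication with reduction modulo the irreducible polynomial.
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & (1 << nbits):
                a ^= irred
            b >>= 1
        return result

    def f(x):
        # A hypothetical length-preserving function on 8-bit strings (a bit rotation).
        return ((x << 3) | (x >> 5)) & 0xFF

    def g(a, x):
        return a, f(x) ^ gf_mul(a, x)       # addition in GF(2^n) is bitwise XOR

    print(g(0x57, 0x83))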
Statically screened ion potential and Bohm potential in a quantum plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moldabekov, Zhandos; Institute for Experimental and Theoretical Physics, Al-Farabi Kazakh National University, 71 Al-Farabi Str., 050040 Almaty; Schoof, Tim
2015-10-15
The effective potential Φ of a classical ion in a weakly correlated quantum plasma in thermodynamic equilibrium at finite temperature is well described by the random phase approximation screened Coulomb potential. Additionally, collision effects can be included via a relaxation time ansatz (Mermin dielectric function). These potentials are used to study the quality of various statically screened potentials that were recently proposed by Shukla and Eliasson (SE) [Phys. Rev. Lett. 108, 165007 (2012)], Akbari-Moghanjoughi (AM) [Phys. Plasmas 22, 022103 (2015)], and Stanton and Murillo (SM) [Phys. Rev. E 91, 033104 (2015)] starting from quantum hydrodynamic (QHD) theory. Our analysis reveals that the SE potential is qualitatively different from the full potential, whereas the SM potential (at any temperature) and the AM potential (at zero temperature) are significantly more accurate. This confirms the correctness of the recently derived [Michta et al., Contrib. Plasma Phys. 55, 437 (2015)] pre-factor 1/9 in front of the Bohm term of QHD for fermions.
Statistics of resonances for a class of billiards on the Poincaré half-plane
NASA Astrophysics Data System (ADS)
Howard, P. J.; Mota-Furtado, F.; O'Mahony, P. F.; Uski, V.
2005-12-01
The lower boundary of Artin's billiard on the Poincaré half-plane is continuously deformed to generate a class of billiards with classical dynamics varying from fully integrable to completely chaotic. The quantum scattering problem in these open billiards is described and the statistics of both real and imaginary parts of the resonant momenta are investigated. The evolution of the resonance positions is followed as the boundary is varied which leads to large changes in their distribution. The transition to arithmetic chaos in Artin's billiard, which is responsible for the Poissonian level-spacing statistics of the bound states in the continuum (cusp forms) at the same time as the formation of a set of resonances all with width 1/4 and real parts determined by the zeros of Riemann's zeta function, is closely examined. Regimes are found which obey the universal predictions of random matrix theory (RMT) as well as exhibiting non-universal long-range correlations. The Brody parameter is used to describe the transitions between different regimes.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
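A minimal Monte Carlo sketch of the random impact model described above: Poisson arrivals, zero-mean Gaussian amplitudes, and a location drawn with equal probability from three candidate points near the blade tip. All numerical values are illustrative, not SSME parameters.

    import numpy as np

    rng = np.random.default_rng(1)
    arrival_rate, amp_std, duration = 5.0, 2.0, 10.0   # mean arrivals per second, amplitude std, seconds
    locations = ("tip_A", "tip_B", "tip_C")            # three equally likely impact points

    t, pulses = 0.0, []
    while True:
        t += rng.exponential(1.0 / arrival_rate)       # Poisson process: exponential inter-arrival times
        if t > duration:
            break
        pulses.append((t, rng.normal(0.0, amp_std), locations[rng.integers(len(locations))]))

    print(len(pulses), pulses[:3])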
Simple tunnel diode circuit for accurate zero crossing timing
NASA Technical Reports Server (NTRS)
Metz, A. J.
1969-01-01
Tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.
Beyond Moore's law: towards competitive quantum devices
NASA Astrophysics Data System (ADS)
Troyer, Matthias
2015-05-01
A century after the invention of quantum theory and fifty years after Bell's inequality, we see the first quantum devices emerge as products that aim to be competitive with the best classical computing devices. While a universal quantum computer of non-trivial size is still out of reach, there exist a number of commercial and experimental devices: quantum random number generators, quantum simulators and quantum annealers. In this colloquium I will present some of these devices and validation tests we performed on them. Quantum random number generators use the inherent randomness in quantum measurements to produce true random numbers, unlike classical pseudorandom number generators which are inherently deterministic. Optical lattice emulators use ultracold atomic gases in optical lattices to mimic typical models of condensed matter physics. In my talk I will focus especially on the devices built by the Canadian company D-Wave Systems, which are special purpose quantum simulators for solving hard classical optimization problems. I will review the controversy around the quantum nature of these devices and will compare them to state-of-the-art classical algorithms. I will conclude with an outlook towards universal quantum computing and with the question: which important problems that are intractable even for post-exa-scale classical computers could we expect to solve once we have a universal quantum computer?
Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging
NASA Astrophysics Data System (ADS)
Chen, Tao; Jin, Guanghu; Dong, Zhen
2018-04-01
Range envelope alignment and phase compensation are split into two isolated parts in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, using the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a certain scattering point can be chosen. The envelope alignment and phase compensation are subsequently conducted using the selected scattering point as the same reference point. The keystone transform is thus smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.
Capillary rise between planar surfaces
NASA Astrophysics Data System (ADS)
Bullard, Jeffrey W.; Garboczi, Edward J.
2009-01-01
Minimization of free energy is used to calculate the equilibrium vertical rise and meniscus shape of a liquid column between two closely spaced, parallel planar surfaces that are inert and immobile. States of minimum free energy are found using standard variational principles, which lead not only to an Euler-Lagrange differential equation for the meniscus shape and elevation, but also to the boundary conditions at the three-phase junction where the liquid meniscus intersects the solid walls. The analysis shows that the classical Young-Dupré equation for the thermodynamic contact angle is valid at the three-phase junction, as already shown for sessile drops with or without the influence of a gravitational field. Integration of the Euler-Lagrange equation shows that a generalized Laplace-Young (LY) equation first proposed by O’Brien, Craig, and Peyton [J. Colloid Interface Sci. 26, 500 (1968)] gives an exact prediction of the mean elevation of the meniscus at any wall separation, whereas the classical LY equation for the elevation of the midpoint of the meniscus is accurate only when the separation approaches zero or infinity. When both walls are identical, the meniscus is symmetric about the midpoint, and the midpoint elevation is a more traditional and convenient measure of capillary rise than the mean elevation. Therefore, for this symmetric system a different equation is fitted to numerical predictions of the midpoint elevation and is shown to give excellent agreement for contact angles between 15° and 160° and wall separations up to 30mm . When the walls have dissimilar surface properties, the meniscus generally assumes an asymmetric shape, and significant elevation of the liquid column can occur even when one of the walls has a contact angle significantly greater than 90°. The height of the capillary rise depends on the spacing between the walls and also on the difference in contact angles at the two surfaces. When the contact angle at one wall is greater than 90° but the contact angle at the other wall is less than 90°, the meniscus can have an inflection point separating a region of positive curvature from a region of negative curvature, the inflection point being pinned at zero height. However, this condition arises only when the spacing between the walls exceeds a threshold value that depends on the difference in contact angles.
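For comparison with the generalized expression discussed above, the classical Laplace-Young estimate of the rise between parallel plates with contact angles theta_1 and theta_2 and separation d is h = gamma*(cos theta_1 + cos theta_2)/(rho*g*d), which reduces to the familiar 2*gamma*cos(theta)/(rho*g*d) for identical walls. A quick sketch with illustrative values for room-temperature water:

    import numpy as np

    gamma, rho, g = 0.072, 1000.0, 9.81      # N/m, kg/m^3, m/s^2 (assumed water properties)
    d = 1.0e-3                               # wall separation in metres
    theta1, theta2 = np.radians(20.0), np.radians(60.0)

    h = gamma * (np.cos(theta1) + np.cos(theta2)) / (rho * g * d)
    print(f"classical LY capillary rise: {h * 1e3:.2f} mm")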
The pH dependent surface charging and points of zero charge. VII. Update.
Kosmulski, Marek
2018-01-01
The pristine points of zero charge (PZC) and isoelectric points (IEP) of metal oxides and IEP of other materials from the recent literature, and a few older results (overlooked in previous searches) are summarized. This study is an update of the previous compilations by the same author [Surface Charging and Points of Zero Charge, CRC, Boca Raton, 2009; J. Colloid Interface Sci. 337 (2009) 439; 353 (2011) 1; 426 (2014) 209]. The field has been very active, but most PZC and IEP are reported for materials which are very well-documented already (silica, alumina, titania, iron oxides). IEP of (nominally) Gd2O3, NaTaO3, and SrTiO3 have been reported in the recent literature. Their IEP were not reported in older studies. Copyright © 2017 Elsevier B.V. All rights reserved.
On global solutions of the random Hamilton-Jacobi equations and the KPZ problem
NASA Astrophysics Data System (ADS)
Bakhtin, Yuri; Khanin, Konstantin
2018-04-01
In this paper, we discuss possible qualitative approaches to the problem of KPZ universality. Throughout the paper, our point of view is based on the geometrical and dynamical properties of minimisers and shocks forming interlacing tree-like structures. We believe that the KPZ universality can be explained in terms of statistics of these structures evolving in time. The paper is focussed on the setting of the random Hamilton-Jacobi equations. We formulate several conjectures concerning global solutions and discuss how their properties are connected to the KPZ scalings in dimension 1 + 1. In the case of general viscous Hamilton-Jacobi equations with non-quadratic Hamiltonians, we define generalised directed polymers. We expect that their behaviour is similar to the behaviour of classical directed polymers, and present arguments in favour of this conjecture. We also define a new renormalisation transformation defined in purely geometrical terms and discuss conjectural properties of the corresponding fixed points. Most of our conjectures are widely open, and supported by only partial rigorous results for particular models.
Saccà, Maria Ludovica; Fajardo, Carmen; Costa, Gonzalo; Lobo, Carmen; Nande, Mar; Martin, Margarita
2014-06-01
Nanosized zero-valent iron (nZVI) is a new option for the remediation of contaminated soil and groundwater, but the effect of nZVI on soil biota is mostly unknown. In this work, nanotoxicological studies were performed in vitro and in two different standard soils to assess the effect of nZVI on autochthonous soil organisms by integrating classical and molecular analysis. Standardised ecotoxicity testing methods using Caenorhabditis elegans were applied in vitro and in soil experiments and changes in microbial biodiversity and biomarker gene expression were used to assess the responses of the microbial community to nZVI. The classical tests conducted in soil ruled out a toxic impact of nZVI on the soil nematode C. elegans in the test soils. The molecular analysis applied to soil microorganisms, however, revealed significant changes in the expression of the proposed biomarkers of exposure. These changes were related not only to the nZVI treatment but also to the soil characteristics, highlighting the importance of considering the soil matrix on a case by case basis. Furthermore, due to the temporal shift between transcriptional responses and the development of the corresponding phenotype, the molecular approach could anticipate adverse effects on environmental biota. Copyright © 2013 Elsevier Ltd. All rights reserved.
Computer Algorithms for Measurement Control and Signal Processing of Transient Scattering Signatures
1988-09-01
Fragments from the report: Fortran comments identifying Y2 as the background curve, NSHIF as the number of points to shift, and SET as the sum of the shifted points used for zero padding; and text noting that thresholding reduces the spectral content in both the low and high frequency regimes, that setting the threshold to zero yields a "naive" deconvolution, and that because one side of equation 5.2 was close to zero it can be neglected, so the expected power is equal to the variance.
Log amplifier with pole-zero compensation
Brookshier, W.
1985-02-08
A logarithmic amplifier circuit provides pole-zero compensation for improved stability and response time over 6-8 decades of input signal frequency. The amplifier circuit includes a first operational amplifier with a first feedback loop which includes a second, inverting operational amplifier in a second feedback loop. The compensated output signal is provided by the second operational amplifier with the log elements, i.e., resistors, and the compensating capacitors in each of the feedback loops having equal values so that each break point is offset by a compensating break point or zero.
Log amplifier with pole-zero compensation
Brookshier, William
1987-01-01
A logarithmic amplifier circuit provides pole-zero compensation for improved stability and response time over 6-8 decades of input signal frequency. The amplifier circuit includes a first operational amplifier with a first feedback loop which includes a second, inverting operational amplifier in a second feedback loop. The compensated output signal is provided by the second operational amplifier with the log elements, i.e., resistors, and the compensating capacitors in each of the feedback loops having equal values so that each break point or pole is offset by a compensating break point or zero.
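A minimal sketch of the idea behind both entries above: placing a compensating zero at the same frequency as a feedback-loop pole flattens the response. The transfer functions and break frequency below are arbitrary illustrations, not the circuit's component values.

    import numpy as np
    from scipy import signal

    w_break = 2 * np.pi * 1e3                                 # rad/s, hypothetical break point
    uncompensated = signal.TransferFunction([1.0], [1.0 / w_break, 1.0])                # single pole
    compensated = signal.TransferFunction([1.0 / w_break, 1.0], [1.0 / w_break, 1.0])   # pole offset by zero

    w = np.logspace(2, 7, 60)
    _, mag_u, _ = signal.bode(uncompensated, w)
    _, mag_c, _ = signal.bode(compensated, w)
    print(f"gain at highest frequency: {mag_u[-1]:.1f} dB (uncompensated) "
          f"vs {mag_c[-1]:.1f} dB (compensated)")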
40 CFR 63.10023 - How do I establish my PM CPMS operating limit and determine compliance with it?
Code of Federal Regulations, 2013 CFR
2013-07-01
... the PM compliance test, the milliamp equivalent of zero output from your PM CPMS, and the average PM... establishing a relationship of PM CPMS signal to PM concentration using the PM CPMS instrument zero, the...) Determine your PM CPMS instrument zero output with one of the following procedures. (1) Zero point data for...
Engel, Hamutal; Doron, Dvir; Kohen, Amnon; Major, Dan Thomas
2012-04-10
The inclusion of nuclear quantum effects such as zero-point energy and tunneling is of great importance in studying condensed phase chemical reactions involving the transfer of protons, hydrogen atoms, and hydride ions. In the current work, we derive an efficient quantum simulation approach for the computation of the momentum distribution in condensed phase chemical reactions. The method is based on a quantum-classical approach wherein quantum and classical simulations are performed separately. The classical simulations use standard sampling techniques, whereas the quantum simulations employ an open polymer chain path integral formulation which is computed using an efficient Monte Carlo staging algorithm. The approach is validated by applying it to a one-dimensional harmonic oscillator and symmetric double-well potential. Subsequently, the method is applied to the dihydrofolate reductase (DHFR) catalyzed reduction of 7,8-dihydrofolate by nicotinamide adenine dinucleotide phosphate hydride (NADPH) to yield S-5,6,7,8-tetrahydrofolate and NADP(+). The key chemical step in the catalytic cycle of DHFR involves a stereospecific hydride transfer. In order to estimate the amount of quantum delocalization, we compute the position and momentum distributions for the transferring hydride ion in the reactant state (RS) and transition state (TS) using a recently developed hybrid semiempirical quantum mechanics-molecular mechanics potential energy surface. Additionally, we examine the effect of compression of the donor-acceptor distance (DAD) in the TS on the momentum distribution. The present results suggest differential quantum delocalization in the RS and TS, as well as reduced tunneling upon DAD compression.
NASA Astrophysics Data System (ADS)
Vogl, M.; Pankratov, O.; Shallcross, S.
2017-07-01
We present a tractable and physically transparent semiclassical theory of matrix-valued Hamiltonians, i.e., those that describe quantum systems with internal degrees of freedom, based on a generalization of the Gutzwiller trace formula for an n × n dimensional Hamiltonian H(p̂, q̂). The classical dynamics is governed by n Hamilton-Jacobi (HJ) equations that act in a phase space endowed with a classical Berry curvature encoding anholonomy in the parallel transport of the eigenvectors of H(p, q); these vectors describe the internal structure of the semiclassical particles. At the O(ℏ^1) level and for nondegenerate HJ systems, this curvature results in an additional semiclassical phase composed of (i) a Berry phase and (ii) a dynamical phase resulting from the classical particles "moving through the Berry curvature". We show that the dynamical part of this semiclassical phase will, generally, be zero only for the case in which the Berry phase is topological (i.e., depends only on the winding number). We illustrate the method by calculating the Landau spectrum for monolayer graphene, the four-band model of AB bilayer graphene, and for a more complicated matrix Hamiltonian describing the silicene band structure. Finally, we apply our method to an inhomogeneous system consisting of a strain engineered one-dimensional moiré in bilayer graphene, finding localized states near the Dirac point that arise from electron trapping in a semiclassical moiré potential. The semiclassical density of states of these localized states we show to be in perfect agreement with an exact quantum mechanical calculation of the density of states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kojima, H.; Yamada, A.; Okazaki, S., E-mail: okazaki@apchem.nagoya-u.ac.jp
2015-05-07
The intramolecular proton transfer reaction of malonaldehyde in neon solvent has been investigated by mixed quantum–classical molecular dynamics (QCMD) calculations and fully classical molecular dynamics (FCMD) calculations. Comparing these calculated results with those for malonaldehyde in water reported in Part I [A. Yamada, H. Kojima, and S. Okazaki, J. Chem. Phys. 141, 084509 (2014)], the solvent dependence of the reaction rate, the reaction mechanism involved, and the quantum effect therein have been investigated. With FCMD, the reaction rate in weakly interacting neon is lower than that in strongly interacting water. However, with QCMD, the order of the reaction rates is reversed. To investigate the mechanisms in detail, the reactions were categorized into three mechanisms: tunneling, thermal activation, and barrier vanishing. Then, the quantum and solvent effects were analyzed from the viewpoint of the reaction mechanism focusing on the shape of potential energy curve and its fluctuations. The higher reaction rate that was found for neon in QCMD compared with that found for water solvent arises from the tunneling reactions because of the nearly symmetric double-well shape of the potential curve in neon. The thermal activation and barrier vanishing reactions were also accelerated by the zero-point energy. The number of reactions based on these two mechanisms in water was greater than that in neon in both QCMD and FCMD because these reactions are dominated by the strength of solute–solvent interactions.
Quantum correlations and dynamics from classical random fields valued in complex Hilbert spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrennikov, Andrei
2010-08-15
One of the crucial differences between mathematical models of classical and quantum mechanics (QM) is the use of the tensor product of the state spaces of subsystems as the state space of the corresponding composite system. (To describe an ensemble of classical composite systems, one uses random variables taking values in the Cartesian product of the state spaces of subsystems.) We show that, nevertheless, it is possible to establish a natural correspondence between the classical and the quantum probabilistic descriptions of composite systems. Quantum averages for composite systems (including entangled) can be represented as averages with respect to classical random fields. It is essentially what Albert Einstein dreamed of. QM is represented as classical statistical mechanics with infinite-dimensional phase space. While the mathematical construction is completely rigorous, its physical interpretation is a complicated problem. We present the basic physical interpretation of prequantum classical statistical field theory in Sec. II. However, this is only the first step toward real physical theory.
Managing the spatial properties and photon correlations in squeezed non-classical twisted light
NASA Astrophysics Data System (ADS)
Zakharov, R. V.; Tikhonova, O. V.
2018-05-01
Spatial photon correlations and mode content of the squeezed vacuum light generated in a system of two separated nonlinear crystals is investigated. The contribution of both the polar and azimuthal modes with non-zero orbital angular momentum is analyzed. The control and engineering of the spatial properties and degree of entanglement of the non-classical squeezed light by changing the distance between crystals and pump parameters is demonstrated. Methods for amplification of certain spatial modes and managing the output mode content and intensity profile of quantum twisted light are suggested.
Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-01-01
Nowadays, an important and interesting alternative in the control of tick-infestation in cattle is to select resistant animals, and identify the respective quantitative trait loci (QTLs) and DNA markers, for posterior use in breeding programs. The number of ticks/animal is characterized as a discrete-counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare through simulation, Poisson and ZIP models (simple and generalized) with classical approaches, for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is recommendable to use the generalized and simple ZIP model for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data-transformation approach, such as square-root or Box-Cox transformation, are applicable.
Classical linear-control analysis applied to business-cycle dynamics and stability
NASA Technical Reports Server (NTRS)
Wingrove, R. C.
1983-01-01
Linear control analysis is applied as an aid in understanding the fluctuations of business cycles in the past, and to examine monetary policies that might improve stabilization. The analysis shows how different policies change the frequency and damping of the economic system dynamics, and how they modify the amplitude of the fluctuations that are caused by random disturbances. Examples are used to show how policy feedbacks and policy lags can be incorporated, and how different monetary strategies for stabilization can be analytically compared. Representative numerical results are used to illustrate the main points.
[Computer-assisted education in problem-solving in neurology; a randomized educational study].
Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H
1996-02-24
To determine the effect of computer-based medical teaching (CBMT) as a supplementary method to teach clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. 103 Students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases, and was permanently available during five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving, when used as a supplementary teaching facility during a clinical clerkship. The increased ability to solve problems learned by CBMT had no demonstrable effect on the performance with other neurological problems.
Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.
2008-11-01
It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line ℝ. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space ℝ^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in ℝ^d. We also demonstrate that spin-polarized fermionic systems in ℝ^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| as k → 0. The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^{d+1}, which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d) r^{d+1}] for large r and finite d, where κ(d) is a positive d-dependent constant. We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
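As a concrete special case of the pair statistics described above, the d = 1 member of this family (the sine-kernel/fermionic process) has g2(r) = 1 - [sin(pi*rho*r)/(pi*rho*r)]^2, which approaches unity with the stated 1/r^{d+1} = 1/r^2 decay; a minimal check at unit density:

    import numpy as np

    def g2_one_dimensional(r, density=1.0):
        # np.sinc(t) = sin(pi*t)/(pi*t), so this is 1 - [sin(pi*rho*r)/(pi*rho*r)]^2
        return 1.0 - np.sinc(density * r) ** 2

    r = np.array([0.1, 0.5, 1.0, 5.0, 50.0])
    print(g2_one_dimensional(r))    # tends to 1 for large pair separations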
New Quasar Surveys with WIRO: Data and Calibration for Studies of Variability
NASA Astrophysics Data System (ADS)
Lyke, Bradley; Bassett, Neil; Deam, Sophie; Dixon, Don; Griffith, Emily; Harvey, William; Lee, Daniel; Haze Nunez, Evan; Parziale, Ryan; Witherspoon, Catherine; Myers, Adam D.; Findlay, Joseph; Kobulnicky, Henry A.; Dale, Daniel A.
2017-01-01
Measurements of quasar variability offer the potential for understanding the physics of accretion processes around supermassive black holes. However, generating structure functions in order to characterize quasar variability can be observationally taxing, as it requires imaging of quasars over a large variety of date ranges. To begin to address this problem, we have conducted an imaging survey of sections of Sloan Digital Sky Survey (SDSS) Stripe 82 at the Wyoming Infrared Observatory (WIRO). We used standard stars to calculate zero-point offsets between WIRO and SDSS observations in the ugriz magnitude system. After finding the zero-point offset, we accounted for further offsets by comparing standard star magnitudes in each WIRO frame to coadded magnitudes from Stripe 82 and applying a linear correction. Known (i.e. spectroscopically confirmed) quasars at the epoch of our WIRO observations (summer 2016) and at every epoch in SDSS Stripe 82 (~80 total dates) were hence calibrated to a similar magnitude system. The algorithm for this calibration compared 1500 randomly selected standard stars with an MJD within 0.07 of the MJD of each quasar of interest, for each of the five ugriz filters. Ultimately, ~1000 known quasars in Stripe 82 were observed with WIRO, and their SDSS-WIRO magnitudes were calibrated to a similar scale in order to generate ensemble structure functions. This work is supported by the National Science Foundation under REU grant AST 1560461.
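The calibration chain described above (a frame-level zero point from standard stars, then a linear correction against the Stripe 82 coadd) can be summarized with a toy example. The sketch below uses made-up magnitudes for a handful of standard stars in a single filter; the array names and numbers are purely illustrative, not survey data.

import numpy as np

# Hypothetical instrumental WIRO magnitudes and SDSS coadd magnitudes for the
# same standard stars in one frame and one filter (illustrative values only).
wiro_instr = np.array([15.21, 16.02, 14.87, 15.63, 16.40])
sdss_coadd = np.array([15.02, 15.85, 14.70, 15.41, 16.22])

# Step 1: zero-point offset between the two systems (median difference).
zp = np.median(sdss_coadd - wiro_instr)
calibrated = wiro_instr + zp

# Step 2: remove any residual magnitude-dependent offset with a linear fit,
# analogous to the linear correction against the Stripe 82 coadd magnitudes.
slope, intercept = np.polyfit(calibrated, sdss_coadd - calibrated, 1)
calibrated = calibrated + intercept + slope * calibrated

print("zero-point offset:", round(zp, 3))
print("calibrated magnitudes:", np.round(calibrated, 3))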
Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding
NASA Astrophysics Data System (ADS)
Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito
2015-02-01
Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, it takes values very close to zero for a circular aperture at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images at a large depth of field.
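The close-to-zero OTF values on the diagonal axis for a circular aperture can be reproduced with a standard Fourier-optics calculation: form the pupil with the cubic phase, take the squared Fourier transform for the PSF, and Fourier transform the PSF for the OTF. The sketch below is a generic illustration with an arbitrary cubic coefficient and grid size, not the authors' optimization code.

import numpy as np

N, alpha = 256, 30.0                              # grid size and cubic coefficient (assumed)
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

for name, aperture in [("square", np.ones((N, N))),
                       ("circular", (X**2 + Y**2 <= 1.0).astype(float))]:
    pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))     # cubic phase mask
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2      # point spread function
    otf = np.fft.fft2(psf)
    mtf = np.abs(otf) / np.abs(otf[0, 0])                      # normalized modulus of OTF
    diagonal_low_freq = [mtf[k, k] for k in range(1, 6)]       # diagonal-axis samples
    print(name, np.round(diagonal_low_freq, 4))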
Testing Pattern Hypotheses for Correlation Matrices
ERIC Educational Resources Information Center
McDonald, Roderick P.
1975-01-01
The treatment of covariance matrices given by McDonald (1974) can be readily modified to cover hypotheses prescribing zeros and equalities in the correlation matrix rather than the covariance matrix, still with the convenience of the closed-form Least Squares solution and the classical Newton method. (Author/RC)
Advanced analysis of forest fire clustering
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail; Pereira, Mario; Golay, Jean
2017-04-01
Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study with a grid and by computing how many times more likely it is that m points selected at random will be from the same grid cell than would be the case for a completely random Poisson process. By changing the number of grid cells (the size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering - the box counting and sandbox counting approaches.
References: Golay J., Kanevski M., Vega Orozco C., Leuenberger M., 2014: The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M., 2015: A new estimator of intrinsic dimension based on the multipoint Morisita index. Pattern Recognition, 48, 4070-4081.
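The grid-based estimator described above has a compact form: for a grid of Q cells with occupation numbers n_i and N points in total, the m-point index is I_m = Q^(m-1) * Σ_i n_i(n_i-1)···(n_i-m+1) / [N(N-1)···(N-m+1)], where the m = 2 case is the classical Morisita index. The sketch below implements this assumed form, following Golay and Kanevski (2014) as cited above, and compares a uniform and a clustered synthetic pattern; it is illustrative only, not the fire-data analysis.

import numpy as np

def multipoint_morisita(points, n_cells_per_axis, m=2):
    # m-point Morisita index on a regular grid (formula assumed from Golay & Kanevski, 2014)
    pts = np.asarray(points, dtype=float)
    N = len(pts)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    idx = np.floor((pts - mins) / (maxs - mins + 1e-12) * n_cells_per_axis).astype(int)
    idx = np.clip(idx, 0, n_cells_per_axis - 1)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    Q = n_cells_per_axis ** pts.shape[1]

    def falling(n, k):
        out = np.ones_like(n, dtype=float)
        for j in range(k):
            out *= np.maximum(n - j, 0)
        return out

    num = falling(counts, m).sum()
    den = np.prod([N - j for j in range(m)])
    return Q ** (m - 1) * num / den

rng = np.random.default_rng(1)
uniform = rng.uniform(size=(5000, 2))                 # Poisson-like pattern
clustered = rng.normal(0.5, 0.05, size=(5000, 2))     # strongly clustered pattern
for label, pts in [("uniform", uniform), ("clustered", clustered)]:
    print(label, [round(multipoint_morisita(pts, q, m=2), 2) for q in (5, 10, 20)])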
Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.
Mahajan, Virendra N
2012-06-20
In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L_l(x)L_m(y), where l and m are positive integers (including zero) and L_l(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L_l(x)L_m(y), there is a corresponding orthonormal polynomial L_l(y)L_m(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
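A numerical counterpart of the construction described above is a Gram-Schmidt orthonormalization of the 2D Legendre products over the unit disk. The sketch below is a rough grid-quadrature illustration, not the paper's analytic polynomials: it builds the products L_l(x)L_m(y) up to total degree two and orthonormalizes them over a circular pupil, after which the resulting polynomials mix x and y, unlike in the rectangular-pupil case.

import numpy as np
from numpy.polynomial.legendre import Legendre

N = 201
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
inside = (X**2 + Y**2) <= 1.0
w = inside.astype(float)
norm = w.sum()

def inner(f, g):
    return (f * g * w).sum() / norm       # mean over the unit disk

basis, labels = [], []
for total in range(3):                     # products up to total degree 2
    for l in range(total + 1):
        m = total - l
        basis.append(Legendre.basis(l)(X) * Legendre.basis(m)(Y))
        labels.append((l, m))

ortho = []
for f in basis:
    g = f.copy()
    for q in ortho:
        g = g - inner(g, q) * q            # Gram-Schmidt step over the disk
    ortho.append(g / np.sqrt(inner(g, g)))

gram = np.array([[round(inner(p, q), 6) for q in ortho] for p in ortho])
print(labels)
print(gram)                                # identity matrix up to numerical error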
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butlitsky, M. A.; Zelener, B. V.; Zelener, B. B.
A two-component plasma model, which we called a “shelf Coulomb” model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The “shelf Coulomb” model can be compared to the classical two-component (electron-proton) model where charges of zero size interact via the classical Coulomb law, with an important difference in the interaction of opposite charges: electrons and protons interact via the Coulomb law at large inter-particle distances, while the interaction potential is cut off at small distances. The cut-off distance is defined by an arbitrary parameter ε, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^{1/3} (where β = 1/k_B T, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem holds in this model. All the calculations were carried out over a wide range of the dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point being in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume is v* = 1/γ³ and the reduced temperature is T* = ε^{-1}.
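A minimal Metropolis sketch of a canonical (NVT) simulation with a shelf-like cut-off of the attractive interaction is shown below. The specific shelf form, the parameter values, and the absence of Ewald summation are all simplifying assumptions for illustration; this is not the authors' production code and is far too small to reproduce the reported phase diagram.

import numpy as np

rng = np.random.default_rng(0)

def pair_u(r, qi, qj, eps_cut):
    # Assumed "shelf" form: unlike charges follow the Coulomb law at large
    # separations but the attraction is capped (flat) below eps_cut.
    if qi * qj < 0.0:
        return qi * qj / max(r, eps_cut)
    return qi * qj / r

def total_u(pos, q, L, eps_cut):
    e, n = 0.0, len(q)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            d -= L * np.round(d / L)              # minimum-image convention
            e += pair_u(np.linalg.norm(d), q[i], q[j], eps_cut)
    return e

# tiny canonical (NVT) Metropolis run; all parameters are arbitrary test values
n, L, beta, eps_cut, step = 32, 6.0, 1.0, 0.5, 0.3
q = np.array([1.0] * (n // 2) + [-1.0] * (n // 2))
pos = rng.uniform(0.0, L, size=(n, 3))
energy = total_u(pos, q, L, eps_cut)
for sweep in range(50):
    for i in range(n):
        trial = pos.copy()
        trial[i] = (trial[i] + rng.uniform(-step, step, 3)) % L
        d_e = total_u(trial, q, L, eps_cut) - energy
        if d_e < 0.0 or rng.random() < np.exp(-beta * d_e):
            pos, energy = trial, energy + d_e
print("energy per particle:", round(energy / n, 3))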
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deniz, Coskun, E-mail: coskun.deniz@ege.edu.tr
JWKB solutions to the Initial Value Problems (IVPs) of the Time Independent Schrodinger Equation (TISE) for Simple Linear Potentials (SLPs) with a turning point parameter have been studied by graphical analysis according to the turning points, in order to test the results of the JWKB solutions and the suggested modifications. The anomalies occurring in the classically inaccessible region, where the SLP function is smaller than zero, and the results of the suggested modifications to remove these anomalies in this region, which are consistent with quantum mechanical theory, have been presented. The origins of the anomalies and the verification of the suggested modifications, which show great success in the results, have also been studied in terms of suggested matrix elements M_{ij} = S̃_{i-1,j} made up of the JWKB expansion terms S_{i-1,j} (where i = 1, 2, 3 and j = 1, 2). The results of the modifications for the IVPs and their application to Bound State Problems (BSPs), with an example application to the Harmonic Oscillator (HO), have been presented, and their generalization to any potential function has been discussed and classified accordingly.
Particle detection and non-detection in a quantum time of arrival measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sombillo, Denny Lane B., E-mail: dsombillo@nip.upd.edu.ph; Galapon, Eric A.
2016-01-15
The standard time-of-arrival distribution cannot reproduce both the temporal and the spatial profile of the modulus squared of the time-evolved wave function for an arbitrary initial state. In particular, the time-of-arrival distribution gives a non-vanishing probability even if the wave function is zero at a given point for all values of time. This poses a problem in the standard formulation of quantum mechanics where one quantizes a classical observable and uses its spectral resolution to calculate the corresponding distribution. In this work, we show that the modulus squared of the time-evolved wave function is in fact contained in one of the degenerate eigenfunctions of the quantized time-of-arrival operator. This generalizes our understanding of the quantum arrival phenomenon, where particle detection is not a necessary requirement, thereby providing a direct link between time-of-arrival quantization and the outcomes of the two-slit experiment. Highlights: • The time-evolved position density is contained in the standard TOA distribution. • A particle may quantum mechanically arrive at a given point without being detected. • The eigenstates of the standard TOA operator are linked to the two-slit experiment.
The Dynamics of a Viscous Gas Ring around a Kerr Black Hole
NASA Astrophysics Data System (ADS)
Riffert, H.
2000-01-01
The dynamics of a rotationally symmetric viscous gas ring around a Kerr black hole is calculated in the thin-disk approximation. An evolution equation for the surface density Σ(t,r) is derived, which is the relativistic extension of a classical equation obtained by R. Lüst. A singular point appears at the radius of the last stable circular orbit r = r_c. The nature of this point is investigated, and it turns out that the solution is always bounded at r_c, and no boundary condition can be obtained at this radius. A unique solution of an initial value problem requires a matching condition at r_c which follows from the flow structure between r_c and the horizon. In the model presented here, the density in this domain is zero, and the resulting boundary condition leads to a vanishing shear stress at r = r_c, which is the condition used in the standard stationary thin-disk model of Novikov & Thorne. Numerical solutions of the evolution equation are presented for two different angular momenta of the black hole. The time evolution of the resulting accretion rate depends strongly on this angular momentum.
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L²-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
Medical male circumcision: How does price affect the risk-profile of take-up?
Thornton, Rebecca; Godlonton, Susan
2016-11-01
The benefit of male circumcision is greatest among men who are most at risk of HIV infection. Encouraging this population of men to get circumcised maximizes the benefit that can be achieved through the scale-up of circumcision programs. This paper examines how the price of circumcision affects the risk profile of men who receive a voluntary medical circumcision. In 2010, 1649 uncircumcised adult men in urban Malawi were interviewed and provided a voucher for a subsidized voluntary medical male circumcision, at randomly assigned prices. Clinical data were collected indicating whether the men in the study received a circumcision. Men who took up circumcision with a zero-priced voucher were 25 percentage points less likely than those who took up with a positive-price voucher to be from a tribe that traditionally circumcises (p=0.101). Zero-priced vouchers also brought in men with more sexual partners in the past year (p=0.075) and past month (p=0.003). None of the men who were most at risk of HIV at baseline (those with multiple partners and who did not use a condom the last time they had sex) received a circumcision if they were offered a positive-priced voucher. Lowering the price to zero increased circumcision take-up to 25% for men of this risk group. The effect of price on take-up was largest among those at highest risk (p=0.096). Reducing the price of circumcision surgery to zero can increase take-up among those who are most at risk of HIV infection. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Korotey, E. V.; Sinyavskii, N. Ya.
2007-07-01
A new method for determination of rheological parameters of liquid crystals with zero anisotropy of diamagnetic susceptibility is proposed, which is based on the measurement of the quadrupole splitting line of the NMR 2H spectrum. The method provides higher information content of the experiments, with the shear flow discarded from consideration, compared to that obtained by the classical Leslie-Ericksen theory. A comparison with the experiment is performed, the coefficients of anisotropic viscosity of lecithin/D2O/cyclohexane are determined, and a conclusion is drawn as concerns the domain shapes.
24 CFR 902.62 - Failure to submit data.
Code of Federal Regulations, 2011 CFR
2011-04-01
... receive a presumptive rating of failure for its unaudited information and shall receive zero points for... timely submission of audited information does not negate the score of zero received for the unaudited... subindicator(s) shall receive a score of zero for the relevant indicator(s) or subindicator(s) and its overall...
Time delay and distance measurement
NASA Technical Reports Server (NTRS)
Abshire, James B. (Inventor); Sun, Xiaoli (Inventor)
2011-01-01
A method for measuring time delay and distance may include providing an electromagnetic radiation carrier frequency and modulating one or more of amplitude, phase, frequency, polarization, and pointing angle of the carrier frequency with a return to zero (RZ) pseudo random noise (PN) code. The RZ PN code may have a constant bit period and a pulse duration that is less than the bit period. A receiver may detect the electromagnetic radiation and calculate the scattering profile versus time (or range) by computing a cross correlation function between the recorded received signal and a three-state RZ PN code kernel in the receiver. The method also may be used for pulse delay time (i.e., PPM) communications.
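A toy version of the correlation receiver described above can be put together in a few lines: build an RZ PN waveform, delay and attenuate it, add noise, and recover the delay by cross-correlating with a three-state (-1, 0, +1) kernel. The code length, chip timing, noise level, and the exact kernel construction below are illustrative assumptions, not values from the patent.

import numpy as np

rng = np.random.default_rng(42)

# Return-to-zero pseudo-random-noise code: each bit occupies bit_len samples,
# with the pulse filling only part of the bit period (pulse duration < bit period).
n_bits, bit_len, pulse_len = 127, 8, 4
bits = rng.integers(0, 2, n_bits)
tx = np.zeros(n_bits * bit_len)
for k, b in enumerate(bits):
    if b:
        tx[k * bit_len: k * bit_len + pulse_len] = 1.0

# Received signal: attenuated, delayed copy of the transmitted code plus noise.
true_delay = 173
rx = np.zeros(tx.size + 400)
rx[true_delay: true_delay + tx.size] += 0.4 * tx
rx += 0.05 * rng.standard_normal(rx.size)

# Three-state kernel (values -1, 0, +1), an assumed illustration of the
# "three-state RZ PN code kernel" mentioned above.
kernel = np.zeros_like(tx)
for k, b in enumerate(bits):
    kernel[k * bit_len: k * bit_len + pulse_len] = 1.0 if b else -1.0

corr = np.correlate(rx, kernel, mode="valid")
print("estimated delay (samples):", int(np.argmax(corr)), "true:", true_delay)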
Residual Defect Density in Random Disks Deposits.
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A C
2015-08-03
We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10^9 particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed.
Opening of DNA chain due to force applied on different locations.
Singh, Amar; Modi, Tushar; Singh, Navin
2016-09-01
We consider a homogeneous DNA molecule and investigate the effect of a random force applied on the unzipping profile of the molecule. How the critical force varies as a function of the chain length, or number of base pairs, is the objective of this study. In general, the ratio of the critical force applied at the middle of the chain to that applied at one of the ends is two. Our study shows that this ratio depends on the length of the chain. This means that a force applied at a point is experienced by a section of the chain; beyond a certain length, the base pairs have no information about the applied force. When the chain is shorter than this length, the ratio may vary; only when the chain length exceeds a critical length is the ratio found to be two. Based on the de Gennes formulation, we developed a method to calculate these forces at zero temperature. The exact results at zero temperature match the numerical calculations.
The influence of micro-vibration on space-borne Fourier transform spectrometers
NASA Astrophysics Data System (ADS)
Bai, Shaojun; Hou, Lizhou; Ke, Junyu
2014-11-01
Space-borne Fourier transform spectrometers (FTS) are widely used for atmospheric studies and planetary exploration. An adapted version of the classical Michelson interferometer has succeeded in several space missions; it utilizes a rotating arm carrying a pair of cube-corner retro-reflectors to produce a variable optical path difference (OPD), and a metrology laser source to generate the trigger signals. One characteristic of this kind of FTS is that it is highly sensitive to micro-vibration disturbances. However, a variety of mechanical disturbances are present while the satellite is in orbit, such as flywheels, pointing mechanisms and cryocoolers. Therefore, this paper investigates the influence of micro-vibration on the space-borne FTS. Firstly, the interferogram of the metrology laser under harmonic disturbances is analyzed. The results show that the zero crossings of the interferogram shift periodically, which gives rise to ghost lines in the retrieved spectra. The amplitudes of the ghost lines increase rapidly with increasing micro-vibration levels. For a system that employs the constant-OPD sampling strategy, the effect of zero-crossing shifting is reduced significantly. Nevertheless, time delays between the reference signal and the main signal acquisition are inevitable because of the electronic circuit. Thus, the effect of time delays on the interferogram, and eventually on the spectra, is simulated. The analysis suggests that the amplitudes of ghost lines in the spectra increase with increasing time delay intervals.
Self-organization in a diversity induced thermodynamics.
Scirè, Alessandro; Annovazzi-Lodi, Valerio
2017-01-01
In this work we show how global self-organized patterns can come out of a disordered ensemble of point oscillators, as a result of a deterministic, and not of a random, cooperative process. The resulting system dynamics has many characteristics of classical thermodynamics. To this end, a modified Kuramoto model is introduced, by including Euclidean degrees of freedom and particle polarity. The standard deviation of the frequency distribution is the disorder parameter, diversity, acting as temperature, which is both a source of motion and of disorder. For zero and low diversity, robust static phase-synchronized patterns (crystals) appear, and the problem reverts to a generic dissipative many-body problem. From small to moderate diversity crystals display vibrations followed by structure disintegration in a competition of smaller dynamic patterns, internally synchronized, each of which is capable to manage its internal diversity. In this process a huge variety of self-organized dynamic shapes is formed. Such patterns can be seen again as (more complex) oscillators, where the same description can be applied in turn, renormalizing the problem to a bigger scale, opening the possibility of pattern evolution. The interaction functions are kept local because our idea is to build a system able to produce global patterns when its constituents only interact at the bond scale. By further increasing the oscillator diversity, the dynamics becomes erratic, dynamic patterns show short lifetime, and finally disappear for high diversity. Results are neither qualitatively dependent on the specific choice of the interaction functions nor on the shape of the probability function assumed for the frequencies. The system shows a phase transition and a critical behaviour for a specific value of diversity.
Self field electromagnetism and quantum phenomena
NASA Astrophysics Data System (ADS)
Schatten, Kenneth H.
1994-07-01
Quantum Electrodynamics (QED) has been extremely successful in its predictive capability for atomic phenomena. Thus the greatest hope for any alternative view is solely to mimic the predictive capability of quantum mechanics (QM), and perhaps its usefulness will lie in gaining a better understanding of microscopic phenomena. Many "paradoxes" and problematic situations emerge in QED. To combat the QED problems, the field of Stochastic Electrodynamics (SE) emerged, wherein a random "zero point radiation" is assumed to fill all of space in an attempt to explain quantum phenomena, without some of the paradoxical concerns. SE, however, has greater failings. One is that the electromagnetic field energy must be infinite to work. We have examined a deterministic side branch of SE, "self field" electrodynamics, which may overcome the problems of SE. Self field electrodynamics (SFE) utilizes the chaotic nature of electromagnetic emissions, as charges lose energy near atomic dimensions, to try to understand and mimic quantum phenomena. These fields and charges can "interact with themselves" in a non-linear fashion, and may thereby explain many quantum phenomena from a semi-classical viewpoint. Referred to as self fields, they have gone by other names in the literature: "evanescent radiation", "virtual photons", and "vacuum fluctuations". Using self fields, we discuss the uncertainty principles, the Casimir effects, the black-body radiation spectrum, diffraction and interference effects, Schrodinger's equation, Planck's constant, and the nature of the electron, and how they might be understood in the present framework. No new theory could ever replace QED. The self field view (if correct) would, at best, only serve to provide some understanding of the processes by which strange quantum phenomena occur at the atomic level. We discuss possible areas where experiments might be employed to test SFE, and areas where future work may lie.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei
In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e. the ratio of total energy use over total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (i.e. product type, feedstock type, weather, etc.). Furthermore, the energy intensity method assumes that the facilities' base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulties understanding why regression models are statistically better than the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper will explain why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper will present quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper will also document scenarios where regression models do not have significant relevance over the energy intensity method.
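The core argument (a non-zero base load makes the simple energy-intensity ratio drift with production, while a regression captures the base load directly) can be seen in a few lines of synthetic data. The facility model below, energy = base load + marginal rate x production + noise, and all of its numbers are made up for illustration.

import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly data: energy = base load + marginal use * production + noise.
production = rng.uniform(50, 150, 24)                        # hypothetical tons/month
energy = 2000 + 8.0 * production + rng.normal(0, 60, 24)     # hypothetical MMBtu/month

# Classic energy-intensity metric assumes zero base load, so E/P varies with P.
intensity = energy / production

# A simple regression model E = a + b*P recovers the non-zero base load a.
b, a = np.polyfit(production, energy, 1)

print("energy intensity range:", round(intensity.min(), 1), "-", round(intensity.max(), 1))
print("regression base load a =", round(a, 1), " marginal rate b =", round(b, 2))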
Uncertainty relations, zero point energy and the linear canonical group
NASA Technical Reports Server (NTRS)
Sudarshan, E. C. G.
1993-01-01
The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
The moving-least-squares-particle hydrodynamics method (MLSPH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dilts, G.
1997-12-31
An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
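For readers unfamiliar with MLS interpolants, the one-dimensional toy below shows the basic idea: at each evaluation point, fit a low-order polynomial by weighted least squares with weights centred on that point, then take the local fit's value there. This is a generic illustration of MLS with an assumed Gaussian weight function, not the MLSPH implementation.

import numpy as np

def mls_1d(x_eval, x_nodes, f_nodes, h=0.3, degree=1):
    # Toy moving-least-squares fit: a locally weighted polynomial at every point.
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    out = np.empty_like(x_eval)
    for k, xe in enumerate(x_eval):
        w = np.exp(-((x_nodes - xe) / h) ** 2)                      # local weights
        A = np.vander(x_nodes - xe, degree + 1, increasing=True)    # local basis about xe
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f_nodes, rcond=None)
        out[k] = coef[0]                                            # polynomial value at xe
    return out

x_nodes = np.linspace(0.0, 1.0, 15)
f_nodes = np.sin(2 * np.pi * x_nodes)
x_eval = np.linspace(0.0, 1.0, 7)
print(np.round(mls_1d(x_eval, x_nodes, f_nodes), 3))
print(np.round(np.sin(2 * np.pi * x_eval), 3))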
Michalski, G; Jost, R; Sugny, D; Joyeux, M; Thiemens, M
2004-10-15
We have measured the rotationless photodissociation threshold of six isotopologues of NO₂ containing ¹⁴N, ¹⁵N, ¹⁶O, and ¹⁸O isotopes using laser induced fluorescence detection and jet cooled NO₂ (to avoid rotational congestion). For each isotopologue, the spectrum is very dense below the dissociation energy while fluorescence disappears abruptly above it. The six dissociation energies ranged from 25 128.56 cm⁻¹ for ¹⁴N¹⁶O₂ to 25 171.80 cm⁻¹ for ¹⁵N¹⁸O₂. The zero point energy for the NO₂ isotopologues was determined from experimental vibrational energies, application of the Dunham expansion, and from canonical perturbation theory using several potential energy surfaces. Using the experimentally determined dissociation energies and the calculated zero point energies of the parent NO₂ isotopologue and of the NO product(s) we determined that there is a common De = 26 051.17 ± 0.70 cm⁻¹ using the Born-Oppenheimer approximation. The canonical perturbation theory was then used to calculate the zero point energy of all stable isotopologues of SO₂, CO₂, and O₃, which are compared with previous determinations.
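The bookkeeping that links the measured threshold to the common well depth is, schematically, De = D0 + ZPE(NO₂) - ZPE(NO), since the oxygen atom carries no zero-point energy. The toy calculation below uses a plain harmonic zero-point energy (one half the sum of the fundamentals) with rough, illustrative frequency values, so it only approximates the anharmonic Dunham/perturbation-theory treatment used in the paper.

# Minimal harmonic-oscillator illustration of the zero-point-energy bookkeeping;
# the frequencies are rough literature-style values in cm^-1 chosen only for
# illustration, not the paper's Dunham/perturbation-theory results.
nu_no2 = [1320.0, 750.0, 1618.0]      # approximate NO2 fundamentals (cm^-1)
nu_no = [1904.0]                      # approximate NO fundamental (cm^-1)

zpe_no2 = 0.5 * sum(nu_no2)           # harmonic ZPE of NO2, cm^-1
zpe_no = 0.5 * sum(nu_no)             # harmonic ZPE of NO, cm^-1

d0 = 25128.56                         # measured threshold for 14N16O2 (cm^-1)
d_e = d0 + zpe_no2 - zpe_no           # rough Born-Oppenheimer well-depth estimate
print(round(zpe_no2, 1), round(zpe_no, 1), round(d_e, 1))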
Yang, Kin S; Hudson, Bruce
2010-11-25
Replacement of H by D perturbs the ¹³C NMR chemical shifts of an alkane molecule. This effect is largest for the carbon to which the D is attached, diminishing rapidly with intervening bonds. The effect is sensitive to stereochemistry and is large enough to be measured reliably. A simple model based on the ground (zero point) vibrational level and treating only the C-H(D) degrees of freedom (local mode approach) is presented. The change in CH bond length with H/D substitution as well as the reduction in the range of the zero-point level probability distribution for the stretch and both bend degrees of freedom are computed. The ¹³C NMR chemical shifts are computed with variation in these three degrees of freedom, and the results are averaged with respect to the H and D distribution functions. The resulting differences in the zero-point averaged chemical shifts are compared with experimental values of the H/D shifts for a series of cycloalkanes, norbornane, adamantane, and protoadamantane. Agreement is generally very good. The remaining differences are discussed. The proton spectrum of cyclohexane- is revisited and updated with improved agreement with experiment.
Some Properties of Estimated Scale Invariant Covariance Structures.
ERIC Educational Resources Information Center
Dijkstra, T. K.
1990-01-01
An example of scale invariance is provided via the LISREL model that is subject only to classical normalizations and zero constraints on the parameters. Scale invariance implies that the estimated covariance matrix must satisfy certain equations, and the nature of these equations depends on the fitting function used. (TJH)
Asymptotics of quantum weighted Hurwitz numbers
NASA Astrophysics Data System (ADS)
Harnad, J.; Ortmann, Janosch
2018-06-01
This work concerns both the semiclassical and zero temperature asymptotics of quantum weighted double Hurwitz numbers. The partition function for quantum weighted double Hurwitz numbers can be interpreted in terms of the energy distribution of a quantum Bose gas with vanishing fugacity. We compute the leading semiclassical term of the partition function for three versions of the quantum weighted Hurwitz numbers, as well as lower order semiclassical corrections. The classical limit is shown to reproduce the simple single and double Hurwitz numbers studied by Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74). The KP-Toda τ-function that serves as generating function for the quantum Hurwitz numbers is shown to have the τ-function of Okounkov and Pandharipande (2000 Math. Res. Lett. 7 447–53, 2000 Lett. Math. Phys. 53 59–74) as its leading term in the classical limit, and, with suitable scaling, the same holds for the partition function, the weights and expectations of Hurwitz numbers. We also compute the zero temperature limit of the partition function and quantum weighted Hurwitz numbers. The KP or Toda τ-function serving as generating function for the quantum Hurwitz numbers is shown to give the one for Belyi curves in the zero temperature limit and, with suitable scaling, the same holds true for the partition function, the weights and the expectations of Hurwitz numbers.
Electronic zero-point fluctuation forces inside circuit components
Leonhardt, Ulf
2018-01-01
One of the most intriguing manifestations of quantum zero-point fluctuations are the van der Waals and Casimir forces, often associated with vacuum fluctuations of the electromagnetic field. We study generalized fluctuation potentials acting on internal degrees of freedom of components in electrical circuits. These electronic Casimir-like potentials are induced by the zero-point current fluctuations of any general conductive circuit. For realistic examples of an electromechanical capacitor and a superconducting qubit, our results reveal the possibility of tunable forces between the capacitor plates, or the level shifts of the qubit, respectively. Our analysis suggests an alternative route toward the exploration of Casimir-like fluctuation potentials, namely, by characterizing and measuring them as a function of parameters of the environment. These tunable potentials may be useful for future nanoelectromechanical and quantum technologies. PMID:29719863
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... curve for each range used as follows: (1) Zero the analyzer. (2) Span the analyzer to give a response of approximately 90 percent of full-scale chart deflection. (3) Recheck the zero response. If it has changed more... the form of equation (1) or (2). Include zero as a data point. Compensation for known impurities in...
40 CFR 90.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... curve. Develop a calibration curve for each range used as follows: (1) Zero the analyzer. (2) Span the... zero response. If it has changed more than 0.5 percent of full scale, repeat the steps given in... the form of the following equation (1) or (2). Include zero as a data point. Compensation for known...
16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS INTERIM SAFETY STANDARD FOR CELLULOSE INSULATION The Standard Pt. 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A o...
NASA Astrophysics Data System (ADS)
da Silva, W. M.; Montenegro-Filho, R. R.
2017-12-01
Quantum critical (QC) phenomena can be accessed by studying quantum magnets under an applied magnetic field (B). The QC points are located at the end points of magnetization plateaus and separate gapped and gapless phases. In one dimension, the low-energy excitations of the gapless phase form a Luttinger liquid (LL), and crossover lines bound insulating (plateau) and LL regimes, as well as the QC regime. Alternating ferrimagnetic chains have a spontaneous magnetization at T = 0 and gapped excitations at zero field. Besides the plateau at the fully polarized (FP) magnetization, due to the gap there is another magnetization plateau at the ferrimagnetic (FRI) magnetization. We develop spin-wave theories to study the thermal properties of these chains under an applied magnetic field: one from the FRI classical state and another from the FP state, comparing their results with quantum Monte Carlo data. We deepen the theory from the FP state, obtaining the crossover lines in the T vs B low-T phase diagram. In particular, from local extreme points in the susceptibility and magnetization curves, we identify the crossover between an LL regime formed by excitations from the FRI state to another built from excitations of the FP state. These two LL regimes are bounded by an asymmetric domelike crossover line, as observed in the phase diagram of other quantum magnets under an applied magnetic field.
Schreck, Simon; Wernet, Philippe
2016-09-14
The effects of isotope substitution in liquid water are probed by x-ray absorption spectroscopy at the O K-edge as measured in transmission mode. Confirming earlier x-ray Raman scattering experiments, the D2O spectrum is found to be blue shifted with respect to H2O, and the D2O spectrum to be less broadened. Following the earlier interpretations of UV and x-ray Raman spectra, the shift is related to the difference in ground-state zero-point energies between D2O and H2O, while the difference in broadening is related to the difference in ground-state vibrational zero-point distributions. We demonstrate that the transmission-mode measurements allow for determining the spectral shapes with unprecedented accuracy. Owing in addition to the increased spectral resolution and signal to noise ratio compared to the earlier measurements, the new data enable the stringent determination of blue shift and broadening in the O K-edge x-ray absorption spectrum of liquid water upon isotope substitution. The results are compared to UV absorption data, and it is discussed to which extent they reflect the differences in zero-point energies and vibrational zero-point distributions in the ground-states of the liquids. The influence of the shape of the final-state potential, inclusion of the Franck-Condon structure, and differences between liquid H2O and D2O resulting from different hydrogen-bond environments in the liquids are addressed. The differences between the O K-edge absorption spectra of water from our transmission-mode measurements and from the state-of-the-art x-ray Raman scattering experiments are discussed in addition. The experimentally extracted values of blue shift and broadening are proposed to serve as a test for calculations of ground-state zero-point energies and vibrational zero-point distributions in liquid H2O and D2O. This clearly motivates the need for new calculations of the O K-edge x-ray absorption spectrum of liquid water.
Quantum random number generation
Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; ...
2016-06-28
Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.
Structural zeroes and zero-inflated models.
He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul
2014-08-01
In psychosocial and behavioral studies, count outcomes recording the frequencies of the occurrence of some health or behavior outcomes (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeroes because of the presence of 'structural zeroes' that occur when some subjects are not at risk for the behavior of interest. Unlike random zeroes (responses that can be greater than zero, but are zero due to sampling variability), structural zeroes are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences in the two types of zeroes are ignored. However, in practice, the status of the structural zeroes is often not observed and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model, that is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeroes and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample codes in SAS and Stata are also included to help perform and explain ZIP analyses.
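A small simulation makes the structural-versus-random-zero distinction concrete: a latent not-at-risk group contributes only structural zeroes, while the at-risk group contributes Poisson counts whose zeroes are random. The mixing proportion and Poisson mean below are arbitrary illustrative values, not estimates from the study data.

import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Zero-inflated Poisson data: a latent "not at risk" group produces structural
# zeroes; the at-risk group produces Poisson counts (which include random zeroes).
p_structural = 0.4          # probability of belonging to the not-at-risk group (assumed)
lam = 2.0                   # Poisson mean for the at-risk group (assumed)
at_risk = rng.random(n) > p_structural
counts = np.where(at_risk, rng.poisson(lam, n), 0)

observed_zero = np.mean(counts == 0)
poisson_zero = np.exp(-lam)                                   # P(0) under a plain Poisson
zip_zero = p_structural + (1 - p_structural) * np.exp(-lam)   # P(0) under the ZIP mixture
print("observed P(0):", round(observed_zero, 3))
print("plain Poisson P(0):", round(poisson_zero, 3), " ZIP P(0):", round(zip_zero, 3))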
Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations
NASA Astrophysics Data System (ADS)
Niemeier, Wolfgang; Tengen, Dieter
2017-06-01
In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guidelines to the Expression of Uncertainty in Measurements). This approach is well established in metrology, but rarely adapted within geodesy. The second step consists of Monte Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by using their covariance matrix. It allows a new way for uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network in "Metsähovi Fundamental Station", Finland is used, where classical geodetic observations are combined with GNSS data.
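The two-step idea (GUM-style standard uncertainties on the raw observations, then Monte Carlo through the full adjustment) can be illustrated on a toy levelling network with two unknown heights and three height-difference observations. The network geometry, observation values, and uncertainties below are invented for the illustration and are unrelated to the Metsähovi data.

import numpy as np

rng = np.random.default_rng(3)

# Toy levelling network: unknown heights h1, h2 (benchmark h0 = 0 is held fixed).
# Observations: dh01 = h1 - h0, dh12 = h2 - h1, dh02 = h2 - h0.
A = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, 1.0]])
obs = np.array([1.203, 0.498, 1.706])          # metres (made-up raw data)
sigma = np.array([0.002, 0.002, 0.003])        # GUM-style standard uncertainties

def adjust(l):
    # weighted least-squares adjustment, weights taken from the stated uncertainties
    W = np.diag(1.0 / sigma**2)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ l)

# Step 2: Monte Carlo - simulate raw observations and re-run the full adjustment.
samples = np.array([adjust(obs + sigma * rng.standard_normal(3)) for _ in range(5000)])
mean = samples.mean(axis=0)
cov = np.cov(samples.T)
print("adjusted heights:", np.round(mean, 4))
print("MC standard deviations:", np.round(np.sqrt(np.diag(cov)), 4))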
De Sitter space and perpetuum mobile
NASA Astrophysics Data System (ADS)
Akhmedov, Emil T.; Buividovich, P. V.; Singleton, Douglas A.
2012-04-01
The general arguments that any interacting nonconformal classical field theory in de Sitter space leads to the possibility of constructing a perpetuum mobile are given. The arguments are based on the observation that massive free falling particles can radiate other massive particles on the classical level as seen by the free falling observer. The intensity of the radiation process is not zero even for particles with any finite mass, i.e., with a wavelength which is within the causal domain. Hence, we conclude that either de Sitter space cannot exist eternally or that one can build a perpetuum mobile.
Jarlborg, Thomas; Bianconi, Antonio
2016-04-20
While 203 K high temperature superconductivity in H3S has been interpreted by BCS theory in the dirty limit, here we focus on the effects of the hydrogen zero-point motion and the multiband electronic structure relevant for multigap superconductivity near Lifshitz transitions. We describe how the topology of the Fermi surfaces evolves with pressure, giving different Lifshitz transitions. A neck-disrupting Lifshitz transition (type 2) occurs where the van Hove singularity, vHs, crosses the chemical potential at 210 GPa, and new small 2D Fermi surface portions appear with slow Fermi velocity, where the Migdal approximation becomes questionable. We show that the commonly neglected hydrogen zero-point motion (ZPM) plays a key role at Lifshitz transitions. It induces an energy shift of about 600 meV of the vHs. The other Lifshitz transition (of type 1), corresponding to the appearance of a new Fermi surface, occurs at 130 GPa, where new Fermi surfaces appear at the Γ point of the Brillouin zone; here the Migdal approximation breaks down and the zero-point motion induces large fluctuations. The maximum Tc = 203 K occurs at 160 GPa, where EF/ω0 = 1 in the small Fermi surface pocket at Γ. A Feshbach-like resonance between a possible BEC-BCS condensate at Γ and the BCS condensate in different k-space spots is proposed.
Origin and implications of zero degeneracy in networks spectra.
Yadav, Alok; Jalan, Sarika
2015-04-01
The spectra of many real world networks exhibit properties which are different from those of random networks generated using various models. One such property is the existence of a very high degeneracy at the zero eigenvalue. In this work, we provide all the possible reasons behind the occurrence of the zero degeneracy in network spectra, namely, complete and partial duplications, as well as their implications. The power-law degree sequence and preferential attachment are properties which enhance the occurrence of such duplications and hence lead to the zero degeneracy. A comparison of the zero degeneracy in protein-protein interaction networks of six different species and in their corresponding model networks indicates the importance of the degree sequence and the power-law exponent for the occurrence of zero degeneracy.
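The duplication mechanism is easy to verify numerically: copying a node's neighbourhood produces two identical rows in the adjacency matrix, which forces a zero eigenvalue. The sketch below adds a complete duplicate of one node to a random graph and counts numerically zero eigenvalues before and after; the graph size and edge probability are arbitrary choices.

import numpy as np

rng = np.random.default_rng(5)

# Random symmetric adjacency matrix, then duplicate one node: the duplicate has
# exactly the same neighbours (complete duplication), giving two identical rows.
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T

dup = np.vstack([A, A[0:1, :]])                 # copy node 0's row
dup = np.hstack([dup, dup[:, 0:1]])             # and its column
dup[n, 0] = dup[0, n] = 0.0                     # no edge between the twin nodes

before = np.sum(np.isclose(np.linalg.eigvalsh(A), 0.0, atol=1e-9))
after = np.sum(np.isclose(np.linalg.eigvalsh(dup), 0.0, atol=1e-9))
print("zero eigenvalues before:", int(before), "after duplication:", int(after))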
Blessing of dimensionality: mathematical foundations of the statistical physics of data.
Gorban, A N; Tyukin, I Y
2018-04-28
The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).
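The flavour of the separation statement can be reproduced in a few lines: in a few hundred dimensions, a single i.i.d. random point is linearly separated from tens of thousands of other i.i.d. points by the simple inner-product functional h(x) = <x, x0>, a Fisher-discriminant-like rule. The dimension, sample size, and threshold below are arbitrary illustrative choices, and this toy does not reproduce the theorems' precise conditions.

import numpy as np

rng = np.random.default_rng(11)

# One random point x0 versus a large random sample in high dimension.
d, n = 200, 20000
data = rng.uniform(-1.0, 1.0, size=(n, d))       # large random set
x0 = rng.uniform(-1.0, 1.0, size=d)              # the point to separate

# Accept x as "on x0's side" if <x, x0> exceeds half of |x0|^2.
threshold = 0.5 * (x0 @ x0)
violations = np.sum(data @ x0 > threshold)
print("points on the wrong side of the separating hyperplane:", int(violations))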
A survey of noninteractive zero knowledge proof system and its applications.
Wu, Huixin; Wang, Feng
2014-01-01
Zero knowledge proof systems, which have received extensive attention since they were proposed, are an important branch of cryptography and computational complexity theory. Among them, a noninteractive zero knowledge proof system contains only one message, sent by the prover to the verifier. It is widely used in the construction of various types of cryptographic protocols and cryptographic algorithms because of its good privacy, authentication, and lower interactive complexity. This paper reviews and analyzes the basic principles of noninteractive zero knowledge proof systems, and summarizes the research progress achieved on the following aspects: the definition and related models of noninteractive zero knowledge proof systems, noninteractive zero knowledge proof systems for NP problems, noninteractive statistical and perfect zero knowledge, the connection between noninteractive zero knowledge proof systems, interactive zero knowledge proof systems, and zaps, and the specific applications of noninteractive zero knowledge proof systems. This paper also points out future research directions.
Comparison of Travel-Time and Amplitude Measurements for Deep-Focusing Time-Distance Helioseismology
NASA Astrophysics Data System (ADS)
Pourabdian, Majid; Fournier, Damien; Gizon, Laurent
2018-04-01
The purpose of deep-focusing time-distance helioseismology is to construct seismic measurements that have a high sensitivity to the physical conditions at a desired target point in the solar interior. With this technique, pairs of points on the solar surface are chosen such that acoustic ray paths intersect at this target (focus) point. Considering acoustic waves in a homogeneous medium, we compare travel-time and amplitude measurements extracted from the deep-focusing cross-covariance functions. Using a single-scattering approximation, we find that the spatial sensitivity of deep-focusing travel times to sound-speed perturbations is zero at the target location and maximum in a surrounding shell. This is unlike the deep-focusing amplitude measurements, which have maximum sensitivity at the target point. We compare the signal-to-noise ratio for travel-time and amplitude measurements for different types of sound-speed perturbations, under the assumption that noise is solely due to the random excitation of the waves. We find that, for highly localized perturbations in sound speed, the signal-to-noise ratio is higher for amplitude measurements than for travel-time measurements. We conclude that amplitude measurements are a useful complement to travel-time measurements in time-distance helioseismology.
Quantum effects in amplitude death of coupled anharmonic self-oscillators
NASA Astrophysics Data System (ADS)
Amitai, Ehud; Koppenhöfer, Martin; Lörch, Niels; Bruder, Christoph
2018-05-01
Coupling two or more self-oscillating systems may stabilize their zero-amplitude rest state, therefore quenching their oscillation. This phenomenon is termed "amplitude death." Well known and studied in classical self-oscillators, amplitude death was only recently investigated in quantum self-oscillators [Ishibashi and Kanamoto, Phys. Rev. E 96, 052210 (2017), 10.1103/PhysRevE.96.052210]. Quantitative differences between the classical and quantum descriptions were found. Here, we demonstrate that for quantum self-oscillators with anharmonicity in their energy spectrum, multiple resonances in the mean phonon number can be observed. This is a result of the discrete energy spectrum of these oscillators, and is not present in the corresponding classical model. Experiments can be realized with current technology and would demonstrate these genuine quantum effects in the amplitude death phenomenon.
The Schrödinger Equation, the Zero-Point Electromagnetic Radiation, and the Photoelectric Effect
NASA Astrophysics Data System (ADS)
França, H. M.; Kamimura, A.; Barreto, G. A.
2016-04-01
A Schrödinger-type equation for a mathematical probability amplitude Ψ(x, t) is derived from the generalized phase-space Liouville equation valid for the motion of a microscopic particle, with mass M and charge e, moving in a potential V(x). The particle phase-space probability density is denoted Q(x, p, t), and the entire system is immersed in the "vacuum" zero-point electromagnetic radiation. We show, in the first part of the paper, that the generalized Liouville equation reduces to a simpler Liouville equation in the equilibrium limit where the small radiative corrections approximately cancel each other; this simpler equation facilitates the calculations in the second part of the paper. In that second part we address the following task: since the Schrödinger equation depends on ħ, and the zero-point electromagnetic spectral distribution, given by ρ0(ω) = ħω³/(2π²c³), also depends on ħ, it is interesting to verify the possible dynamical connection between ρ0(ω) and the Schrödinger equation. We prove that Planck's constant, present in the momentum operator of the Schrödinger equation, is deeply related to the ubiquitous zero-point electromagnetic radiation with spectral distribution ρ0(ω). For simplicity, we do not use the hypothesis of the existence of the L. de Broglie matter waves. The implications of our study for the standard interpretation of the photoelectric effect are discussed by considering the main characteristics of the phenomenon. We also mention, briefly, the effects of the zero-point radiation on the tunneling phenomenon and the Compton effect.
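For reference, the quoted zero-point spectral distribution can be evaluated directly (SI units; the sample frequency is arbitrary):

import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s

def rho0(omega):
    # zero-point spectral energy density rho_0(omega) = hbar*omega^3 / (2*pi^2*c^3), in J s m^-3
    return hbar * omega ** 3 / (2 * np.pi ** 2 * c ** 3)

print(rho0(3.0e15))      # value at an optical angular frequency of 3e15 rad/s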
Space-time models based on random fields with local interactions
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios T.; Tsantili, Ivi C.
2016-08-01
The analysis of space-time data from complex, real-life phenomena requires the use of flexible and physically motivated covariance functions. In most cases, it is not possible to explicitly solve the equations of motion for the fields or the respective covariance functions. In the statistical literature, covariance functions are often based on mathematical constructions. In this paper, we propose deriving space-time covariance functions by solving “effective equations of motion”, which can be used as statistical representations of systems with diffusive behavior. In particular, we propose to formulate space-time covariance functions based on an equilibrium effective Hamiltonian using the linear response theory. The effective space-time dynamics is then generated by a stochastic perturbation around the equilibrium point of the classical field Hamiltonian leading to an associated Langevin equation. We employ a Hamiltonian which extends the classical Gaussian field theory by including a curvature term and leads to a diffusive Langevin equation. Finally, we derive new forms of space-time covariance functions.
Effective response theory for zero-energy Majorana bound states in three spatial dimensions
NASA Astrophysics Data System (ADS)
Lopes, Pedro L. e. S.; Teo, Jeffrey C. Y.; Ryu, Shinsei
2015-05-01
We propose a gravitational response theory for point defects (hedgehogs) binding Majorana zero modes in (3 + 1)-dimensional superconductors. Starting in 4 + 1 dimensions, where the point defect is extended into a line, a coupling of the bulk defect texture with the gravitational field is introduced. Diffeomorphism invariance then leads to an SU(2)₂ Kac-Moody current running along the defect line. The SU(2)₂ Kac-Moody algebra accounts for the non-Abelian nature of the zero modes in 3 + 1 dimensions. It is then shown to also encode the angular momentum density which permeates throughout the bulk between hedgehog-antihedgehog pairs.
NASA Astrophysics Data System (ADS)
Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias
2016-10-01
Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.
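A minimal numerical illustration of the comparison described above (an editorial toy, not the authors' hierarchy of approximations): solve the steady diffusion problem on a line with sparse point sinks for many random draws of the sink strengths, and compare the disorder-averaged concentration profile with the profile obtained when every sink carries the mean strength. In this toy the averaged profile comes out higher, consistent with the bias of the classical homogenization estimate noted in the abstract.

import numpy as np

rng = np.random.default_rng(0)
L, D = 200, 1.0
n_sinks = 10
sink_sites = np.linspace(10, L - 10, n_sinks).astype(int)   # sparse, regularly spaced sinks

def steady_profile(strengths):
    # steady state of D c'' - k(x) c = 0 on a lattice, c(0) = 1, reflecting wall at x = L
    k = np.zeros(L)
    k[sink_sites] = strengths
    A = np.zeros((L, L))
    b = np.zeros(L)
    A[0, 0] = 1.0; b[0] = 1.0                      # fixed concentration at the left end
    for i in range(1, L - 1):
        A[i, i - 1] = A[i, i + 1] = D
        A[i, i] = -2 * D - k[i]
    A[L - 1, L - 2] = -1.0; A[L - 1, L - 1] = 1.0  # zero-flux right end
    return np.linalg.solve(A, b)

mean_k = 0.2
uniform = steady_profile(np.full(n_sinks, mean_k))
# extrinsic disorder: sink strengths i.i.d. exponential with the same mean
disordered = np.mean([steady_profile(rng.exponential(mean_k, n_sinks)) for _ in range(500)], axis=0)
print("uniform-sink profile, mid-domain:    ", uniform[L // 2])
print("disorder-averaged profile, mid-domain:", disordered[L // 2])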
NASA Technical Reports Server (NTRS)
Kattawar, G. W.; Plass, G. N.; Hitzfelder, S. J.
1976-01-01
The matrix operator method was used to calculate the polarization of radiation scattered on layers of various optical thicknesses, with results compared for Rayleigh scattering and for scattering from a continental haze. In both cases, there are neutral points arising from the zeros of the polarization of single scattered photons at scattering angles of zero and 180 degrees. The angular position of these Rayleigh-like neutral points (RNP) in the sky shows appreciable variation with the optical thickness of the scattering layer for a Rayleigh phase matrix, but only a small variation for haze L phase matrix. Another type of neutral point exists for non-Rayleigh phase functions that is associated with the zeros of the polarization for single scattering which occurs between the end points of the curve. A comparison of radiances calculated from the complete theory of radiative transfer using Stokes vectors with those obtained from the scalar theory shows that differences of the order of 23% may be obtained for Rayleigh scattering, while the largest difference found for a haze L phase function was of the order of 0.1%.
Hierarchy in directed random networks.
Mones, Enys
2013-02-01
In recent years, the theory and application of complex networks have been developing quickly and in a remarkable way, due to the increasing amount of data from real systems and the fruitful application of powerful methods used in statistical physics. Many important characteristics of social or biological systems can be described by the study of their underlying structure of interactions. Hierarchy is one of these features that can be formulated in the language of networks. In this paper we present some (qualitative) analytic results on the hierarchical properties of random network models with zero correlations and also investigate, mainly numerically, the effects of different types of correlations. The behavior of the hierarchy is different in the absence and the presence of giant components. We show that the hierarchical structure can be drastically different if there are one-point correlations in the network. We also show numerical results suggesting that the hierarchy does not change monotonically with the correlations and that there is an optimal level of nonzero correlations maximizing the level of hierarchy.
Quantum groups, Yang-Baxter maps and quasi-determinants
NASA Astrophysics Data System (ADS)
Tsuboi, Zengo
2018-01-01
For any quasi-triangular Hopf algebra, there exists the universal R-matrix, which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra U_q(gl(n)). Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
Phase diagram of the disordered Bose-Hubbard model
NASA Astrophysics Data System (ADS)
Gurarie, V.; Pollet, L.; Prokof'Ev, N. V.; Svistunov, B. V.; Troyer, M.
2009-12-01
We establish the phase diagram of the disordered three-dimensional Bose-Hubbard model at unity filling which has been controversial for many years. The theorem of inclusions, proven by Pollet [Phys. Rev. Lett. 103, 140402 (2009)] states that the Bose-glass phase always intervenes between the Mott insulating and superfluid phases. Here, we note that assumptions on which the theorem is based exclude phase transitions between gapped (Mott insulator) and gapless phases (Bose glass). The apparent paradox is resolved through a unique mechanism: such transitions have to be of the Griffiths type when the vanishing of the gap at the critical point is due to a zero concentration of rare regions where extreme fluctuations of disorder mimic a regular gapless system. An exactly solvable random transverse field Ising model in one dimension is used to illustrate the point. A highly nontrivial overall shape of the phase diagram is revealed with the worm algorithm. The phase diagram features a long superfluid finger at strong disorder and on-site interaction. Moreover, bosonic superfluidity is extremely robust against disorder in a broad range of interaction parameters; it persists in random potentials nearly 50 (!) times larger than the particle half-bandwidth. Finally, we comment on the feasibility of obtaining this phase diagram in cold-atom experiments, which work with trapped systems at finite temperature.
Six-vertex model and Schramm-Loewner evolution.
Kenyon, Richard; Miller, Jason; Sheffield, Scott; Wilson, David B
2017-05-01
Square ice is a statistical mechanics model for two-dimensional ice, widely believed to have a conformally invariant scaling limit. We associate a Peano (space-filling) curve to a square ice configuration, and more generally to a so-called six-vertex model configuration, and argue that its scaling limit is a space-filling version of the random fractal curve SLE_κ, Schramm-Loewner evolution with parameter κ, where 4 < κ ≤ 12 + 8√2. For square ice, κ = 12. At the "free-fermion point" of the six-vertex model, κ = 8 + 4√3. These unusual values lie outside the classical interval 2 ≤ κ ≤ 8.
Vector control of wind turbine on the basis of the fuzzy selective neural net*
NASA Astrophysics Data System (ADS)
Engel, E. A.; Kovalev, I. V.; Engel, N. E.
2016-04-01
This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are carried out to clarify the applicability and advantages of the proposed vector control of the wind turbine on the basis of the fuzzy selective neural net. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance compared to a classical control model with PID controllers based on the traditional maximum torque control strategy.
Sadhukhan, Debasis; Roy, Sudipto Singha; Rakshit, Debraj; Prabhu, R; Sen De, Aditi; Sen, Ujjwal
2016-01-01
Classical correlation functions of ground states typically decay exponentially and polynomially, respectively, for gapped and gapless short-range quantum spin systems. In such systems, entanglement decays exponentially even at the quantum critical points. However, quantum discord, an information-theoretic quantum correlation measure, survives long lattice distances. We investigate the effects of quenched disorder on quantum correlation lengths of quenched averaged entanglement and quantum discord, in the anisotropic XY and XYZ spin glass and random field chains. We find that there is virtually neither reduction nor enhancement in entanglement length while quantum discord length increases significantly with the introduction of the quenched disorder.
Nonlinear system guidance in the presence of transmission zero dynamics
NASA Technical Reports Server (NTRS)
Meyer, G.; Hunt, L. R.; Su, R.
1995-01-01
An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.
NASA Astrophysics Data System (ADS)
Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel
2014-03-01
We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
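The exact optimization over a tree that underlies this class of classical heuristics can be sketched compactly. The code below (an illustration, not Selby's implementation) finds the exact Ising ground state on a randomly generated tree by leaf-to-root dynamic programming; in the full algorithm the tree would be a randomly chosen maximal induced subtree of the problem graph and the spins outside it would be held fixed, entering as additional local fields.

import random
from itertools import product

def random_tree(n, rng):
    # a random labelled tree: attach vertex i to a uniformly chosen earlier vertex
    return [(rng.randrange(i), i) for i in range(1, n)]

def tree_ising_ground_state(n, edges, J, h):
    """Exact minimum of E(s) = -sum_e J_e s_i s_j - sum_i h_i s_i over s_i = +/-1 on a tree."""
    adj = {i: [] for i in range(n)}
    Jmap = {}
    for (i, j), Je in zip(edges, J):
        adj[i].append(j)
        adj[j].append(i)
        Jmap[(i, j)] = Jmap[(j, i)] = Je
    parent, order = {0: None}, [0]          # orient the tree away from root 0
    for v in order:
        for u in adj[v]:
            if u not in parent:
                parent[u] = v
                order.append(u)
    children = {i: [] for i in range(n)}
    for v in order[1:]:
        children[parent[v]].append(v)
    msg, choice = {}, {}
    for v in reversed(order):               # msg[v][sp]: best subtree energy given parent spin sp
        msg[v], choice[v] = {}, {}
        for sp in (+1, -1):
            best_e, best_s = None, None
            for sv in (+1, -1):
                e = -h[v] * sv + sum(msg[c][sv] for c in children[v])
                if parent[v] is not None:
                    e -= Jmap[(v, parent[v])] * sv * sp
                if best_e is None or e < best_e:
                    best_e, best_s = e, sv
            msg[v][sp], choice[v][sp] = best_e, best_s
    spins = {0: choice[0][+1]}              # backtrack from the root down
    for v in order[1:]:
        spins[v] = choice[v][spins[parent[v]]]
    return msg[0][+1], spins

rng = random.Random(1)
n = 12
edges = random_tree(n, rng)
J = [rng.choice([-1.0, 1.0]) for _ in edges]
h = [rng.uniform(-0.5, 0.5) for _ in range(n)]
e_dp, s_dp = tree_ising_ground_state(n, edges, J, h)

def energy(s):
    return (-sum(Je * s[i] * s[j] for (i, j), Je in zip(edges, J))
            - sum(hi * s[i] for i, hi in enumerate(h)))

e_bf = min(energy(s) for s in product((+1, -1), repeat=n))   # brute-force check
print(e_dp, e_bf)                                             # the two agree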
Woods, Katherine M.; Petron, David J.; Shultz, Barry B.; Hicks-Little, Charlie A.
2015-01-01
Context: Chronic exertional compartment syndrome (CECS) is a debilitating condition resulting in loss of function and a decrease in athletic performance. Cases of CECS are increasing among Nordic skiers; therefore, analysis of intracompartmental pressures (ICPs) before and after Nordic skiing is warranted. Objective: To determine if lower leg anterior and lateral ICPs and subjective lower leg pain levels increased after a 20-minute Nordic rollerskiing time trial and to examine if differences existed between postexercise ICPs for the 2 Nordic rollerskiing techniques, classic and skate. Design: Crossover study. Setting: Outdoor paved loop. Patients or Other Participants: Seven healthy Division I Nordic skiers (3 men, 4 women; age = 22.71 ± 1.38 y, height = 175.36 ± 6.33 cm, mass = 70.71 ± 6.58 kg). Intervention(s): Participants completed two 20-minute rollerskiing time trials using the classic and skate technique in random order. The time trials were completed 7 days apart. Anterior and lateral ICPs and lower leg pain scores were obtained at baseline and at minutes 1 and 5 after rollerskiing. Main Outcome Measure(s): Anterior and lateral ICPs (mm Hg) were measured using a Stryker Quic STIC handheld monitor. Subjective measures of lower leg pain were recorded using the 11-point Numeric Rating Scale. Results: Increases in both anterior (P = .000) and lateral compartment (P = .002) ICPs were observed, regardless of rollerskiing technique used. Subjective lower leg pain increased after the classic technique for the men from baseline to 1 minute postexercise and after the skate technique for the women. Significant 3-way interactions (technique × time × sex) were observed for the anterior (P = .002) and lateral (P = .009) compartment ICPs and lower leg pain (P = .005). Conclusions: Postexercise anterior and lateral ICPs increased compared with preexercise ICPs after both classic and skate rollerskiing techniques. Lower leg pain is a primary symptom of CECS. The subjective lower leg pain 11-point Numeric Rating Scale results indicate that increases in lower leg ICPs sustained during Nordic rollerskiing may increase discomfort during activity. Our results therefore suggest that Nordic rollerskiing contributes to increases in ICPs, which may lead to the development of CECS. PMID:26090709
Mutual interactions of phonons, rotons, and gravity
NASA Astrophysics Data System (ADS)
Nicolis, Alberto; Penco, Riccardo
2018-04-01
We introduce an effective point-particle action for generic particles living in a zero-temperature superfluid. This action describes the motion of the particles in the medium at equilibrium as well as their couplings to sound waves and generic fluid flows. While we place the emphasis on elementary excitations such as phonons and rotons, our formalism applies also to macroscopic objects such as vortex rings and rigid bodies interacting with long-wavelength fluid modes. Within our approach, we reproduce phonon decay and phonon-phonon scattering as predicted using a purely field-theoretic description of phonons. We also correct classic results by Landau and Khalatnikov on roton-phonon scattering. Finally, we discuss how phonons and rotons couple to gravity, and show that the former tend to float while the latter tend to sink but with rather peculiar trajectories. Our formalism can be easily extended to include (general) relativistic effects and couplings to additional matter fields. As such, it can be relevant in contexts as diverse as neutron star physics and light dark matter detection.
Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime
NASA Astrophysics Data System (ADS)
Calcagni, Gianluca; Ronco, Michele
2017-10-01
We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.
NASA Astrophysics Data System (ADS)
Ulmer, Christopher J.; Motta, Arthur T.
2017-11-01
The development of TEM-visible damage in materials under irradiation at cryogenic temperatures cannot be explained using classical rate theory modeling with thermally activated reactions since at low temperatures thermal reaction rates are too low. Although point defect mobility approaches zero at low temperature, the thermal spikes induced by displacement cascades enable some atom mobility as it cools. In this work a model is developed to calculate "athermal" reaction rates from the atomic mobility within the irradiation-induced thermal spikes, including both displacement cascades and electronic stopping. The athermal reaction rates are added to a simple rate theory cluster dynamics model to allow for the simulation of microstructure evolution during irradiation at cryogenic temperatures. The rate theory model is applied to in-situ irradiation of ZrC and compares well at cryogenic temperatures. The results show that the addition of the thermal spike model makes it possible to rationalize microstructure evolution in the low temperature regime.
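A minimal sketch of the modelling idea (illustrative constants only, not the fitted ZrC parameters): a point-defect rate-theory balance in which the thermally activated recombination channel is frozen out at cryogenic temperature and an athermal, thermal-spike-driven rate is added.

import numpy as np
from scipy.integrate import solve_ivp

G = 1e-6            # defect pair production rate (arbitrary units)
nu = 1e2            # thermally activated recombination prefactor
Ea_over_kT = 40.0   # huge at cryogenic temperature, so the thermal channel is frozen out
k_athermal = 1e-1   # cascade / electronic-stopping driven (athermal) recombination rate

def rhs(t, y):
    ci, cv = y
    k_rec = nu * np.exp(-Ea_over_kT) + k_athermal
    r = k_rec * ci * cv
    return [G - r, G - r]

sol = solve_ivp(rhs, (0.0, 1.0e6), [0.0, 0.0], rtol=1e-8, atol=1e-12)
print("defect concentrations at end of irradiation:", sol.y[:, -1])
# without k_athermal the concentrations would simply keep growing ~ G*t at low temperature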
On-the-Fly ab Initio Semiclassical Calculation of Glycine Vibrational Spectrum
2017-01-01
We present an on-the-fly ab initio semiclassical study of vibrational energy levels of glycine, calculated by Fourier transform of the wavepacket correlation function. It is based on a multiple coherent states approach integrated with monodromy matrix regularization for chaotic dynamics. All four lowest-energy glycine conformers are investigated by means of single-trajectory semiclassical spectra obtained upon classical evolution of on-the-fly trajectories with harmonic zero-point energy. For the most stable conformer I, direct dynamics trajectories are also run for each vibrational mode with energy equal to the first harmonic excitation. An analysis of trajectories evolved up to 50 000 atomic time units demonstrates that, in this time span, conformers II and III can be considered as isolated species, while conformers I and IV show a pretty facile interconversion. Therefore, previous perturbative studies based on the assumption of isolated conformers are often reliable but might be not completely appropriate in the case of conformer IV and conformer I for which interconversion occurs promptly. PMID:28489368
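The spectral step common to such calculations can be illustrated generically (a synthetic correlation function with made-up levels, not glycine data): the Fourier transform of the wavepacket autocorrelation function peaks at the underlying energies.

import numpy as np

E = np.array([0.5, 1.3, 2.1])          # assumed energy levels (arbitrary units)
w = np.array([1.0, 0.6, 0.3])          # assumed overlaps |c_n|^2
dt, N, tau = 0.05, 4096, 20.0
t = np.arange(N) * dt
C = (w * np.exp(-1j * np.outer(t, E))).sum(axis=1) * np.exp(-t / tau)   # damped C(t)

omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
spec = np.abs(np.fft.ifft(C))          # ifft uses exp(+i omega t), so peaks sit at +E_n
pos = omega > 0
s, f = spec[pos], omega[pos]
peaks = [round(f[i], 2) for i in range(1, len(s) - 1)
         if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > 0.05 * s.max()]
print("recovered peaks:", peaks)       # close to 0.5, 1.3, 2.1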
Dzierlenga, Michael W; Antoniou, Dimitri; Schwartz, Steven D
2015-04-02
The mechanisms involved in enzymatic hydride transfer have been studied for years, but questions remain due, in part, to the difficulty of probing the effects of protein motion and hydrogen tunneling. In this study, we use transition path sampling (TPS) with normal mode centroid molecular dynamics (CMD) to calculate the barrier to hydride transfer in yeast alcohol dehydrogenase (YADH) and human heart lactate dehydrogenase (LDH). Calculation of the work applied to the hydride allowed for observation of the change in barrier height upon inclusion of quantum dynamics. Similar calculations were performed using deuterium as the transferring particle in order to approximate kinetic isotope effects (KIEs). The change in barrier height in YADH is indicative of a zero-point energy (ZPE) contribution and is evidence that catalysis occurs via a protein compression that mediates a near-barrierless hydride transfer. Calculation of the KIE using the difference in barrier height between the hydride and deuteride agreed well with experimental results.
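The final step described, turning a barrier-height difference into an approximate KIE, amounts to a one-line Arrhenius-type estimate (the numbers below are illustrative, not the computed YADH or LDH barriers):

import numpy as np

RT = 0.596                    # kcal/mol at roughly 300 K
barrier_H = 10.0              # assumed effective barrier with the hydride (kcal/mol)
barrier_D = 11.1              # assumed effective barrier with the deuteride (kcal/mol)
kie = np.exp((barrier_D - barrier_H) / RT)
print("approximate KIE:", kie)   # ~6 for this illustrative 1.1 kcal/mol difference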
NASA Astrophysics Data System (ADS)
Berges, J.; Boguslavski, K.; Chatrchyan, A.; Jaeckel, J.
2017-10-01
We study the impact of attractive self-interactions on the nonequilibrium dynamics of relativistic quantum fields with large occupancies at low momenta. Our primary focus is on Bose-Einstein condensation and nonthermal fixed points in such systems. For a model system, we consider O(N)-symmetric scalar field theories. We use classical-statistical real-time simulations as well as a systematic 1/N expansion of the quantum (two-particle-irreducible) effective action to next-to-leading order. When the mean self-interactions are repulsive, condensation occurs as a consequence of a universal inverse particle cascade to the zero-momentum mode with self-similar scaling behavior. For attractive mean self-interactions, the inverse cascade is absent, and the particle annihilation rate is enhanced compared to the repulsive case, which counteracts the formation of coherent field configurations. For N ≥ 2, the presence of a nonvanishing conserved charge can suppress number-changing processes and lead to the formation of stable localized charge clumps, i.e., Q balls.
The many faces of the quantum Liouville exponentials
NASA Astrophysics Data System (ADS)
Gervais, Jean-Loup; Schnittger, Jens
1994-01-01
First, it is proven that the three main operator approaches to the quantum Liouville exponentials (that of Gervais-Neveu, more recently developed further by Gervais; that of Braaten-Curtright-Ghandour-Thorn; and that of Otto-Weigt) are equivalent, since they are related by simple basis transformations in the Fock space of the free field depending upon the zero mode only. Second, the GN-G expressions for the quantum Liouville exponentials, where the U_q(sl(2)) quantum-group structure is manifest, are shown to be given by q-binomial sums over powers of the chiral fields in the J = 1/2 representation. Third, the Liouville exponentials are expressed as operator tau functions, whose chiral expansion exhibits a q-Gauss decomposition, which is the direct quantum analogue of the classical solution of Leznov and Saveliev. It involves q-exponentials of quantum-group generators with group "parameters" equal to chiral components of the quantum metric. Fourth, we point out that the OPE of the J = 1/2 Liouville exponential provides the quantum version of the Hirota bilinear equation.
Dense Chern-Simons matter with fermions at large N
NASA Astrophysics Data System (ADS)
Geracie, Michael; Goykhman, Mikhail; Son, Dam T.
2016-04-01
In this paper we investigate properties of Chern-Simons theory coupled to massive fermions in the large N limit. We demonstrate that at low temperatures the system is in a Fermi liquid state whose features can be systematically compared to the standard phenomenological theory of Landau Fermi liquids. This includes matching microscopically derived Landau parameters with thermodynamic predictions of Landau Fermi liquid theory. We also calculate the exact conductivity and viscosity tensors at zero temperature and finite chemical potential. In particular we point out that the Hall conductivity of an interacting system is not entirely accounted for by the Berry flux through the Fermi sphere. Furthermore, investigation of the thermodynamics in the non-relativistic limit reveals novel phenomena at strong coupling. As the 't Hooft coupling λ approaches 1, the system exhibits an extended intermediate temperature regime in which the thermodynamics is described by neither the quantum Fermi liquid theory nor the classical ideal gas law. Instead, it can be interpreted as a weakly coupled quantum Bose gas.
Cepheid Period-Luminosity Relation and Kinematics Based on the Revised Hipparcos Catalogue
NASA Astrophysics Data System (ADS)
Zhang, H.; Shen, M.; Zhu, Z.
2011-12-01
The revised Hipparcos catalogue was released by van Leeuwen in 2007. The revised parallaxes of the classical Cepheids yield the zero-point of the period-luminosity relation ρ = -1.37 ± 0.07 in the optical BV bands, which is 0.06 mag fainter than that given by Feast & Catchpole from the old Hipparcos data. Moreover, we discuss the kinematic parameters of the Galaxy based on an axisymmetric model. The Oort constants are A = 17.42 ± 1.17 km s⁻¹ kpc⁻¹ and B = -12.46 ± 0.86 km s⁻¹ kpc⁻¹, and the peculiar motion of the Sun is (12.58 ± 1.09, 14.52 ± 1.06, 8.98 ± 0.98) km s⁻¹. Using a dynamical model for an assumed elliptical disk, a weak elliptical potential of the disk is found with eccentricity ε(R₀) = 0.067 ± 0.036 and the direction of the minor axis φ_b = 31.7° ± 14.5°.
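To see how the quoted zero-point enters a distance estimate, a short sketch follows; the slope δ = -2.81 is an assumed Feast & Catchpole-style value, and the Cepheid in the example is invented, not taken from this work.

import numpy as np

rho, delta = -1.37, -2.81      # zero-point from the abstract; slope assumed for illustration

def absolute_magnitude(P_days):
    # period-luminosity relation M_V = delta * log10(P) + rho
    return delta * np.log10(P_days) + rho

P, m_V = 10.0, 10.0            # a hypothetical 10-day Cepheid with apparent magnitude 10
M_V = absolute_magnitude(P)
mu = m_V - M_V                 # distance modulus
print("M_V =", M_V, " mu =", mu, " d [pc] =", 10 ** (mu / 5 + 1))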
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction J_ij and random magnetic field h_i) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
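The joint sampling step can be illustrated with a toy snippet. The actual model correlates every bond J_ij with the site fields in a more structured way; here one bond-field pair is drawn per sample simply to show the role of the correlation coefficient ρ.

import numpy as np

def sample_couplings(n, rho, sigma_J=1.0, sigma_h=1.0, seed=0):
    """Draw (J, h) pairs from a joint zero-mean Gaussian with correlation coefficient rho."""
    rng = np.random.default_rng(seed)
    cov = [[sigma_J ** 2, rho * sigma_J * sigma_h],
           [rho * sigma_J * sigma_h, sigma_h ** 2]]
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    J, h = samples[:, 0], samples[:, 1]
    print("empirical correlation:", np.corrcoef(J, h)[0, 1])
    return J, h

sample_couplings(100000, rho=-0.5)   # recovers a correlation close to -0.5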
Qi, Bing; Lim, Charles Ci Wen
2018-05-07
Recently, we proposed a simultaneous quantum and classical communication (SQCC) protocol where random numbers for quantum key distribution and bits for classical communication are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Such a scheme could be appealing in practice since a single coherent communication system can be used for multiple purposes. However, previous studies show that the SQCC protocol can tolerate only very small phase noise. This makes it incompatible with the coherent communication scheme using a true local oscillator (LO), which presents a relatively high phase noise due to the fact that the signal and the LO are generated from two independent lasers. We improve the phase noise tolerance of the SQCC scheme using a true LO by adopting a refined noise model where phase noises originating from different sources are treated differently: on the one hand, phase noise associated with the coherent receiver may be regarded as trusted noise since the detector can be calibrated locally and the photon statistics of the detected signals can be determined from the measurement results; on the other hand, phase noise due to the instability of fiber interferometers may be regarded as untrusted noise since its randomness (from the adversary’s point of view) is hard to justify. Simulation results show the tolerable phase noise in this refined noise model is significantly higher than that in the previous study, where all of the phase noises are assumed to be untrusted. In conclusion, we conduct an experiment to show that the required phase stability can be achieved in a coherent communication system using a true LO.
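A heavily simplified toy of the encoding idea, not the protocol or its security analysis: a large binary phase displacement carries the classical bit, a small Gaussian modulation rides on top for the QKD data, and the receiver first decides the bit and then subtracts the known displacement. All amplitudes and noise levels below are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 20000
A = 20.0                 # large classical displacement (shot-noise units), assumed
sigma_qkd = 1.0          # small Gaussian modulation carrying the QKD data, assumed
phase_noise = 0.02       # residual phase noise (rad), assumed

bits = rng.integers(0, 2, n)                          # classical payload
x_q = rng.normal(0, sigma_qkd, n)                     # QKD modulation, x quadrature
p_q = rng.normal(0, sigma_qkd, n)                     # QKD modulation, p quadrature
alpha = (A * (2 * bits - 1) + x_q) + 1j * p_q         # BPSK displacement plus QKD modulation
theta = rng.normal(0, phase_noise, n)
shot = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) / np.sqrt(2)
received = alpha * np.exp(1j * theta) + shot

bits_hat = (received.real > 0).astype(int)            # classical decoding: sign of x quadrature
print("classical bit error rate:", np.mean(bits_hat != bits))
residual = received - A * (2 * bits_hat - 1)          # remove the known displacement
print("residual x-quadrature std (QKD signal + noise):", residual.real.std())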
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu
Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.
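For the first, device-modelled category, the post-processing stage is typically a seeded randomness extractor. Below is a sketch of Toeplitz hashing over GF(2); the output length is fixed arbitrarily here, whereas in practice it is set by the certified min-entropy of the source.

import numpy as np

def toeplitz_extract(raw_bits, n_out, seed_bits):
    """Toeplitz-hashing extraction: multiply the raw bit vector by a random binary
    Toeplitz matrix (defined by seed_bits) over GF(2)."""
    n_in = len(raw_bits)
    assert len(seed_bits) == n_in + n_out - 1
    T = np.empty((n_out, n_in), dtype=np.int64)
    for i in range(n_out):
        for j in range(n_in):
            T[i, j] = seed_bits[i - j + n_in - 1]   # entries depend only on i - j
    return (T @ raw_bits.astype(np.int64)) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, 256, dtype=np.int64)       # e.g. digitized detector samples
seed = rng.integers(0, 2, 256 + 128 - 1, dtype=np.int64)
out = toeplitz_extract(raw, 128, seed)
print(out[:16])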
Small violations of Bell inequalities for multipartite pure random states
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.
2018-05-01
For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.
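The two-qubit analogue of such a violation estimate is easy to simulate (a small-system illustration only; the result above concerns the opposite, many-party limit). By the Horodecki criterion, the maximal CHSH value of a two-qubit state is 2 times the square root of the sum of the two largest eigenvalues of T^T T, with T the correlation matrix.

import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
paulis = [sx, sy, sz]

def max_chsh(psi):
    # Horodecki criterion for the maximal CHSH value of a two-qubit state
    T = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            T[i, j] = np.real(psi.conj() @ np.kron(paulis[i], paulis[j]) @ psi)
    u = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2 * np.sqrt(u[-1] + u[-2])

vals = []
for _ in range(2000):
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)   # Haar-random pure state of 2 qubits
    psi /= np.linalg.norm(psi)
    vals.append(max_chsh(psi))
vals = np.array(vals)
print("fraction violating CHSH (>2):", np.mean(vals > 2))
print("mean maximal CHSH value:", vals.mean())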
Persistence of plasmids, cholera toxin genes, and prophage DNA in classical Vibrio cholerae O1.
Cook, W L; Wachsmuth, K; Johnson, S R; Birkness, K A; Samadi, A R
1984-07-01
Plasmid profiles, the location of cholera toxin subunit A genes, and the presence of the defective VcA1 prophage genome in classical Vibrio cholerae isolated from patients in Bangladesh in 1982 were compared with those in older classical strains isolated during the sixth pandemic and with those in selected eltor and nontoxigenic O1 isolates. Classical strains typically had two plasmids (21 and 3 megadaltons), eltor strains typically had no plasmids, and nontoxigenic O1 strains had zero to three plasmids. The old and new isolates of classical V. cholerae had two HindIII chromosomal digest fragments containing cholera toxin subunit A genes, whereas the eltor strains from Eastern countries had one fragment. The eltor strains from areas surrounding the Gulf of Mexico also had two subunit A gene fragments, which were smaller and easily distinguished from the classical pattern. All classical strains had 8 to 10 HindIII fragments containing the defective VcA1 prophage genome; none of the Eastern eltor strains had these genes, and the Gulf Coast eltor strains contained a different array of weakly hybridizing genes. These data suggest that the recent isolates of classical cholera in Bangladesh are closely related to the bacterial strain(s) which caused classical cholera during the sixth pandemic. These data do not support hypotheses that either the eltor or the nontoxigenic O1 strains are precursors of the new classical strains.
On the fluctuations of sums of independent random variables.
Feller, W
1969-07-01
If X(1), X(2), ... are independent random variables with zero expectation and finite variances, the cumulative sums S(n) are, on the average, of the order of magnitude s(n), where s(n)^2 = E(S(n)^2). The occasional maxima of the ratios S(n)/s(n) are surprisingly large, and the problem is to estimate the extent of their probable fluctuations. Specifically, let S*(n) = (S(n) - b(n))/a(n), where {a(n)} and {b(n)} are two numerical sequences. For any interval I, denote by p(I) the probability that the event S*(n) ∈ I occurs for infinitely many n. Under mild conditions on {a(n)} and {b(n)}, it is shown that p(I) equals 0 or 1 according as a certain series converges or diverges. To obtain the upper limit of S(n)/a(n), one has to set b(n) = ±ε a(n), but finer results are obtained with smaller b(n). No assumptions concerning the underlying distributions are made; the criteria explain structurally which features of {X(n)} affect the fluctuations, but for concrete results something about P{S(n) > a(n)} must be known. For example, a complete solution is possible when the X(n) are normal, replacing the classical law of the iterated logarithm. Further concrete estimates may be obtained by combining the new criteria with some recently developed limit theorems.
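The normal case mentioned at the end, where the criteria reproduce the law of the iterated logarithm, is easy to visualize numerically (an editorial illustration; the running maximum approaches the limiting value 1 only very slowly):

import numpy as np

rng = np.random.default_rng(0)
N = 10 ** 6
S = np.cumsum(rng.normal(size=N))                 # S(n) for standard normal summands, s(n)^2 = n
n = np.arange(3, N + 1)                           # start at n = 3 so log log n is defined
ratio = S[2:] / np.sqrt(2 * n * np.log(np.log(n)))
print("max of S(n)/sqrt(2 n log log n) up to n = 1e6:", ratio.max())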
Random attractor of non-autonomous stochastic Boussinesq lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.
Amini, E; Rafiei, P; Zarei, K; Gohari, M; Hamidi, M
2013-01-01
Music is considered a subset of developmental supportive care. It may act as a suitable auditory stimulant in preterm infants. Also, it may reduce stress responses in autonomic, motor and state systems. To assess and compare the influence of lullaby and classical music on physiologic parameters. This is a randomized clinical trial with cross-over design. A total of 25 stable preterm infants with birth weight of 1000-2500 grams were studied for six consecutive days. Each infant was exposed to three phases: lullaby music, classical music, and no music (control) for two days each. The sequence of these phases was assigned randomly to each subject. Babies were continuously monitored for heart rate, respiratory rate, and oxygen saturation and changes between phases were analyzed. Lullaby reduced heart rate (p < 0.001) and respiratory rate (p = 0.004). These effects extended in the period after the exposure (p < .001 and p = 0.001, respectively). Classical music reduced heart rate (p = 0.018). The effects of classical music disappeared once the music stopped. Oxygen saturation did not change during intervention. Music can affect vital signs of preterm infants; this effect can possibly be related to the reduction of stress during hospitalization. The implications of these findings on clinical and developmental outcomes need further study.
Unbounded number of channel uses may be required to detect quantum capacity.
Cubitt, Toby; Elkouss, David; Matthews, William; Ozols, Maris; Pérez-García, David; Strelchuk, Sergii
2015-03-31
Transmitting data reliably over noisy communication channels is one of the most important applications of information theory, and is well understood for channels modelled by classical physics. However, when quantum effects are involved, we do not know how to compute channel capacities. This is because the formula for the quantum capacity involves maximizing the coherent information over an unbounded number of channel uses. In fact, entanglement across channel uses can even increase the coherent information from zero to non-zero. Here we study the number of channel uses necessary to detect positive coherent information. In all previous known examples, two channel uses already sufficed. It might be that only a finite number of channel uses is always sufficient. We show that this is not the case: for any number of uses, there are channels for which the coherent information is zero, but which nonetheless have capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.
We consider the sign problem for classical spin models at complex β = 1/g₀² on L×L lattices. We show that the tensor renormalization group method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the TRG method. We check the convergence of the TRG method for the O(2) model on L×L lattices when the number of states D_s increases. We show that the finite size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volume. The location of these zeros agrees with the Monte Carlo reweighting calculation for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.
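For orientation, Fisher zeros can be written down in closed form for a much simpler toy than the models above: the zero-field Ising ring, whose partition function Z(β) = (2 cosh β)^N + (2 sinh β)^N vanishes when (tanh β)^N = -1. The snippet below lists a few of these zeros; they approach the real axis only as β tends to infinity, consistent with the absence of a finite-temperature transition in one dimension.

import numpy as np

N = 16
k = np.arange(N)
tanh_beta = np.exp(1j * np.pi * (2 * k + 1) / N)   # (tanh beta)^N = -1
beta_zeros = np.arctanh(tanh_beta)                 # Fisher zeros in the complex beta plane
for b in beta_zeros[:4]:
    print(b)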
NASA Astrophysics Data System (ADS)
Winklhofer, M.
2007-05-01
First-order-reversal curve (FORC) diagrams have proven useful in characterizing fine magnetic particle systems in terms of microscopic switching field distributions, characteristic interaction strengths and mean-field effects. Despite the profusion of measured FORC data, we still lack a simple, generally valid recipe for the quantitative analysis of FORC diagrams, the reason being that most samples do not act like classical linear Preisach systems, giving rise to reversible magnetization changes that tend to blur contributions from irreversible switching events. A good example illustrating the confounding influence of reversible contributions are FORC diagrams for particle systems in which vortex configurations occur as remanent states. For non-interacting Fe nanodots with well-defined grain sizes around the zero-field SD/PSD transition and random easy-axis orientation, we will show how a combination of micromagnetic modelling and second-order- reversal-curves can be used to disentangle reversible and irreversible contributions to the FORC diagram. It will also be shown that remanence-based Preisach diagrams do not fully capture the irreversible parts.
Ideas of Flat and Curved Space in History of Physics
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
2006-04-01
Since "everything which is not prohibited is compulsory" (attributed to Gell-Mann), we can postulate an infinite flat Cartesian N-dimensional (N: any integer) space-time (ST) as an embedding for any curved ST. Ergodicity raises the question of whether the total number of inflationary and/or Everett bubbles (mini-verses) is finite, countably infinite (aleph-zero), or uncountably infinite (aleph-one). Do these bubbles form a Gaussian distribution, or some non-random subset? Perhaps communication between mini-verses (an idea of D. Deutsch) can be facilitated by a kind of minimax non-local dynamics akin to the Fermat principle (a Minimax Principle in Bubble Cosmology)? Even such classical effects as magnetism and polarization have some non-local features. Can we go below the Planck length, perhaps to the Compton wavelength of our "Hubble's bubble" (h/Mc = 10^-95 m, if M = 10^54 kg)? When talking about time loops and ergodicity (the eternal return paradigm), is there some hysteresis in the way quantum states are accessed in the "forward" or "reverse" direction? (The reverse direction implies the backward causality of J. Wheeler and/or Aristotelian final causation.)
Cluster-Glass Phase in Pyrochlore XY Antiferromagnets with Quenched Disorder
NASA Astrophysics Data System (ADS)
Andrade, Eric C.; Hoyos, José A.; Rachel, Stephan; Vojta, Matthias
2018-03-01
We study the impact of quenched disorder (random exchange couplings or site dilution) on easy-plane pyrochlore antiferromagnets. In the clean system, order by disorder selects a magnetically ordered state from a classically degenerate manifold. In the presence of randomness, however, different orders can be chosen locally depending on details of the disorder configuration. Using a combination of analytical considerations and classical Monte Carlo simulations, we argue that any long-range-ordered magnetic state is destroyed beyond a critical level of randomness where the system breaks into magnetic domains due to random exchange anisotropies, becoming, therefore, a glass of spin clusters, in accordance with the available experimental data. These random anisotropies originate from off-diagonal exchange couplings in the microscopic Hamiltonian, establishing their relevance to other magnets with strong spin-orbit coupling.
New approach for identifying the zero-order fringe in variable wavelength interferometry
NASA Astrophysics Data System (ADS)
Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek
2016-12-01
The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects when the optical path difference in the interference system exceeds a few wavelengths. The classical approach, which consists in measuring the deflection of interference fringes, fails because of strong edge effects. Broken continuity of the interference fringes prevents correct identification of the zero-order fringe, which leads to significant errors. The family of these methods was originally proposed by Professor Pluta in the 1980s, but at that time image-processing facilities and computers were hardly available. Automated devices open up a completely new approach to the classical measurement procedures. The Institute team has taken this new opportunity and transformed the technique into fully automated measurement devices of industry-grade quality, ready for commercial use. The method itself has been modified, and new solutions and algorithms have simultaneously extended the field of application. This concerns both the construction aspects of the systems and the software developed in the context of creating computerized instruments. The VAWI collection of instruments now constitutes the core of the Institute's commercial offer. It is now practically applicable in an industrial environment for measuring textile and optical fibers and strips of thin films, and for testing wave plates and nonlinear effects in different materials. This paper describes new algorithms for identifying the zero-order fringe, which increase the performance of the system as a whole, and presents some examples of measurements of optical elements.
Classical impurities and boundary Majorana zero modes in quantum chains
NASA Astrophysics Data System (ADS)
Müller, Markus; Nersesyan, Alexander A.
2016-09-01
We study the response of classical impurities in quantum Ising chains. The Z2 degeneracy they entail renders the existence of two decoupled Majorana modes at zero energy, an exact property of a finite system at arbitrary values of its bulk parameters. We trace the evolution of these modes across the transition from the disordered phase to the ordered one and analyze the concomitant qualitative changes of local magnetic properties of an isolated impurity. In the disordered phase, the two ground states differ only close to the impurity, and they are related by the action of an explicitly constructed quasi-local operator. In this phase the local transverse spin susceptibility follows a Curie law. The critical response of a boundary impurity is logarithmically divergent and maps to the two-channel Kondo problem, while it saturates for critical bulk impurities, as well as in the ordered phase. The results for the Ising chain translate to the related problem of a resonant level coupled to a 1d p-wave superconductor or a Peierls chain, whereby the magnetic order is mapped to topological order. We find that the topological phase always exhibits a continuous impurity response to local fields as a result of the level repulsion of local levels from the boundary Majorana zero mode. In contrast, the disordered phase generically features a discontinuous magnetization or charging response. This difference constitutes a general thermodynamic fingerprint of topological order in phases with a bulk gap.
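The zero-mode counting invoked here can be checked directly on the generic clean Kitaev / p-wave chain (an illustration of the mapping's target model, not of the impurity problem itself): diagonalizing the BdG Hamiltonian of an open chain shows a pair of (near-)zero-energy states in the topological phase and none in the trivial phase.

import numpy as np

def kitaev_spectrum(N, t, mu, delta):
    # BdG Hamiltonian of an open Kitaev chain in the (c, c^dagger) basis
    h = -mu * np.eye(N)
    D = np.zeros((N, N))
    for i in range(N - 1):
        h[i, i + 1] = h[i + 1, i] = -t
        D[i, i + 1] = delta
        D[i + 1, i] = -delta
    H = np.block([[h, D], [-D, -h]])
    return np.sort(np.abs(np.linalg.eigvalsh(H)))

print("topological (mu=0): two lowest |E| =", kitaev_spectrum(40, 1.0, 0.0, 1.0)[:2])
print("trivial (mu=3):     two lowest |E| =", kitaev_spectrum(40, 1.0, 3.0, 1.0)[:2])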
Nonrecurrence and Bell-like inequalities
NASA Astrophysics Data System (ADS)
Danforth, Douglas G.
2017-12-01
The general class, Λ, of Bell hidden variables is composed of two subclasses Λ_R and Λ_N such that Λ_R ∪ Λ_N = Λ and Λ_R ∩ Λ_N = {}. The class Λ_N is very large and contains random variables whose domain is the continuum, the reals. There are uncountably many reals. Every instance of a real random variable is unique. The probability of two instances being equal is zero, exactly zero. Λ_N induces sample independence. All correlations are context dependent, but not in the usual sense. There is no "spooky action at a distance". Random variables belonging to Λ_N are independent from one experiment to the next. The existence of the class Λ_N makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.
Role of zero-point effects in stabilizing the ground state structure of bulk Fe2P
NASA Astrophysics Data System (ADS)
Bhat, Soumya S.; Gupta, Kapil; Bhattacharjee, Satadeep; Lee, Seung-Cheol
2018-05-01
Structural stability of Fe2P is investigated in detail using first-principles calculations based on density functional theory. While the orthorhombic C23 phase is found to be energetically more stable, experiments indicate that the ground state is the hexagonal C22 phase. In the present study, we show that, in order to obtain the correct ground state structure of Fe2P from first-principles methods, it is essential to consider zero-point effects such as zero-point vibrations and spin fluctuations. This study demonstrates an exceptional case where a bulk material is stabilized by quantum effects, which are usually important in low-dimensional materials. Our results also indicate the possibility of a magnetic-field-induced structural quantum phase transition in Fe2P, which should form the basis for further theoretical and experimental efforts.
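The bookkeeping behind such a zero-point-energy reversal is simple to illustrate (all numbers below are invented, not the computed Fe2P values): adding E_ZP equal to the sum of ħω_i/2 over the modes to the static energies can reorder two phases when one of them has systematically softer modes.

import numpy as np

hbar = 6.582119569e-16      # eV s
static = {"C23": 0.000, "C22": 0.012}                        # static energies per f.u. (eV), assumed
freqs = {"C23": np.array([4.0e13, 7.0e13, 9.0e13]),          # representative frequencies (rad/s), assumed
         "C22": np.array([2.5e13, 5.0e13, 7.5e13])}
for phase in static:
    e_zp = 0.5 * hbar * freqs[phase].sum()
    print(phase, "static + ZPE =", static[phase] + e_zp, "eV")
# with these illustrative numbers the softer C22 phase ends up lower once ZPE is included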
Zero-point fluctuations in naphthalene and their effect on charge transport parameters.
Kwiatkowski, Joe J; Frost, Jarvist M; Kirkpatrick, James; Nelson, Jenny
2008-09-25
We calculate the effect of vibronic coupling on the charge transport parameters in crystalline naphthalene, between 0 and 400 K. We find that nuclear fluctuations can cause large changes in both the energy of a charge on a molecule and on the electronic coupling between molecules. As a result, nuclear fluctuations cause wide distributions of both energies and couplings. We show that these distributions have a small temperature dependence and that, even at high temperatures, vibronic coupling is dominated by the effect of zero-point fluctuations. Because of the importance of zero-point fluctuations, we find that the distributions of energies and couplings have substantial width, even at 0 K. Furthermore, vibronic coupling with high energy modes may be significant, even though these modes are never thermally activated. Our results have implications for the temperature dependence of charge mobilities in organic semiconductors.
QUANTUM MECHANICS. Quantum squeezing of motion in a mechanical resonator.
Wollman, E E; Lei, C U; Weinstein, A J; Suh, J; Kronwald, A; Marquardt, F; Clerk, A A; Schwab, K C
2015-08-28
According to quantum mechanics, a harmonic oscillator can never be completely at rest. Even in the ground state, its position will always have fluctuations, called the zero-point motion. Although the zero-point fluctuations are unavoidable, they can be manipulated. Using microwave frequency radiation pressure, we have manipulated the thermal fluctuations of a micrometer-scale mechanical resonator to produce a stationary quadrature-squeezed state with a minimum variance of 0.80 times that of the ground state. We also performed phase-sensitive, back-action evading measurements of a thermal state squeezed to 1.09 times the zero-point level. Our results are relevant to the quantum engineering of states of matter at large length scales, the study of decoherence of large quantum systems, and for the realization of ultrasensitive sensing of force and motion. Copyright © 2015, American Association for the Advancement of Science.
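Expressed in the decibel convention common in the squeezing literature, the two quoted variances correspond to roughly -1 dB and +0.4 dB relative to the zero-point level:

import numpy as np

for var_rel in (0.80, 1.09):     # variance relative to the ground-state (zero-point) variance
    print(var_rel, "of zero point =", 10 * np.log10(var_rel), "dB")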
Broken vertex symmetry and finite zero-point entropy in the artificial square ice ground state
Gliga, Sebastian; Kákay, Attila; Heyderman, Laura J.; ...
2015-08-26
In this paper, we study degeneracy and entropy in the ground state of artificial square ice. In theoretical models, individual nanomagnets are typically treated as single spins with only two degrees of freedom, leading to a twofold degenerate ground state with intensive entropy and thus no zero-point entropy. Here, we show that the internal degrees of freedom of the nanostructures can result, through edge bending of the magnetization and breaking of local magnetic symmetry at the vertices, in a transition to a highly degenerate ground state with finite zero-point entropy, similar to that of the pyrochlore spin ices. Finally, we find that these additional degrees of freedom have observable consequences in the resonant spectrum of the lattice, and predict the occurrence of edge "melting" above a critical temperature at which the magnetic symmetry is restored.
The evolving Planck mass in classically scale-invariant theories
NASA Astrophysics Data System (ADS)
Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.
2017-04-01
We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.
Identification of bearing faults using time domain zero-crossings
NASA Astrophysics Data System (ADS)
William, P. E.; Hoffman, M. W.
2011-11-01
In this paper, zero-crossing characteristic features are employed for early detection and identification of single point bearing defects in rotating machinery. As a result of bearing defects, characteristic defect frequencies appear in the machine vibration signal, normally requiring spectral analysis or envelope analysis to identify the defect type. Zero-crossing features are extracted directly from the time domain vibration signal using only the duration between successive zero-crossing intervals and do not require estimation of the rotational frequency. The features are a time domain representation of the composite vibration signature in the spectral domain. Features are normalized by the length of the observation window and classification is performed using a multilayer feedforward neural network. The model was evaluated on vibration data recorded using an accelerometer mounted on an induction motor housing subjected to a number of single point defects with different severity levels.
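A minimal sketch of the kind of feature described here, assuming nothing about the actual network architecture or dataset used in the paper: the signal is scanned for sign changes, the durations between successive zero crossings are collected, and a fixed-length histogram of those durations, normalized by the window length, serves as the feature vector.

```python
import numpy as np

def zero_crossing_features(signal, fs, n_bins=32, max_interval=0.05):
    """Histogram of durations between successive zero crossings of a vibration
    signal, normalized by the length of the observation window (seconds)."""
    signal = np.asarray(signal, dtype=float)
    sign = np.signbit(signal)
    crossings = np.where(sign[:-1] != sign[1:])[0]   # sample index before each crossing
    intervals = np.diff(crossings) / fs              # seconds between crossings
    hist, _ = np.histogram(intervals, bins=n_bins, range=(0.0, max_interval))
    return hist / (len(signal) / fs)

# Toy vibration signal: two tones plus noise (purely illustrative).
fs = 12_000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 157 * t) + 0.3 * np.sin(2 * np.pi * 1230 * t)
x += 0.05 * np.random.default_rng(0).normal(size=t.size)

features = zero_crossing_features(x, fs)
print(features.shape, features[:5])
```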
A Survey of Noninteractive Zero Knowledge Proof System and Its Applications
Wu, Huixin; Wang, Feng
2014-01-01
The zero-knowledge proof system, which has received extensive attention since it was proposed, is an important branch of cryptography and computational complexity theory. In particular, a noninteractive zero-knowledge proof system contains only one message, sent by the prover to the verifier. It is widely used in the construction of various types of cryptographic protocols and cryptographic algorithms because of its good privacy, authentication, and lower interactive complexity. This paper reviews and analyzes the basic principles of noninteractive zero-knowledge proof systems, and summarizes the research progress achieved on the following aspects: the definition and related models of noninteractive zero-knowledge proof systems, noninteractive zero-knowledge proof systems for NP problems, noninteractive statistical and perfect zero knowledge, the connection between noninteractive zero-knowledge proof systems, interactive zero-knowledge proof systems, and zaps, and the specific applications of noninteractive zero-knowledge proof systems. This paper also points out future research directions. PMID:24883407
Computer simulation of a cellular automata model for the immune response in a retrovirus system
NASA Astrophysics Data System (ADS)
Pandey, R. B.
1989-02-01
Immune response in a retrovirus system is modeled by a network of three binary cell elements that captures some of the main functional features of T4 cells, T8 cells, and viruses. Two different intercell interactions are introduced, one of which leads to three fixed points while the other yields bistable fixed points oscillating between a healthy state and a sick state in a mean-field treatment. Evolution of these cells is studied for quenched and annealed random interactions on a simple cubic lattice with a nearest-neighbor interaction using inhomogeneous cellular automata. Populations of T4 cells and viral cells oscillate together with damping (with constant amplitude) for annealed (quenched) interactions on increasing the mixing probability B from zero to a characteristic value B_ca (B_cq). For higher B, the average number of T4 cells increases while that of the virally infected cells decreases monotonically on increasing B, suggesting a phase transition at B_ca (B_cq).
Obtaining orthotropic elasticity tensor using entries zeroing method.
NASA Astrophysics Data System (ADS)
Gierlach, Bartosz; Danek, Tomasz
2017-04-01
A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except the isotropic one, this problem is nonlinear. A common method of obtaining an effective tensor is to choose its non-trivial symmetry class and minimize the Frobenius norm between the measured and effective tensors in the same coordinate system; a global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In orthotropic form, 24 out of the 36 tensor entries are zero. The idea is to minimize the sum of squared entries that are supposed to be equal to zero through a rotation calculated with an optimization algorithm - in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. In order to avoid local minima, we apply PSO several times and accept a value as correct, and finish computations, only when we obtain similar results three times. To analyze the obtained results, the Monte Carlo method was used. After thousands of single runs of the PSO optimization, we obtained values of the quaternion components and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated - each tensor entry was replaced with a random value drawn from a normal distribution having a mean equal to the measured tensor entry and a standard deviation equal to that of the measurement. Each of these tensors was subject to the PSO-based optimization delivering the quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease computational time, which enables different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entries were obtained. For the entries which were to be zeroed, we observe almost normal distributions having a mean equal to zero, or sums of two normal distributions having opposite means. The non-zero entries exhibit different distributions with two or three maxima. Analysis of the obtained results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite the less complex target function in the optimization process compared to the common approach, the entries-zeroing method provides results which can be applied to obtain an orthotropic tensor with good reliability. A modification of the method can also yield a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
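A sketch of the objective that the entries-zeroing method minimizes, assuming the measured stiffness is supplied as a full 3x3x3x3 array (the function and variable names below are ours, not the authors'): a unit quaternion is mapped to a rotation matrix, the tensor is rotated, and the cost is the sum of squares of the 24 Voigt-matrix entries that vanish for an orthotropic medium. Any global optimizer, PSO in the paper, can then be run on this function; the optimizer itself is not reproduced here.

```python
import numpy as np

VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]  # 11,22,33,23,13,12

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    q = np.asarray(q, dtype=float)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rotate_stiffness(c, rot):
    """Rotate a rank-4 stiffness tensor: c'_ijkl = R_ia R_jb R_kc R_ld c_abcd."""
    return np.einsum("ia,jb,kc,ld,abcd->ijkl", rot, rot, rot, rot, c)

def to_voigt(c):
    """Collapse the 3x3x3x3 tensor to the 6x6 Voigt matrix."""
    m = np.empty((6, 6))
    for a, (i, j) in enumerate(VOIGT):
        for b, (k, l) in enumerate(VOIGT):
            m[a, b] = c[i, j, k, l]
    return m

# The 24 entries of the 6x6 matrix that must be zero for an orthotropic medium
# (everything outside the upper-left 3x3 block and the 44/55/66 diagonal).
ORTHO_ZEROS = [(a, b) for a in range(6) for b in range(6)
               if not (a < 3 and b < 3) and a != b]

def zeroing_cost(q, c_measured):
    """Sum of squared 'should-be-zero' entries after rotating by quaternion q."""
    m = to_voigt(rotate_stiffness(c_measured, quat_to_rot(q)))
    return sum(m[a, b] ** 2 for a, b in ORTHO_ZEROS)

# zeroing_cost can now be handed to any global optimizer over unit quaternions,
# e.g. PSO as in the paper (not reproduced here).
```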
Axion as a cold dark matter candidate: analysis to third order perturbation for classical axion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noh, Hyerim; Hwang, Jai-chan; Park, Chan-Gyung, E-mail: hr@kasi.re.kr, E-mail: jchan@knu.ac.kr, E-mail: park.chan.gyung@gmail.com
2015-12-01
We investigate aspects of the axion as a coherently oscillating massive classical scalar field by analyzing third order perturbations in Einstein's gravity in the axion-comoving gauge. The axion fluid has its characteristic pressure term leading to an axion Jeans scale which is cosmologically negligible for a canonical axion mass. Our classically derived axion pressure term in Einstein's gravity is identical to the one derived in the non-relativistic quantum mechanical context in the literature. We present the general relativistic continuity and Euler equations for an axion fluid valid up to third order perturbation. Equations for the axion are exactly the same as those of a zero-pressure fluid in Einstein's gravity except for an axion pressure term in the Euler equation. Our analysis includes the cosmological constant.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1980-01-01
Population model coefficients were chosen to simulate a saturated 2 to the 4th fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.
Zhou, Xu; Wang, Qilin; Jiang, Guangming; Liu, Peng; Yuan, Zhiguo
2015-06-01
Improvement of sludge dewaterability is crucial for reducing the costs of sludge disposal in wastewater treatment plants. This study presents a novel conditioning method for improving waste activated sludge dewaterability by a combination of persulfate and zero-valent iron. The combination of zero-valent iron (0-30 g/L) and persulfate (0-6 g/L) under neutral pH substantially enhanced the sludge dewaterability due to advanced oxidation reactions. The highest enhancement of sludge dewaterability was achieved at 4 g persulfate/L and 15 g zero-valent iron/L, with which the capillary suction time was reduced by over 50%. The release of soluble chemical oxygen demand during the conditioning process implied the decomposition of sludge structure and microorganisms, which facilitated the improvement of dewaterability due to the release of bound water held within the sludge structure and microorganisms. Economic analysis showed that the proposed conditioning process with persulfate and ZVI is more economically favorable for improving WAS dewaterability than the classical Fenton reagent. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quantum rewinding via phase estimation
NASA Astrophysics Data System (ADS)
Tabia, Gelo Noel
2015-03-01
In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask whether classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to showing the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewound to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input, but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique for the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.
Deng, Xinyang; Jiang, Wen; Zhang, Jiandong
2017-01-01
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be inexact or uncertain, which requires that the model of matrix games has the ability to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possible computation-intensive issue in the proposed decomposition method, as an alternative solution, a Monte Carlo simulation approach is presented, as well. Finally, the proposed zero-sum matrix games with payoffs of Dempster–Shafer belief structures is illustratively applied to the sensor selection and intrusion detection of sensor networks, which shows its effectiveness and application process. PMID:28430156
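A hedged sketch of the Monte Carlo route mentioned in the abstract, not the authors' decomposition method: each belief-structure payoff is crudely approximated here by an interval, a crisp payoff matrix is sampled from those intervals, each sampled game is solved by the standard linear-programming formulation of a zero-sum matrix game, and the sampled game values are aggregated. All names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value of the zero-sum game with payoff matrix A (row player maximizes).

    Shift payoffs to be positive, solve the classical LP
    min 1^T x  s.t.  A^T x >= 1, x >= 0, then value = 1/sum(x) - shift.
    """
    A = np.asarray(A, dtype=float)
    shift = 1.0 - A.min() if A.min() <= 0 else 0.0
    B = A + shift
    m, n = B.shape
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return 1.0 / res.x.sum() - shift

def monte_carlo_game_value(lower, upper, n_samples=2000, seed=0):
    """Sample crisp payoff matrices uniformly from [lower, upper] entrywise
    (a stand-in for sampling from belief structures) and solve each one."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    values = [solve_zero_sum(rng.uniform(lower, upper)) for _ in range(n_samples)]
    return np.mean(values), np.std(values)

# Toy 2x2 game with interval-valued payoffs (illustrative numbers only).
lo = [[2.0, -1.0], [-2.0, 1.0]]
hi = [[3.0,  0.0], [-1.0, 2.0]]
print(monte_carlo_game_value(lo, hi))
```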
Global stability of steady states in the classical Stefan problem for general boundary shapes
Hadžić, Mahir; Shkoller, Steve
2015-01-01
The classical one-phase Stefan problem (without surface tension) allows for a continuum of steady-state solutions, given by an arbitrary (but sufficiently smooth) domain together with zero temperature. We prove global-in-time stability of such steady states, assuming a sufficient degree of smoothness on the initial domain, but without any a priori restriction on the convexity properties of the initial shape. This is an extension of our previous result (Hadžić & Shkoller 2014 Commun. Pure Appl. Math. 68, 689–757 (doi:10.1002/cpa.21522)) in which we studied nearly spherical shapes. PMID:26261359
Efficient Quantum Pseudorandomness.
Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał
2016-04-29
Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.
Moreno, J P; Johnston, C A; Hernandez, D C; LeNoble, J; Papaioannou, M A; Foreyt, J P
2016-10-01
While overweight and obese children are more likely to have overweight or obese parents, less is known about the effect of parental weight status on children's success in weight management programmes. This study was a secondary data analysis of a randomized controlled trial and investigated the impact of having zero, one or two obese parents on children's success in a school-based weight management programme. Sixty-one Mexican-American children participated in a 24-week school-based weight management intervention which took place in 2005-2006. Children's heights and weights were measured at baseline, 3, 6 and 12 months. Parental weight status was assessed at baseline. Repeated measures anova and ancova were conducted to compare changes in children's weight within and between groups, respectively. Within-group comparisons revealed that the intervention led to significant decreases in standardized body mass index (zBMI) for children with zero (F = 23.16, P < .001) or one obese (F = 4.99, P < .05) parent. Between-group comparisons indicated that children with zero and one obese parents demonstrated greater decreases in zBMI compared to children with two obese parents at every time point. The school-based weight management programme appears to be most efficacious for children with one or no obese parents compared to children with two obese parents. These results demonstrate the need to consider parental weight status when engaging in childhood weight management efforts. © 2015 World Obesity.
Perpetual Points: New Tool for Localization of Coexisting Attractors in Dynamical Systems
NASA Astrophysics Data System (ADS)
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
Perpetual points (PPs) are special critical points for which the magnitude of the acceleration describing the dynamics drops to zero while the motion is still possible (stationary points are excluded); e.g., for the motion of a particle in a potential field, at a perpetual point the particle has zero acceleration and nonzero velocity. We show that using PPs we can trace all the stable fixed points in the system, and that the structure of trajectories leading from former points to stable equilibria may be similar to orbits obtained from unstable stationary points. Moreover, we argue that the concept of perpetual points may be useful in tracing unexpected attractors (hidden or rare attractors with small basins of attraction). We show the potential applicability of this approach by analyzing several representative systems of physical significance, including the damped oscillator, pendula, and the Henon map. We suggest that perpetual points may be a useful tool for localizing coexisting attractors in dynamical systems.
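For a flow dX/dt = F(X) the acceleration is d²X/dt² = J_F(X) F(X), so perpetual points as defined above are solutions of J_F(X) F(X) = 0 at which F(X) is nonzero. A small symbolic sketch for an illustrative two-dimensional system (a Duffing-like damped oscillator chosen for simplicity, not one of the specific systems analyzed in the paper):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
a = sp.Rational(1, 2)                     # damping constant (arbitrary choice)

# Illustrative 2D flow; this is NOT one of the systems analyzed in the paper.
F = sp.Matrix([y, -a * y - x + x**3])

J = F.jacobian(sp.Matrix([x, y]))
accel = J * F                             # d^2 X / dt^2 = J_F(X) * F(X)

candidates = sp.solve(list(accel), [x, y], dict=True)
# Perpetual points: acceleration is zero but the velocity F is nonzero,
# i.e. exclude the ordinary fixed points from the candidate set.
perpetual = [s for s in candidates
             if any(sp.simplify(f.subs(s)) != 0 for f in F)]
print(perpetual)
```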
Redundant Information and the Quantum-Classical Transition
ERIC Educational Resources Information Center
Riedel, Charles Jess
2012-01-01
A state selected at random from the Hilbert space of a many-body system is overwhelmingly likely to exhibit highly non-classical correlations. For these typical states, half of the environment must be measured by an observer to determine the state of a given subsystem. The objectivity of classical reality--the fact that multiple observers can each…
Floating-point performance of ARM cores and their efficiency in classical molecular dynamics
NASA Astrophysics Data System (ADS)
Nikolskiy, V.; Stegailov, V.
2016-02-01
Supercomputing of the exascale era is going to be inevitably limited by power efficiency. Nowadays different possible variants of CPU architectures are considered. Recently the development of ARM processors has come to the point when their floating point performance can be seriously considered for a range of scientific applications. In this work we present the analysis of the floating point performance of the latest ARM cores and their efficiency for the algorithms of classical molecular dynamics.
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
1998-02-01
Using classic results of algebraic geometry for birational mappings of the plane CP^2, we present a general approach to algebraic integrability of autonomous dynamical systems in C^2 with discrete time and systems of two autonomous functional equations for meromorphic functions in one complex variable defined by birational maps in C^2. General theorems defining the invariant curves, the dynamics of a birational mapping, and a general theorem about necessary and sufficient conditions for integrability of birational plane mappings are proved on the basis of a new idea: a decomposition of the orbit set of indeterminacy points of direct maps relative to the action of the inverse mappings. A general method of generating integrable mappings and their rational integrals (invariants) I is proposed. Numerical characteristics N_k of intersections of the orbits Φ_n^{-k} O_i of fundamental or indeterminacy points O_i ∈ O ∩ S of the mapping Φ_n, where O = {O_i} is the set of indeterminacy points of Φ_n and S is a similar set for the invariant I, with the corresponding set O' ∩ S, where O' = {O'_i} is the set of indeterminacy points of the inverse mapping Φ_n^{-1}, are introduced. Using the proposed method we obtain all nine integrable multiparameter quadratic birational reversible mappings with zero fixed point and linear projective symmetry S = CΛC^{-1}, Λ = diag(±1), with rational invariants generated by invariant straight lines and conics. The relations of the numbers N_k with such numerical characteristics of discrete dynamical systems as the Arnold complexity and their integrability are established for the integrable mappings obtained. The Arnold complexities of the integrable mappings obtained are determined. The main results are presented in Theorems 2-5, in Tables 1 and 2, and in Appendix A.
On the contribution of intramolecular zero point energy to the equation of state of solid H2
NASA Technical Reports Server (NTRS)
Chandrasekharan, V.; Etters, R. D.
1978-01-01
Experimental evidence shows that the internal zero-point energy of the H2 molecule exhibits a relatively strong pressure dependence in the solid as well as changing considerably upon condensation. It is shown that these effects contribute about 6% to the total sublimation energy and to the pressure in the solid state. Methods to modify the ab initio isolated pair potential to account for these environmental effects are discussed.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.
Kongsawatvorakul, Chompunoot; Charakorn, Chuenkamon; Paiwattananupant, Krissada; Lekskul, Navamol; Rattanasiri, Sasivimol; Lertkhachonsuk, Arb-Aroon
2016-01-01
Many studies have pointed to strategies to cope with patient anxiety in colposcopy. Evidence shows that patients experience considerable distress with the large loop excision of the transformation zone (LLETZ) procedure, and suitable interventions should be introduced to reduce anxiety. This study aimed to investigate the effects of music therapy in patients undergoing LLETZ. A randomized controlled trial was conducted with patients undergoing LLETZ performed under local anesthesia in an outpatient setting at Ramathibodi Hospital, Bangkok, Thailand, from February 2015 to January 2016. After informed consent and demographic data were obtained, we assessed the anxiety level using the State Anxiety Inventory pre and post procedure. Music group patients listened to classical songs through headphones, while the control group received standard care. Pain score was evaluated with a visual analog scale (VAS). Statistical analysis was conducted using Pearson chi-square, Fisher's exact test and t-test, and p-values less than 0.05 were considered statistically significant. A total of 73 patients were enrolled and randomized, resulting in 36 women in the music group and 37 women in the non-music control group. The preoperative mean anxiety score was higher in the music group (46.8 vs 45.8 points). The postoperative mean anxiety scores in the music and the non-music groups were 38.7 and 41.3 points, respectively. VAS was lower in the music group (2.55 vs 3.33). The percent change in anxiety was greater in the music group, although there was no significant difference between the two groups. Music therapy did not significantly reduce anxiety in patients undergoing the LLETZ procedure. However, different interventions should be developed to ease patients' apprehension during this procedure.
Trends: Bearding the Proverbial Lion.
ERIC Educational Resources Information Center
Greckel, Wil
1989-01-01
Describes the use of television commercials to teach classical music. Points out that a large number of commercials use classical selections which can serve as a starting point for introducing students to this form. Urges music educators to broaden their views and use these truncated selections to transmit our cultural heritage. (KO)
Amplitude- and rise-time-compensated filters
Nowlin, Charles H.
1984-01-01
An amplitude-compensated rise-time-compensated filter for a pulse time-of-occurrence (TOOC) measurement system is disclosed. The filter converts an input pulse, having the characteristics of random amplitudes and random, non-zero rise times, to a bipolar output pulse wherein the output pulse has a zero-crossing time that is independent of the rise time and amplitude of the input pulse. The filter differentiates the input pulse, along the linear leading edge of the input pulse, and subtracts therefrom a pulse fractionally proportional to the input pulse. The filter of the present invention can use discrete circuit components and avoids the use of delay lines.
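The amplitude and rise-time independence can be checked with a toy calculation, assuming the idealized linear leading edge described above: for a ramp x(t) = (A/T) t, the output y(t) = dx/dt - k x(t) = (A/T)(1 - k t) crosses zero at t = 1/k regardless of A and T, provided 1/k falls within the rise. A small numerical check (all parameter values are invented):

```python
import numpy as np

def zero_crossing_time(amplitude, rise_time, k=2.0e6, fs=1.0e9):
    """Simulate y = dx/dt - k*x on the linear leading edge of a ramp pulse
    and return the time of its first zero crossing."""
    t = np.arange(0.0, rise_time, 1.0 / fs)
    x = amplitude * t / rise_time                 # linear leading edge
    y = np.gradient(x, t) - k * x                 # differentiate, subtract k*x
    idx = np.where((y[:-1] > 0) & (y[1:] <= 0))[0][0]
    return t[idx]

for A, T in [(1.0, 2e-6), (5.0, 2e-6), (1.0, 4e-6), (7.3, 3e-6)]:
    print(f"A={A}, T={T}: zero crossing at {zero_crossing_time(A, T):.3e} s "
          f"(expected {1 / 2.0e6:.3e} s)")
```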
Phase transition in nonuniform Josephson arrays: Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Pomirchy, L. M.
1994-01-01
A disordered 2D system with Josephson interactions is considered. The disordered XY model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants J_ij are equal to J with probability p or zero otherwise (the bond percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behavior of the potential energy, specific heat, phase correlation function and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.
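A minimal Metropolis sketch of the first type of disorder (bond dilution), with illustrative parameters only: each bond carries coupling J with probability p or zero otherwise, and single-spin updates estimate the mean energy at a given temperature. Specific heat, correlation functions and the helicity modulus would be accumulated on top of the same loop; none of the choices below are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, p, T, sweeps = 16, 1.0, 0.7, 0.9, 2000      # illustrative parameters

theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))   # XY phases
# Quenched diluted couplings on right- and down-pointing bonds.
Jx = J * (rng.random((L, L)) < p)
Jy = J * (rng.random((L, L)) < p)

def local_energy(th, i, j):
    """Energy of all bonds touching site (i, j): E = -sum J_ij cos(th_i - th_j)."""
    e = 0.0
    e -= Jx[i, j] * np.cos(th[i, j] - th[i, (j + 1) % L])
    e -= Jx[i, (j - 1) % L] * np.cos(th[i, j] - th[i, (j - 1) % L])
    e -= Jy[i, j] * np.cos(th[i, j] - th[(i + 1) % L, j])
    e -= Jy[(i - 1) % L, j] * np.cos(th[i, j] - th[(i - 1) % L, j])
    return e

energies = []
for sweep in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = theta[i, j]
        e_old = local_energy(theta, i, j)
        theta[i, j] = rng.uniform(0.0, 2.0 * np.pi)
        # Metropolis acceptance with probability min(1, exp(-(E_new - E_old)/T)).
        if rng.random() >= np.exp(min(0.0, (e_old - local_energy(theta, i, j)) / T)):
            theta[i, j] = old                     # reject the move
    if sweep > sweeps // 2:                       # crude equilibration cut-off
        e_tot = -np.sum(Jx * np.cos(theta - np.roll(theta, -1, axis=1)))
        e_tot -= np.sum(Jy * np.cos(theta - np.roll(theta, -1, axis=0)))
        energies.append(e_tot / L**2)

print("mean energy per site:", np.mean(energies))
```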
Kattawar, G W; Plass, G N; Hitzfelder, S J
1976-03-01
The complete radiation field including polarization is calculated by the matrix operator method for scattering layers of various optical thicknesses. Results obtained for Rayleigh scattering are compared with those for scattering from a continental haze. Radiances calculated using Stokes vectors show differences as large as 23% compared to the approximate scalar theory of radiative transfer, while the same differences are only of the order of 0.1% for a continental haze phase function. The polarization of the reflected and transmitted radiation is given for a wide range of optical thicknesses of the scattering layer, for various solar zenith angles, and various surface albedos. Two entirely different types of neutral points occur for aerosol phase functions. Rayleigh-like neutral points (RNP) arise from the zero polarization in single scattering that occurs for all phase functions at scattering angles of 0 degrees and 180 degrees . For Rayleigh phase functions, the position of the RNP varies appreciably with the optical thickness of the scattering layer. At low solar elevations there may be four RNP. For a continental haze phase function the position of the RNP in the reflected radiation shows only a small variation with the optical thickness, and the RNP exists in the transmitted radiation only for extremely small optical thicknesses. Another type of neutral point (NRNP) exists for aerosol phase functions. It is associated with the zeros of the single scattered polarization, which occur between the end points of the curve; these are called non-Rayleigh neutral points (NRNP). There may be from zero to four of these neutral points associated with each zero of the single scattering curve. They occur over a range of azimuthal angles, unlike the RNP that are in the principal plane only. The position of these neutral points is given as a function of solar angle and optical thickness.
pH-dependent surface charging and points of zero charge. IV. Update and new approach.
Kosmulski, Marek
2009-09-15
The recently published points of zero charge (PZC) and isoelectric points (IEPs) of various materials are compiled to update the previous compilation [M. Kosmulski, Surface Charging and Points of Zero Charge, CRC Press, Boca Raton, FL, 2009]. Unlike in previous compilations by the same author [Chemical Properties of Material Surfaces, Dekker, New York, 2001; J. Colloid Interface Sci. 253 (2002) 77; J. Colloid Interface Sci. 275 (2004) 214; J. Colloid Interface Sci. 298 (2006) 730], the materials are sorted not only by the chemical formula, but also by specific product, that is, by brand name (commercially available materials), and by recipe (home-synthesized materials). This new approach indicated that the relatively consistent PZC/IEP reported in the literature for materials having the same chemical formula are due to biased choice of specimens to be studied. Specimens which have PZC/IEP close to the "recommended" value are selected more often than other specimens (PZC/IEP not reported before or PZC/IEP reported, but different from the "recommended" value). Thus, the previously published PZC/IEP act as a self-fulfilling prophecy.
Simulations and observations of cloudtop processes
NASA Technical Reports Server (NTRS)
Siems, S. T.; Bretherton, C. S.; Baker, M. B.
1990-01-01
Turbulent entrainment at zero mean shear stratified interfaces has been studied extensively in the laboratory and theoretically for the classical situation in which density is a passive tracer of the mixing and the turbulent motions producing the entrainment are directed toward the interface. It is the purpose of the numerical simulations and data analysis to investigate these processes and, specifically, to focus on the following questions: (1) Can local cooling below cloudtop play an important role in setting up convective circulations within the cloud, and bringing about entrainment; (2) Can Cloudtop Entrainment Instability (CEI) alone lead to runaway entrainment under geophysically realistic conditions; and (3) What are the important mechanisms of entrainment at cloudtop under zero or low mean shear conditions.
NASA Technical Reports Server (NTRS)
Page, L. W.; From, T. P.
1977-01-01
The behavior of liquids in zero gravity environments is discussed with emphasis on foams, wetting, and wicks. A multipurpose electric furnace (MA-010) for the high temperature processing of metals and salts in zero-g is described. Experiments discussed include: monotectic and syntectic alloys (MA-041); multiple material melting point (MA-150); zero-g processing of metals (MA-070); surface tension induced convection (MA-041); halide eutectic growth; interface markings in crystals (MA-060); crystal growth from the vapor phase (MA-085); and photography of crystal growth (MA-028).
NASA Astrophysics Data System (ADS)
Hiebl, Johann; Frei, Christoph
2018-04-01
Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back till 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts on long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.
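A compact sketch of the daily (second-tier) step as described above, assuming the monthly background field is already gridded: daily station values are converted to relative anomalies against each station's monthly mean, the anomalies are interpolated with distance and directional weights, and the gridded anomaly rescales the monthly background. The directional-isolation term below is a simplified stand-in for the full angular-distance-weighting scheme, and all names and values are ours.

```python
import numpy as np

def adw_weights(target, stations, m=2.0):
    """Simplified angular-distance weights: an inverse-distance term times a
    directional-isolation term (stations shielded by others get less weight).
    Assumes at least two stations."""
    d = np.linalg.norm(stations - target, axis=1)
    dist_w = 1.0 / np.maximum(d, 1e-9) ** m
    az = np.arctan2(stations[:, 1] - target[1], stations[:, 0] - target[0])
    ang = np.empty_like(dist_w)
    for k in range(len(stations)):
        others = np.delete(np.arange(len(stations)), k)
        ang[k] = 1.0 + (np.sum(dist_w[others] * (1 - np.cos(az[k] - az[others])))
                        / np.sum(dist_w[others]))
    w = dist_w * ang
    return w / w.sum()

def daily_grid(grid_xy, monthly_grid, station_xy, station_daily, station_monthly):
    """Tier 2: interpolate relative anomalies and rescale the monthly background."""
    anomalies = station_daily / np.maximum(station_monthly, 1e-9)
    out = np.empty(len(grid_xy))
    for g, xy in enumerate(grid_xy):
        w = adw_weights(np.asarray(xy, float), station_xy)
        out[g] = monthly_grid[g] * np.sum(w * anomalies)
    return out

# Toy call with three stations and two grid points (all coordinates invented).
st_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
grid = np.array([[2.0, 3.0], [8.0, 1.0]])
print(daily_grid(grid, np.array([120.0, 95.0]), st_xy,
                 station_daily=np.array([4.0, 0.0, 2.5]),
                 station_monthly=np.array([110.0, 100.0, 90.0])))
```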
Rigorous derivation of electromagnetic self-force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gralla, Samuel E.; Harte, Abraham I.; Wald, Robert M.
2009-07-15
During the past century, there has been considerable discussion and analysis of the motion of a point charge in an external electromagnetic field in special relativity, taking into account 'self-force' effects due to the particle's own electromagnetic field. We analyze the issue of 'particle motion' in classical electromagnetism in a rigorous and systematic way by considering a one-parameter family of solutions to the coupled Maxwell and matter equations corresponding to having a body whose charge-current density J^a(λ) and stress-energy tensor T_ab(λ) scale to zero size in an asymptotically self-similar manner about a worldline γ as λ → 0. In this limit, the charge, q, and total mass, m, of the body go to zero, and q/m goes to a well-defined limit. The Maxwell field F_ab(λ) is assumed to be the retarded solution associated with J^a(λ) plus a homogeneous solution (the 'external field') that varies smoothly with λ. We prove that the worldline γ must be a solution to the Lorentz force equations of motion in the external field F_ab(λ=0). We then obtain self-force, dipole forces, and spin force as first-order perturbative corrections to the center-of-mass motion of the body. We believe that this is the first rigorous derivation of the complete first-order correction to Lorentz force motion. We also address the issue of obtaining a self-consistent perturbative equation of motion associated with our perturbative result, and argue that the self-force equations of motion that have previously been written down in conjunction with the 'reduction of order' procedure should provide accurate equations of motion for a sufficiently small charged body with negligible dipole moments and spin. (There is no corresponding justification for the non-reduced-order equations.) We restrict consideration in this paper to classical electrodynamics in flat spacetime, but there should be no difficulty in extending our results to the motion of a charged body in an arbitrary globally hyperbolic curved spacetime.
Time series, correlation matrices and random matrix models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis, or minimum-information hypothesis, for the description of a quantum system or subsystem. In the former case, one considers various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. As a consequence, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case, we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
Game, Madhuri D.; Gabhane, K. B.; Sakarkar, D. M.
2010-01-01
A simple, accurate and precise spectrophotometric method has been developed for simultaneous estimation of clopidogrel bisulphate and aspirin by employing the first-order derivative zero-crossing method. The first-order derivative absorbance at 232.5 nm (zero-crossing point of aspirin) was used for clopidogrel bisulphate, and that at 211.3 nm (zero-crossing point of clopidogrel bisulphate) for aspirin. Both drugs obeyed linearity in the concentration range of 5.0 μg/ml to 25.0 μg/ml (correlation coefficient r2<1). No interference was found between the two determined constituents or with those of the matrix. The method was validated statistically and recovery studies were carried out to confirm the accuracy of the method. PMID:21969765
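The zero-crossing principle can be illustrated with synthetic spectra (Gaussian bands with invented positions and widths, not the real absorption profiles of either drug): at a wavelength where the first derivative of component B's spectrum is zero, the first derivative of a mixture spectrum depends only on component A's concentration, so a calibration line built at that wavelength reads A without interference from B.

```python
import numpy as np

wl = np.linspace(200.0, 280.0, 801)              # wavelength grid, nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Synthetic unit-concentration spectra for two components (illustrative only).
spec_A = 1.0 * band(225.0, 9.0)                  # stand-in for clopidogrel
spec_B = 0.8 * band(240.0, 12.0)                 # stand-in for aspirin

dB = np.gradient(spec_B, wl)
zc = wl[np.argmin(np.abs(dB))]                   # zero-crossing of B's first derivative

for cA, cB in [(5, 5), (5, 25), (15, 10), (25, 5)]:
    d_mix = np.gradient(cA * spec_A + cB * spec_B, wl)
    val = np.interp(zc, wl, d_mix)
    print(f"cA={cA}, cB={cB}: first derivative at {zc:.1f} nm = {val:.4f}")
# The reading at zc scales with cA and is insensitive to cB, which is the
# basis of the zero-crossing calibration.
```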
Time delay of critical images in the vicinity of cusp point of gravitational-lens systems
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Zhdanov, V.
2016-12-01
We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero, first and second approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The formula of the zero approximation was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). In the case of a general lens potential we derived the first-order correction to it. If the potential is symmetric with respect to the cusp axis, then this correction is identically zero; for this case, we obtained the second-order correction. The relations found are illustrated by a simple model example.
Antiswarming: Structure and dynamics of repulsive chemically active particles
NASA Astrophysics Data System (ADS)
Yan, Wen; Brady, John F.
2017-12-01
Chemically active Brownian particles with surface catalytic reactions may repel each other due to diffusiophoretic interactions in the reaction and product concentration fields. The system behavior can be described by a "chemical" coupling parameter Γc that compares the strength of diffusiophoretic repulsion to Brownian motion, and by a mapping to the classical electrostatic one component plasma (OCP) system. When confined to a constant-volume domain, body-centered cubic (bcc) crystals spontaneously form from random initial configurations when the repulsion is strong enough to overcome Brownian motion. Face-centered cubic (fcc) crystals may also be stable. The "melting point" of the "liquid-to-crystal transition" occurs at Γc≈140 for both bcc and fcc lattices.
Suez Canal Clearance Operation, Task Force 65
1975-05-01
supporting minesweepers, GARDENIA, GIROFLEE, AJONC, and LILAS, and two minehunters CERES and CALLIOPE. TG SIX FIVE POINT ZERO. This Task Group designation...circle search line, buoy line, tether, zero visibility, and no communication with the surface, created a hazardous situation for open circuit scuba...from essentially zero to several hundred thousand in Port Said and Suez City, and to a lesser degree in Ismailia. This occurred without a concomitant
Determination of point of zero charge of natural organic materials.
Bakatula, Elisee Nsimba; Richard, Dominique; Neculita, Carmen Mihaela; Zagury, Gerald J
2018-03-01
This study evaluates different methods to determine points of zero charge (PZCs) on five organic materials, namely maple sawdust, wood ash, peat moss, compost, and brown algae, used for the passive treatment of contaminated neutral drainage effluents. The PZC provides important information about metal sorption mechanisms. Three methods were used: (1) the salt addition method, measuring the PZC; (2) the zeta potential method, measuring the isoelectric point (IEP); (3) the ion adsorption method, measuring the point of zero net charge (PZNC). Natural kaolinite and synthetic goethite were also tested with both the salt addition and the ion adsorption methods in order to validate experimental protocols. Results obtained from the salt addition method in 0.05 M NaNO 3 were the following: 4.72 ± 0.06 (maple sawdust), 9.50 ± 0.07 (wood ash), 3.42 ± 0.03 (peat moss), 7.68 ± 0.01 (green compost), and 6.06 ± 0.11 (brown algae). Both the ion adsorption and the zeta potential methods failed to give points of zero charge for these substrates. The PZC of kaolinite (3.01 ± 0.03) was similar to the PZNC (2.9-3.4) and fell within the range of values reported in the literature (2.7-4.1). As for the goethite, the PZC (10.9 ± 0.05) was slightly higher than the PZNC (9.0-9.4). The salt addition method has been found appropriate and convenient to determine the PZC of natural organic substrates.
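A small numerical sketch of the salt addition method as described above: a series of initial pH values is prepared in the background electrolyte, the pH change after adding the solid and equilibrating is recorded, and the PZC is read off as the initial pH at which the pH change crosses zero. The data points below are invented for illustration.

```python
import numpy as np

# Invented salt-addition data: initial pH of the 0.05 M NaNO3 background
# electrolyte and the pH change (final - initial) after equilibration with
# the substrate.
ph_initial = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
delta_ph   = np.array([1.1, 0.8, 0.4, 0.1, -0.3, -0.8, -1.4])

# PZC = initial pH at which delta-pH crosses zero (linear interpolation
# between the two bracketing points).
i = np.where(np.diff(np.sign(delta_ph)) != 0)[0][0]
pzc = np.interp(0.0,
                [delta_ph[i + 1], delta_ph[i]],        # ascending delta-pH
                [ph_initial[i + 1], ph_initial[i]])
print(f"estimated PZC = {pzc:.2f}")
```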
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.
Full dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic, zero point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower level electronic structure methods (B3LYP and MP2).
Vetoshkin, Evgeny; Babikov, Dmitri
2007-09-28
For the first time Feshbach-type resonances important in recombination reactions are characterized using the semiclassical wave packet method. This approximation allows us to determine the energies, lifetimes, and wave functions of the resonances and also to observe a very interesting correlation between them. Most important is that this approach permits description of a quantum delta-zero-point energy effect in recombination reactions and reproduces the anomalous rates of ozone formation.
Harding, Lawrence B; Georgievskii, Yuri; Klippenstein, Stephen J
2017-06-08
Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. The resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).
Meshless Local Petrov-Galerkin Method for Solving Contact, Impact and Penetration Problems
2006-11-30
Crack Growth 3 point of view, this approach makes the full use of the existing FE models to avoid any model regeneration, which is extremely high in...process, at point C, the pressure reduces to zero, but the volumetric strain does not go to zero due to the collapsed void volume. 2.2 Damage...lease rate to go beyond the critical strain energy release rate. Thus, the micro-cracks begin to grow inside these areas. At 10 micro-seconds, these
Teleporting entanglements of cavity-field states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pires, Geisa; Baseia, B.; Almeida, N.G. de
2004-08-01
We present a scheme to teleport an entanglement of zero- and one-photon states from one cavity to another. The scheme, which has 100% success probability, relies on two perfect and identical bimodal cavities, a collection of two kinds of two-level atoms, a three-level atom in a ladder configuration driven by a classical field, Ramsey zones, and selective atomic-state detectors.
Equilibrium Fluid Interface Behavior Under Low- and Zero-Gravity Conditions. 2
NASA Technical Reports Server (NTRS)
Concus, Paul; Finn, Robert
1996-01-01
The mathematical basis for the forthcoming Angular Liquid Bridge investigation on board Mir is described. Our mathematical work is based on the classical Young-Laplace-Gauss formulation for an equilibrium free surface of liquid partly filling a container or otherwise in contact with solid support surfaces. The anticipated liquid behavior used in the apparatus design is also illustrated.
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among them environmental monitoring. One difficult task is how to acquire accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One notable advantage of this method compared with the traditional one is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and to reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test it on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times better efficiency with no loss of accuracy compared to traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from a serious dependence on the initial values, which can prevent it from converging quickly or converging to a stable state. In contrast, the APCP method can deal with quite complex UAS configurations encountered in monitoring because it represents points in space with angles, including the case in which sequential images of one object have zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP and derives the full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data and shows that the new model can effectively overcome bottlenecks of the classical method to a certain degree; that is, it provides a new idea and solution for faster and more efficient environmental monitoring.
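A sketch of how a point encoded by the three angles might be mapped back to Cartesian coordinates. The abstract does not give the exact convention, so the anchor-based construction below follows the commonly used parallax-angle parametrization and should be read as an assumption: the elevation/azimuth pair fixes the ray direction from a main anchor camera, and the sine rule in the triangle formed by the two anchors and the point converts the parallax angle into a depth.

```python
import numpy as np

def point_from_parallax(elevation, azimuth, parallax, c_main, c_assoc):
    """Recover XYZ from (elevation, azimuth, parallax) angles, in radians.

    elevation/azimuth define the unit ray n from the main anchor c_main;
    parallax is the angle at the point between the rays from the two anchors.
    Sine rule in the triangle (c_main, c_assoc, X):
        depth = |baseline| * sin(alpha + parallax) / sin(parallax),
    where alpha is the angle between n and the baseline.
    """
    n = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    baseline = np.asarray(c_assoc, float) - np.asarray(c_main, float)
    alpha = np.arccos(np.clip(n @ baseline / np.linalg.norm(baseline), -1.0, 1.0))
    depth = np.linalg.norm(baseline) * np.sin(alpha + parallax) / np.sin(parallax)
    return np.asarray(c_main, float) + depth * n

# Round-trip check with made-up geometry: encode a point, then recover it.
X_true = np.array([10.0, 4.0, 2.0])
c_main, c_assoc = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
r_main, r_assoc = X_true - c_main, X_true - c_assoc
elevation = np.arcsin(r_main[2] / np.linalg.norm(r_main))
azimuth = np.arctan2(r_main[1], r_main[0])
parallax = np.arccos(r_main @ r_assoc /
                     (np.linalg.norm(r_main) * np.linalg.norm(r_assoc)))
print(point_from_parallax(elevation, azimuth, parallax, c_main, c_assoc))
```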
NASA Technical Reports Server (NTRS)
Kattawar, G. W.; Plass, G. N.; Hitzfelder, S. J.
1975-01-01
The complete radiation field is calculated for scattering layers of various optical thicknesses. Results obtained for Rayleigh and haze scattering are compared. Calculated radiances show differences as large as 23% compared to the approximate scalar theory of radiative transfer, while the same differences are approximately 0.1% for a continental haze phase function. The polarization of reflected and transmitted radiation is given for various optical thicknesses, solar zenith angles, and surface albedos. Two types of neutral points occur for aerosol phase functions. Rayleigh-like neutral points arise from zero polarization that occurs at scattering angles of 0 deg and 180 deg. For Rayleigh phase functions, the position of these points varies with the optical thickness of the scattering layer. Non-Rayleigh neutral points are associated with the zeros of polarization which occur between the end points of the single scattering curve, and are found over a wide range of azimuthal angles.
NASA Astrophysics Data System (ADS)
Miller, Steven David
1999-10-01
A consistent extension of the Oppenheimer-Snyder gravitational collapse formalism is presented which incorporates stochastic, conformal, vacuum fluctuations of the metric tensor. This results in a tractable approach to studying the possible effects of vacuum fluctuations on collapse and singularity formation. The motivation here is that it is known that coupling stochastic noise to a classical field theory can lead to workable methodologies that accommodate or reproduce many aspects of quantum theory, turbulence or structure formation. The effect of statistically averaging over the metric fluctuations gives the appearance of a deterministic Riemannian structure, with an induced non-vanishing cosmological constant arising from the nonlinearity. The Oppenheimer-Snyder collapse of a perfect fluid or dust star in the fluctuating or `turbulent' spacetime is reformulated in terms of nonlinear Einstein-Langevin field equations, with an additional noise source in the energy-momentum tensor. The smooth deterministic worldlines of collapsing matter within the classical Oppenheimer-Snyder model now become nonlinear Brownian motions due to the backreaction induced by vacuum fluctuations. As the star collapses, the matter worldlines become increasingly randomized since the backreaction coupling to the vacuum fluctuations is nonlinear; the input assumptions of the Hawking-Penrose singularity theorems should then be violated. Solving the nonlinear Einstein-Langevin field equation for collapse - via the Ito interpretation - gives a singularity-free solution, which is equivalent to the original Oppenheimer solution but with higher-order stochastic corrections; the original singular solution is recovered in the limit of zero vacuum fluctuations. The `geometro-hydrodynamics' of noisy gravitational collapse were also translated into an equivalent mathematical formulation in terms of nonlinear Einstein-Fokker-Planck (EFP) continuity equations with respect to comoving coordinates: these describe the collapse as a conserved flow of probability. A solution was found in the dilute limit of weak fluctuations where the EFP equation is linearized. There is zero probability that the star collapses to a singular state in the presence of background vacuum fluctuations, but the singularity returns with unit probability when the fluctuations are reduced to zero. Finally, an EFP equation was considered with respect to standard exterior coordinates. Using the thermal Brownian motion paradigm, an exact stationary or equilibrium solution was found in the infinite standard time relaxation limit. The solution gives the conditions required for the final collapsed object (a black hole) to be in thermal equilibrium with the background vacuum fluctuations. From this solution, one recovers the Hawking temperature without using field theory. The stationary solution then seems to correspond to a black hole in thermal equilibrium with a fluctuating conformal scalar field, or the Hawking-Hartle state.
Phase space theory of evaporation in neon clusters: the role of quantum effects.
Calvo, F; Parneix, P
2009-12-31
Unimolecular evaporation of neon clusters containing between 14 and 148 atoms is theoretically investigated in the framework of phase space theory. Quantum effects are incorporated in the vibrational densities of states, which include both zero-point and anharmonic contributions, and in the possible tunneling through the centrifugal barrier. The evaporation rates, kinetic energy released, and product angular momentum are calculated as a function of excess energy or temperature in the parent cluster and compared to the classical results. Quantum fluctuations are found to generally increase both the kinetic energy released and the angular momentum of the product, but the effects on the rate constants depend nontrivially on the excess energy. These results are interpreted as due to the very few vibrational states available in the product cluster when described quantum mechanically. Because delocalization also leads to much narrower thermal energy distributions, the variations of evaporation observables as a function of canonical temperature appear much less marked than in the microcanonical ensemble. While quantum effects tend to smooth the caloric curve in the product cluster, the melting phase change clearly keeps a signature on these observables. The microcanonical temperature extracted from fitting the kinetic energy released distribution using an improved Arrhenius form further suggests a backbending in the quantum Ne(13) cluster that is absent in the classical system. Finally, in contrast to delocalization effects, quantum tunneling through the centrifugal barrier does not play any appreciable role on the evaporation kinetics of these rather heavy clusters.
Definition of the Neutrosophic Probability
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2014-03-01
Neutrosophic probability (or likelihood) [1995] is a particular case of the neutrosophic measure. It is an estimation that an event (as distinct from indeterminacy) occurs, together with an estimation that some indeterminacy may occur, and an estimation that the event does not occur. Classical probability deals with fair dice, coins, roulettes, spinners, decks of cards, and random walks, while neutrosophic probability deals with unfair or imperfect versions of such objects and processes. For example, if we toss a regular die on an irregular surface which has cracks, it is possible for the die to get stuck on one of its edges or vertices in a crack (an indeterminate outcome). The sample space is in this case {1, 2, 3, 4, 5, 6, indeterminacy}, so the probability of getting, for example, a 1 is less than 1/6, since there are seven outcomes. Neutrosophic probability is a generalization of classical probability because, when the chance of indeterminacy of a stochastic process is zero, the two probabilities coincide. The neutrosophic probability that an event A occurs is NP(A) = (ch(A), ch(indetA), ch(antiA)) = (T, I, F), where T, I, F are subsets of [0, 1]: T is the chance that A occurs, ch(A); I is the indeterminate chance related to A, ch(indetA); and F is the chance that A does not occur, ch(antiA). So NP is a generalization of imprecise probability as well. If T, I, and F are crisp numbers, then -0 <= T + I + F <= 3+, where -0 and 3+ are the nonstandard bounds used in neutrosophic logic. We use the same notations (T, I, F) as in neutrosophic logic and set theory.
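A minimal Python sketch of the cracked-surface die example follows. The chance of the indeterminate outcome (the die getting stuck in a crack) is an arbitrary illustrative value, and the six determinate faces are assumed to remain equally likely; the abstract only requires the indeterminacy chance to be nonzero.

    # Toy illustration of the cracked-surface die example.
    CH_INDET = 0.04                                  # assumed chance the die gets stuck

    def neutrosophic_probability(event_faces, ch_indet=CH_INDET):
        """Return NP(A) = (T, I, F) for a six-faced die tossed on a cracked surface,
        assuming the six determinate outcomes stay equally likely."""
        t = (1.0 - ch_indet) * len(event_faces) / 6.0   # ch(A)
        i = ch_indet                                    # ch(indetA)
        f = 1.0 - t - i                                 # ch(antiA)
        return t, i, f

    T, I, F = neutrosophic_probability({1})
    print(f"NP(roll a 1) = (T={T:.4f}, I={I:.4f}, F={F:.4f})")   # T < 1/6
    print(neutrosophic_probability({1}, ch_indet=0.0))           # classical (1/6, 0, 5/6)

With the indeterminacy chance set to zero the triple collapses to the classical probability, illustrating the reduction stated in the abstract.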
NASA Astrophysics Data System (ADS)
Hilpert, Markus; Johnson, William P.
2018-01-01
We used a recently developed simple mathematical network model to upscale pore-scale colloid transport information determined under unfavorable attachment conditions. Classical log-linear and nonmonotonic retention profiles, well reported under favorable and unfavorable attachment conditions, respectively, emerged from our upscaling. The primary attribute of the network is colloid transfer between the bulk pore fluid, the near-surface fluid domain (NSFD), and attachment (treated as irreversible). The network model accounts for colloid transfer to the NSFD of downgradient grains and for reentrainment to the bulk pore fluid via diffusion or via expulsion at rear flow stagnation zones (RFSZs). The model describes colloid transport by a sequence of random trials in a one-dimensional (1-D) network of Happel cells, each of which contains a grain and a pore. Using combinatorial analysis that capitalizes on the binomial coefficient, we derived from the pore-scale information the theoretical residence time distribution of colloids in the network. The transition from log-linear to nonmonotonic retention profiles occurs when the conditions underlying classical filtration theory are not fulfilled, i.e., when an NSFD colloid population is maintained. Nonmonotonic retention profiles can then result for both attached and NSFD colloids. The concentration maxima shift downgradient depending on the specific parameter choice. The concentration maxima were also shown to shift downgradient temporally (with continued elution) under conditions where attachment is negligible, explaining the experimentally observed downgradient transport of retained concentration maxima of adhesion-deficient bacteria. For the case of zero reentrainment, we develop closed-form analytical expressions for the shape and the maximum of the colloid retention profile.
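The random-trial structure of such a network model can be sketched as a simple Monte Carlo walk over a 1-D chain of Happel cells, as below. The transfer probabilities between bulk pore fluid, NSFD, and the attached state are placeholders, not the upscaled pore-scale values derived in the paper; the sketch only illustrates how maintaining an NSFD population can push the retention maximum downgradient.

    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder per-cell transfer probabilities (illustrative only).
    P_BULK_TO_NSFD = 0.30   # bulk pore fluid -> near-surface fluid domain (NSFD)
    P_NSFD_ATTACH  = 0.05   # NSFD -> irreversible attachment
    P_NSFD_RETURN  = 0.40   # NSFD -> reentrainment to bulk (diffusion or RFSZ expulsion)
    N_CELLS, N_COLLOIDS = 100, 20000

    attached = np.zeros(N_CELLS)
    for _ in range(N_COLLOIDS):
        state = "bulk"
        for cell in range(N_CELLS):         # one random trial per Happel cell
            if state == "bulk" and rng.random() < P_BULK_TO_NSFD:
                state = "nsfd"
            if state == "nsfd":
                u = rng.random()
                if u < P_NSFD_ATTACH:
                    attached[cell] += 1     # irreversible attachment in this cell
                    break
                elif u < P_NSFD_ATTACH + P_NSFD_RETURN:
                    state = "bulk"          # reentrained into the bulk pore fluid
                # otherwise the colloid stays in the NSFD and is handed to the
                # NSFD of the next (downgradient) grain
        # colloids still mobile after the last cell exit the column

    # Attached-colloid retention profile along the column.
    print(np.round(attached[:10] / N_COLLOIDS, 4))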
Does the dose-solubility ratio affect the mean dissolution time of drugs?
Lánský, P; Weiss, M
1999-09-01
The aim is to present a new model for describing drug dissolution and, on the basis of this model, to characterize the dissolution profile by the distribution function of the random dissolution time of a drug molecule, which generalizes the classical first-order model. Instead of assuming a constant fractional dissolution rate, as in the classical model, the fractional dissolution rate is taken to be a decreasing function of the dissolved amount, controlled by the dose-solubility ratio. The differential equation derived from this assumption is solved, and the distribution measures (half-dissolution time, mean dissolution time, relative dispersion of the dissolution time, dissolution time density, and fractional dissolution rate) are calculated. Finally, instead of a monotonically decreasing fractional dissolution rate, a generalization resulting in a zero dissolution rate at the time origin is introduced. The behavior of the model falls into two regions defined by q, the ratio of the dose to the solubility level: q < 1 (complete dissolution of the dose, dissolution time) and q > 1 (saturation of the solution, saturation time). The singular case q = 1 is also treated; in this situation the mean as well as the relative dispersion of the dissolution time increase to infinity. The model was successfully fitted to data (1). This empirical model is descriptive, without detailed physical reasoning behind its derivation. According to the model, the mean dissolution time is affected by the dose-solubility ratio. Although this prediction appears to be in accordance with preliminary applications, further validation based on more suitable experimental data is required.
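One concrete way to realize a fractional dissolution rate that decreases with the dissolved amount and is controlled by the dose-solubility ratio q is sketched below. The specific functional form k(1 - qx) is an assumption for illustration, not necessarily the authors' exact equation; it reduces to the classical first-order model when q = 0 and stalls at the saturation level x = 1/q when q > 1.

    import numpy as np
    from scipy.integrate import solve_ivp, trapezoid

    # Assumed illustrative model: dx/dt = k * (1 - q*x) * (1 - x), where x is the
    # dissolved fraction of the dose and q is the dose-solubility ratio.
    def dissolved_fraction(k=1.0, q=0.5, t_end=50.0, n=5001):
        t = np.linspace(0.0, t_end, n)
        sol = solve_ivp(lambda t, x: [k * (1.0 - q * x[0]) * (1.0 - x[0])],
                        (0.0, t_end), [0.0], t_eval=t, rtol=1e-8, atol=1e-10)
        return t, sol.y[0]

    t, x = dissolved_fraction()
    t50 = t[np.searchsorted(x, 0.5)]       # half-dissolution time
    mdt = trapezoid(1.0 - x, t)            # mean dissolution time (valid for q < 1)
    print(f"k=1, q=0.5: t50 ~ {t50:.3f}, mean dissolution time ~ {mdt:.3f}")

For q < 1 the dissolved fraction plays the role of the distribution function of the random dissolution time, so the mean dissolution time is obtained by integrating 1 - x(t), as in the last lines of the sketch.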
Confidence in Altman-Bland plots: a critical review of the method of differences.
Ludbrook, John
2010-02-01
1. Altman and Bland argue that the virtue of plotting differences against averages in method-comparison studies is that 95% confidence limits for the differences can be constructed. These allow authors and readers to judge whether one method of measurement could be substituted for another. 2. The technique is often misused. So I have set out, by statistical argument and worked examples, to advise pharmacologists and physiologists how best to construct these limits. 3. First, construct a scattergram of differences on averages, then calculate the line of best fit for the linear regression of differences on averages. If the slope of the regression is shown to differ from zero, there is proportional bias. 4. If there is no proportional bias and if the scatter of differences is uniform (homoscedasticity), construct 'classical' 95% confidence limits. 5. If there is proportional bias yet homoscedasticity, construct hyperbolic 95% confidence limits (prediction interval) around the line of best fit. 6. If there is proportional bias and the scatter of values for differences increases progressively as the average values increase (heteroscedasticity), log-transform the raw values from the two methods and replot differences against averages. If this eliminates proportional bias and heteroscedasticity, construct 'classical' 95% confidence limits. Otherwise, construct horizontal V-shaped 95% confidence limits around the line of best fit of differences on averages or around the weighted least products line of best fit to the original data. 7. In designing a method-comparison study, consult a qualified biostatistician, obey the rules of randomization and make replicate observations.
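Steps 3-5 of the recipe above can be sketched in a few lines: regress the differences on the averages to test for proportional bias and, if none is found (and the scatter of differences looks homoscedastic on the scattergram), construct the 'classical' 95% limits for the differences. The paired data below are simulated and purely illustrative; the log-transform and V-shaped-limit branches of the procedure are omitted.

    import numpy as np
    from scipy import stats

    def altman_bland_limits(method_a, method_b, alpha=0.05):
        """Differences-vs-averages analysis (steps 3-5 of the abstract)."""
        a, b = np.asarray(method_a, float), np.asarray(method_b, float)
        diffs, means = a - b, (a + b) / 2.0

        # Step 3: regress differences on averages; a slope different from zero
        # indicates proportional bias.
        reg = stats.linregress(means, diffs)
        proportional_bias = reg.pvalue < alpha

        # Step 4: with no proportional bias and homoscedastic scatter, construct
        # 'classical' 95% limits (t-based here; 1.96*SD is the large-sample form).
        n = diffs.size
        sd = diffs.std(ddof=1)
        t_crit = stats.t.ppf(1.0 - alpha / 2.0, n - 1)
        limits = (diffs.mean() - t_crit * sd, diffs.mean() + t_crit * sd)
        return {"slope": reg.slope, "slope_p": reg.pvalue,
                "proportional_bias": proportional_bias, "limits": limits}

    # Illustrative simulated paired measurements from two methods.
    rng = np.random.default_rng(2)
    truth = rng.uniform(50, 150, size=40)
    result = altman_bland_limits(truth + rng.normal(0, 3, 40),
                                 truth + rng.normal(1, 3, 40))
    print(result)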