NASA Astrophysics Data System (ADS)
Gao, Haixia; Li, Ting; Xiao, Changming
2016-05-01
When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state, and in this process it passes through a series of nonequilibrium states. With the assistance of Bayesian statistics and the hyperensemble, a probability distribution over these nonequilibrium states can be determined by maximizing the hyperensemble entropy. The state with the largest probability is the equilibrium state, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability; the same conclusion is obtained in the multi-state space. Furthermore, if the probability stands for the relative time the corresponding nonequilibrium state persists, then the velocity with which a nonequilibrium state returns to equilibrium can also be determined through the reciprocal of the derivative of this probability. It tells us that the farther a state is from equilibrium, the faster the returning velocity; as the system nears its equilibrium state, the velocity becomes smaller and smaller and finally tends to 0 when the equilibrium state is reached.
Universal laws of human society's income distribution
NASA Astrophysics Data System (ADS)
Tao, Yong
2015-10-01
General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Because Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy is available in a just and equilibrium economy, so that an income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g., the USA. This result implies that the hypothesis of equal probability may only be suitable for some "fair" systems (economic or physical). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability is unavailable.
Local approximation of a metapopulation's equilibrium.
Barbour, A D; McVinish, R; Pollett, P K
2018-04-18
We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
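The flavour of this approximation can be reproduced with a short numerical sketch: a fixed-point iteration for spatially explicit occupancy probabilities p_i = C_i/(C_i + e), compared with the Levins equilibrium 1 - e/c. The top-hat kernel, parameter values and random patch layout below are illustrative assumptions, not the model analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
n, c, e, radius = 2000, 3.0, 1.0, 0.05      # patches, colonization, extinction, kernel radius
pts = rng.uniform(0.0, 1.0, size=(n, 2))    # patches scattered over the unit square

# Top-hat colonization kernel, normalized so that the colonization pressure on
# patch i is c times the weighted fraction of occupied neighbours
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = (d < radius).astype(float)
np.fill_diagonal(K, 0.0)
row_sums = K.sum(axis=1)
W = K / np.maximum(row_sums, 1.0)[:, None]  # guard against isolated patches

# Fixed-point iteration for the equilibrium occupancy probabilities:
# p_i = C_i / (C_i + e), with C_i = c * sum_j W_ij p_j
p = np.full(n, 0.5)
for _ in range(500):
    C = c * (W @ p)
    p_new = C / (C + e)
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new

interior = np.all((pts > radius) & (pts < 1 - radius), axis=1)
print("Levins equilibrium 1 - e/c :", 1 - e / c)
print("mean occupancy (interior)  :", p[interior].mean())
print("mean occupancy (near edge) :", p[~interior].mean())
```

Interior patches should sit close to the Levins value, while patches near the boundary feel less colonization pressure, which is the qualitative content of the bounds described above.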
Selfish routing equilibrium in stochastic traffic network: A probability-dominant description.
Zhang, Wenyi; He, Zhengbing; Guan, Wei; Ma, Rui
2017-01-01
This paper suggests a probability-dominant user equilibrium (PdUE) model to describe the selfish routing equilibrium in a stochastic traffic network. At PdUE, travel demands are only assigned to the most dominant routes in the same origin-destination pair. A probability-dominant rerouting dynamic model is proposed to explain the behavioral mechanism of PdUE. To facilitate applications, the logit formula of PdUE is developed, for which a well-designed route set is not indispensable and the equivalent variational inequality formulation is simple. Two routing strategies, i.e., the probability-dominant strategy (PDS) and the dominant probability strategy (DPS), are discussed through a hypothetical experiment. It is found that, whether for insurance or in striving for perfection, PDS is a better choice than DPS. For more general cases, the conducted numerical tests lead to the same conclusion. These imply that PdUE (rather than the conventional stochastic user equilibrium) is a desirable selfish routing equilibrium for a stochastic network, given that the probability distributions of travel time are available to travelers. PMID:28829834
Calculation of the equilibrium distribution for a deleterious gene by the finite Fourier transform.
Lange, K
1982-03-01
In a population of constant size every deleterious gene eventually attains a stochastic equilibrium between mutation and selection. The individual probabilities of this equilibrium distribution can be computed by an application of the finite Fourier transform to an appropriate branching process formula. Specific numerical examples are discussed for the autosomal dominants, Huntington's chorea and chondrodystrophy, and for the X-linked recessive, Becker's muscular dystrophy.
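A minimal sketch of the finite-Fourier-transform idea follows, using an illustrative Poisson offspring law and made-up parameters rather than the branching-process formula of the paper: the equilibrium probability generating function of a subcritical branching process with Poisson immigration of new mutants is evaluated at the roots of unity and inverted to obtain the individual probabilities.

```python
import numpy as np

nu = 2.0      # mean number of new mutant copies per generation (illustrative)
m = 0.7       # mean offspring number of a mutant copy (fitness < 1, illustrative)
N = 256       # number of points for the finite Fourier transform

def offspring_pgf(s):
    """Poisson offspring probability generating function (an assumption here)."""
    return np.exp(m * (s - 1.0))

# Equilibrium pgf: Q(s) = exp( nu * sum_{k>=0} (P_k(s) - 1) ), P_k the k-fold iterate
s = np.exp(2j * np.pi * np.arange(N) / N)   # N-th roots of unity
total = np.zeros(N, dtype=complex)
iterate = s.copy()
for _ in range(10000):
    total += iterate - 1.0
    if np.max(np.abs(iterate - 1.0)) < 1e-14:
        break
    iterate = offspring_pgf(iterate)
Q = np.exp(nu * total)

# Finite Fourier transform recovers the equilibrium probabilities q_n
probs = np.fft.fft(Q).real / N
print("P(0 copies) =", probs[0])
print("P(1 copy)   =", probs[1])
print("mean copies =", np.sum(np.arange(N) * probs), " (theory:", nu / (1 - m), ")")
```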
NASA Technical Reports Server (NTRS)
Querci, F.; Kunde, V. G.; Querci, M.
1971-01-01
The basis and techniques are presented for generating opacity probability distribution functions for the CN molecule (red and violet systems) and the C2 molecule (Swan, Phillips, Ballik-Ramsay systems), two of the more important diatomic molecules in the spectra of carbon stars, with a view to including these distribution functions in equilibrium model atmosphere calculations. Comparisons to the CO molecule are also shown. The computation of the monochromatic absorption coefficient uses the most recent molecular data with revision of the oscillator strengths for some of the band systems. The total molecular stellar mass absorption coefficient is established through fifteen equations of molecular dissociation equilibrium to relate the distribution functions to each other on a per gram of stellar material basis.
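The core of an opacity distribution function is easy to illustrate: within one wavelength bin, the rapidly varying monochromatic absorption coefficient is replaced by its cumulative distribution and represented by a few sub-bin averages. The synthetic Lorentzian line list below is an illustrative stand-in, not CN or C2 molecular data.

```python
import numpy as np

rng = np.random.default_rng(1)
nu = np.linspace(0.0, 1.0, 20001)            # frequency grid inside one bin
kappa = 1e-3 + np.zeros_like(nu)             # continuum floor
for center, strength, width in zip(rng.uniform(0, 1, 300),
                                   rng.lognormal(0.0, 1.0, 300),
                                   rng.uniform(1e-4, 1e-3, 300)):
    kappa += strength * width**2 / ((nu - center)**2 + width**2)   # Lorentzian lines

# The ODF replaces kappa(nu) by its cumulative distribution within the bin
kappa_sorted = np.sort(kappa)
cumulative = (np.arange(kappa.size) + 0.5) / kappa.size

# Represent the distribution by a small number of sub-bin averages (a step function)
edges = np.array([0.0, 0.5, 0.8, 0.95, 0.99, 1.0])
steps = [kappa_sorted[(cumulative >= lo) & (cumulative < hi)].mean()
         for lo, hi in zip(edges[:-1], edges[1:])]
print("ODF step opacities:", np.round(steps, 4))
```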
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.
Semenov, Yuri S; Novozhilov, Artem S
2015-08-01
We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.
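For a finite sequence length, the selection-mutation equilibrium of the Crow-Kimura (parallel) model can be obtained directly as the leading eigenvector of the selection-plus-mutation matrix, which is a useful check on the generating-function results. The single-peak landscape and all parameter values below are illustrative assumptions; the paper works instead in the infinite-sequence limit.

```python
import numpy as np
from itertools import product

L, mu = 10, 0.3                      # sequence length and per-site mutation rate (illustrative)
seqs = np.array(list(product([0, 1], repeat=L)))
n = len(seqs)

fitness = np.zeros(n)
fitness[0] = 5.0                     # single-peak landscape: the all-zero "master" sequence

# Mutation matrix: rate mu between sequences at Hamming distance 1
hamming = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)
M = np.where(hamming == 1, mu, 0.0)
np.fill_diagonal(M, -L * mu)

# Equilibrium distribution = leading eigenvector of diag(f) + M (symmetric here)
w, v = np.linalg.eigh(np.diag(fitness) + M)
p = np.abs(v[:, -1])
p /= p.sum()

# Aggregate by Hamming distance from the peak (the "error classes")
dist = seqs.sum(axis=1)
for d in range(L + 1):
    print(f"class {d}: probability {p[dist == d].sum():.4f}")
```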
Wu, Wei; Wang, Jin
2013-09-28
We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating the non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. The intrinsic potential field, defined as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems and closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers a general alternative to other field-theoretic techniques for studying the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
H theorem for generalized entropic forms within a master-equation framework
NASA Astrophysics Data System (ADS)
Casas, Gabriela A.; Nobre, Fernando D.; Curado, Evaldo M. F.
2016-03-01
The H theorem is proven for generalized entropic forms, in the case of a discrete set of states. The associated probability distributions evolve in time according to a master equation, for which the corresponding transition rates depend on these entropic forms. An important equation describing the time evolution of the transition rates and probabilities in such a way as to drive the system towards an equilibrium state is found. In the particular case of Boltzmann-Gibbs entropy, it is shown that this equation is satisfied in the microcanonical ensemble only for symmetric probability transition rates, characterizing a single path to the equilibrium state. This equation completes the proof of the H theorem for generalized entropic forms, associated with systems characterized by complex dynamics, e.g., presenting nonsymmetric probability transition rates and more than one path towards the same equilibrium state. Some examples of generalized entropies from the literature are discussed, showing that they should be applicable to a wide range of natural phenomena, mainly those within the realm of complex systems.
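The Boltzmann-Gibbs special case mentioned above is easy to verify numerically: for a master equation with symmetric transition rates, the relative-entropy H-function decreases monotonically as the distribution relaxes to the uniform (microcanonical) equilibrium. The random rates and system size below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
w = rng.uniform(0.1, 1.0, size=(n, n))
w = 0.5 * (w + w.T)                       # symmetric rates w_ij = w_ji
np.fill_diagonal(w, 0.0)

p = rng.dirichlet(np.ones(n))             # arbitrary initial distribution
p_eq = np.full(n, 1.0 / n)                # equilibrium for symmetric rates is uniform
dt = 0.01

def H(p):
    """Boltzmann-Gibbs H-function (relative entropy with respect to equilibrium)."""
    return np.sum(p * np.log(p / p_eq))

for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step*dt:5.2f}   H = {H(p):.6f}")
    flux_in = w @ p                        # sum_j w_ij p_j
    flux_out = w.sum(axis=1) * p           # p_i sum_j w_ji
    p = p + dt * (flux_in - flux_out)      # master equation, explicit Euler step
```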
NASA Astrophysics Data System (ADS)
Bhattacharyay, A.
2018-03-01
An alternative equilibrium stochastic dynamics for a Brownian particle in inhomogeneous space is derived. Such a dynamics can model the motion of a complex molecule in its conformation space when in equilibrium with a uniform heat bath. The derivation is done by a simple generalization of the formulation due to Zwanzig for a Brownian particle in a homogeneous heat bath. We show that, if the system couples to different numbers of bath degrees of freedom at different conformations, then the alternative model is obtained. We discuss results of an experiment by Faucheux and Libchaber which may indicate a possible limitation of the Boltzmann distribution as the equilibrium distribution of a Brownian particle in inhomogeneous space, and propose experimental verification of the present theory using similar methods.
Lowe, Phillip K; Bruno, John F; Selig, Elizabeth R; Spencer, Matthew
2011-01-01
There has been substantial recent change in coral reef communities. To date, most analyses have focussed on static patterns or changes in single variables such as coral cover. However, little is known about how community-level changes occur at large spatial scales. Here, we develop Markov models of annual changes in coral and macroalgal cover in the Caribbean and Great Barrier Reef (GBR) regions. We analyzed reef surveys from the Caribbean and GBR (1996-2006). We defined a set of reef states distinguished by coral and macroalgal cover, and obtained Bayesian estimates of the annual probabilities of transitions between these states. The Caribbean and GBR had different transition probabilities, and therefore different rates of change in reef condition. This could be due to differences in species composition, management or the nature and extent of disturbances between these regions. We then estimated equilibrium probability distributions for reef states, and coral and macroalgal cover under constant environmental conditions. In both regions, the current distributions are close to equilibrium. In the Caribbean, coral cover is much lower and macroalgal cover is higher at equilibrium than in the GBR. We found no evidence for differences in transition probabilities between the first and second halves of our survey period, or between Caribbean reefs inside and outside marine protected areas. However, our power to detect such differences may have been low. We also examined the effects of altering transition probabilities on the community state equilibrium, along a continuum from unfavourable (e.g., increased sea surface temperature) to favourable (e.g., improved management) conditions. Both regions showed similar qualitative responses, but different patterns of uncertainty. In the Caribbean, uncertainty was greatest about effects of favourable changes, while in the GBR, we are most uncertain about effects of unfavourable changes. Our approach could be extended to provide risk analysis for management decisions.
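The computational core of this kind of analysis is compact: estimate annual transition probabilities between discrete reef states from survey counts, then read off the equilibrium state distribution as the stationary distribution of the resulting Markov chain. The count matrix below is invented for illustration; it is not the Caribbean or GBR data, and the full analysis in the paper is Bayesian over many reefs.

```python
import numpy as np

states = ["low coral / high algae", "low coral / low algae",
          "high coral / low algae"]
counts = np.array([[80, 15,  5],     # transitions observed from state 0 (made-up counts)
                   [20, 60, 20],     # from state 1
                   [ 5, 25, 70]])    # from state 2

# Posterior-mean transition probabilities with a flat Dirichlet(1,...,1) prior
P = (counts + 1.0) / (counts + 1.0).sum(axis=1, keepdims=True)

# Equilibrium distribution: left eigenvector of P with eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

for name, prob in zip(states, pi):
    print(f"{name:26s}  equilibrium probability {prob:.3f}")
```

Perturbing the rows of P toward more or less favourable transitions and recomputing pi mimics the scenario analysis described above.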
Reconstructing the equilibrium Boltzmann distribution from well-tempered metadynamics.
Bonomi, M; Barducci, A; Parrinello, M
2009-08-01
Metadynamics is a widely used and successful method for reconstructing the free-energy surface of complex systems as a function of a small number of suitably chosen collective variables. This is achieved by biasing the dynamics of the system. The bias acting on the collective variables distorts the probability distribution of the other variables. Here we present a simple reweighting algorithm for recovering the unbiased probability distribution of any variable from a well-tempered metadynamics simulation. We show the efficiency of the reweighting procedure by reconstructing the distribution of the four backbone dihedral angles of alanine dipeptide from two- and even one-dimensional metadynamics simulations. 2009 Wiley Periodicals, Inc.
Econophysics: Two-phase behaviour of financial markets
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Stanley, H. Eugene
2003-01-01
Buying and selling in financial markets is driven by demand, which can be quantified by the imbalance in the number of shares transacted by buyers and sellers over a given time interval. Here we analyse the probability distribution of demand, conditioned on its local noise intensity Σ, and discover the surprising existence of a critical threshold, Σc. For Σ < Σc, the most probable value of demand is roughly zero; we interpret this as an equilibrium phase in which neither buying nor selling predominates. For Σ > Σc, two most probable values emerge that are symmetrical around zero demand, corresponding to excess demand and excess supply; we interpret this as an out-of-equilibrium phase in which the market behaviour is mainly buying for half of the time, and mainly selling for the other half.
Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes
NASA Astrophysics Data System (ADS)
González Arenas, Zochil; Barci, Daniel G.
2012-12-01
Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.
Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics
NASA Astrophysics Data System (ADS)
van Lith, Janneke
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.
Adaptive Multi-Agent Systems for Constrained Optimization
NASA Technical Reports Server (NTRS)
Macready, William; Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
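The mechanics can be sketched on a toy team game rather than the Queens or K-SAT experiments: each bounded-rational agent keeps an independent distribution over its own moves, and the product distribution is driven toward the minimizer of the maxent Lagrangian E[G] - T*S by iterating each agent's Boltzmann response while annealing the temperature. The cost function and parameters below are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n_agents, n_moves = 4, 5

def team_cost(joint):
    """Toy constrained team objective: penalize agents that choose the same move."""
    return float(len(joint) - len(set(joint)))

# Random initial product distribution (breaks the symmetric fixed point)
q = [rng.dirichlet(np.ones(n_moves)) for _ in range(n_agents)]

T = 1.0
for sweep in range(60):
    for i in range(n_agents):
        others = [j for j in range(n_agents) if j != i]
        # Exact conditional expectation E[G | agent i plays m] under q_{-i}
        cond = np.zeros(n_moves)
        for combo in product(range(n_moves), repeat=len(others)):
            w = np.prod([q[j][c] for j, c in zip(others, combo)])
            for m in range(n_moves):
                joint = [None] * n_agents
                joint[i] = m
                for j, c in zip(others, combo):
                    joint[j] = c
                cond[m] += w * team_cost(joint)
        b = np.exp(-(cond - cond.min()) / T)   # Boltzmann response of agent i
        q[i] = b / b.sum()
    T *= 0.9                                    # automated annealing of the temperature

best = [int(np.argmax(qi)) for qi in q]
print("most probable joint move:", best, "  cost:", team_cost(best))
```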
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of the equilibrium condition and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probabilistic form, which enhances the capability of uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
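The linear-Gaussian core of such an inversion is short to write down: a GP prior over a discretized emissivity profile, observed through noisy line integrals, yields an analytic posterior mean and covariance. The 1-D geometry, stationary squared-exponential kernel, chord construction and noise level below are illustrative assumptions (the paper uses a non-stationary kernel whose length scale varies with position).

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
true_emissivity = np.exp(-((x - 0.45) / 0.12) ** 2)     # synthetic profile

# Forward model: each chord integrates the profile over a random sub-interval
rng = np.random.default_rng(4)
n_chords, sigma_noise = 25, 0.02
R = np.zeros((n_chords, n))
for k in range(n_chords):
    a, b = np.sort(rng.uniform(0.0, 1.0, 2))
    R[k] = ((x >= a) & (x <= b)) / n                     # simple quadrature weights
y = R @ true_emissivity + sigma_noise * rng.standard_normal(n_chords)

# Stationary squared-exponential GP prior over the profile
ell, amp = 0.08, 1.0
K = amp * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Posterior mean and covariance of the profile given the line integrals
S = R @ K @ R.T + sigma_noise**2 * np.eye(n_chords)
gain = K @ R.T @ np.linalg.solve(S, np.eye(n_chords))
post_mean = gain @ y
post_cov = K - gain @ R @ K
post_std = np.sqrt(np.diag(post_cov))
print("max |error| :", np.abs(post_mean - true_emissivity).max())
print("mean 1-sigma:", post_std.mean())
```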
Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects
Baumann, Hendrik; Sandmann, Werner
2016-01-01
Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993
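A brute-force version of the computation conveys the idea, even though the paper's point is that level-dependent QBD structure and matrix-analytic methods do this far more efficiently: build the generator of a small SIS-type chain with immigration and deaths on a truncated state space and solve pi Q = 0. All rate values and the truncation below are illustrative assumptions.

```python
import numpy as np

S_max, I_max = 30, 30
lam_s, lam_i, mu, beta, gamma = 1.5, 0.1, 0.1, 0.6, 0.4   # immigration, deaths, infection, recovery

idx = lambda s, i: s * (I_max + 1) + i
n = (S_max + 1) * (I_max + 1)
Q = np.zeros((n, n))

for s in range(S_max + 1):
    for i in range(I_max + 1):
        k = idx(s, i)
        if s < S_max:
            Q[k, idx(s + 1, i)] += lam_s                          # immigration of susceptibles
        if i < I_max:
            Q[k, idx(s, i + 1)] += lam_i                          # immigration of infecteds
        if s > 0:
            Q[k, idx(s - 1, i)] += mu * s                         # death of a susceptible
        if i > 0:
            Q[k, idx(s, i - 1)] += mu * i                         # death of an infected
        if s > 0 and i > 0 and i < I_max:
            Q[k, idx(s - 1, i + 1)] += beta * s * i / (s + i)     # infection
        if i > 0 and s < S_max:
            Q[k, idx(s + 1, i - 1)] += gamma * i                  # recovery back to susceptible
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

pi = pi.reshape(S_max + 1, I_max + 1)
print("P(no infecteds)          :", pi[:, 0].sum())
print("mean number of infecteds :", (pi.sum(axis=0) * np.arange(I_max + 1)).sum())
```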
Study of nonequilibrium work distributions from a fluctuating lattice Boltzmann model.
Nasarayya Chari, S Siva; Murthy, K P N; Inguva, Ramarao
2012-04-01
A system of ideal gas is switched from an initial equilibrium state to a final state not necessarily in equilibrium, by varying a macroscopic control variable according to a well-defined protocol. The distribution of work performed during the switching process is obtained. The equilibrium free energy difference, ΔF, is determined from the work fluctuation relation. Some of the work values in the ensemble shall be less than ΔF. We term these as ones that "violate" the second law of thermodynamics. A fluctuating lattice Boltzmann model has been employed to carry out the simulation of the switching experiment. Our results show that the probability of violation of the second law increases with the increase of switching time (τ) and tends to one-half in the reversible limit of τ→∞.
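The bookkeeping described above is easy to illustrate with synthetic work values (a Gaussian stand-in, not the fluctuating lattice Boltzmann simulation): estimate the free energy difference from the Jarzynski work fluctuation relation and count the fraction of trajectories whose work falls below ΔF.

```python
import numpy as np

rng = np.random.default_rng(5)
kT = 1.0
mean_work, spread = 2.0, 1.0                   # dissipative switching: <W> > Delta F (illustrative)
W = rng.normal(mean_work, spread, size=200000)

# Jarzynski equality: exp(-Delta F / kT) = < exp(-W / kT) >
dF = -kT * np.log(np.mean(np.exp(-W / kT)))
print("Delta F estimate        :", dF)          # for Gaussian work this tends to <W> - spread^2/(2 kT)
print("fraction of W < Delta F :", np.mean(W < dF))
```

Increasing the spread of the work distribution (slower convergence to equilibrium, larger fluctuations) raises the fraction of "second-law-violating" trajectories toward one half, which is the trend reported above.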
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous, and the probability density functions (pdfs) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidian distance function comprising a sum of the squared differences between the left and the right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
Steady-state distributions of probability fluxes on complex networks
NASA Astrophysics Data System (ADS)
Chełminiak, Przemysław; Kurzyński, Michał
2017-02-01
We consider a simple model of the Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. The additional transition, called hereafter a gate, powered by the external constant force breaks a detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to as well as far from the equilibrium state. Also, the other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate and a change in the network size studied by means of computer simulations are widely discussed in terms of the rigorous theoretical predictions.
A New Equilibrium State for Singly Synchronous Binary Asteroids
NASA Astrophysics Data System (ADS)
Golubov, Oleksiy; Unukovych, Vladyslav; Scheeres, Daniel J.
2018-04-01
The evolution of rotation states of small asteroids is governed by the Yarkovsky–O’Keefe–Radzievskii–Paddack (YORP) effect, nonetheless some asteroids can stop their YORP evolution by attaining a stable equilibrium. The same is true for binary asteroids subjected to the binary YORP (BYORP) effect. Here we discuss a new type of equilibrium that combines these two, which is possible in a singly synchronous binary system. This equilibrium occurs when the normal YORP, the tangential YORP, and the BYORP compensate each other, and tidal torques distribute the angular momentum between the components of the system and dissipate energy. If unperturbed, such a system would remain singly synchronous in perpetuity with constant spin and orbit rates, as the tidal torques dissipate the incoming energy from impinging sunlight at the same rate. The probability of the existence of this kind of equilibrium in a binary system is found to be on the order of a few percent.
Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states
NASA Astrophysics Data System (ADS)
de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.
2015-12-01
Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.
NASA Astrophysics Data System (ADS)
Yan, Jie
2016-09-01
In this article [1] Dr. Vologodskii presents a comprehensive discussion on the mechanisms by which the type II topoisomerases unknot/disentangle DNA molecules. It is motivated by a mysterious capability of the nanometer-size enzymes to keep the steady-state probability of DNA entanglement/knot almost two orders of magnitude below that expected from thermal equilibrium [2-5]. In spite of obvious functional advantages of the enzymes, it raises a question regarding how such high efficiency could be achieved. The off-equilibrium steady state distribution of DNA topology is powered by ATP consumption. However, it remains unclear how this energy is utilized to bias the distribution toward disentangled/unknotted topological states of DNA.
Product Distribution Theory for Control of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Lee, Chia Fan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
Directed Random Markets: Connectivity Determines Money
NASA Astrophysics Data System (ADS)
Martínez-Martínez, Ismael; López-Ruiz, Ricardo
2013-12-01
The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and we find that these stationary probability distributions are robust and they are not affected by the topology of the underlying network. We introduce a new family of interactions: random but directed ones. In this case, the topology is found to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
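The baseline undirected exchange rule is a few lines of simulation; the directed and network-dependent variants of the paper modify only how exchange partners are chosen. Agent number, initial money and step count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_agents, steps = 1000, 300_000
money = np.ones(n_agents)                      # closed economy, one unit of money each

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pool = money[i] + money[j]
    eps = rng.random()                         # random, undirected split of the pooled money
    money[i], money[j] = eps * pool, (1.0 - eps) * pool

# The equilibrium distribution is exponential (Boltzmann-Gibbs): P(m) = exp(-m/<m>)/<m>, <m> = 1
hist, edges = np.histogram(money, bins=30, range=(0, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("simulated vs exp(-m) at a few m values:")
for c, h in zip(centers[::6], hist[::6]):
    print(f"  m = {c:4.2f}   simulated {h:.3f}   Boltzmann-Gibbs {np.exp(-c):.3f}")
```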
A Hierarchy of Heuristic-Based Models of Crowd Dynamics
NASA Astrophysics Data System (ADS)
Degond, P.; Appert-Rolland, C.; Moussaïd, M.; Pettré, J.; Theraulaz, G.
2013-09-01
We derive a hierarchy of kinetic and macroscopic models from a noisy variant of the heuristic behavioral Individual-Based Model of Ngai et al. (Disaster Med. Public Health Prep. 3:191-195,
Chadda, R; Robertson, J L
2016-01-01
Dimerization of membrane protein interfaces occurs during membrane protein folding and cell receptor signaling. Here, we summarize a method that allows for measurement of equilibrium dimerization reactions of membrane proteins in lipid bilayers, by measuring the Poisson distribution of subunit capture into liposomes by single-molecule photobleaching analysis. This strategy is grounded in the fact that given a comparable labeling efficiency, monomeric or dimeric forms of a membrane protein will give rise to distinctly different photobleaching probability distributions. These methods have been used to verify the dimer stoichiometry of the Fluc F⁻ ion channel and the dimerization equilibrium constant of the ClC-ec1 Cl⁻/H⁺ antiporter in lipid bilayers. This approach can be applied to any membrane protein system provided it can be purified, fluorescently labeled in a quantitative manner, and verified to be correctly folded by functional assays, even if the structure is not yet known. © 2016 Elsevier Inc. All rights reserved.
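Why monomers and dimers separate in this assay can be seen with a short simulation combining Poisson capture into liposomes with binomial fluorophore labeling. The labeling efficiency and mean occupancy below are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(7)
p_label = 0.7          # probability that a subunit carries a working fluorophore (assumed)
mu = 0.2               # mean number of protein complexes captured per liposome (assumed)
n_liposomes = 200000

def step_fractions(subunits_per_complex):
    complexes = rng.poisson(mu, n_liposomes)                    # Poisson capture into liposomes
    steps = rng.binomial(complexes * subunits_per_complex, p_label)
    steps = steps[steps > 0]                                    # only fluorescent spots are scored
    return [np.mean(steps == k) for k in (1, 2)] + [np.mean(steps >= 3)]

for name, n_sub in [("monomer", 1), ("dimer", 2)]:
    f1, f2, f3 = step_fractions(n_sub)
    print(f"{name}: 1 step {f1:.3f}, 2 steps {f2:.3f}, >=3 steps {f3:.3f}")
```

Comparing the observed step-count histogram with these two predictions is what distinguishes a monomer from a dimer at a given labeling efficiency.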
Adaptive, Distributed Control of Constrained Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MAS's), i.e., for distributed stochastic optimization using MAS's. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
Strategic sophistication of individuals and teams. Experimental evidence
Sutter, Matthias; Czermak, Simon; Feri, Francesco
2013-01-01
Many important decisions require strategic sophistication. We examine experimentally whether teams act more strategically than individuals. We let individuals and teams make choices in simple games, and also elicit first- and second-order beliefs. We find that teams play the Nash equilibrium strategy significantly more often, and their choices are more often a best response to stated first order beliefs. Distributional preferences make equilibrium play less likely. Using a mixture model, the estimated probability to play strategically is 62% for teams, but only 40% for individuals. A model of noisy introspection reveals that teams differ from individuals in higher order beliefs. PMID:24926100
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L. G.
2018-03-19
The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.
A Computer Simulation Using Spreadsheets for Learning Concept of Steady-State Equilibrium
ERIC Educational Resources Information Center
Sharda, Vandana; Sastri, O. S. K. S.; Bhardwaj, Jyoti; Jha, Arbind K.
2016-01-01
In this paper, we present a simple spreadsheet based simulation activity that can be performed by students at the undergraduate level. This simulation is implemented in free open source software (FOSS) LibreOffice Calc, which is available for both Windows and Linux platform. This activity aims at building the probability distribution for the…
NASA Astrophysics Data System (ADS)
Lépinoux, J.; Sigli, C.
2018-01-01
In a recent paper, the authors showed how the clusters' free energies are constrained by the coagulation probability, and explained various anomalies observed during the precipitation kinetics in concentrated alloys. This coagulation probability appeared to be too complex a function to be accurately predicted knowing only the cluster distribution in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates and then to switch to CD, which is mean field but, unlike MC, not limited by a box size. In this paper, we explain how to take into account the information available before the quasi-equilibrium state to establish guidelines to safely predict the cluster free energies.
Non-equilibrium steady-state distributions of colloids in a tilted periodic potential
NASA Astrophysics Data System (ADS)
Ma, Xiaoguang; Lai, Pik-Yin; Ackerson, Bruce; Tong, Penger
A two-layer colloidal system is constructed to study the effects of the external force F on the non-equilibrium steady-state (NESS) dynamics of the diffusing particles over a tilted periodic potential, in which detailed balance is broken due to the presence of a steady particle flux. The periodic potential is provided by the bottom layer colloidal spheres forming a fixed crystalline pattern on a glass substrate. The corrugated surface of the bottom colloidal crystal provides a gravitational potential field for the top layer diffusing particles. By tilting the sample with respect to gravity, a tangential component F is applied to the diffusing particles. The measured NESS probability density function P_ss(x, y) of the particles is found to deviate from the equilibrium distribution depending on the driving or distance from equilibrium. The experimental results are compared with the exact solution of the 1D Smoluchowski equation and the numerical results of the 2D Smoluchowski equation. Moreover, from the obtained exact 1D solution, we develop an analytical method to accurately extract the 1D potential U_0(x) from the measured P_ss(x). Work supported in part by the Research Grants Council of Hong Kong SAR.
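The exact 1D steady state referred to above can be evaluated numerically from the standard result for a tilted periodic potential, P_ss(x) proportional to exp(-V(x)/kT) times the integral of exp(+V(y)/kT) over one period ahead of x, which reduces to the Boltzmann distribution only when the tilt vanishes. The cosine potential and all parameter values below are illustrative assumptions, not the experimental potential of the paper.

```python
import numpy as np

kT, U0, F, L = 1.0, 2.0, 1.5, 1.0
n = 4000
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n

def V(z):
    return U0 * np.cos(2 * np.pi * z / L) - F * z       # tilted periodic potential

# I(x) = integral of exp(+V/kT) over one period ahead of x
xx = np.concatenate([x, x + L])
expV = np.exp(V(xx) / kT)
I = np.array([expV[i:i + n].sum() * dx for i in range(n)])

P_ness = np.exp(-V(x) / kT) * I
P_ness /= P_ness.sum() * dx                              # normalized NESS density over one period

P_boltz = np.exp(-V(x) / kT)
P_boltz /= P_boltz.sum() * dx                            # Boltzmann density for comparison

shift = x[np.argmax(P_ness)] - x[np.argmax(P_boltz)]
print("peak of NESS density shifted by", round(shift, 3), "relative to Boltzmann")
```

Setting F = 0 makes the two densities coincide, which is the equilibrium limit mentioned above.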
Invasion resistance arises in strongly interacting species-rich model competition communities.
Case, T J
1990-01-01
I assemble stable multispecies Lotka-Volterra competition communities that differ in resident species number and average strength (and variance) of species interactions. These are then invaded with randomly constructed invaders drawn from the same distribution as the residents. The invasion success rate and the fate of the residents are determined as a function of community-and species-level properties. I show that the probability of colonization success for an invader decreases with community size and the average strength of competition (alpha). Communities composed of many strongly interacting species limit the invasion possibilities of most similar species. These communities, even for a superior invading competitor, set up a sort of "activation barrier" that repels invaders when they invade at low numbers. This "priority effect" for residents is not assumed a priori in my description for the individual population dynamics of these species; rather it emerges because species-rich and strongly interacting species sets have alternative stable states that tend to disfavor species at low densities. These models point to community-level rather than invader-level properties as the strongest determinant of differences in invasion success. The probability of extinction for a resident species increases with community size, and the probability of successful colonization by the invader decreases. Thus an equilibrium community size results wherein the probability of a resident species' extinction just balances the probability of an invader's addition. Given the distribution of alpha it is now possible to predict the equilibrium species number. The results provide a logical framework for an island-biogeographic theory in which species turnover is low even in the face of persistent invasions and for the protection of fragile native species from invading exotics. PMID:11607132
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate the action probability distribution based on his/her private information for maximizing his own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected ones, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
Calculation of a fluctuating entropic force by phase space sampling.
Waters, James T; Kim, Harold D
2015-07-01
A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.
NASA Astrophysics Data System (ADS)
Grams, G.; Giraud, S.; Fantina, A. F.; Gulminelli, F.
2018-03-01
The aim of the present study is to calculate the nuclear distribution associated at finite temperature to any given equation of state of stellar matter based on the Wigner-Seitz approximation, for direct applications in core-collapse simulations. The Gibbs free energy of the different configurations is explicitly calculated, with special care devoted to the calculation of rearrangement terms, ensuring thermodynamic consistency. The formalism is illustrated with two different applications. First, we work out the nuclear statistical equilibrium cluster distribution for the Lattimer and Swesty equation of state, widely employed in supernova simulations. Secondly, we explore the effect of including shell structure, and consider realistic nuclear mass tables from the Brussels-Montreal Hartree-Fock-Bogoliubov model (specifically, HFB-24). We show that the whole collapse trajectory is dominated by magic nuclei, with extremely spread and even bimodal distributions of the cluster probability around magic numbers, demonstrating the importance of cluster distributions with realistic mass models in core-collapse simulations. Simple analytical expressions are given, allowing further applications of the method to any relativistic or nonrelativistic subsaturation equation of state.
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2005-01-01
Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy one must use a loss function (external to the game's players). This provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.
Statistical prescission point model of fission fragment angular distributions
NASA Astrophysics Data System (ADS)
John, Bency; Kataria, S. K.
1998-03-01
In light of recent developments in fission studies such as slow saddle to scission motion and spin equilibration near the scission point, the theory of fission fragment angular distribution is examined and a new statistical prescission point model is developed. The conditional equilibrium of the collective angular bearing modes at the prescission point, which is guided mainly by their relaxation times and population probabilities, is taken into account in the present model. The present model gives a consistent description of the fragment angular and spin distributions for a wide variety of heavy and light ion induced fission reactions.
NASA Astrophysics Data System (ADS)
Margolin, L. G.
2018-04-01
The applicability of Navier-Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman-Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics. This article is part of the theme issue `Hilbert's sixth problem'.
NASA Technical Reports Server (NTRS)
Li, Peng; Chou, Ming-Dah; Arking, Albert
1987-01-01
The transient response of the climate to increasing CO2 is studied using a modified version of the multilayer energy balance model of Peng et al. (1982). The main characteristics of the model are described. Latitudinal and seasonal distributions of planetary albedo, latitude-time distributions of zonal mean temperatures, and latitudinal distributions of evaporation, water vapor transport, and snow cover generated from the model and derived from actual observations are analyzed and compared. It is observed that in response to an atmospheric doubling of CO2, the model reaches within 1/e of the equilibrium response of global mean surface temperature in 9-35 years for the probable range of vertical heat diffusivity in the ocean. For CO2 increases projected by the National Research Council (1983), the model's transient response in annually and globally averaged surface temperatures is 60-75 percent of the corresponding equilibrium response, and the disequilibrium increases with increasing heat diffusivity of the ocean.
NASA Technical Reports Server (NTRS)
Klein, L.
1972-01-01
Emission and absorption spectra of water vapor plasmas generated in a wall-stabilized arc at atmospheric pressure and 4 A current, and at 0.03 atm and 15 to 50 A, were measured at high spatial and spectral resolution. The gas temperature was determined from the shape of Doppler-broadened rotational lines of OH. The observed nonequilibrium population distributions over the energy levels of atoms are interpreted in terms of a theoretical state model for diffusion-controlled arc plasmas. Excellent correlation is achieved between measured and predicted occupation of hydrogen energy levels. It is shown that the population distribution over the nonpredissociating rotational-vibrational levels of the A 2 Sigma state of OH is close to an equilibrium distribution at the gas temperature, although the total density of this state is much higher than its equilibrium density. The reduced intensities of the rotational lines originating in these levels yielded Boltzmann plots that were strictly linear.
Allele frequency data of 15 autosomal STR loci in four major population groups of South Africa.
Lucassen, Anton; Ehlers, Karen; Grobler, Paul J; Shezi, Adeline L
2014-03-01
Allele frequency distributions for 15 tetrameric short tandem repeat (STR) loci were determined using the AmpFlSTR® Identifiler Plus™ PCR amplification kit. There was little evidence of departures from Hardy-Weinberg equilibrium or association of alleles of different loci in the population samples. The probability of identity values for the different populations range from 1/3.3 × 10^17 (White) to 1/1.88 × 10^18 (Coloured). The combined probability of paternal exclusion for the different population groups ranges from 0.9995858 (Coloured) to 0.9997874 (Indian).
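For readers unfamiliar with how such figures are obtained, the sketch below combines a per-locus probability of identity under Hardy-Weinberg and linkage equilibrium into an overall match probability; the allele frequencies are invented placeholders, not the published South African data.

```python
# Hypothetical sketch: probability of identity (random match probability) per locus,
# PI = sum_i p_i^4 + sum_{i<j} (2 p_i p_j)^2, multiplied across independent loci.
# All allele frequencies below are illustrative assumptions.
from itertools import combinations

def locus_pi(freqs):
    """Probability that two unrelated individuals share a genotype at one locus (HWE)."""
    homozygotes = sum(p**4 for p in freqs)
    heterozygotes = sum((2 * p * q) ** 2 for p, q in combinations(freqs, 2))
    return homozygotes + heterozygotes

loci = {
    "locus_A": [0.30, 0.25, 0.20, 0.15, 0.10],
    "locus_B": [0.40, 0.35, 0.25],
    "locus_C": [0.20, 0.20, 0.20, 0.20, 0.10, 0.10],
}

combined = 1.0
for name, freqs in loci.items():
    pi = locus_pi(freqs)
    combined *= pi
    print(f"{name}: per-locus PI = {pi:.4f}")

print(f"combined probability of identity = {combined:.2e} (about 1 in {1 / combined:,.0f})")
```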
Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
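The maximum-entropy argument can be checked numerically: among nonnegative densities with the same mean, the exponential has the largest differential entropy. The comparison below uses illustrative distributions and an arbitrary mean, not the bed load data.

```python
# Hypothetical numerical check: the exponential is the maximum-entropy density on
# [0, inf) for a fixed mean, so equal-mean alternatives have lower entropy.
import numpy as np
from scipy import stats

mean = 2.0                              # assumed mean (arbitrary units)
v = np.linspace(1e-6, 80.0, 400_000)    # numerical support
dv = v[1] - v[0]

def entropy(pdf_vals):
    p = np.clip(pdf_vals, 1e-300, None)
    return float(-(p * np.log(p)).sum() * dv)

candidates = {
    "exponential (maxent)": stats.expon(scale=mean),
    "gamma, shape 2":       stats.gamma(a=2.0, scale=mean / 2.0),
    "half-normal":          stats.halfnorm(scale=mean * np.sqrt(np.pi / 2.0)),
}

for name, dist in candidates.items():
    print(f"{name:22s} mean = {dist.mean():.3f}   entropy = {entropy(dist.pdf(v)):.4f}")
# The exponential's entropy, 1 + ln(mean) ~ 1.693 here, exceeds the other two.
```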
UXO Burial Prediction Fidelity: A Summary
2017-07-01
... equilibrium. Any complete picture of munition evolution in sediment would need to account for these effects. More relevant to the present topic: these ... add uncertainty to predictions of munition fate, and assessments of risk probabilities would need to account for the statistical distribution of ...
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
Monte Carlo calculations of diatomic molecule gas flows including rotational mode excitation
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.; Itikawa, Y.
1976-01-01
The direct simulation Monte Carlo method was used to solve the Boltzmann equation for flows of an internally excited nonequilibrium gas, namely, of rotationally excited homonuclear diatomic nitrogen. The semi-classical transition probability model of Itikawa was investigated for its ability to simulate flow fields far from equilibrium. The behavior of diatomic nitrogen was examined for several different nonequilibrium initial states that are subjected to uniform mean flow without boundary interactions. A sample of 1000 model molecules was observed as the gas relaxed to a steady state starting from three specified initial states. The initial states considered are: (1) complete equilibrium, (2) nonequilibrium, equipartition (all rotational energy states are assigned the mean energy level obtained at equilibrium with a Boltzmann distribution at the translational temperature), and (3) nonequipartition (the mean rotational energy is different from the equilibrium mean value with respect to the translational energy states). In all cases investigated the present model satisfactorily simulated the principal features of the relaxation effects in nonequilibrium flow of diatomic molecules.
Distribution of injected power fluctuations in electroconvection.
Tóth-Katona, Tibor; Gleeson, J T
2003-12-31
We report on the distribution spectra of the fluctuations in the amount of power injected into a liquid crystal undergoing electroconvective flow. The probability distribution functions (PDFs) of the fluctuations as well as the magnitude of the fluctuations have been determined over a wide range of imposed stress for both unconfined and confined flow geometries. These spectra are compared to those found in other systems held far from equilibrium, and we find that under certain conditions we obtain the universal PDF form reported in [Phys. Rev. Lett. 84, 3744 (2000)]. Moreover, the PDF approaches this universal form via an interesting mechanism whereby the distribution's negative tail evolves towards that form in a different manner than the positive tail.
Colloquium: Statistical mechanics of money, wealth, and income
NASA Astrophysics Data System (ADS)
Yakovenko, Victor M.; Rosser, J. Barkley, Jr.
2009-10-01
This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.
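A minimal version of the pairwise exchange models reviewed here already produces the exponential ("thermal") bulk of the distribution; the exchange rule, agent count, and step count below are illustrative, and the power-law upper class requires ingredients this sketch omits.

```python
# Hypothetical sketch of a conservative money-exchange model: random pairs exchange a
# small random amount, balances stay nonnegative, and the distribution relaxes toward
# the exponential Boltzmann-Gibbs form P(m) ~ exp(-m/T) with T = average money.
import numpy as np

rng = np.random.default_rng(1)
n_agents, m0, n_steps = 1_000, 100.0, 500_000
money = np.full(n_agents, m0)

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    dm = rng.uniform(0.0, 10.0)              # amount the payer i tries to send to j
    if i != j and money[i] >= dm:
        money[i] -= dm
        money[j] += dm

T = money.mean()                             # "temperature" = average money (conserved)
hist, edges = np.histogram(money, bins=25, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for m, p in zip(centers[:8], hist[:8]):
    print(f"m = {m:7.1f}   simulated P(m) = {p:.5f}   exp(-m/T)/T = {np.exp(-m / T) / T:.5f}")
```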
Excited atoms in the free-burning Ar arc: treatment of the resonance radiation
NASA Astrophysics Data System (ADS)
Golubovskii, Yu; Kalanov, D.; Gortschakow, S.; Baeva, M.; Uhrlandt, D.
2016-11-01
The collisional-radiative model with an emphasis on the accurate treatment of the resonance radiation transport is developed and applied to the free-burning Ar arc plasma. This model allows for analysis of the influence of resonance radiation on the spatial density profiles of the atoms in different excited states. The comparison of the radial density profiles obtained using an effective transition probability approximation with the results of the accurate solution demonstrates the distinct impact of transport on the profiles and absolute densities of the excited atoms, especially in the arc fringes. The departures from the Saha-Boltzmann equilibrium distributions, caused by different radiative transitions, are analyzed. For the case of the DC arc, the local thermodynamic equilibrium (LTE) state holds close to the arc axis, while strong deviations from the equilibrium state on the periphery occur. In the intermediate radial positions the conditions of partial LTE are fulfilled.
Nonequilibrium Entropy in a Shock
Margolin, Len G.
2017-07-19
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.
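For reference, the kinetic-theory entropy evaluated in this paper has the standard Boltzmann form below; the Maxwellian reduction quoted here is textbook material rather than the paper's Navier–Stokes or Mott–Smith PDFs.

```latex
% Boltzmann entropy per unit volume for a one-particle velocity distribution f(v),
% and its value for the local Maxwellian (equilibrium) distribution.
\[
  S[f] = -k_B \int f(\mathbf{v})\,\ln f(\mathbf{v})\, d^3v, \qquad
  f_{\mathrm{eq}}(\mathbf{v}) = n\left(\frac{m}{2\pi k_B T}\right)^{3/2}
     \exp\!\left(-\frac{m(\mathbf{v}-\mathbf{u})^2}{2 k_B T}\right),
\]
\[
  S[f_{\mathrm{eq}}] = n k_B\left[\tfrac{3}{2}\,
     \ln\!\left(\frac{2\pi k_B T}{m}\right) - \ln n + \tfrac{3}{2}\right].
\]
```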
Electron-phonon relaxation and excited electron distribution in gallium nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukov, V. P.; Donostia International Physics Center; Tyuterev, V. G., E-mail: valtyut00@mail.ru
2016-08-28
We develop a theory of energy relaxation in semiconductors and insulators highly excited by the long-acting external irradiation. We derive the equation for the non-equilibrium distribution function of excited electrons. The solution for this function breaks up into the sum of two contributions. The low-energy contribution is concentrated in a narrow range near the bottom of the conduction band. It has the typical form of a Fermi distribution with an effective temperature and chemical potential. The effective temperature and chemical potential in this low-energy term are determined by the intensity of carriers' generation, the speed of electron-phonon relaxation, rates of inter-band recombination, and electron capture on the defects. In addition, there is a substantial high-energy correction. This high-energy “tail” largely covers the conduction band. The shape of the high-energy “tail” strongly depends on the rate of electron-phonon relaxation but does not depend on the rates of recombination and trapping. We apply the theory to the calculation of a non-equilibrium distribution of electrons in an irradiated GaN. Probabilities of optical excitations from the valence to conduction band and electron-phonon coupling probabilities in GaN were calculated by the density functional perturbation theory. Our calculation of both parts of distribution function in gallium nitride shows that when the speed of the electron-phonon scattering is comparable with the rate of recombination and trapping then the contribution of the non-Fermi “tail” is comparable with that of the low-energy Fermi-like component. So the high-energy contribution can essentially affect the charge transport in the irradiated and highly doped semiconductors.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
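The non-ergodic occupation statistics can be illustrated with a two-site toy walk whose sojourn times have a diverging mean; the Pareto exponent, run length, and walker count below are illustrative assumptions, not the paper's model parameters.

```python
# Hypothetical sketch: a particle alternates between two sites with Pareto waiting
# times of exponent alpha < 1 (infinite mean).  The fraction of a long run spent on
# site A is then broadly distributed, piling up near 0 and 1 (an arcsine-law-like
# U shape) instead of concentrating on a single ergodic value.
import numpy as np

rng = np.random.default_rng(2)
alpha, t_total, n_walkers = 0.5, 1e4, 2_000

def occupation_fraction():
    t, t_on_A, site = 0.0, 0.0, 0
    while t < t_total:
        wait = rng.pareto(alpha) + 1.0      # heavy-tailed sojourn time, minimum 1
        if site == 0:
            t_on_A += min(wait, t_total - t)
        t += wait
        site = 1 - site                     # hop to the other site
    return t_on_A / t_total

fractions = np.array([occupation_fraction() for _ in range(n_walkers)])
hist, edges = np.histogram(fractions, bins=10, range=(0.0, 1.0), density=True)
for lo, hi, p in zip(edges[:-1], edges[1:], hist):
    print(f"occupation fraction in [{lo:.1f}, {hi:.1f}):  density {p:.2f}")
# The first and last bins dominate: single trajectories spend nearly all their time
# on one site, which is the ergodicity breaking described in the abstract.
```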
Refractory pulse counting processes in stochastic neural computers.
McNeill, Dean K; Card, Howard C
2005-03-01
This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise in either the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
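The dead-time effect can be reproduced with a few lines: a Bernoulli pulse train recorded by a counter that is blind for a fixed number of cycles after each registered pulse. The pulse probability and dead time below are illustrative assumptions; the closed-form line is the standard non-paralyzable dead-time estimate, not a result from the letter.

```python
# Hypothetical sketch: Bernoulli pulses with per-cycle probability p, recorded by a
# counter with a refractory window of `dead` cycles after each registered pulse.
import numpy as np

rng = np.random.default_rng(3)
p, dead, n_cycles = 0.3, 4, 200_000

pulses = rng.random(n_cycles) < p            # ideal Bernoulli pulse train
recorded, blind_until = 0, -1
for t in np.flatnonzero(pulses):
    if t > blind_until:
        recorded += 1
        blind_until = t + dead               # ignore pulses in the next `dead` cycles

print(f"ideal pulse rate     : {pulses.mean():.4f}")
print(f"recorded pulse rate  : {recorded / n_cycles:.4f}")
print(f"p / (1 + p * dead)   : {p / (1 + p * dead):.4f}   (non-paralyzable estimate)")
```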
NASA Technical Reports Server (NTRS)
Grover, Maninder S.; Schwartzentruber, Thomas E.; Jaffe, Richard L.
2017-01-01
In this work we present a molecular level study of N2+N collisions, focusing on excitation of internal energy modes and non-equilibrium dissociation. The computation technique used here is the direct molecular simulation (DMS) method and the molecular interactions have been modeled using an ab-initio potential energy surface (PES) developed at NASA's Ames Research Center. We carried out vibrational excitation calculations between 5000 K and 30,000 K and found that the characteristic vibrational excitation time for the N + N2 process was an order of magnitude lower than that predicted by the Millikan and White correlation. It is observed that during vibrational excitation the high energy tail of the vibrational energy distribution gets over populated first and the lower energy levels get populated as the system evolves. It is found that the non-equilibrium dissociation rate coefficients for the N + N2 process are larger than those for the N2 + N2 process. This is attributed to the non-equilibrium vibrational energy distributions for the N + N2 process being less depleted than that for the N2 + N2 process. For an isothermal simulation we find that the probability of dissociation goes as 1/T_tr for molecules with internal energy (ε_int) less than approximately 9.9 eV, while for molecules with ε_int greater than 9.9 eV the dissociation probability was weakly dependent on translational temperature of the system. We compared non-equilibrium dissociation rate coefficients and characteristic vibrational excitation times obtained by using the ab-initio PES developed at NASA's Ames Research Center to those obtained by using an ab-initio PES developed at the University of Minnesota. Good agreement was found between the macroscopic properties and molecular level description of the system obtained by using the two PESs.
Maximum caliber inference of nonequilibrium processes
NASA Astrophysics Data System (ADS)
Otten, Moritz; Stock, Gerhard
2010-07-01
Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
Activated recombinative desorption: A potential component in mechanisms of spacecraft glow
NASA Technical Reports Server (NTRS)
Cross, J. B.
1985-01-01
The concept of activated recombination of atomic species on surfaces can explain the production of vibrationally and translationally excited desorbed molecular species. Equilibrium statistical mechanics predicts that the molecular quantum state distributions of desorbing molecules are a function of surface temperature only when the adsorption probability is unity and independent of initial collision conditions. In most cases, the adsorption probability is dependent upon initial conditions such as collision energy or internal quantum state distribution of impinging molecules. From detailed balance, such dynamical behavior is reflected in the internal quantum state distribution of the desorbing molecule. This concept, activated recombinative desorption, may offer a common thread in proposed mechanisms of spacecraft glow. Using molecular beam techniques and equipment available at Los Alamos, which includes a high-translational-energy O-atom beam source, mass spectrometric detection of desorbed species, chemiluminescence/laser induced fluorescence detection of electronic and vibrationally excited reaction products, and Auger detection of surface adsorbed reaction products, a fundamental study of the gas surface chemistry underlying the glow process is proposed.
Statistical thermodynamics of clustered populations.
Matsoukas, Themis
2014-08-01
We present a thermodynamic theory for a generic population of M individuals distributed into N groups (clusters). We construct the ensemble of all distributions with fixed M and N, introduce a selection functional that embodies the physics that governs the population, and obtain the distribution that emerges in the scaling limit as the most probable among all distributions consistent with the given physics. We develop the thermodynamics of the ensemble and establish a rigorous mapping to regular thermodynamics. We treat the emergence of a so-called giant component as a formal phase transition and show that the criteria for its emergence are entirely analogous to the equilibrium conditions in molecular systems. We demonstrate the theory by an analytic model and confirm the predictions by Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia
2018-07-01
Continuous noise caused by mutation is widely present in evolutionary systems. Considering the noise effects and under the optional participation mechanism, a stochastic model for the evolutionary public goods game in a finite-size population is established. The evolutionary process of strategies in the population is described as a multidimensional ergodic and continuous time Markov process. The stochastic stable state of the system is analyzed by the limit distribution of the stochastic process. By numerical experiments, the influences of the fixed income coefficient for non-participants and the investment income coefficient of the public goods on the stochastic stable equilibrium of the system are analyzed. Through the numerical calculations, we find that the optional participation mechanism can change the evolutionary dynamics and the equilibrium of the public goods game, and there is a range of parameters which can effectively promote the evolution of cooperation. Further, we obtain the accurate quantitative relationship between the parameters and the probabilities for the system to choose different stable equilibria, which can be used to realize the control of cooperation.
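The "limit distribution" step referred to above amounts to computing the stationary distribution of an ergodic continuous-time Markov chain. The sketch below does this for a toy three-strategy generator; the rate matrix is invented for illustration and is not the paper's public-goods model.

```python
# Hypothetical sketch: stationary distribution pi of a continuous-time Markov chain,
# i.e. the normalized left null vector of the generator Q (pi Q = 0, sum(pi) = 1).
import numpy as np

# Toy 3-state generator: off-diagonal entries are rates, each row sums to zero.
Q = np.array([
    [-0.5,  0.3,  0.2],
    [ 0.1, -0.4,  0.3],
    [ 0.2,  0.2, -0.4],
])

# Solve pi Q = 0 together with the normalization as a least-squares problem.
A = np.vstack([Q.T, np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print("stationary distribution:", np.round(pi, 4))
print("check pi @ Q ~ 0       :", np.round(pi @ Q, 10))
```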
NASA Astrophysics Data System (ADS)
Donkov, Sava; Stefanov, Ivan Z.
2018-03-01
We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the cube root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.
Richard, David; Speck, Thomas
2018-03-28
We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory although a quantitative fit requires a rather large effective interfacial tension.
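For the single-particle double-well test case mentioned here, the splitting probability (committor) of an overdamped particle has a closed-form quadrature, which the sketch below evaluates for an assumed quartic potential and temperature; it is a check of the idea, not the authors' hard-sphere analysis.

```python
# Hypothetical sketch: 1D overdamped splitting probability between boundaries a and b,
#   p_B(x) = int_a^x exp(beta*U(y)) dy / int_a^b exp(beta*U(y)) dy.
import numpy as np

beta = 2.0                                   # assumed inverse temperature
U = lambda x: (x**2 - 1.0) ** 2              # double well with minima at x = -1, +1
a, b = -1.0, 1.0                             # "reactant" and "product" boundaries
x = np.linspace(a, b, 4001)

w = np.exp(beta * U(x))                      # inverse Boltzmann weight
cum = np.cumsum(w)
p_B = (cum - cum[0]) / (cum[-1] - cum[0])    # splitting probability toward b

for xq in (-0.8, -0.4, 0.0, 0.4, 0.8):
    print(f"p_B({xq:+.1f}) = {np.interp(xq, x, p_B):.3f}")
# p_B rises from 0 at the left well to 1 at the right well, crossing 1/2 at the
# barrier top (x = 0 for this symmetric potential), which locates the transition state.
```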
On defense strategies for system of systems using aggregated correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.
2017-04-01
We consider a System of Systems (SoS) wherein each system S_i, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of the SoS given the failure of an individual system. We formulate the problem of ensuring the survival of the SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
Work fluctuations for Bose particles in grand canonical initial states.
Yi, Juyeon; Kim, Yong Woon; Talkner, Peter
2012-05-01
We consider bosons in a harmonic trap and investigate the fluctuations of the work performed by an adiabatic change of the trap curvature. Depending on the reservoir conditions such as temperature and chemical potential that provide the initial equilibrium state, the exponentiated work average (EWA) defined in the context of the Crooks relation and the Jarzynski equality may diverge if the trap becomes wider. We investigate how the probability distribution function (PDF) of the work signals this divergence. It is shown that at low temperatures the PDF is highly asymmetric with a steep fall-off at one side and an exponential tail at the other side. For high temperatures it is closer to a symmetric distribution approaching a Gaussian form. These properties of the work PDF are discussed in relation to the convergence of the EWA and to the existence of the hypothetical equilibrium state to which those thermodynamic potential changes refer that enter both the Crooks relation and the Jarzynski equality.
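The convergence issue with the exponentiated work average can be seen in a toy work distribution with an exponential lower tail; the distribution and parameters below are illustrative and are not the paper's bosonic trap model.

```python
# Hypothetical sketch: W = mu - X with X ~ Exponential(scale s), so the EWA
# <exp(-beta W)> = exp(-beta mu) / (1 - beta s) is finite only for beta*s < 1.
import numpy as np

rng = np.random.default_rng(4)
beta, mu, n = 1.0, 2.0, 1_000_000

for s in (0.5, 0.9, 1.1):
    W = mu - rng.exponential(scale=s, size=n)
    ewa = np.mean(np.exp(-beta * W))
    exact = np.exp(-beta * mu) / (1 - beta * s) if beta * s < 1 else np.inf
    print(f"tail scale s = {s}:  sample EWA = {ewa:10.4f}   exact = {exact}")
# For s = 1.1 the exact average diverges: the sample estimate is dominated by a few
# rare strongly negative work values and keeps growing with the sample size.
```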
NASA Astrophysics Data System (ADS)
Gyenis, Balázs
2017-02-01
We investigate Maxwell's attempt to justify the mathematical assumptions behind his 1860 Proposition IV according to which the velocity components of colliding particles follow the normal distribution. Contrary to the commonly held view we find that his molecular collision model plays a crucial role in reaching this conclusion, and that his model assumptions also permit inference to equalization of mean kinetic energies (temperatures), which is what he intended to prove in his discredited and widely ignored Proposition VI. If we take a charitable reading of his own proof of Proposition VI then it was Maxwell, and not Boltzmann, who gave the first proof of a tendency towards equilibrium, a sort of H-theorem. We also call attention to a potential conflation of notions of probabilistic and value independence in relevant prior works of his contemporaries and of his own, and argue that this conflation might have impacted his adoption of the suspect independence assumption of Proposition IV.
Open Markov Processes and Reaction Networks
NASA Astrophysics Data System (ADS)
Swistock Pollard, Blake Stephen
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
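A toy three-state version of the clamped-boundary construction makes the non-equilibrium steady state and its through-current concrete; the rates and boundary probabilities below are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch of an open Markov process: a chain 0 <-> 1 <-> 2 whose boundary
# states 0 and 2 are held at fixed probabilities by external couplings.  The interior
# state settles where its net inflow vanishes, and a nonzero probability current then
# flows through the chain (a non-equilibrium steady state).
r01, r10 = 1.0, 2.0        # rates 0 -> 1 and 1 -> 0
r12, r21 = 3.0, 0.5        # rates 1 -> 2 and 2 -> 1
p0, p2 = 0.6, 0.1          # clamped boundary probabilities

p1 = (r01 * p0 + r21 * p2) / (r10 + r12)   # interior steady state: inflow = outflow

J_01 = r01 * p0 - r10 * p1                 # net probability current 0 -> 1
J_12 = r12 * p1 - r21 * p2                 # net probability current 1 -> 2

print(f"interior steady-state probability p1 = {p1:.4f}")
print(f"current 0 -> 1 = {J_01:.4f},  current 1 -> 2 = {J_12:.4f}  (equal in steady state)")
```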
Boron Carbide Filled Neutron Shielding Textile Polymers
NASA Astrophysics Data System (ADS)
Manzlak, Derrick Anthony
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Parallel Unstructured Grid Generation for Complex Real-World Aerodynamic Simulations
NASA Astrophysics Data System (ADS)
Zagaris, George
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
NASA Astrophysics Data System (ADS)
Schiavone, Clinton Cleveland
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Processing and Conversion of Algae to Bioethanol
NASA Astrophysics Data System (ADS)
Kampfe, Sara Katherine
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
The Development of the CALIPSO LiDAR Simulator
NASA Astrophysics Data System (ADS)
Powell, Kathleen A.
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Exploring a Novel Approach to Technical Nuclear Forensics Utilizing Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Peeke, Richard Scot
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
NASA Astrophysics Data System (ADS)
Scully, Malcolm E.
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Production of Cyclohexylene-Containing Diamines in Pursuit of Novel Radiation Shielding Materials
NASA Astrophysics Data System (ADS)
Bate, Norah G.
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Development of Boron-Containing Polyimide Materials and Poly(arylene Ether)s for Radiation Shielding
NASA Astrophysics Data System (ADS)
Collins, Brittani May
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Magnetization Dynamics and Anisotropy in Ferromagnetic/Antiferromagnetic Ni/NiO Bilayers
NASA Astrophysics Data System (ADS)
Petersen, Andreas
NASA Technical Reports Server (NTRS)
Englert, G. W.
1971-01-01
A model of the random walk is formulated to allow a simple computing procedure to replace the difficult problem of solving the Fokker-Planck equation. The step sizes and the probabilities of taking steps in the various directions are expressed in terms of Fokker-Planck coefficients. Application is made to many-particle systems with Coulomb interactions. The relaxation of a highly peaked velocity distribution of particles to equilibrium conditions is illustrated.
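As a rough illustration of this idea (an editor's sketch, not the procedure of the report itself), a one-dimensional random walk whose step probabilities are assembled from assumed drift and diffusion coefficients A(x) and D(x) reproduces the Fokker-Planck equation ∂P/∂t = -∂(AP)/∂x + ∂²(DP)/∂x² to leading order; all function names and parameter values below are hypothetical.

```python
import numpy as np

def fokker_planck_walk(A, D, x0, dx, dt, n_steps, n_walkers, seed=0):
    """Random-walk surrogate for dP/dt = -d(A P)/dx + d^2(D P)/dx^2.

    One standard choice of step probabilities:
      p_right = D*dt/dx**2 + A*dt/(2*dx)
      p_left  = D*dt/dx**2 - A*dt/(2*dx)
    so that the mean step is A*dt and the step variance is about 2*D*dt.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_walkers, x0, dtype=float)
    for _ in range(n_steps):
        a, d = A(x), D(x)
        p_right = d * dt / dx**2 + a * dt / (2 * dx)
        p_left = d * dt / dx**2 - a * dt / (2 * dx)
        u = rng.random(n_walkers)
        x += dx * ((u < p_right).astype(float) - (u > 1.0 - p_left).astype(float))
    return x

# Hypothetical example: relaxation of walkers started at x=3 in a linear drift field
# (Ornstein-Uhlenbeck-like), approaching an equilibrium with mean 0 and variance D.
samples = fokker_planck_walk(A=lambda x: -x, D=lambda x: np.full_like(x, 0.5),
                             x0=3.0, dx=0.05, dt=1e-3, n_steps=5000, n_walkers=10000)
print(samples.mean(), samples.var())
```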
Trouble with diffusion: Reassessing hillslope erosion laws with a particle-based model
NASA Astrophysics Data System (ADS)
Tucker, Gregory E.; Bradley, D. Nathan
2010-03-01
Many geomorphic systems involve a broad distribution of grain motion length scales, ranging from a few particle diameters to the length of an entire hillslope or stream. Studies of analogous physical systems have revealed that such broad motion distributions can have a significant impact on macroscale dynamics and can violate the assumptions behind standard, local gradient flux laws. Here, a simple particle-based model of sediment transport on a hillslope is used to study the relationship between grain motion statistics and macroscopic landform evolution. Surface grains are dislodged by random disturbance events with probabilities and distances that depend on local microtopography. Despite its simplicity, the particle model reproduces a surprisingly broad range of slope forms, including asymmetric degrading scarps and cinder cone profiles. At low slope angles the dynamics are diffusion-like, with a short-range, thin-tailed hop length distribution, a parabolic, convex-upward equilibrium slope form, and a linear relationship between transport rate and gradient. As the slope angle steepens, the characteristic grain motion length scale begins to approach the length of the slope, leading to planar equilibrium forms that show a strongly nonlinear correlation between transport rate and gradient. These high-probability, long-distance motions violate the locality assumption embedded in many common gradient-based geomorphic transport laws. The example of a degrading scarp illustrates the potential for grain motion dynamics to vary in space and time as topography evolves. This characteristic renders models based on independent, stationary statistics inapplicable. An accompanying analytical framework based on treating grain motion as a survival process is briefly outlined.
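For context, the local gradient-based transport laws that such particle models are contrasted with are usually written as follows (standard forms from the hillslope-evolution literature, quoted here for orientation rather than from the abstract itself); z is surface elevation, q_s the sediment flux, K a transport coefficient, and S_c a critical slope.

```latex
% Linear (diffusion-like) hillslope transport law:
q_s = -K\,\frac{\partial z}{\partial x}
% Nonlinear law in which the flux diverges as the gradient approaches a critical slope S_c:
q_s = \frac{-K\,\partial z/\partial x}{\,1-\left(|\partial z/\partial x|/S_c\right)^{2}\,}
% Mass conservation then gives the elevation evolution equation:
\frac{\partial z}{\partial t} = -\frac{\partial q_s}{\partial x}
```

Both laws are local in the gradient, which is precisely the assumption that broad hop-length distributions can violate.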
Generating Discrete Power-Law Distributions from a Death- Multiple Immigration Population Process
NASA Astrophysics Data System (ADS)
Matthews, J. O.; Jakeman, E.; Hopcraft, K. I.
2003-04-01
We consider the evolution of a simple population process governed by deaths and multiple immigrations that arrive with rates particular to their order. For a particular choice of rates, the equilibrium solution has a discrete power-law form. The model is a generalization of a process investigated previously in which immigrants arrived in pairs [1]. The general properties of this model are discussed in a companion paper. The population is initiated with precisely M individuals present and evolves to an equilibrium distribution with a power-law tail. However, the power-law tails of the equilibrium distribution are established immediately, so that moments and correlation properties of the population are undefined for any non-zero time. The technique we develop to characterize this process utilizes external monitoring that counts the emigrants leaving the population in specified time intervals. This counting distribution also possesses a power-law tail for all sampling times, and the resulting time series exhibits two features worthy of note: first, a large variation in the strength of the signal, reflecting the power-law PDF; and secondly, intermittency of the emissions. We show that counting with a detector of finite dynamic range naturally regularizes the fluctuations, in effect `clipping' the events. All previously undefined characteristics, such as the mean, the autocorrelation, and the distributions of the time to the first event and of the time between events, are well defined and derived. These properties, although obtained by discarding much data, nevertheless possess embedded power-law regimes that characterize the population in a way that is analogous to the box-averaging determination of a fractal dimension.
NASA Astrophysics Data System (ADS)
Han, Fei; Cheng, Lin
2017-04-01
The tradable credit scheme (TCS) matches congestion pricing in its ability to mitigate congestion while outperforming it in terms of social equity and revenue neutrality. This article investigates the effectiveness and efficiency of TCS for enhancing transportation network capacity in a stochastic user equilibrium (SUE) modelling framework. First, the SUE and credit market equilibrium conditions are presented; then an equivalent general SUE model with TCS is established by virtue of two constructed functions, which can be further simplified under a specific probability distribution. To enhance the network capacity by utilizing TCS, a bi-level mathematical programming model is established for the optimal TCS design problem, with the upper-level objective maximizing network reserve capacity and the lower level given by the proposed SUE model. A heuristic sensitivity-analysis-based algorithm is developed to solve the bi-level model. Three numerical examples are provided to illustrate the improvement effect of TCS on the network in different scenarios.
Minimum relative entropy distributions with a large mean are Gaussian
NASA Astrophysics Data System (ADS)
Smerlak, Matteo
2016-12-01
Entropy optimization principles are versatile tools with wide-ranging applications from statistical physics to engineering to ecology. Here we consider the following constrained problem: Given a prior probability distribution q, find the posterior distribution p minimizing the relative entropy (also known as the Kullback-Leibler divergence) with respect to q under the constraint that mean(p) is fixed and large. We show that solutions to this problem are approximately Gaussian. We discuss two applications of this result. In the context of dissipative dynamics, the equilibrium distribution of a Brownian particle confined in a strong external field is independent of the shape of the confining potential. We also derive an H-type theorem for evolutionary dynamics: The entropy of the (standardized) distribution of fitness of a population evolving under natural selection is eventually increasing in time.
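The constrained minimization has the textbook exponential-tilting form obtained from a Lagrange multiplier for the mean (a standard intermediate step, stated here for completeness rather than quoted from the paper):

```latex
\min_{p}\; D(p\,\|\,q)=\int p(x)\,\ln\frac{p(x)}{q(x)}\,dx
\quad\text{s.t.}\quad \int x\,p(x)\,dx=\mu,\;\; \int p(x)\,dx=1
\;\;\Longrightarrow\;\;
p(x)=\frac{q(x)\,e^{\lambda x}}{\int q(y)\,e^{\lambda y}\,dy}
```

with λ fixed by the mean constraint; the result of the paper is that when μ is large this tilted distribution becomes approximately Gaussian.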
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
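For readers unfamiliar with the s-ensemble formalism alluded to here, the biased trajectory ensemble is conventionally defined as follows (standard large-deviation notation, not reproduced from the paper), where K[x] is a trajectory observable such as the total number of events up to time t:

```latex
P_s[x] = \frac{P[x]\,e^{-s K[x]}}{Z(s,t)},
\qquad
Z(s,t)=\big\langle e^{-s K}\big\rangle,
\qquad
\theta(s)=\lim_{t\to\infty}\frac{1}{t}\ln Z(s,t)
```

The scaled cumulant generating function θ(s) is related by Legendre transform to the large-deviation rate function for K/t; umbrella sampling in trajectory space targets the same rate functions by biasing and reweighting trajectories.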
NASA Astrophysics Data System (ADS)
Dupoyet, B.; Fiebig, H. R.; Musgrove, D. P.
2010-01-01
We report on initial studies of a quantum field theory defined on a lattice with multi-ladder geometry and the dilation group as a local gauge symmetry. The model is relevant in the cross-disciplinary area of econophysics. A corresponding proposal by Ilinski aimed at gauge modeling in non-equilibrium pricing is implemented in a numerical simulation. We arrive at a probability distribution of relative gains which matches the high frequency historical data of the NASDAQ stock exchange index.
Growth of equilibrium structures built from a large number of distinct component types.
Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen
2014-09-14
We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attack and reinforce individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.
Temperature profile and equipartition law in a Langevin harmonic chain
NASA Astrophysics Data System (ADS)
Kim, Sangrak
2017-09-01
The temperature profile in a Langevin harmonic chain is explicitly derived and the validity of the equipartition law is checked. First, we point out that the temperature profile reported in previous studies does not agree with the equipartition law: in thermal equilibrium it deviates from a uniform temperature distribution, in conflict with the equipartition law, particularly at the ends of the chain. The matrix connecting the temperatures of the heat reservoirs to the temperatures of the harmonic oscillators turns out to be a probability matrix. By explicitly calculating the power spectrum of this probability matrix, we show that the discrepancy comes from neglecting the contribution at higher frequencies ω, which correspond to decaying modes and to imaginary values of the wave number q.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
Applications of finite-size scaling for atomic and non-equilibrium systems
NASA Astrophysics Data System (ADS)
Antillon, Edwin A.
We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the energy dependence on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, Density Functional Theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the FSS formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large-basis limit. Results prove to be in good agreement with previous Slater-basis set calculations and demonstrate that this combined approach provides a promising first-principles approach to describe quantum phase transitions for materials and extended systems. In the second part we look at a non-equilibrium one-dimensional model known as the raise and peel model, describing a surface which grows locally and has non-local desorption. For specific values of the adsorption (u_a) and desorption (u_d) rates the model shows interesting features. At u_a = u_d, the model is described by a conformal field theory (with conformal charge c = 0) and its stationary probability can be mapped to the ground state of a quantum chain and can also be related to a two-dimensional statistical model. For u_a ≥ u_d, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality for the cases with and without the wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained by using exact diagonalization of small lattices and Monte Carlo simulations.
Functional response and population dynamics for fighting predator, based on activity distribution.
Garay, József; Varga, Zoltán; Gámez, Manuel; Cabello, Tomás
2015-03-07
The classical Holling type II functional response, describing the per capita predation as a function of prey density, was modified by Beddington and DeAngelis to include interference of predators, which increases with predator density and decreases the number of killed prey. In the present paper we further generalize the Beddington-DeAngelis functional response, considering that all predator activities (searching for and handling prey, fighting and recovery) have a time duration and that the probabilities of predator activities depend on the encounter probabilities, and hence on prey and predator abundance, too. Under these conditions, the aim of the study is to introduce a functional response for the fighting predator and to analyse the corresponding dynamics, when predator-predator-prey encounters also occur. From this general approach, the Holling type functional responses can also be obtained as particular cases. In terms of the activity distribution, we give biologically interpretable sufficient conditions for stable coexistence. We consider two-individual (predator-prey) and three-individual (predator-predator-prey) encounters. In the three-individual encounter model there is a relatively higher fighting rate and a lower killing rate. Using numerical simulation, we found, surprisingly, that when the intrinsic prey growth rate and the conversion rate are small enough, the equilibrium predator abundance is higher in the three-individual encounter case. The above means that, when the equilibrium abundance of the predator is small, coexistence appears first in the three-individual encounter model.
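For reference, the classical functional responses that the paper generalizes are usually written as follows (standard textbook forms, with a the attack rate, h the handling time, c an interference coefficient, N the prey density, and P the predator density):

```latex
% Holling type II:
f(N) = \frac{aN}{1 + ahN}
% Beddington-DeAngelis (predator interference enters the denominator):
f(N,P) = \frac{aN}{1 + ahN + cP}
```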
Theory and simulation of the time-dependent rate coefficients of diffusion-influenced reactions.
Zhou, H X; Szabo, A
1996-01-01
A general formalism is developed for calculating the time-dependent rate coefficient k(t) of an irreversible diffusion-influenced reaction. This formalism allows one to treat most factors that affect k(t), including rotational Brownian motion and conformational gating of reactant molecules and orientation constraint for product formation. At long times k(t) is shown to have the asymptotic expansion k(∞)[1 + k(∞)(πDt)^{-1/2}/(4πD) + ...], where D is the relative translational diffusion constant. An approximate analytical method for calculating k(t) is presented. This is based on the approximation that the probability density of the reactant pair in the reactive region keeps the equilibrium distribution but with a decreasing amplitude. The rate coefficient then is determined by the Green function in the absence of chemical reaction. Within the framework of this approximation, two general relations are obtained. The first relation allows the rate coefficient for an arbitrary amplitude of the reactivity to be found if the rate coefficient for one amplitude of the reactivity is known. The second relation allows the rate coefficient in the presence of conformational gating to be found from that in the absence of conformational gating. The ratio k(t)/k(0) is shown to be the survival probability of the reactant pair at time t starting from an initial distribution that is localized in the reactive region. This relation forms the basis of the calculation of k(t) through Brownian dynamics simulations. Two simulation procedures involving the propagation of nonreactive trajectories initiated only from the reactive region are described and illustrated on a model system. Both analytical and simulation results demonstrate the accuracy of the equilibrium-distribution approximation method.
Rényi entropy of the totally asymmetric exclusion process
NASA Astrophysics Data System (ADS)
Wood, Anthony J.; Blythe, Richard A.; Evans, Martin R.
2017-11-01
The Rényi entropy is a generalisation of the Shannon entropy that is sensitive to the fine details of a probability distribution. We present results for the Rényi entropy of the totally asymmetric exclusion process (TASEP). We calculate explicitly an entropy whereby the squares of configuration probabilities are summed, using the matrix product formalism to map the problem to one involving a six direction lattice walk in the upper quarter plane. We derive the generating function across the whole phase diagram, using an obstinate kernel method. This gives the leading behaviour of the Rényi entropy and corrections in all phases of the TASEP. The leading behaviour is given by the result for a Bernoulli measure and we conjecture that this holds for all Rényi entropies. Within the maximal current phase the correction to the leading behaviour is logarithmic in the system size. Finally, we remark upon a special property of equilibrium systems whereby discontinuities in the Rényi entropy arise away from phase transitions, which we refer to as secondary transitions. We find no such secondary transition for this nonequilibrium system, supporting the notion that these are specific to equilibrium cases.
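For concreteness, the Rényi entropy of order α of a configuration distribution {p_C} is defined by the standard formula below; the α = 2 case is the one computed explicitly in the paper, since it involves the sum of the squared configuration probabilities.

```latex
H_\alpha = \frac{1}{1-\alpha}\,\ln\sum_{C} p_C^{\,\alpha},
\qquad
H_2 = -\ln\sum_{C} p_C^{2},
\qquad
\lim_{\alpha\to 1} H_\alpha = -\sum_{C} p_C \ln p_C
```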
Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Errington, Jeffrey R.
2003-06-01
An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
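A minimal sketch of the bookkeeping behind transition-matrix Monte Carlo in the grand canonical ensemble, assuming the macrostate is the particle number N: every attempted insertion or deletion contributes its acceptance probability to a collection matrix, whether or not the move is accepted, and the density probability distribution is reconstructed from the estimated transition probabilities. The function names and the toy acceptance model are hypothetical, not the Lennard-Jones system of the paper.

```python
import numpy as np

def update_collection(C, N, N_new, acc):
    """Record an attempted N -> N_new move with acceptance probability acc."""
    C[N, N_new] += acc
    C[N, N] += 1.0 - acc

def ln_prob_from_collection(C):
    """Macrostate log-probabilities from detailed balance on estimated transition probabilities."""
    row_sums = C.sum(axis=1)
    lnPi = np.zeros(C.shape[0])
    for N in range(C.shape[0] - 1):
        if C[N, N + 1] > 0 and C[N + 1, N] > 0:
            p_up = C[N, N + 1] / row_sums[N]
            p_down = C[N + 1, N] / row_sums[N + 1]
            lnPi[N + 1] = lnPi[N] + np.log(p_up / p_down)
        else:
            lnPi[N + 1] = lnPi[N]  # no statistics yet for this pair of macrostates
    return lnPi

# Toy usage with a made-up Gaussian target distribution over N.
rng = np.random.default_rng(0)
Nmax = 50
C = np.zeros((Nmax + 1, Nmax + 1))
N = 25
for _ in range(200000):
    N_new = N + rng.choice([-1, 1])
    if 0 <= N_new <= Nmax:
        acc = min(1.0, np.exp(0.002 * (N - 30) ** 2 - 0.002 * (N_new - 30) ** 2))
        update_collection(C, N, N_new, acc)
        if rng.random() < acc:
            N = N_new
lnPi = ln_prob_from_collection(C)  # relative log-probabilities of each N
```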
Work statistics of charged noninteracting fermions in slowly changing magnetic fields.
Yi, Juyeon; Talkner, Peter
2011-04-01
We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β^2. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.
Quantity Competition in a Differentiated Duopoly
NASA Astrophysics Data System (ADS)
Ferreira, Fernanda A.; Ferreira, Flávio; Ferreira, Miguel; Pinto, Alberto A.
In this paper, we consider a Stackelberg duopoly competition with differentiated goods, linear and symmetric demand and with unknown costs. In our model, the two firms play a non-cooperative game with two stages: in the first stage, firm F1 chooses the quantity, q1, that it is going to produce; in the second stage, firm F2 observes the quantity q1 produced by firm F1 and chooses its own quantity q2. Firms choose their output levels in order to maximise their profits. We suppose that each firm has two different technologies, and uses one of them following a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We show that there is exactly one perfect Bayesian equilibrium for this game. We analyse the variations of the expected profits with the parameters of the model, namely with the parameters of the probability distributions, and with the parameters of the demand and differentiation.
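As background for the two-stage (backward-induction) logic, here is the textbook Stackelberg outcome with a homogeneous good, linear demand p = a - b(q1 + q2), and known constant marginal costs; this is the standard benchmark case only, not the differentiated-goods Bayesian model analysed in the paper, and all symbols are the usual textbook ones.

```python
def stackelberg_linear(a, b, c1, c2):
    """Backward induction for the classic Stackelberg duopoly.

    Follower's best response: q2(q1) = (a - c2 - b*q1) / (2*b)
    Substituting into the leader's profit and maximising gives
      q1* = (a - 2*c1 + c2) / (2*b)
    """
    q1 = (a - 2.0 * c1 + c2) / (2.0 * b)
    q2 = (a - c2 - b * q1) / (2.0 * b)
    p = a - b * (q1 + q2)
    return q1, q2, p, (p - c1) * q1, (p - c2) * q2

# Symmetric-cost example: with a=10, b=1, c1=c2=2 the leader produces twice the
# follower's quantity (q1=4, q2=2) and the market price is p=4.
print(stackelberg_linear(10.0, 1.0, 2.0, 2.0))
```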
Reaction-diffusion on the fully-connected lattice: A+A\\rightarrow A
NASA Astrophysics Data System (ADS)
Turban, Loïc; Fortin, Jean-Yves
2018-04-01
Diffusion-coagulation can be simply described by a dynamics in which particles perform a random walk on a lattice and coalesce with probability unity when meeting on the same site. Such processes display non-equilibrium properties with strong fluctuations in low dimensions. In this work we study this problem on the fully-connected lattice, an infinite-dimensional system in the thermodynamic limit, for which mean-field behaviour is expected. Exact expressions for the particle density distribution at a given time and for the survival time distribution for a given number of particles are obtained. In particular, we show that the time needed to reach a finite number of surviving particles (vanishing density in the scaling limit) displays strong fluctuations and extreme value statistics, characterized by a universal class of non-Gaussian distributions with singular behaviour.
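The mean-field behaviour expected on the fully-connected lattice can be anchored by the standard coagulation rate equation for the particle density ρ (a textbook reference point with an effective rate constant k, not a formula quoted from the paper):

```latex
\frac{d\rho}{dt} = -k\,\rho^{2}
\qquad\Longrightarrow\qquad
\rho(t) = \frac{\rho_{0}}{1 + k\,\rho_{0}\,t}
```

so the density decays as 1/t at long times, in contrast with the slower, fluctuation-dominated decay found in low dimensions.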
Nash Equilibrium of Social-Learning Agents in a Restless Multiarmed Bandit Game.
Nakayama, Kazuaki; Hisakado, Masato; Mori, Shintaro
2017-05-16
We study a simple model for social-learning agents in a restless multiarmed bandit (rMAB). The bandit has one good arm that changes to a bad one with a certain probability. Each agent stochastically selects one of the two methods, random search (individual learning) or copying information from other agents (social learning), using which he/she seeks the good arm. Fitness of an agent is the probability to know the good arm in the steady state of the agent system. In this model, we explicitly construct the unique Nash equilibrium state and show that the corresponding strategy for each agent is an evolutionarily stable strategy (ESS) in the sense of Thomas. It is shown that the fitness of an agent with ESS is superior to that of an asocial learner when the success probability of social learning is greater than a threshold determined from the probability of success of individual learning, the probability of change of state of the rMAB, and the number of agents. The ESS Nash equilibrium is a solution to Rogers' paradox.
Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems
NASA Astrophysics Data System (ADS)
Vasconcelos, Giovani L.; Salazar, Domingos S. P.; Macêdo, A. M. S.
2018-02-01
A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem—representing the region where the measurements are made—in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), 10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
Applications of physics to economics and finance: Money, income, wealth, and the stock market
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian Antoniu
Several problems arising in Economics and Finance are analyzed using concepts and quantitative methods from Physics. The dissertation is organized as follows: In the first chapter it is argued that in a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money must follow the exponential Boltzmann-Gibbs law characterized by an effective temperature equal to the average amount of money per economic agent. The emergence of the Boltzmann-Gibbs distribution is demonstrated through computer simulations of economic models. A thermal machine which extracts a monetary profit can be constructed between two economic systems with different temperatures. The role of debt and models with broken time-reversal symmetry, for which the Boltzmann-Gibbs law does not hold, are discussed. In the second chapter, using data from several sources, it is found that the distribution of income is described for the great majority of the population by an exponential distribution, whereas the high-end tail follows a power law. From the individual income distribution, the probability distribution of income for families with two earners is derived and it is shown that it also agrees well with the data. Data on wealth is presented and it is found that the distribution of wealth has a structure similar to the distribution of income. The Lorenz curve and Gini coefficient were calculated and are shown to be in good agreement with both income and wealth data sets. In the third chapter, the stock-market fluctuations at different time scales are investigated. A model where stock-price dynamics is governed by a geometrical (multiplicative) Brownian motion with stochastic variance is proposed. The corresponding Fokker-Planck equation can be solved exactly. Integrating out the variance, an analytic formula for the time-dependent probability distribution of stock price changes (returns) is found. The formula is in excellent agreement with the Dow-Jones index for time lags from 1 to 250 trading days. For time lags longer than the relaxation time of variance, the probability distribution can be expressed in a scaling form using a Bessel function. The Dow-Jones data follow the scaling function for seven orders of magnitude.
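A minimal simulation of the conserved-money exchange picture described in the first chapter: agents repeatedly meet in pairs, pool their money, and split it at a uniformly random fraction, a rule known to relax to an exponential (Boltzmann-Gibbs) distribution whose 'temperature' is the average money per agent. The specific rule and parameter values below are one illustrative choice, not necessarily those of the dissertation.

```python
import numpy as np

def exchange_model(n_agents=10000, m0=100.0, n_rounds=400, seed=1):
    """Random pairwise money exchange with the total amount of money conserved."""
    rng = np.random.default_rng(seed)
    money = np.full(n_agents, m0)
    for _ in range(n_rounds):
        idx = rng.permutation(n_agents)
        a, b = idx[: n_agents // 2], idx[n_agents // 2:]
        pooled = money[a] + money[b]
        frac = rng.random(n_agents // 2)
        money[a] = frac * pooled          # each pair pools its money and splits it
        money[b] = (1.0 - frac) * pooled  # at a uniformly random fraction
    return money

money = exchange_model()
T = money.mean()  # effective 'temperature' = average money per agent
# For an exponential distribution the median equals T*ln(2), so this ratio should be ~0.693.
print(T, np.median(money) / T)
```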
Equilibrium sampling by reweighting nonequilibrium simulation trajectories
NASA Astrophysics Data System (ADS)
Yang, Cheng; Wan, Biao; Xu, Shun; Wang, Yanting; Zhou, Xin
2016-03-01
Based on equilibrium molecular simulations, it is usually difficult to efficiently visit the whole conformational space of complex systems, which are separated into some metastable regions by high free energy barriers. Nonequilibrium simulations could enhance transitions among these metastable regions and then be applied to sample equilibrium distributions in complex systems, since the associated nonequilibrium effects can be removed by employing the Jarzynski equality (JE). Here we present such a systematic method, named reweighted nonequilibrium ensemble dynamics (RNED), to efficiently sample equilibrium conformations. The RNED is a combination of the JE and our previous reweighted ensemble dynamics (RED) method. The original JE reproduces equilibrium from lots of nonequilibrium trajectories but requires that the initial distribution of these trajectories is equilibrium. The RED reweights many equilibrium trajectories from an arbitrary initial distribution to get the equilibrium distribution, whereas the RNED has both advantages of the two methods, reproducing equilibrium from lots of nonequilibrium simulation trajectories with an arbitrary initial conformational distribution. We illustrated the application of the RNED in a toy model and in a Lennard-Jones fluid to detect its liquid-solid phase coexistence. The results indicate that the RNED sufficiently extends the application of both the original JE and the RED in equilibrium sampling of complex systems.
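For reference, the Jarzynski equality that underlies the removal of the nonequilibrium effects reads as follows (standard statement; the precise reweighting factors used in RNED are given in the paper itself):

```latex
\big\langle e^{-\beta W}\big\rangle = e^{-\beta \Delta F}
```

where W is the work performed along a nonequilibrium trajectory started from equilibrium, β = 1/k_BT, and ΔF is the free-energy difference between the initial and final equilibrium states; reweighting schemes of this type accordingly assign each trajectory a weight proportional to e^{-βW}.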
Analytical expressions for the evolution of many-body quantum systems quenched far from equilibrium
NASA Astrophysics Data System (ADS)
Santos, Lea F.; Torres-Herrera, E. Jonathan
2017-12-01
Possible strategies to describe analytically the dynamics of many-body quantum systems out of equilibrium include the use of solvable models and of full random matrices. Neither of the two approaches represents an actual realistic system, but both serve as references for the study of realistic ones. We take the second path and obtain analytical expressions for the survival probability, density imbalance, and out-of-time-ordered correlator. Using these findings, we then propose an approximate expression that matches very well numerical results for the evolution of realistic finite quantum systems that are strongly chaotic and quenched far from equilibrium. In the case of the survival probability, the proposed expression covers all time scales, from the moment the system is taken out of equilibrium to the moment it reaches a new equilibrium. The realistic systems considered are described by one-dimensional spin-1/2 models.
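Of the three quantities, the survival probability is the simplest to state explicitly; for a system quenched at t = 0 from the initial state |Ψ(0)⟩, it is given by the standard definition

```latex
S_P(t) = \big|\langle \Psi(0)\,|\,\Psi(t)\rangle\big|^{2}
       = \Big|\sum_{\alpha} |c_{\alpha}|^{2}\, e^{-\mathrm{i} E_{\alpha} t/\hbar}\Big|^{2},
\qquad
c_{\alpha} = \langle \alpha\,|\,\Psi(0)\rangle
```

where |α⟩ and E_α are the eigenstates and eigenvalues of the post-quench Hamiltonian.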
Work and heat fluctuations in two-state systems: a trajectory thermodynamics formalism
NASA Astrophysics Data System (ADS)
Ritort, F.
2004-10-01
Two-state models provide phenomenological descriptions of many different systems, ranging from physics to chemistry and biology. We investigate work fluctuations in an ensemble of two-state systems driven out of equilibrium under the action of an external perturbation. We calculate the probability density P_N(W) that work equal to W is exerted upon the system (of size N) along a given non-equilibrium trajectory and introduce a trajectory thermodynamics formalism to quantify work fluctuations in the large-N limit. We then define a trajectory entropy S_N(W) that counts the number of non-equilibrium trajectories with work equal to W, P_N(W) = exp(S_N(W)/k_BT), and characterizes fluctuations of work trajectories around the most probable value W_mp. A trajectory free energy F_N(W) can also be defined, which has a minimum at W = W†, this being the value of the work that has to be efficiently sampled to quantitatively test the Jarzynski equality. Within this formalism a Lagrange multiplier is also introduced, the inverse of which plays the role of a trajectory temperature. Our general solution for P_N(W) exactly satisfies the fluctuation theorem by Crooks and allows us to investigate heat fluctuations for a protocol that is invariant under time reversal. The heat distribution is then characterized by a Gaussian component (describing small and frequent heat exchange events) and exponential tails (describing the statistics of large deviations and rare events). For the latter, the width of the exponential tails is related to the aforementioned trajectory temperature. Finite-size corrections to the large-N theory and the recovery of work distributions for finite N are also discussed. Finally, we pay particular attention to the case of magnetic nanoparticle systems under the action of a magnetic field H where work and heat fluctuations are predicted to be observable in ramping experiments in micro-SQUIDs.
The measurable heat flux that accompanies active transport by Ca2+-ATPase.
Bedeaux, Dick; Kjelstrup, Signe
2008-12-28
We present a new mesoscopic basis which can be used to derive flux equations for the forward and reverse mode of operation of ion pumps. We obtain a description of the fluxes far from global equilibrium. An asymmetric set of transport coefficients is obtained, by assuming that the chemical reaction as well as the ion transports are activated, and that the enzyme has a temperature independent of the activation coordinates. Close to global equilibrium, the description reduces to the well known one from non-equilibrium thermodynamics with a symmetric set of transport coefficients. We show how the measurable heat flux and the heat production under isothermal conditions, as well as thermogenesis, can be defined. Thermogenesis is defined via the onset of the chemical reaction or of the ion transports induced by a temperature drop. A prescription has been given for how to determine transport coefficients on the mesoscopic level, using the macroscopic coefficient obtained from measurements, the activation enthalpy, and a proper probability distribution. The method may give new impetus to a long-standing unsolved transport problem in biophysics.
On the definition of a Monte Carlo model for binary crystal growth.
Los, J H; van Enckevort, W J P; Meekes, H; Vlieg, E
2007-02-01
We show that consistency of the transition probabilities in a lattice Monte Carlo (MC) model for binary crystal growth with the thermodynamic properties of a system does not guarantee the MC simulations near equilibrium to be in agreement with the thermodynamic equilibrium phase diagram for that system. The deviations remain small for systems with small bond energies, but they can increase significantly for systems with large melting entropy, typical for molecular systems. These deviations are attributed to the surface kinetics, which is responsible for a metastable zone below the liquidus line where no growth occurs, even in the absence of a 2D nucleation barrier. Here we propose an extension of the MC model that introduces a freedom of choice in the transition probabilities while staying within the thermodynamic constraints. This freedom can be used to eliminate the discrepancy between the MC simulations and the thermodynamic equilibrium phase diagram. Agreement is achieved for that choice of the transition probabilities yielding the fastest decrease of the free energy (i.e., largest growth rate) of the system at a temperature slightly below the equilibrium temperature. An analytical model is developed, which reproduces quite well the MC results, enabling a straightforward determination of the optimal set of transition probabilities. Application of both the MC and analytical model to conditions well away from equilibrium, giving rise to kinetic phase diagrams, shows that the effect of kinetics on segregation is even stronger than that predicted by previous models.
Decentralized learning in Markov games.
Vrancx, Peter; Verbeeck, Katja; Nowé, Ann
2008-08-01
Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.
Netz, Roland R
2018-05-14
An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. The comparison is excellent and allows us to extract the non-equilibrium parameter α from experimental spectral response and fluctuation data.
Species abundance distribution and population dynamics in a two-community model of neutral ecology
NASA Astrophysics Data System (ADS)
Vallade, M.; Houchmandzadeh, B.
2006-11-01
Explicit formulas for the steady-state distribution of species in two interconnected communities of arbitrary sizes are derived in the framework of Hubbell’s neutral model of biodiversity. Migrations of seeds from both communities as well as mutations in both of them are taken into account. These results generalize those previously obtained for the “island-continent” model and they allow an analysis of the influence of the ratio of the sizes of the two communities on the dominance/diversity equilibrium. Exact expressions for species abundance distributions are deduced from a master equation for the joint probability distribution of species in the two communities. Moreover, an approximate self-consistent solution is derived. It corresponds to a generalization of previous results and it proves to be accurate over a broad range of parameters. The dynamical correlations between the abundances of a species in both communities are also discussed.
Equilibrium of Global Amphibian Species Distributions with Climate
Munguía, Mariana; Rahbek, Carsten; Rangel, Thiago F.; Diniz-Filho, Jose Alexandre F.; Araújo, Miguel B.
2012-01-01
A common assumption in bioclimatic envelope modeling is that species distributions are in equilibrium with contemporary climate. A number of studies have measured departures from equilibrium in species distributions in particular regions, but such investigations were never carried out for a complete lineage across its entire distribution. We measure departures from equilibrium with contemporary climate for the distributions of the world's amphibian species. Specifically, we fitted bioclimatic envelopes for 5544 species using three presence-only models. We then measured the proportion of the modeled envelope that is currently occupied by the species, as a metric of equilibrium of species distributions with climate. The assumption was that the greater the difference between the modeled bioclimatic envelope and the occupied distribution, the greater the likelihood that the species distribution would not be at equilibrium with contemporary climate. On average, amphibians occupied 30% to 57% of their potential distributions. Although patterns differed across regions, there were no significant differences among lineages. Species in the Neotropics, Afrotropics, Indo-Malay, and Palaearctic occupied a smaller proportion of their potential distributions than species in the Nearctic, Madagascar, and Australasia. We acknowledge that our models underestimate non-equilibrium, and discuss potential reasons for the observed patterns. From a modeling perspective, our results support the view that at the global scale bioclimatic envelope models might perform similarly across lineages but differently across regions. PMID:22511938
A structured population model with diffusion in structure space.
Pugliese, Andrea; Milner, Fabio
2018-05-09
A structured population model is described and analyzed, in which individual dynamics is stochastic. The model consists of a PDE of advection-diffusion type in the structure variable. The population may represent, for example, the density of infected individuals structured by pathogen density x, [Formula: see text]. The individuals with density [Formula: see text] are not infected, but rather susceptible or recovered. Their dynamics is described by an ODE with a source term that is the exact flux from the diffusion and advection as [Formula: see text]. Infection/reinfection is then modeled by moving a fraction of these individuals into the infected class by distributing them in the structure variable through a probability density function. Existence of a global-in-time solution is proven, as well as a classical bifurcation result about equilibrium solutions: a net reproduction number [Formula: see text] is defined that separates the case of only the trivial equilibrium existing when [Formula: see text] from the existence of another, nontrivial, equilibrium when [Formula: see text]. Numerical simulation results are provided to show the stabilization towards the positive equilibrium when [Formula: see text] and towards the trivial one when [Formula: see text], a result that is not proven analytically. Simulations are also provided to show the Allee effect that helps boost population sizes at low densities.
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Andresen, Juan Carlos; Moore, M. A.; Katzgraber, Helmut G.
2014-02-01
We study the equilibrium and nonequilibrium properties of Boolean decision problems with competing interactions on scale-free networks in an external bias (magnetic field). Previous studies at zero field have shown a remarkable equilibrium stability of Boolean variables (Ising spins) with competing interactions (spin glasses) on scale-free networks. When the exponent that describes the power-law decay of the connectivity of the network is strictly larger than 3, the system undergoes a spin-glass transition. However, when the exponent is equal to or less than 3, the glass phase is stable for all temperatures. First, we perform finite-temperature Monte Carlo simulations in a field to test the robustness of the spin-glass phase and show that the system has a spin-glass phase in a field, i.e., exhibits a de Almeida-Thouless line. Furthermore, we study avalanche distributions when the system is driven by a field at zero temperature to test if the system displays self-organized criticality. Numerical results suggest that avalanches (damage) can spread across the whole system with nonzero probability when the decay exponent of the interaction degree is less than or equal to 2, i.e., that Boolean decision problems on scale-free networks with competing interactions can be fragile when not in thermal equilibrium.
Lumpy investment, sectoral propagation, and business cycles (Invited Paper)
NASA Astrophysics Data System (ADS)
Nirei, Makoto
2005-05-01
This paper proposes a model of endogenous fluctuations in investment. A monopolistic producer has an incentive to invest when the aggregate demand is high. The investment at the firm level is also known to exhibit a threshold behavior called an (S,s) policy. These two facts lead us to consider that the fluctuation in aggregate investment is generated by the global coupling of the non-linear oscillators. From this perspective, we characterize the probability distribution of the investment clustering in a partial equilibrium of product markets, and show that its variance can be large enough to match the observed investment fluctuations. We then implement this mechanism in a dynamic general equilibrium model to explore an investment-driven business cycle. By calibrating the model with the SIC 4-digit level industry data, we numerically show that the model replicates the basic structure of the business cycles.
Pibida, L; Zimmerman, B; Fitzgerald, R; King, L; Cessna, J T; Bergeron, D E
2015-07-01
The currently published (223)Ra gamma-ray emission probabilities display a wide variation in the values depending on the source of the data. The National Institute of Standards and Technology performed activity measurements on a (223)Ra solution that was used to prepare several sources that were used to determine the photon emission probabilities for the main gamma-rays of (223)Ra in equilibrium with its progeny. Several high purity germanium (HPGe) detectors were used to perform the gamma-ray spectrometry measurements. Published by Elsevier Ltd.
Galactic hydrostatic equilibrium with magnetic tension and cosmic-ray diffusion
NASA Technical Reports Server (NTRS)
Boulares, Ahmed; Cox, Donald P.
1990-01-01
Three gravitational potentials differing in the content of dark matter in the Galactic plane are used to study the structure of the z-distribution of mass and pressure in the solar neighborhood. A P(0) of roughly (3.9 ± 0.6) × 10^-12 dyn cm^-2 is obtained, with roughly equal contributions from magnetic field, cosmic ray, and kinetic terms. This boundary condition restricts both the magnitude of gravity and the high-z pressure. It favors lower gravity and higher values for the cosmic ray, magnetic field, and probably the kinetic pressures than have been popular in the past. Inclusion of the warm H(+) distribution carries a significant mass component into the z ~ 1 kpc regime.
Game-theoretic strategies for asymmetric networked systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure consisting of a network of systems, each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.
Stochastic thermodynamics of quantum maps with and without equilibrium.
Barra, Felipe; Lledó, Cristóbal
2017-11-01
We study stochastic thermodynamics for a quantum system of interest whose dynamics is described by a completely positive trace-preserving (CPTP) map as a result of its interaction with a thermal bath. We define CPTP maps with equilibrium as CPTP maps with an invariant state such that the entropy production due to the action of the map on the invariant state vanishes. Thermal maps are a subgroup of CPTP maps with equilibrium. In general, for CPTP maps, the thermodynamic quantities, such as the entropy production or work performed on the system, depend on the combined state of the system plus its environment. We show that these quantities can be written in terms of system properties for maps with equilibrium. The relations that we obtain are valid for arbitrary coupling strengths between the system and the thermal bath. The fluctuations of thermodynamic quantities are considered in the framework of a two-point measurement scheme. We derive the entropy production fluctuation theorem for general maps and a fluctuation relation for the stochastic work on a system that starts in the Gibbs state. Some simplifications for the probability distributions in the case of maps with equilibrium are presented. We illustrate our results by considering spin 1/2 systems under thermal maps, nonthermal maps with equilibrium, maps with nonequilibrium steady states, and concatenations of them. Finally, and as an important application, we consider a particular limit in which the concatenation of maps generates a continuous time evolution in Lindblad form for the system of interest, and we show that the concept of maps with and without equilibrium translates into Lindblad equations with and without quantum detailed balance, respectively. The consequences for the thermodynamic quantities in this limit are discussed.
The non-equilibrium allele frequency spectrum in a Poisson random field framework.
Kaj, Ingemar; Mugal, Carina F
2016-10-01
In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics. Copyright © 2016 Elsevier Inc. All rights reserved.
Damos, Petros
2015-08-01
In this study, we use entropy-related mixing rate modules to measure the effects of temperature on insect population stability and demographic breakdown. The uncertainty in the age of the mother of a randomly chosen newborn, and how it evolves after a finite number of time steps, is modeled using a stochastic transformation of the Leslie matrix. Age classes are represented as a cycle graph and its transitions towards the stable age distribution are brought forth as an exact Markov chain. The dynamics of divergence, from a non-equilibrium state towards equilibrium, are evaluated using the Kolmogorov-Sinai entropy. Moreover, the Kullback-Leibler distance is applied as an information-theoretic measure to estimate exact mixing times of age transition probabilities towards equilibrium. Using empirical data on the initial conditions and simulated projections through time, we show that population entropy can effectively be applied to detect demographic variability towards equilibrium under different temperature conditions. Changes in entropy are correlated with the fluctuations of the insect population decay rates (i.e. demographic stability towards equilibrium). Moreover, shorter mixing times are directly linked to lower entropy rates and vice versa. This may be linked to the properties of the insect model system, which in contrast to warm-blooded animals has the ability to greatly change its metabolic and demographic rates. Population entropy and the related distance measures that are applied provide a means to measure these rates. The current results and model projections provide clear biological evidence why dynamic population entropy may be useful to measure population stability. Copyright © 2015 Elsevier Inc. All rights reserved.
Perturbation analysis for patch occupancy dynamics
Martin, Julien; Nichols, James D.; McIntyre, Carol L.; Ferraz, Goncalo; Hines, James E.
2009-01-01
Perturbation analysis is a powerful tool to study population and community dynamics. This article describes expressions for sensitivity metrics reflecting changes in equilibrium occupancy resulting from small changes in the vital rates of patch occupancy dynamics (i.e., probabilities of local patch colonization and extinction). We illustrate our approach with a case study of occupancy dynamics of Golden Eagle (Aquila chrysaetos) nesting territories. Examination of the hypothesis of system equilibrium suggests that the system satisfies equilibrium conditions. Estimates of vital rates obtained using patch occupancy models are used to estimate equilibrium patch occupancy of eagles. We then compute estimates of sensitivity metrics and discuss their implications for eagle population ecology and management. Finally, we discuss the intuition underlying our sensitivity metrics and then provide examples of ecological questions that can be addressed using perturbation analyses. For instance, the sensitivity metrics lead to predictions about the relative importance of local colonization and local extinction probabilities in influencing equilibrium occupancy for rare and common species.
NASA Astrophysics Data System (ADS)
Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît
2018-01-01
Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elementary steps: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance the sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biased force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
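The three-step propagator described above can be illustrated with a toy one-dimensional example. The sketch below is not the authors' implementation: the double-well potential, the triangular boosting schedule, the Langevin parameters, and the simple one-end momentum reversal on rejection are all illustrative assumptions (the paper uses a symmetric two-end reversal); it only shows the structure of MD segment, boosting phase, and Metropolis test.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, dt, mass = 1.0, 0.01, 1.0

def U(x):            # target (correct) potential: a 1D double well
    return (x**2 - 1.0)**2

def F(x):            # force of the target potential
    return -4.0 * x * (x**2 - 1.0)

def F_boost(x, lam): # perturbed force: the well is scaled down by (1 - lam)
    return (1.0 - lam) * F(x)

def langevin_md(x, v, n_steps, gamma=1.0):
    """(i) Equilibrium sampling on the correct surface (Langevin, BAOAB splitting)."""
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1 - c1**2) / (beta * mass))
    for _ in range(n_steps):
        v += 0.5 * dt * F(x) / mass
        x += 0.5 * dt * v
        v = c1 * v + c2 * rng.standard_normal()
        x += 0.5 * dt * v
        v += 0.5 * dt * F(x) / mass
    return x, v

def boost_segment(x, v, n_steps):
    """(ii) Deterministic neMD sweep: lambda goes 0 -> 1 -> 0 symmetrically."""
    for k in range(n_steps):
        lam = 1.0 - abs(2.0 * k / (n_steps - 1) - 1.0)   # triangular schedule
        v += 0.5 * dt * F_boost(x, lam) / mass
        x += dt * v
        v += 0.5 * dt * F_boost(x, lam) / mass
    return x, v

def hybrid_step(x, v):
    x, v = langevin_md(x, v, 200)
    x_new, v_new = boost_segment(x, v, 100)
    dE = (U(x_new) + 0.5 * mass * v_new**2) - (U(x) + 0.5 * mass * v**2)
    if rng.random() < np.exp(-beta * dE):          # (iii) Metropolis test
        return x_new, v_new
    return x, -v                                    # reject: keep state, flip momentum

x, v = 1.0, 0.0
samples = []
for _ in range(2000):
    x, v = hybrid_step(x, v)
    samples.append(x)
print("fraction of samples in the left well:", np.mean(np.array(samples) < 0))
```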
Universal patterns of inequality
NASA Astrophysics Data System (ADS)
Banerjee, Anand; Yakovenko, Victor M.
2010-07-01
Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.
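The entropy-maximization argument for an exponential distribution of a conserved resource can be illustrated with a standard toy exchange simulation. The sketch below is an assumption-laden illustration, not the authors' analysis: agents repeatedly pool and randomly split their money (total money conserved), and the resulting histogram is compared with the exponential law P(m) ~ exp(-m/<m>), where <m> plays the role of the "money temperature".

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps = 5_000, 1_000_000
money = np.full(n_agents, 100.0)            # everyone starts with the same amount

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    # Random pairwise exchange that conserves total money: pool and split uniformly.
    pool = money[i] + money[j]
    eps = rng.random()
    money[i], money[j] = eps * pool, (1.0 - eps) * pool

# Compare the empirical histogram with the exponential (Boltzmann-Gibbs) form.
mean_m = money.mean()
hist, edges = np.histogram(money, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in list(zip(centers, hist))[:6]:
    print(f"m = {c:7.1f}   empirical = {h:.4e}   exp(-m/<m>)/<m> = {np.exp(-c / mean_m) / mean_m:.4e}")
```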
Gibbons, Richard A.; Dixon, Stephen N.; Pocock, David H.
1973-01-01
A specimen of intestinal glycoprotein isolated from the pig and two samples of dextran, all of which are polydisperse (that is, the preparations may be regarded as consisting of a continuous distribution of molecular weights), have been examined in the ultracentrifuge under meniscus-depletion conditions at equilibrium. They are compared with each other and with a glycoprotein from Cysticercus tenuicollis cyst fluid which is almost monodisperse. The quantity c^(-1/3) (c = concentration) is plotted against ξ (the reduced radius); this plot is linear when the molecular-weight distribution approximates to the 'most probable', i.e. when Mn : Mw : Mz : M(z+1) ... is as 1 : 2 : 3 : 4, etc. The use of this plot, and related procedures, to evaluate qualitatively and semi-quantitatively molecular-weight distribution functions where they can be realistically approximated to Schulz distributions is discussed. The theoretical basis is given in an Appendix. PMID:4778265
Non-equilibrium Statistical Mechanics and the Sea Ice Thickness Distribution
NASA Astrophysics Data System (ADS)
Wettlaufer, John; Toppaladoddi, Srikanth
We use concepts from non-equilibrium statistical physics to transform the original evolution equation for the sea ice thickness distribution g(h) due to Thorndike et al. (1975) into a Fokker-Planck like conservation law. The steady solution is g(h) = N(q) h^q e^(-h/H), where q and H are expressible in terms of moments over the transition probabilities between thickness categories. The solution exhibits the functional form used in observational fits and shows that for h << 1, g(h) is controlled by both thermodynamics and mechanics, whereas for h >> 1 only mechanics controls g(h). Finally, we derive the underlying Langevin equation governing the dynamics of the ice thickness h, from which we predict the observed g(h). This allows us to demonstrate that the ice thickness field is ergodic. The genericity of our approach provides a framework for studying the geophysical scale structure of the ice pack using methods of broad relevance in statistical mechanics. The authors acknowledge Swedish Research Council Grant No. 638-2013-9243, NASA Grant NNH13ZDA001N-CRYO, and the National Science Foundation and the Office of Naval Research under OCE-1332750 for support.
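Because g(h) = N(q) h^q e^(-h/H) is a gamma density in disguise, its normalization and moments can be checked in a few lines. The snippet below is only a sanity check of that functional form; the values of q and H are made-up illustrative numbers, not fits to sea-ice data.

```python
import numpy as np
from math import gamma

q, H = 1.5, 0.8     # illustrative parameter values only, not fitted to observations

def g(h):
    # Steady-state thickness distribution g(h) = N(q) h^q exp(-h/H);
    # with N(q) = 1 / (H**(q+1) * Gamma(q+1)) this is a gamma density.
    N = 1.0 / (H ** (q + 1) * gamma(q + 1))
    return N * h ** q * np.exp(-h / H)

h = np.linspace(1e-9, 40.0 * H, 200_001)   # fine grid; the tail beyond 40*H is negligible
print("normalization    :", np.trapz(g(h), h))        # should be ~1
print("mean thickness   :", np.trapz(h * g(h), h))    # gamma mean = (q+1)*H
print("expected (q+1)*H :", (q + 1) * H)
```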
NASA Astrophysics Data System (ADS)
Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu
2016-06-01
Stochasticity plays an important role in the evolutionary dynamic of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamic. The numerical experiment results are exhibited as pseudo-colour ternary heat maps. Comparisons of these diagrams show that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable, respectively, according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.
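A minimal finite-population simulation gives a feel for the stochastic dynamics described above. The sketch below is not the paper's QBD formulation: the payoff matrix entries, the Fermi-type imitation rule, the mutation rate, and the population size are illustrative assumptions; it only shows how imitation plus noise produces a stochastic trajectory over the strategy simplex whose time-averaged frequencies can be inspected.

```python
import numpy as np

rng = np.random.default_rng(7)

# RSP payoff matrix: win = 1, lose = -a, tie = 0 (a is a hypothetical loss parameter).
a = 1.0
payoff = np.array([[0, 1, -a],
                   [-a, 0, 1],
                   [1, -a, 0]])

def avg_payoff(counts, s):
    """Average payoff of a player with strategy s against the rest of the population."""
    N = counts.sum()
    others = counts.copy()
    others[s] -= 1
    return payoff[s] @ others / (N - 1)

def simulate(N=60, steps=200_000, noise=0.1, mu=0.01):
    counts = np.array([N // 3, N // 3, N - 2 * (N // 3)])
    visits = np.zeros(3)
    for _ in range(steps):
        if rng.random() < mu:                       # mutation / exploration noise
            s_from = rng.choice(3, p=counts / N)
            s_to = rng.integers(3)
        else:                                       # pairwise imitation (Fermi rule)
            s_from = rng.choice(3, p=counts / N)
            s_to = rng.choice(3, p=counts / N)
            dp = avg_payoff(counts, s_to) - avg_payoff(counts, s_from)
            if rng.random() >= 1.0 / (1.0 + np.exp(-dp / noise)):
                s_to = s_from                       # imitation rejected
        counts[s_from] -= 1
        counts[s_to] += 1
        visits += counts / N
    return visits / steps

print("time-averaged strategy frequencies:", simulate())
```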
Inelastic collapse and near-wall localization of randomly accelerated particles.
Belan, S; Chernykh, A; Lebedev, V; Falkovich, G
2016-05-01
Inelastic collapse of stochastic trajectories of a randomly accelerated particle moving in half-space z > 0 has been discovered by McKean [J. Math. Kyoto Univ. 2, 227 (1963)] and then independently rediscovered by Cornell et al. [Phys. Rev. Lett. 81, 1142 (1998)]. The essence of this phenomenon is that the particle arrives at the wall at z = 0 with zero velocity after an infinite number of inelastic collisions if the restitution coefficient β of particle velocity is smaller than the critical value β_c = exp(-π/√3). We demonstrate that inelastic collapse takes place also in a wide class of models with spatially inhomogeneous random forcing and, what is more, that the critical value β_c is universal. That class includes an important case of inertial particles in wall-bounded random flows. To establish how inelastic collapse influences the particle distribution, we derive the exact equilibrium probability density function ρ(z,v) for the particle position and velocity. The equilibrium distribution exists only at β < β_c and indicates that inelastic collapse does not necessarily imply near-wall localization.
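A direct simulation of a randomly accelerated particle with an inelastic wall illustrates the role of the critical restitution coefficient β_c = exp(-π/√3) ≈ 0.163. The sketch below is only qualitative: a fixed-step Euler-Maruyama integrator cannot resolve the infinite collision cascade of a true collapse, and the time step, horizon, and initial height are arbitrary assumptions, so the collision counts should be read as a trend rather than a measurement.

```python
import numpy as np

rng = np.random.default_rng(3)
beta_c = np.exp(-np.pi / np.sqrt(3.0))   # critical restitution coefficient, ~0.163

def count_collisions(beta, dt=1e-3, t_max=100.0, z0=1.0):
    """Randomly accelerated particle in z > 0 with inelastic wall collisions."""
    z, v, n_coll = z0, 0.0, 0
    for _ in range(int(t_max / dt)):
        v += np.sqrt(dt) * rng.standard_normal()   # white-noise acceleration
        z += v * dt
        if z < 0.0:                                # inelastic bounce at the wall
            z = -z
            v = -beta * v
            n_coll += 1
    return n_coll

for beta in (0.5 * beta_c, beta_c, 2.0 * beta_c):
    print(f"beta = {beta:.3f}   collisions = {count_collisions(beta)}")
```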
Drakos, Nicole E; Wahl, Lindi M
2015-12-01
Theoretical approaches are essential to our understanding of the complex dynamics of mobile genetic elements (MGEs) within genomes. Recently, the birth-death-diversification model was developed to describe the dynamics of mobile promoters (MPs), a particular class of MGEs in prokaryotes. A unique feature of this model is that genetic diversification of elements was included. To explore the implications of diversification on the long-term fate of MGE lineages, in this contribution we analyze the extinction probabilities, extinction times and equilibrium solutions of the birth-death-diversification model. We find that diversification increases both the survival and growth rate of MGE families, but the strength of this effect depends on the rate of horizontal gene transfer (HGT). We also find that the distribution of MGE families per genome is not necessarily monotonically decreasing, as observed for MPs, but may have a peak in the distribution that is related to the HGT rate. For MPs specifically, we find that new families have a high extinction probability, and predict that the number of MPs is increasing, albeit at a very slow rate. Additionally, we develop an extension of the birth-death-diversification model which allows MGEs in different regions of the genome, for example coding and non-coding, to be described by different rates. This extension may offer a potential explanation as to why the majority of MPs are located in non-promoter regions of the genome. Copyright © 2015 Elsevier Inc. All rights reserved.
End-to-end distance and contour length distribution functions of DNA helices
NASA Astrophysics Data System (ADS)
Zoli, Marco
2018-06-01
I present a computational method to evaluate the end-to-end and the contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length, providing a measure of the global molecule size, are used to examine the DNA flexibility at short length scales. It is found that, even in molecules with fewer than ~60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
The X-ray surface brightness distribution and spectral properties of six early-type galaxies
NASA Technical Reports Server (NTRS)
Trinchieri, G.; Fabbiano, G.; Canizares, C. R.
1986-01-01
Detailed analysis is presented of the Einstein X-ray observations of six early-type galaxies. The results show that effective cooling is probably present in these systems, at least in the innermost regions. Interaction with the surrounding medium has a major effect on the X-ray surface brightness distribution at large radii, at least for galaxies in clusters. The data do not warrant the general assumptions of isothermality and gravitational hydrostatic equilibrium at large radii. Comparison of the X-ray surface brightness profiles with model predictions indicate that 1/r-squared halos with masses of the order of 10 times the stellar masses are required to match the data. The physical model of White and Chevalier (1984) for steady cooling flows in a King law potential with no heavy halo gives a surface brightness distribution that resembles the data if supernovae heating is present.
Spatial distribution on high-order-harmonic generation of an H2+ molecule in intense laser fields
NASA Astrophysics Data System (ADS)
Zhang, Jun; Ge, Xin-Lei; Wang, Tian; Xu, Tong-Tong; Guo, Jing; Liu, Xue-Shen
2015-07-01
High-order-harmonic generation (HHG) for the H2+ molecule in a 3-fs, 800-nm few-cycle Gaussian laser pulse combined with a static field is investigated by solving the one-dimensional electronic and one-dimensional nuclear time-dependent Schrödinger equation within the non-Born-Oppenheimer approximation. The spatial distribution in HHG is demonstrated and the results present the recombination process of the electron with the two nuclei, respectively. The spatial distribution of the HHG spectra shows that there is little possibility of recombination of the electron with the nuclei around the origin z = 0 a.u. and the equilibrium internuclear positions z = ±1.3 a.u. This characteristic is irrelevant to the laser parameters and is attributed only to the molecular structure. Furthermore, we investigate the time-dependent electron-nuclear wave packet and ionization probability to further explain the underlying physical mechanism.
Molecular finite-size effects in stochastic models of equilibrium chemical systems.
Cianci, Claudia; Smith, Stephen; Grima, Ramon
2016-02-28
The reaction-diffusion master equation (RDME) is a standard modelling approach for understanding stochastic and spatial chemical kinetics. An inherent assumption is that molecules are point-like. Here, we introduce the excluded volume reaction-diffusion master equation (vRDME) which takes into account volume exclusion effects on stochastic kinetics due to a finite molecular radius. We obtain an exact closed form solution of the RDME and of the vRDME for a general chemical system in equilibrium conditions. The difference between the two solutions increases with the ratio of molecular diameter to the compartment length scale. We show that an increase in the fraction of excluded space can (i) lead to deviations from the classical inverse square root law for the noise-strength, (ii) flip the skewness of the probability distribution from right to left-skewed, (iii) shift the equilibrium of bimolecular reactions so that more product molecules are formed, and (iv) strongly modulate the Fano factors and coefficients of variation. These volume exclusion effects are found to be particularly pronounced for chemical species not involved in chemical conservation laws. Finally, we show that statistics obtained using the vRDME are in good agreement with those obtained from Brownian dynamics with excluded volume interactions.
A framework for modelling gene regulation which accommodates non-equilibrium mechanisms.
Ahsendorf, Tobias; Wong, Felix; Eils, Roland; Gunawardena, Jeremy
2014-12-05
Gene regulation has, for the most part, been quantitatively analysed by assuming that regulatory mechanisms operate at thermodynamic equilibrium. This formalism was originally developed to analyse the binding and unbinding of transcription factors from naked DNA in eubacteria. Although widely used, it has made it difficult to understand the role of energy-dissipating, epigenetic mechanisms, such as DNA methylation, nucleosome remodelling and post-translational modification of histones and co-regulators, which act together with transcription factors to regulate gene expression in eukaryotes. Here, we introduce a graph-based framework that can accommodate non-equilibrium mechanisms. A gene-regulatory system is described as a graph, which specifies the DNA microstates (vertices), the transitions between microstates (edges) and the transition rates (edge labels). The graph yields a stochastic master equation for how microstate probabilities change over time. We show that this framework has broad scope by providing new insights into three very different ad hoc models, of steroid-hormone responsive genes, of inherently bounded chromatin domains and of the yeast PHO5 gene. We find, moreover, surprising complexity in the regulation of PHO5, which has not yet been experimentally explored, and we show that this complexity is an inherent feature of being away from equilibrium. At equilibrium, microstate probabilities do not depend on how a microstate is reached but, away from equilibrium, each path to a microstate can contribute to its steady-state probability. Systems that are far from equilibrium thereby become dependent on history and the resulting complexity is a fundamental challenge. To begin addressing this, we introduce a graph-based concept of independence, which can be applied to sub-systems that are far from equilibrium, and prove that history-dependent complexity can be circumvented when sub-systems operate independently. As epigenomic data become increasingly available, we anticipate that gene function will come to be represented by graphs, as gene structure has been represented by sequences, and that the methods introduced here will provide a broader foundation for understanding how genes work.
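The graph-to-master-equation construction described above can be made concrete in a few lines. The example below is a hypothetical three-microstate system invented for illustration (the state names, transition structure, and rate values are assumptions, not taken from the paper); it only shows how a labelled graph yields a generator matrix for the master equation and how the steady-state probabilities follow from its kernel.

```python
import numpy as np

# Hypothetical 3-microstate example: an empty site (E), a bound transcription
# factor (TF), and a nucleosome-remodelled state (R).  Edge labels are transition rates.
states = ["E", "TF", "R"]
edges = {("E", "TF"): 2.0,   # TF binding
         ("TF", "E"): 1.0,   # TF unbinding
         ("E", "R"): 0.5,    # remodelling (possibly energy-dissipating)
         ("R", "E"): 0.2}

n = len(states)
idx = {s: i for i, s in enumerate(states)}
L = np.zeros((n, n))
for (src, dst), rate in edges.items():
    L[idx[dst], idx[src]] += rate        # inflow into dst from src
    L[idx[src], idx[src]] -= rate        # outflow from src

# Master equation: dp/dt = L @ p.  The steady state spans the kernel of L,
# normalized so that the probabilities sum to one.
w, V = np.linalg.eig(L)
p_ss = np.real(V[:, np.argmin(np.abs(w))])
p_ss = p_ss / p_ss.sum()
print(dict(zip(states, np.round(p_ss, 4))))
```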
Ribosome flow model with positive feedback
Margaliot, Michael; Tuller, Tamir
2013-01-01
Eukaryotic mRNAs usually form a circular structure; thus, ribosomes that terminate translation at the 3′ end can diffuse with increased probability to the 5′ end of the transcript, initiating another cycle of translation. This phenomenon describes ribosomal flow with positive feedback: an increase in the flow of ribosomes terminating translation of the open reading frame increases the ribosomal initiation rate. The aim of this paper is to model and rigorously analyse translation with feedback. We suggest a modified version of the ribosome flow model, called the ribosome flow model with input and output. In this model, the input is the initiation rate and the output is the translation rate. We analyse this model after closing the loop with a positive linear feedback. We show that the closed-loop system admits a unique globally asymptotically stable equilibrium point. From a biophysical point of view, this means that there exists a unique steady state of ribosome distributions along the mRNA, and thus a unique steady-state translation rate. The solution from any initial distribution will converge to this steady state. The steady-state distribution demonstrates a decrease in ribosome density along the coding sequence. For the case of constant elongation rates, we obtain expressions relating the model parameters to the equilibrium point. These results may perhaps be used to re-engineer the biological system in order to obtain a desired translation rate. PMID:23720534
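The closed-loop behaviour is easy to explore numerically. In the sketch below, the site elongation rates, the number of sites, and the linear feedback law u = c0 + c1*y tying the initiation rate to the output are hypothetical illustrative choices, not the parameters analysed in the paper; the point is only that trajectories started from very different ribosome-density profiles settle onto the same steady state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: elongation rates along a 6-site coarse-grained mRNA,
# and a positive linear feedback u = c0 + c1 * y tying the initiation rate u
# to the translation (exit) rate y = lam[-1] * x[-1].
lam = np.array([1.0, 0.9, 0.8, 0.9, 1.0, 1.1])
c0, c1 = 0.3, 0.5

def rfm_feedback(t, x):
    y = lam[-1] * x[-1]                 # output: ribosome exit (translation) rate
    u = c0 + c1 * y                     # closed-loop initiation rate
    dx = np.zeros_like(x)
    inflow = u * (1.0 - x[0])
    for i in range(len(x)):
        outflow = lam[i] * x[i] * (1.0 - x[i + 1]) if i + 1 < len(x) else lam[i] * x[i]
        dx[i] = inflow - outflow
        inflow = outflow
    return dx

# Two different initial ribosome-density profiles converge to the same steady state,
# illustrating the unique globally asymptotically stable equilibrium.
for x0 in (np.full(6, 0.1), np.full(6, 0.9)):
    sol = solve_ivp(rfm_feedback, (0.0, 200.0), x0, rtol=1e-8)
    print(np.round(sol.y[:, -1], 4), " translation rate:", round(lam[-1] * sol.y[-1, -1], 4))
```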
Nonequilibrium steady state of a weakly-driven Kardar–Parisi–Zhang equation
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Sasorov, Pavel V.; Vilenkin, Arkady
2018-05-01
We consider an infinite interface of d > 2 dimensions, governed by the Kardar–Parisi–Zhang (KPZ) equation with a weak Gaussian noise which is delta-correlated in time and has short-range spatial correlations. We study the probability distribution of the interface height H at a point of the substrate, when the interface is initially flat. We show that, in stark contrast with the KPZ equation in d < 2, this distribution approaches a non-equilibrium steady state. The time of relaxation toward this state scales as the diffusion time over the correlation length of the noise. We study the steady-state distribution using the optimal-fluctuation method. The typical, small fluctuations of height are Gaussian. For these fluctuations the activation path of the system coincides with the time-reversed relaxation path, and their variance can be found from a minimization of the (nonlocal) equilibrium free energy of the interface. In contrast, the tails of the height distribution are nonequilibrium, non-Gaussian and strongly asymmetric. To determine them we calculate, analytically and numerically, the activation paths of the system, which are different from the time-reversed relaxation paths. We show that the slower-decaying and the faster-decaying tails of the distribution follow different scaling forms in H. The slower-decaying tail has important implications for the statistics of directed polymers in a random potential.
Space-time thermodynamics of the glass transition
NASA Astrophysics Data System (ADS)
Merolle, Mauro; Garrahan, Juan P.; Chandler, David
2005-08-01
We consider the probability distribution for fluctuations in dynamical action and similar quantities related to dynamic heterogeneity. We argue that the so-called “glass transition” is a manifestation of low action tails in these distributions where the entropy of trajectory space is subextensive in time. These low action tails are a consequence of dynamic heterogeneity and an indication of phase coexistence in trajectory space. The glass transition, where the system falls out of equilibrium, is then an order-disorder phenomenon in space-time occurring at a temperature Tg, which is a weak function of measurement time. We illustrate our perspective ideas with facilitated lattice models and note how these ideas apply more generally.
Probability of identity by descent in metapopulations.
Kaj, I; Lascoux, M
1999-01-01
Equilibrium probabilities of identity by descent (IBD), for pairs of genes within individuals, for genes between individuals within subpopulations, and for genes between subpopulations are calculated in metapopulation models with fixed or varying colony sizes. A continuous-time analog to the Moran model was used in either case. For fixed-colony size both propagule and migrant pool models were considered. The varying population size model is based on a birth-death-immigration (BDI) process, to which migration between colonies is added. Wright's F statistics are calculated and compared to previous results. Adding between-island migration to the BDI model can have an important effect on the equilibrium probabilities of IBD and on Wright's index. PMID:10388835
On the proportional abundance of species: Integrating population genetics and community ecology.
Marquet, Pablo A; Espinoza, Guillermo; Abades, Sebastian R; Ganz, Angela; Rebolledo, Rolando
2017-12-01
The frequency of genes in interconnected populations and of species in interconnected communities are affected by similar processes, such as birth, death and immigration. The equilibrium distribution of gene frequencies in structured populations has been known since the 1930s, under Wright's metapopulation model known as the island model. The equivalent distribution for the species frequency (i.e. the species proportional abundance distribution (SPAD)) at the metacommunity level, however, is unknown. In this contribution, we develop a stochastic model to analytically account for this distribution (SPAD). We show that, as for genes, the SPAD follows a beta distribution, which provides a good description of empirical data and applies across a continuum of scales. This stochastic model, based upon a diffusion approximation, provides an alternative to neutral models for the species abundance distribution (SAD), which focus on numbers of individuals instead of proportions, and demonstrates that the relative frequency of genes in local populations and of species within communities follows the same probability law. We hope our contribution will help stimulate the mathematical and conceptual integration of theories in genetics and ecology.
Raney Distributions and Random Matrix Theory
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permits a simple parameterized form for their density. We extend this result to the Raney distribution, which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices and standard Gaussian matrices is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. On its support, we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
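The moment characterization is easy to check numerically: the n-th moment of the squared singular values of a product of p independent, suitably normalized Gaussian matrices should approach the Fuss-Catalan number FC_p(n) = C(pn + n, n)/(pn + 1) as the matrix size grows. The sketch below uses illustrative values p = 2 and N = 400, chosen only to keep the run short.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(11)

def fuss_catalan(p, n):
    """Fuss-Catalan number FC_p(n) = C(pn + n, n) / (pn + 1)."""
    return comb(p * n + n, n) // (p * n + 1)

# Moments of the squared singular values of a product of p normalized Gaussian matrices
# should approach the Fuss-Catalan numbers as the matrix size N grows.
p, N = 2, 400
W = np.eye(N)
for _ in range(p):
    W = W @ (rng.standard_normal((N, N)) / np.sqrt(N))
eigs = np.linalg.eigvalsh(W @ W.T)

for n in range(1, 5):
    empirical = np.mean(eigs ** n)
    print(f"n = {n}:  empirical moment = {empirical:7.3f}   FC_{p}({n}) = {fuss_catalan(p, n)}")
```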
Statistics of the Work done in a Quantum Quench
NASA Astrophysics Data System (ADS)
Silva, Alessandro
2009-03-01
The quantum quench, i.e. a rapid change in time of a control parameter of a quantum system, is the simplest paradigm of a non-equilibrium process, completely analogous to a standard thermodynamic transformation. The dynamics following a quantum quench is particularly interesting in strongly correlated quantum systems, most prominently when the quench is performed across a quantum critical point. In this talk I will present a way to characterize the physics of quantum quenches by looking at the statistics of a basic thermodynamic variable: the work done on the system by changing its parameters [1]. I will first elucidate the relation between the probability distribution of the work, quantum Jarzynski equalities, and the Loschmidt echo, a quantity that emerges usually in the context of dephasing. Using this connection, I will then characterize the statistics of the work done on a Quantum Ising chain by quenching locally or globally the transverse field. I will then show that for global quenches the presence of a quantum critical point results in singularities of the moments of the distribution, while, for local quenches starting at criticality, the probability distribution itself displays an interesting edge singularity. The results of a similar analysis for other systems will be discussed. [1] A. Silva, Phys. Rev. Lett. 101, 120603 (2008).
Equilibrium Distribution Functions: Another Look.
ERIC Educational Resources Information Center
Waite, Boyd A.
1986-01-01
Discusses equilibrium distribution functions and provides an alternative "derivation" that allows the student, with the help of a computer, to gain intuitive insight as to the nature of distributions in general and the precise nature of the dominance of the Boltzmann distribution. (JN)
Fractional Brownian motion and the critical dynamics of zipping polymers.
Walter, J-C; Ferrantini, A; Carlon, E; Vanderzande, C
2012-03-01
We consider two complementary polymer strands of length L attached by a common-end monomer. The two strands bind through complementary monomers and at low temperatures form a double-stranded conformation (zipping), while at high temperature they dissociate (unzipping). This is a simple model of DNA (or RNA) hairpin formation. Here we investigate the dynamics of the strands at the equilibrium critical temperature T = T_c using Monte Carlo Rouse dynamics. We find that the dynamics is anomalous, with a characteristic time scaling as τ ∼ L^2.26(2), exceeding the Rouse time ∼ L^2.18. We investigate the probability distribution function, velocity autocorrelation function, survival probability, and boundary behavior of the underlying stochastic process. These quantities scale as expected from a fractional Brownian motion with a Hurst exponent H = 0.44(1). We discuss similarities to and differences from unbiased polymer translocation.
Rapidity window dependences of higher order cumulants and diffusion master equation
NASA Astrophysics Data System (ADS)
Kitazawa, Masakiyo
2015-10-01
We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We discuss that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that various information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use of the higher order cumulants. Formulas for higher order cumulants of a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.
Investigating Student Understanding for a Statistical Analysis of Two Thermally Interacting Solids
NASA Astrophysics Data System (ADS)
Loverude, Michael E.
2010-10-01
As part of an ongoing research and curriculum development project for upper-division courses in thermal physics, we have developed a sequence of tutorials in which students apply statistical methods to examine the behavior of two interacting Einstein solids. In the sequence, students begin with simple results from probability and develop a means for counting the states in a single Einstein solid. The students then consider the thermal interaction of two solids, and observe that the classical equilibrium state corresponds to the most probable distribution of energy between the two solids. As part of the development of the tutorial sequence, we have developed several assessment questions to probe student understanding of various aspects of this system. In this paper, we describe the strengths and weaknesses of student reasoning, both qualitative and quantitative, to assess the readiness of students for one tutorial in the sequence.
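The counting exercise in the tutorial sequence can be reproduced directly. The snippet below uses the standard Einstein-solid multiplicity Ω(N, q) = C(q + N - 1, q) and hypothetical solid sizes to show that the most probable division of energy between two interacting solids is the one that equalizes the energy per oscillator, i.e. the classical equilibrium state mentioned above.

```python
from math import comb

def multiplicity(N, q):
    """Number of microstates of an Einstein solid with N oscillators and q energy quanta."""
    return comb(q + N - 1, q)

# Two interacting Einstein solids sharing q_total quanta (hypothetical sizes).
N_A, N_B, q_total = 300, 200, 100
weights = [multiplicity(N_A, qA) * multiplicity(N_B, q_total - qA) for qA in range(q_total + 1)]
total = sum(weights)
probs = [w / total for w in weights]

q_star = max(range(q_total + 1), key=lambda qA: probs[qA])
print(f"most probable macrostate: q_A = {q_star} "
      f"(expected near q_total*N_A/(N_A+N_B) = {q_total * N_A / (N_A + N_B):.1f})")
print(f"probability of the most probable macrostate: {probs[q_star]:.4f}")
```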
Evaluation of a locally homogeneous model of spray evaporation
NASA Technical Reports Server (NTRS)
Shearer, A. J.; Faeth, G. M.; Tamura, H.
1978-01-01
Measurements were conducted on an evaporating spray in a stagnant environment. The spray was formed using an air-atomizing injector to yield a Sauter mean diameter of the order of 30 microns. The region where evaporation occurred extended approximately 1 m from the injector for the test conditions. Profiles of mean velocity, temperature, composition, and drop size distribution, as well as velocity fluctuations and Reynolds stress, were measured. The results are compared with a locally homogeneous two-phase flow model which implies no velocity difference and thermodynamic equilibrium between the phases. The flow was represented by a k-epsilon-g turbulence model employing a clipped Gaussian probability density function for mixture fraction fluctuations. The model provides a good representation of earlier single-phase jet measurements, but generally overestimates the rate of development of the spray. Using the model predictions to represent conditions along the centerline of the spray, drop life-history calculations were conducted which indicate that these discrepancies are due to slip and loss of thermodynamic equilibrium between the phases.
Bertrand Model Under Incomplete Information
NASA Astrophysics Data System (ADS)
Ferreira, Fernanda A.; Pinto, Alberto A.
2008-09-01
We consider a Bertrand duopoly model with unknown costs. Each firm aims to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production cost being dominated by the effect of the firm's own expected production cost.
Global asymptotic stability of plant-seed bank models.
Eager, Eric Alan; Rebarber, Richard; Tenhumberg, Brigitte
2014-07-01
Many plant populations have persistent seed banks, which consist of viable seeds that remain dormant in the soil for many years. Seed banks are important for plant population dynamics because they buffer against environmental perturbations and reduce the probability of extinction. Viability of the seeds in the seed bank can depend on the seed's age, hence it is important to keep track of the age distribution of seeds in the seed bank. In this paper we construct a general density-dependent plant-seed bank model where the seed bank is age-structured. We consider density dependence in both seedling establishment and seed production, since previous work has highlighted that overcrowding can suppress both of these processes. Under certain assumptions on the density dependence, we prove that there is a globally stable equilibrium population vector which is independent of the initial state. We derive an analytical formula for the equilibrium population using methods from feedback control theory. We apply these results to a model for the plant species Cirsium palustre and its seed bank.
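A minimal sketch of the kind of model described, an age-structured seed bank coupled to a plant stage with density-dependent seedling establishment and seed production, is given below. The functional forms (Beverton-Holt-type) and all parameter values are invented for illustration and are not those of the Cirsium palustre application.

```python
import numpy as np

# Illustrative parameters (not the paper's values)
n_ages = 5                 # seed bank age classes
seed_survival = 0.6        # annual survival of dormant seeds
germ = np.array([0.30, 0.20, 0.10, 0.05, 0.02])   # age-dependent germination probability
seeds_per_plant = 50.0
a_est, a_fec = 0.01, 0.02  # density-dependence strengths

def establishment(seedlings):       # density-dependent seedling establishment
    return 0.3 / (1.0 + a_est * seedlings)

def fecundity(plants):              # density-dependent seed production per plant
    return seeds_per_plant / (1.0 + a_fec * plants)

bank = np.full(n_ages, 10.0)        # seeds per age class
plants = 1.0
for year in range(200):
    seedlings = np.sum(germ * bank)
    new_plants = establishment(seedlings) * seedlings
    new_seeds = fecundity(plants) * plants
    surviving = seed_survival * (1.0 - germ) * bank
    bank = np.concatenate(([new_seeds], surviving[:-1]))  # seeds age by one class each year
    plants = new_plants

print("equilibrium plants:", round(plants, 2))
print("equilibrium seed bank by age:", np.round(bank, 1))
```

Starting the iteration from a different initial vector reaches the same fixed point, which is the kind of initial-condition independence the global stability result formalizes.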
IS THE SIZE DISTRIBUTION OF URBAN AEROSOLS DETERMINED BY THERMODYNAMIC EQUILIBRIUM? (R826371C005)
A size-resolved equilibrium model, SELIQUID, is presented and used to simulate the size–composition distribution of semi-volatile inorganic aerosol in an urban environment. The model uses the efflorescence branch of aerosol behavior to predict the equilibrium partitioni...
Equilibrium theory of island biogeography: A review
Angela D. Yu; Simon A. Lei
2001-01-01
The topography, climatic pattern, location, and origin of islands generate unique patterns of species distribution. The equilibrium theory of island biogeography creates a general framework in which the study of taxon distribution and broad island trends may be conducted. Critical components of the equilibrium theory include the species-area relationship, island-...
USDA-ARS?s Scientific Manuscript database
The distribution coefficient (KD) for the human drug carbamazepine was measured using a non-equilibrium technique. Repacked soil columns were prepared using an Airport silt loam (Typic Natrustalf) with an average organic matter content of 2.45%. Carbamazepine solutions were then leached through th...
Kent, D.B.; Davis, J.A.; Anderson, L.C.D.; Rea, B.A.; Waite, T.D.
1994-01-01
Breakthrough of Cr(VI) (chromate), Se(VI) (selenate), and O2 (dissolved oxygen) was observed in tracer tests conducted in a shallow, sand and gravel aquifer with mildly reducing conditions. Loss of Cr, probably due to reduction of Cr(VI) to Cr(III) and irreversible sorption of Cr(III), occurred along with slight retardation of Cr(VI), owing to reversible sorption. Reduction of Se(VI) and O2 was thermodynamically feasible but did not occur, indicating conditions were unfavorable to microbial reduction. Cr(VI) reduction by constituents of aquifer sediments did not achieve local equilibrium during transport. The reduction rate was probably limited by incomplete contact between Cr(VI) transported along predominant flow paths and reductants located in regions within aquifer sediments of comparatively low permeability. Scatter in the amount of Cr reduction calculated from individual breakthrough curves at identical distances downgradient probably resulted from heterogeneities in the distribution of reductants in the sediments. Predictive modeling of the transport and fate of redox-sensitive solutes cannot be based strictly on thermodynamic considerations; knowledge of reaction rates is critical. Potentially important mass transfer rate limitations between solutes and reactants in sediments as well as heterogeneities in the distribution of redox properties in aquifers complicate determination of limiting rates for use in predictive simulations of the transport of redox-sensitive contaminants in groundwater.
Identification and analysis of student conceptions used to solve chemical equilibrium problems
NASA Astrophysics Data System (ADS)
Voska, Kirk William
This study identified and quantified chemistry conceptions students use when solving chemical equilibrium problems requiring the application of Le Chatelier's principle, and explored the feasibility of designing a paper and pencil test for this purpose. It also demonstrated the utility of conditional probabilities to assess test quality. A 10-item pencil-and-paper, two-tier diagnostic instrument, the Test to Identify Student Conceptualizations (TISC), was developed and administered to 95 second-semester university general chemistry students after they received regular course instruction concerning equilibrium in homogeneous aqueous, heterogeneous aqueous, and homogeneous gaseous systems. The content validity of TISC was established through a review of TISC by a panel of experts; construct validity was established through semi-structured interviews and conditional probabilities. Nine students were then selected from a stratified random sample for interviews to validate TISC. The probability that TISC correctly identified an answer given by a student in an interview was p = .64, while the probability that TISC correctly identified a reason given by a student in an interview was p = .49. Each TISC item contained two parts. In the first part the student selected the correct answer to a problem from a set of four choices. In the second part students wrote reasons for their answer to the first part. TISC questions were designed to identify students' conceptions concerning the application of Le Chatelier's principle, the constancy of the equilibrium constant, K, and the effect of a catalyst. Eleven prevalent incorrect conceptions were identified. This study found students consistently selected correct answers more frequently (53% of the time) than they provided correct reasons (33% of the time). The association between student answers and respective reasons on each TISC item was quantified using conditional probabilities calculated from logistic regression coefficients. The probability a student provided correct reasoning (B) when the student selected a correct answer (A) ranged from P(B|A) = .32 to P(B|A) = .82. However, the probability a student selected a correct answer when they provided correct reasoning ranged from P(A|B) = .96 to P(A|B) = 1. The K-R 20 reliability for TISC was found to be .79.
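The asymmetry between P(B|A) and P(A|B) reported above follows directly from a two-tier contingency table. The sketch below uses invented counts (the paper derived these probabilities from logistic regression coefficients, not raw counts) purely to show how the two conditional probabilities are computed and why they can differ so strongly.

```python
# Hypothetical counts for one two-tier item (not TISC data):
n_ac_rc, n_ac_ri = 40, 25      # answer correct: reason correct / reason incorrect
n_ai_rc, n_ai_ri = 2, 28       # answer incorrect: reason correct / reason incorrect

p_reason_given_answer = n_ac_rc / (n_ac_rc + n_ac_ri)   # P(B|A)
p_answer_given_reason = n_ac_rc / (n_ac_rc + n_ai_rc)   # P(A|B)

print(f"P(correct reason | correct answer) = {p_reason_given_answer:.2f}")
print(f"P(correct answer | correct reason) = {p_answer_given_reason:.2f}")
```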
Baity-Jesi, Marco; Calore, Enrico; Cruz, Andres; Fernandez, Luis Antonio; Gil-Narvión, José Miguel; Gordillo-Guerrero, Antonio; Iñiguez, David; Maiorano, Andrea; Marinari, Enzo; Martin-Mayor, Victor; Monforte-Garcia, Jorge; Muñoz Sudupe, Antonio; Navarro, Denis; Parisi, Giorgio; Perez-Gaviro, Sergio; Ricci-Tersenghi, Federico; Ruiz-Lorenzo, Juan Jesus; Schifano, Sebastiano Fabio; Tarancón, Alfonso; Tripiccione, Raffaele; Yllanes, David
2017-01-01
We have performed a very accurate computation of the nonequilibrium fluctuation–dissipation ratio for the 3D Edwards–Anderson Ising spin glass, by means of large-scale simulations on the special-purpose computers Janus and Janus II. This ratio (computed for finite times on very large, effectively infinite, systems) is compared with the equilibrium probability distribution of the spin overlap for finite sizes. Our main result is a quantitative statics-dynamics dictionary, which could allow the experimental exploration of important features of the spin-glass phase without requiring uncontrollable extrapolations to infinite times or system sizes. PMID:28174274
Information-Theoretic Uncertainty of SCFG-Modeled Folding Space of The Non-coding RNA
Manzourolajdad, Amirhossein; Wang, Yingfeng; Shaw, Timothy I.; Malmberg, Russell L.
2012-01-01
RNA secondary structure ensembles define probability distributions for alternative equilibrium secondary structures of an RNA sequence. Shannon's entropy is a measure of the amount of diversity present in any ensemble. In this work, Shannon's entropy of the SCFG ensemble on an RNA sequence is derived and implemented in polynomial time for both structurally ambiguous and unambiguous grammars. MicroRNA sequences generally have low folding entropy, as previously discovered. Surprisingly, signs of significantly high folding entropy were observed in certain ncRNA families. More effective models coupled with targeted randomization tests can lead to better insight into the folding features of these families. PMID:23160142
Margolin, L. G.; Hunter, A.
2017-10-18
Here, we consider the dependence of velocity probability distribution functions on the finite size of a thermodynamic system. We are motivated by applications to computational fluid dynamics, hence discrete thermodynamics. We begin by describing a coarsening process that represents geometric renormalization. Then, based only on the requirements of conservation, we demonstrate that the pervasive assumption of local thermodynamic equilibrium is not form invariant. We develop a perturbative correction that restores form invariance to second-order in a small parameter associated with macroscopic gradients. Finally, we interpret the corrections in terms of unresolved kinetic energy and discuss the implications of our results both in theory and as applied to numerical simulation.
Dissociation rate of bromine diatomics in an argon heat bath
NASA Technical Reports Server (NTRS)
Razner, R.; Hopkins, D.
1973-01-01
The evolution of a collection of 300 K bromine diatomics embedded in a heat bath of argon atoms at 1800 K was studied by computer, and a dissociation-rate constant for the reaction Br2 + Ar → Br + Br + Ar was determined. Previously published probability distributions for energy and angular momentum transfers in classical three-dimensional Br2-Ar collisions were used in conjunction with a newly developed Monte Carlo scheme for this purpose. Results are compared with experimental shock-tube data and the predictions of several other theoretical models. A departure from equilibrium is obtained which is significantly greater than that predicted by any of these other theories.
Brownian motion of classical spins: Anomalous dissipation and generalized Langevin equation
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Malay; Jayannavar, A. M.
2017-10-01
In this work, we derive the Langevin equation (LE) of a classical spin interacting with a heat bath through momentum variables, starting from the fully dynamical Hamiltonian description. The derived LE with anomalous dissipation is analyzed in detail. The obtained LE is non-Markovian with multiplicative noise terms. The concomitant dissipative terms obey the fluctuation-dissipation theorem. The Markovian limit correctly produces the Kubo and Hashitsume equation. The perturbative treatment of our equations produces the Landau-Lifshitz equation and the Seshadri-Lindenberg equation. Then we derive the Fokker-Planck equation corresponding to LE and the concept of equilibrium probability distribution is analyzed.
Hydrodynamics of the Polyakov line in SU(N_c) Yang-Mills
Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail
2015-12-08
We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite N_c for Yang-Mills theory in even and odd space-time dimensions. The hydro-static solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the de-confined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of N_c, and are consistent with the string model results at N_c = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(N_c) bubble using a piece-wise sound wave is suggested.
Linking Well-Tempered Metadynamics Simulations with Experiments
Barducci, Alessandro; Bonomi, Massimiliano; Parrinello, Michele
2010-01-01
Linking experiments with the atomistic resolution provided by molecular dynamics simulations can shed light on the structure and dynamics of protein-disordered states. The sampling limitations of classical molecular dynamics can be overcome using metadynamics, which is based on the introduction of a history-dependent bias on a small number of suitably chosen collective variables. Even if such bias distorts the probability distribution of the other degrees of freedom, the equilibrium Boltzmann distribution can be reconstructed using a recently developed reweighting algorithm. Quantitative comparison with experimental data is thus possible. Here we show the potential of this combined approach by characterizing the conformational ensemble explored by a 13-residue helix-forming peptide by means of a well-tempered metadynamics/parallel tempering approach and comparing the reconstructed nuclear magnetic resonance scalar couplings with experimental data. PMID:20441734
The Pitman-Yor Process and an Empirical Study of Choice Behavior
NASA Astrophysics Data System (ADS)
Hisakado, Masato; Sano, Fumiaki; Mori, Shintaro
2018-02-01
This study discusses choice behavior using a voting model in which voters can obtain information from a finite number r of previous voters. Voters vote for a candidate with a probability proportional to the previous vote ratio, which is visible to the voters. We obtain the Pitman sampling formula as the equilibrium distribution of r votes. We present the model as a process of posting on a bulletin board system, 2ch.net, where users can choose one of many threads to create a post. We explore how this choice depends on the last r posts and the distribution of these last r posts across threads. We conclude that the posting process is described by our voting model with analog herders for a small r, which might correspond to the time horizon of users' responses.
Voltage-Gated Lipid Ion Channels
Blicher, Andreas; Heimburg, Thomas
2013-01-01
Synthetic lipid membranes can display channel-like ion conduction events even in the absence of proteins. We show here that these events are voltage-gated with a quadratic voltage dependence as expected from electrostatic theory of capacitors. To this end, we recorded channel traces and current histograms in patch-experiments on lipid membranes. We derived a theoretical current-voltage relationship for pores in lipid membranes that describes the experimental data very well when assuming an asymmetric membrane. We determined the equilibrium constant between closed and open state and the open probability as a function of voltage. The voltage-dependence of the lipid pores is found comparable to that of protein channels. Lifetime distributions of open and closed events indicate that the channel open distribution does not follow exponential statistics but rather power law behavior for long open times. PMID:23823188
Equilibrium distribution of heavy quarks in Fokker-Planck dynamics
Walton; Rafelski
2000-01-03
We obtain an explicit generalization, within Fokker-Planck dynamics, of Einstein's relation between drag, diffusion, and the equilibrium distribution for a spatially homogeneous system, considering both the transverse and longitudinal diffusion for dimension n>1. We provide a complete characterization of the equilibrium distribution in terms of the drag and diffusion transport coefficients. We apply this analysis to charm quark dynamics in a thermal quark-gluon plasma for the case of collisional equilibration.
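For orientation, the one-dimensional version of the drag-diffusion-equilibrium relation follows directly from the stationary Fokker-Planck equation; the paper's result is the generalization to n > 1 with distinct transverse and longitudinal diffusion coefficients, so the following is only the familiar special case.

```latex
% One-dimensional illustration (the paper treats n > 1 with transverse/longitudinal diffusion).
\partial_t f = \partial_p\!\left[\,A(p)\,f + \partial_p\big(D(p)\,f\big)\right]
\quad\Longrightarrow\quad
f_{\mathrm{eq}}(p) \;\propto\; \frac{1}{D(p)}\,
      \exp\!\left[-\int^{p}\frac{A(p')}{D(p')}\,\mathrm{d}p'\right].
% Demanding a thermal equilibrium f_eq ∝ exp[-E(p)/T] gives the Einstein-type relation
A(p) \;=\; \frac{D(p)}{T}\,\frac{\mathrm{d}E}{\mathrm{d}p} \;-\; \frac{\mathrm{d}D}{\mathrm{d}p}.
```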
Dynamical Response of Networks Under External Perturbations: Exact Results
NASA Astrophysics Data System (ADS)
Chinellato, David D.; Epstein, Irving R.; Braha, Dan; Bar-Yam, Yaneer; de Aguiar, Marcus A. M.
2015-04-01
We give exact statistical distributions for the dynamic response of influence networks subjected to external perturbations. We consider networks whose nodes have two internal states labeled 0 and 1. We let a number of nodes be frozen in state 0, another number frozen in state 1, and the remaining nodes change by adopting the state of a connected node with a fixed probability per time step. The frozen nodes can be interpreted as external perturbations to the subnetwork of free nodes. Analytically extending the numbers of frozen nodes to values smaller than 1 enables modeling the case of weak coupling. We solve the dynamical equations exactly for fully connected networks, obtaining the equilibrium distribution, transition probabilities between any two states and the characteristic time to equilibration. Our exact results are excellent approximations for other topologies, including random, regular lattice, scale-free and small world networks, when the numbers of fixed nodes are adjusted to take account of the effect of topology on coupling to the environment. This model can describe a variety of complex systems, from magnetic spins to social networks to population genetics, and was recently applied as a framework for early warning signals for real-world self-organized economic market crises.
Statistical analogues of thermodynamic extremum principles
NASA Astrophysics Data System (ADS)
Ramshaw, John D.
2018-05-01
As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = −k_B ∑_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E − TS and the grand potential J = F − μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
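The equivalence claimed above is easy to verify numerically: parametrizing p as a softmax (so that only normalization is built in) and minimizing the statistical analogue F[p] = ⟨E⟩ − T S[p] without any constraint on ⟨E⟩ recovers the Boltzmann distribution. The energy levels and temperature below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

kB, T = 1.0, 2.0
E = np.array([0.0, 1.0, 2.0, 5.0])         # illustrative energy levels

def free_energy(x):
    """Statistical analogue F[p] = <E> - T S[p], with p = softmax(x),
    so only normalization is enforced (no constraint on <E>)."""
    p = np.exp(x - x.max()); p /= p.sum()
    S = -kB * np.sum(p * np.log(p))
    return np.sum(p * E) - T * S

res = minimize(free_energy, x0=np.zeros_like(E))
p_min = np.exp(res.x - res.x.max()); p_min /= p_min.sum()

boltzmann = np.exp(-E / (kB * T)); boltzmann /= boltzmann.sum()
print("minimizer of F :", np.round(p_min, 4))
print("Boltzmann dist.:", np.round(boltzmann, 4))
```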
Relativistic distribution function for particles with spin at local thermodynamical equilibrium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becattini, F., E-mail: becattini@fi.infn.it; INFN Sezione di Firenze, Florence; Universität Frankfurt, Frankfurt am Main
2013-11-15
We present an extension of the relativistic single-particle distribution function for weakly interacting particles at local thermodynamical equilibrium including spin degrees of freedom, for massive spin 1/2 particles. We infer, on the basis of the global equilibrium case, that at local thermodynamical equilibrium particles acquire a net polarization proportional to the vorticity of the inverse temperature four-vector field. The obtained formula for polarization also implies that a steady gradient of temperature entails a polarization orthogonal to particle momentum. The single-particle distribution function in momentum space extends the so-called Cooper–Frye formula to particles with spin 1/2 and allows us to predict their polarization in relativistic heavy ion collisions at the freeze-out. Highlights: • Single-particle distribution function in local thermodynamical equilibrium with spin. • Polarization of spin 1/2 particles in a fluid at local thermodynamical equilibrium. • Prediction of a new effect: a steady gradient of temperature induces a polarization. • Application to the calculation of polarization in relativistic heavy ion collisions.
NASA Astrophysics Data System (ADS)
Mazilu, Irina; Gonzalez, Joshua
2008-03-01
From the point of view of a physicist, a bio-molecular motor represents an interesting non-equilibrium system and it is directly amenable to an analysis using standard methods of non-equilibrium statistical physics. We conduct a rigorous Monte Carlo study of three different driven lattice gas models that retain the basic behavior of three types of cytoskeletal molecular motors. Our models incorporate novel features such as realistic dynamics rules and complex motor-motor interactions. We are interested in gaining a deeper understanding of how various parameters influence the macroscopic behavior of these systems, what the density profile is, and whether the system undergoes a phase transition. On the analytical front, we computed the steady-state probability distributions exactly for one of the models using the matrix method established in 1993 by B. Derrida et al. We also explored the possibilities offered by the "Bethe ansatz" method, mapping some well-studied spin models onto asymmetric simple exclusion models (already analyzed using computer simulations) and using the results obtained for the spin models to find an exact solution for our problem. We have performed exhaustive computational studies of the kinesin and dynein molecular motor models that prove to be very useful in checking our analytical work.
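A minimal sketch of the kind of driven lattice gas referred to above, a totally asymmetric simple exclusion process with open boundaries and random-sequential updates, is given below; the rates are illustrative and none of the motor-specific dynamics rules or motor-motor interactions of the paper are included.

```python
import numpy as np

rng = np.random.default_rng(2)
L_sites, alpha, beta = 100, 0.3, 0.7      # lattice sites, injection and extraction rates (illustrative)
site = np.zeros(L_sites, dtype=int)       # 1 = occupied, 0 = empty
density = np.zeros(L_sites)

sweeps, measure_after = 20_000, 5_000
for t in range(sweeps):
    for _ in range(L_sites):                       # one sweep = L random-sequential update attempts
        i = rng.integers(-1, L_sites)              # -1 plays the role of the entry reservoir
        if i == -1:
            if site[0] == 0 and rng.random() < alpha:
                site[0] = 1                        # inject a particle at the left boundary
        elif i == L_sites - 1:
            if site[i] == 1 and rng.random() < beta:
                site[i] = 0                        # extract at the right boundary
        elif site[i] == 1 and site[i + 1] == 0:
            site[i], site[i + 1] = 0, 1            # hop to the right under exclusion
    if t >= measure_after:
        density += site

density /= (sweeps - measure_after)
print("bulk density ~", round(density[L_sites // 2], 3),
      "(the open-boundary TASEP low-density phase predicts alpha =", alpha, ")")
```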
Wealth distribution of simple exchange models coupled with extremal dynamics
NASA Astrophysics Data System (ADS)
Bagatella-Flores, N.; Rodríguez-Achach, M.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2015-01-01
Punctuated Equilibrium (PE) states that after long periods of evolutionary quiescence, species evolution can take place in short time intervals, where sudden differentiation makes new species emerge and drives others extinct. In this paper, we introduce and study the effect of punctuated equilibrium on two different asset exchange models: the yard sale model (YS, winner gets a random fraction of a poorer player's wealth) and the theft and fraud model (TF, winner gets a random fraction of the loser's wealth). The resulting wealth distribution is characterized using the Gini index. In order to do this, we consider PE as a perturbation with probability ρ of being applied. We compare the resulting values of the Gini index at different increasing values of ρ in both models. We find that in the case of the TF model the Gini index decreases as the perturbation ρ increases and shows no dependence on the number of agents, while for YS we observe a phase transition which happens around ρc = 0.79. For perturbations ρ < ρc the Gini index reaches the value of one as time increases (an extreme wealth-condensation state), whereas for perturbations greater than or equal to ρc the Gini index remains different from one, preventing the system from reaching this extreme state. We show that both simple exchange models coupled with PE dynamics give more realistic results. In particular, for YS we observe a power-law decay of the wealth distribution.
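The yard-sale exchange and the Gini index can be sketched as follows; the punctuated-equilibrium perturbation is reduced here to an occasional random reset of the poorest agent's wealth, a crude stand-in for the extremal dynamics used in the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def gini(w):
    """Gini index of a wealth vector (0 = perfect equality, 1 = full condensation)."""
    w = np.sort(w)
    n = len(w)
    return 2 * np.sum(np.arange(1, n + 1) * w) / (n * np.sum(w)) - (n + 1) / n

N, steps, rho = 500, 100_000, 0.05         # agents, exchanges, perturbation probability (illustrative)
wealth = np.ones(N)

for _ in range(steps):
    i, j = rng.choice(N, size=2, replace=False)
    stake = rng.random() * min(wealth[i], wealth[j])   # yard sale: fraction of the poorer agent's wealth
    if rng.random() < 0.5:
        wealth[i] += stake; wealth[j] -= stake
    else:
        wealth[i] -= stake; wealth[j] += stake
    if rng.random() < rho:                             # crude stand-in for the PE perturbation:
        k = np.argmin(wealth)                          # the "least fit" (poorest) agent is reset
        wealth[k] = rng.random()

print("Gini index:", round(gini(wealth), 3))
```

Running the same loop with rho = 0 drives the Gini index toward one, the condensation behavior the unperturbed yard-sale model is known for.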
Tsallis non-extensive statistics and solar wind plasma complexity
NASA Astrophysics Data System (ADS)
Pavlos, G. P.; Iliopoulos, A. C.; Zastenker, G. N.; Zelenyi, L. M.; Karakatsanis, L. P.; Riazantseva, M. O.; Xenakis, M. N.; Pavlos, E. G.
2015-03-01
This article presents novel results revealing non-equilibrium phase transition processes in the solar wind plasma during a strong shock event, which took place on 26th September 2011. Solar wind plasma is a typical case of stochastic spatiotemporal distribution of physical state variables such as force fields (B⃗, E⃗) and matter fields (particle and current densities or bulk plasma distributions). This study shows clearly the non-extensive and non-Gaussian character of the solar wind plasma and the existence of multi-scale strong correlations from the microscopic to the macroscopic level. It also underlines the inefficiency of classical magneto-hydro-dynamic (MHD) or plasma statistical theories, based on the classical central limit theorem (CLT), to explain the complexity of the solar wind dynamics, since these theories include smooth and differentiable spatial-temporal functions (MHD theory) or Gaussian statistics (Boltzmann-Maxwell statistical mechanics). On the contrary, the results of this study indicate the presence of non-Gaussian non-extensive statistics with heavy-tailed probability distribution functions, which are related to the q-extension of the CLT. Finally, the results of this study can be understood in the framework of modern theoretical concepts such as non-extensive statistical mechanics (Tsallis, 2009), fractal topology (Zelenyi and Milovanov, 2004), turbulence theory (Frisch, 1996), strange dynamics (Zaslavsky, 2002), percolation theory (Milovanov, 1997), anomalous diffusion theory and anomalous transport theory (Milovanov, 2001), fractional dynamics (Tarasov, 2013) and non-equilibrium phase transition theory (Chang, 1992).
NASA Astrophysics Data System (ADS)
Liolios, Konstantinos; Bergman, Jan; Moussas, Xenophon
2017-04-01
Heliospheric energetic particle populations of energies higher than 1 MeV are studied using a 33-year-long data record composed of hourly measurements, as extracted from the NASA Goddard Space Flight Center's OMNI data set. Their periodicities are examined by means of least-squares spectral analysis and wavelet analysis and found to be in good agreement with periodicities seen in sunspot numbers, which are well-known indicators of variations in solar activity. Hence, the source of this energetic and positively charged gas is mainly the Sun, but part of it should be cosmic rays. As derived from the analyses of suprathermal "heavy" tails of the probability distribution, we assume that the gas kinetics is described by a deformed Maxwell-Boltzmann distribution, namely, the kappa distribution. The q-index, the analogue of the κ-index, is computed for every hour in the data record and used to investigate how far away the gas is from being in classical thermal equilibrium (q = 1). We compare the q-index time series with that of sunspot numbers and conclude that the gas is in continuously variable states away (q > 1) from the almost always assumed thermal equilibrium. During the first ˜15 years, the q-indices somewhat exceed the theoretically predicted limit but follow a pattern which is very homogeneous. However, just before 1990, the q-indices begin to fluctuate in a periodic manner, creating maxima and minima, as they continuously increase until they peak about 1996-1997, while after these years, they decrease following a similar pattern. As a result, we assume that after 1990, for a period that lasted at least 10 years, something changed in the Sun's behaviour. A higher number of solar bursts could easily affect the gas, but further research, for instance an analysis of solar flare time series from the same period, is required to draw a more robust conclusion of what may have caused the observed anomaly.
Computer simulations of equilibrium magnetization and microstructure in magnetic fluids
NASA Astrophysics Data System (ADS)
Rosa, A. P.; Abade, G. C.; Cunha, F. R.
2017-09-01
In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres with the same diameters. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique by replicating a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and Ewald sums is performed by using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate suspension microstructure by computing the important radial pair-distribution function go(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results of go(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures like dimer and trimer chains, with trimers having a probability of forming an order of magnitude lower than dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function is different from zero, it indicates structure-anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions for both boundary conditions is explored in this work. Results show that at dilute regimes and with moderate dipole-dipole interactions, the standard method of minimum image is both accurate and computationally efficient. Otherwise, lattice sums of magnetic particle interactions are required to accelerate convergence of the equilibrium magnetization. The accuracy of the numerical code is also quantitatively verified by comparing the magnetization obtained from numerical results with asymptotic predictions of high order in the particle volume fraction, in the presence of dipole-dipole interactions. In addition, Brownian Dynamics simulations are used in order to examine magnetization relaxation of a ferrofluid and to calculate the magnetic relaxation time as a function of the magnetic particle interaction strength for a given particle volume fraction and a non-dimensional applied field. The simulations of magnetization relaxation have shown the existence of a critical value of the dipole-dipole interaction parameter. For interaction strengths below the critical value at a given particle volume fraction, the magnetic relaxation time is close to the Brownian relaxation time and the suspension has no appreciable memory. On the other hand, for dipole interaction strengths beyond the critical value, the relaxation time increases exponentially with the strength of the dipole-dipole interaction. Although we have considered equilibrium conditions, the obtained results have far-reaching implications for the analysis of magnetic suspensions under external flow.
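The radial pair-distribution function with the minimum-image convention, mentioned above, can be computed with a short routine like the sketch below; the configuration is simply random (no magnetic interactions, no Ewald sums), so it only illustrates the estimator, for which g(r) ≈ 1 is expected.

```python
import numpy as np

rng = np.random.default_rng(4)
N, box = 200, 10.0                            # particles, cubic box side (illustrative)
pos = rng.random((N, 3)) * box                # uncorrelated (ideal-gas) configuration

bins = np.linspace(0.0, box / 2, 61)
hist = np.zeros(len(bins) - 1)

for i in range(N - 1):
    d = pos[i + 1:] - pos[i]
    d -= box * np.round(d / box)              # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    hist += np.histogram(r, bins=bins)[0]

shell_vol = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
ideal = 0.5 * N * (N - 1) / box**3 * shell_vol   # expected pair counts for an ideal gas
g = hist / ideal
print("g(r) near r = box/4:", round(g[30], 2), "(should be ~1 for an uncorrelated configuration)")
```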
The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew
Der, Ricky; Plotkin, Joshua B.
2014-01-01
We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
BINARY CORRELATIONS IN IONIZED GASES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.; Taylor, H.S.
1961-01-01
An equation of evolution for the binary distribution function in a classical homogeneous, nonequilibrium plasma was derived. It is shown that the asymptotic (long-time) solution of this equation is the Debye distribution, thus providing a rigorous dynamical derivation of the equilibrium distribution. This proof is free from the fundamental conceptual difficulties of conventional equilibrium derivations. Out of equilibrium, a closed formula was obtained for the long living correlations, in terms of the momentum distribution function. These results should form an appropriate starting point for a rigorous theory of transport phenomena in plasmas, including the effect of molecular correlations. (auth)
NASA Astrophysics Data System (ADS)
Franchi, Fulvio; Turetta, Clara; Cavalazzi, Barbara; Corami, Fabiana; Barbieri, Roberto
2016-08-01
Trace and rare earth elements (REEs) have proven their utility as tools for assessing the genesis and early diagenesis of widespread geological bodies such as carbonate mounds, whose genetic processes are not yet fully understood. Carbonates from the Middle Devonian conical mud mounds of the Maïder Basin (eastern Anti-Atlas, Morocco) have been analysed for their REE and trace element distribution. Collectively, the carbonates from the Maïder Basin mud mounds appear to display coherent REE patterns. Three different geochemical patterns, possibly related with three different diagenetic events, include: i) dyke fills with a normal marine REE pattern probably precipitated in equilibrium with seawater, ii) mound micrite with a particular enrichment of overall REE contents and variable Ce anomaly probably related to variation of pH, increase of alkalinity or dissolution/remineralization of organic matter during early diagenesis, and iii) haematite-rich vein fills precipitated from venting fluids of probable hydrothermal origin. Our results reinforce the hypothesis that these mounds were probably affected by an early diagenesis induced by microbial activity and triggered by abundance of dispersed organic matter, whilst venting may have affected the mounds during a later diagenetic phase.
The role of probabilities in physics.
Le Bellac, Michel
2012-09-01
Although modern physics was born in the XVIIth century as a fully deterministic theory in the form of Newtonian mechanics, the use of probabilistic arguments turned out later on to be unavoidable. Three main situations can be distinguished. (1) When the number of degrees of freedom is very large, on the order of Avogadro's number, a detailed dynamical description is not possible, and in fact not useful: we do not care about the velocity of a particular molecule in a gas, all we need is the probability distribution of the velocities. This statistical description introduced by Maxwell and Boltzmann allows us to recover equilibrium thermodynamics, gives a microscopic interpretation of entropy and underlies our understanding of irreversibility. (2) Even when the number of degrees of freedom is small (but larger than three) sensitivity to initial conditions of chaotic dynamics makes determinism irrelevant in practice, because we cannot control the initial conditions with infinite accuracy. Although die tossing is in principle predictable, the approach to chaotic dynamics in some limit implies that our ignorance of initial conditions is translated into a probabilistic description: each face comes up with probability 1/6. (3) As is well-known, quantum mechanics is incompatible with determinism. However, quantum probabilities differ in an essential way from the probabilities introduced previously: it has been shown from the work of John Bell that quantum probabilities are intrinsic and cannot be given an ignorance interpretation based on a hypothetical deeper level of description. Copyright © 2012 Elsevier Ltd. All rights reserved.
Momentum conserving Brownian dynamics propagator for complex soft matter fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padding, J. T.; Briels, W. J.
2014-12-28
We present a Galilean invariant, momentum conserving first order Brownian dynamics scheme for coarse-grained simulations of highly frictional soft matter systems. Friction forces are taken to be with respect to moving background material. The motion of the background material is described by locally averaged velocities in the neighborhood of the dissolved coarse coordinates. The velocity variables are updated by a momentum conserving scheme. The properties of the stochastic updates are derived through the Chapman-Kolmogorov and Fokker-Planck equations for the evolution of the probability distribution of coarse-grained position and velocity variables, by requiring the equilibrium distribution to be a stationary solution. We test our new scheme on concentrated star polymer solutions and find that the transverse current and velocity time auto-correlation functions behave as expected from hydrodynamics. In particular, the velocity auto-correlation functions display a long time tail in complete agreement with hydrodynamics.
Are groups of galaxies virialized systems?
NASA Technical Reports Server (NTRS)
Diaferio, Antonaldo; Ramella, Massimo; Geller, Margaret J.; Ferrari, Attilio
1993-01-01
Groups are systems of galaxies with crossing times t(cr) much smaller than the Hubble time. Most of them have t(cr) less than 0.1/H0. The usual interpretation is that they are in virial equilibrium. We compare the data of the group catalog selected from the CfA redshift survey extension with different N-body models. We show that the distributions of kinematic and dynamical quantities of the groups in the CfA catalog can be reproduced by a single collapsing group observed along different lines of sight. This result shows that (1) projection effects dominate the statistics of these systems, and (2) observed groups of galaxies are probably still in the collapse phase.
A Stochastic Framework for Modeling the Population Dynamics of Convective Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson; Feng, Zhe; Plant, Robert S.
A stochastic prognostic framework for modeling the population dynamics of convective clouds and representing them in climate models is proposed. The approach used follows the non-equilibrium statistical mechanical approach through a master equation. The aim is to represent the evolution of the number of convective cells of a specific size and their associated cloud-base mass flux, given a large-scale forcing. In this framework, referred to as STOchastic framework for Modeling Population dynamics of convective clouds (STOMP), the evolution of convective cell size is predicted from three key characteristics: (i) the probability of growth, (ii) the probability of decay, and (iii) the cloud-base mass flux. STOMP models are constructed and evaluated against CPOL radar observations at Darwin and convection permitting model (CPM) simulations. Multiple models are constructed under various assumptions regarding these three key parameters and the realism of these models is evaluated. It is shown that in a model where convective plumes prefer to aggregate spatially and mass flux is a non-linear function of convective cell area, mass flux manifests a recharge-discharge behavior under steady forcing. Such a model also produces observed behavior of convective cell populations and CPM simulated mass flux variability under diurnally varying forcing. Besides its use in developing an understanding of convection processes and the controls on convective cell size distributions, this modeling framework is also designed to be capable of providing alternative, non-equilibrium, closure formulations for spectral mass flux parameterizations.
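The master-equation viewpoint can be illustrated with a minimal birth-death model for the number of convective cells, simulated with the Gillespie algorithm; constant triggering and per-cell decay rates are assumptions of this sketch, not STOMP's growth/decay probabilities or mass-flux closure.

```python
import numpy as np

rng = np.random.default_rng(5)
birth_rate = 2.0        # cells triggered per unit time by the large-scale forcing (illustrative)
death_rate = 0.1        # decay rate per existing cell (illustrative)

def gillespie(t_end):
    """Exact stochastic simulation of the birth-death master equation."""
    t, n, traj = 0.0, 0, []
    while t < t_end:
        rates = np.array([birth_rate, death_rate * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # waiting time to the next event
        if rng.random() < rates[0] / total:
            n += 1                                 # a new convective cell appears
        else:
            n -= 1                                 # an existing cell decays
        traj.append((t, n))
    return traj

traj = gillespie(200.0)
counts = np.array([n for _, n in traj[len(traj) // 2:]])   # discard the transient
print("mean cell number:", counts.mean().round(2),
      " (the master-equation equilibrium is Poisson with mean", birth_rate / death_rate, ")")
```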
Control and instanton trajectories for random transitions in turbulent flows
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
Many turbulent systems exhibit random switches between qualitatively different attractors. The transition between these bistable states is often an extremely rare event that cannot be computed through DNS due to complexity limitations. We present results for the calculation of instanton trajectories (a control problem) between non-equilibrium stationary states (attractors) in the 2D stochastic Navier-Stokes equations. By representing the transition probability between two states using a path integral formulation, we can compute the most probable trajectory (instanton) joining two non-equilibrium stationary states. Technically, this is equivalent to the minimization of an action, which can be related to a fluid mechanics control problem.
NASA Astrophysics Data System (ADS)
Perversi, Eleonora; Regazzini, Eugenio
2015-05-01
For a general inelastic Kac-like equation recently proposed, this paper studies the long-time behaviour of its probability-valued solution. In particular, the paper provides necessary and sufficient conditions for the initial datum in order that the corresponding solution converges to equilibrium. The proofs rest on the general CLT for independent summands applied to a suitable Skorokhod representation of the original solution evaluated at an increasing and divergent sequence of times. It turns out that, roughly speaking, the initial datum must belong to the standard domain of attraction of a stable law, while the equilibrium is presentable as a mixture of stable laws.
Mix or un-mix? Trace element segregation from a heterogeneous mantle, simulated.
NASA Astrophysics Data System (ADS)
Katz, R. F.; Keller, T.; Warren, J. M.; Manley, G.
2016-12-01
Incompatible trace-element concentrations vary in mid-ocean ridge lavas and melt inclusions by an order of magnitude or more, even in samples from the same location. This variability has been attributed to channelised melt flow [Spiegelman & Kelemen, 2003], which brings enriched, low-degree melts to the surface in relative isolation from depleted inter-channel melts. We re-examine this hypothesis using a new melting-column model that incorporates mantle volatiles [Keller & Katz 2016]. Volatiles cause a deeper onset of channelisation: their corrosivity is maximum at the base of the silicate melting regime. We consider how source heterogeneity and melt transport shape trace-element concentrations in basaltic lavas. We use both equilibrium and non-equilibrium formulations [Spiegelman 1996]. In particular, we evaluate the effect of melt transport on probability distributions of trace element concentration, comparing the inflow distribution in the mantle with the outflow distribution in the magma. Which features of melt transport preserve, erase or overprint input correlations between elements? To address this we consider various hypotheses about mantle heterogeneity, allowing for spatial structure in major components, volatiles and trace elements. Of interest are the roles of wavelength, amplitude, and correlation of heterogeneity fields. To investigate how different modes of melt transport affect input distributions, we compare melting models that produce either shallow or deep channelisation, or none at all. References: Keller & Katz (2016). The Role of Volatiles in Reactive Melt Transport in the Asthenosphere. Journal of Petrology, http://doi.org/10.1093/petrology/egw030. Spiegelman (1996). Geochemical consequences of melt transport in 2-D: The sensitivity of trace elements to mantle dynamics. Earth and Planetary Science Letters, 139, 115-132. Spiegelman & Kelemen (2003). Extreme chemical variability as a consequence of channelized melt transport. Geochemistry Geophysics Geosystems, http://doi.org/10.1029/2002GC000336
NASA Astrophysics Data System (ADS)
Ritter, Simon M.; Isenbeck-Schröter, Margot; Schröder-Ritzrau, Andrea; Scholz, Christian; Rheinberger, Stefan; Höfle, Bernhard; Frank, Norbert
2018-03-01
The formation of tufa is essentially influenced by biological processes and, in order to infer environmental information from tufa deposits, it has to be determined how the geochemistry of biologically influenced tufa deviates from equilibrium conditions between water and calcite precipitate. We investigated the evolution of the water and tufa geochemistry of consecutive tufa barrages in a small tufa-depositing creek in Southern Germany. High incorporation of divalent cations into tufa is ubiquitous, which is probably promoted by an influence of biofilms in the tufa element partitioning. The distribution coefficients for the incorporation of Mg, Sr and Ba into tufa at the Kaisinger creek D(Mg), D(Sr) and D(Ba) are 0.020-0.031, 0.13-0.18 and 0.26-0.43, respectively. This agrees with previous research suggesting that biofilm influenced tufa will be enriched in divalent cations over equilibrium values in the order of Mg < Sr < Ba. Furthermore, the incorporation of Mg, Sr and Ba into tufa of the Kaisinger creek decreases downstream, which can be attributed to changes of the relative portions of bio-influenced tufa formation with likely higher distribution coefficients and inorganically-driven tufa formation with likely lower distribution coefficients. Additionally, the distribution coefficients of metals in tufa of the Kaisinger creek D(Cd), D(Zn), D(Co) and D(Mn) show values of 11-22, 2.2-12, 0.7-4.9 and 30-57, respectively. These metals are highly enriched in upstream tufa deposits and their contents in tufa strongly decrease downstream. Such highly compatible elements could therefore be used to distinguish easily between different lateral sections in fluvial barrage-dam tufa depositional systems and could serve as a useful geochemical tool in studying ancient barrage-dam tufa depositional systems.
NASA Astrophysics Data System (ADS)
Schaden, Martin
2002-12-01
Quantum theory is used to model secondary financial markets. Contrary to stochastic descriptions, the formalism emphasizes the importance of trading in determining the value of a security. All possible realizations of investors holding securities and cash are taken as the basis of the Hilbert space of market states. The temporal evolution of an isolated market is unitary in this space. Linear operators representing basic financial transactions such as cash transfer and the buying or selling of securities are constructed, and simple model Hamiltonians that generate the temporal evolution due to cash flows and the trading of securities are proposed. The Hamiltonian describing financial transactions becomes local when the profit/loss from trading is small compared to the turnover. This approximation may describe a highly liquid and efficient stock market. The lognormal probability distribution for the price of a stock with a variance that is proportional to the elapsed time is reproduced for an equilibrium market. The asymptotic volatility of a stock in this case is related to the long-term probability that it is traded.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
NASA Astrophysics Data System (ADS)
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. The optimal enzyme rate constants computed in this way for a steady state also yield the most uniform probability distribution of the enzyme states, which accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
The first-digit frequencies in data of turbulent flows
NASA Astrophysics Data System (ADS)
Biau, Damien
2015-12-01
Considering the first significant digits (noted d) in data sets of dissipation for turbulent flows, the probability of finding a given digit (d = 1, 2, …, 9) would be 1/9 for a uniform distribution. Instead the probability closely follows Newcomb-Benford's law, namely P(d) = log10(1 + 1/d). The discrepancies between Newcomb-Benford's law and first-digit frequencies in turbulent data are analysed through Shannon's entropy. The data sets are obtained with direct numerical simulations for two types of fluid flow: an isotropic case initialized with a Taylor-Green vortex and a channel flow. Results are in agreement with Newcomb-Benford's law in nearly homogeneous cases and the discrepancies are related to intermittent events. Thus the scale invariance for the first significant digits, which supports Newcomb-Benford's law, seems to be related to an equilibrium turbulent state, namely one with a significant inertial range. A matlab/octave program provided in the appendix allows part of the presented results to be easily replicated.
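Checking first-digit frequencies against Newcomb-Benford's law takes only a few lines; the sketch below uses a broad-range lognormal sample as a stand-in for dissipation data (an assumption of this sketch, not the paper's DNS fields).

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.lognormal(mean=0.0, sigma=3.0, size=100_000)   # broad-range positive data (illustrative)

first_digit = np.array([int(f"{x:.6e}"[0]) for x in data])  # leading significant digit of each value
freq = np.array([(first_digit == d).mean() for d in range(1, 10)])
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

for d in range(1, 10):
    print(f"d = {d}:  observed {freq[d-1]:.3f}   Benford {benford[d-1]:.3f}")
```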
Genetic distribution of 15 autosomal STR markers in the Punjabi population of Pakistan.
Shan, Muhammad Adnan; Hussain, Manzoor; Shafique, Muhammad; Shahzad, Muhammad; Perveen, Rukhsana; Idrees, Muhammad
2016-11-01
Genetic diversity of 15 autosomal short tandem repeat (STR) loci was evaluated in samples from 713 unrelated individuals of a Punjabi population of Pakistan. These loci were scrutinized to establish allelic frequencies and statistical parameters of forensic and paternity interest. A total of 165 alleles were observed, with the corresponding allele frequencies ranging from 0.001 to 0.446. D2S1338 was found to be the most informative locus, while TPOX (0.611) was the least discriminating locus. The combined power of discrimination (CPD), the combined probability of exclusion (CPE), and the cumulative probability of matching (CPM) were found to be 0.999999999999999998606227424808, 0.999995777557989, and 1.37543 × 10^-18, respectively. All loci were consistent with Hardy-Weinberg equilibrium after the Bonferroni correction (p < 0.0033) except one locus, D3S1358. The study revealed that these STR loci are highly polymorphic and suitable for forensic and parentage analyses. In comparison to different populations (Asians and non-Asians), significant differences were recorded for these loci.
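Under Hardy-Weinberg equilibrium the per-locus matching probability follows from the allele frequencies, and the cumulative probability of matching is the product over loci. The sketch below uses invented allele frequencies for three hypothetical loci, not the published Punjabi data.

```python
import numpy as np

# Invented allele frequencies for three hypothetical loci (not the published data).
loci = [
    np.array([0.30, 0.25, 0.20, 0.15, 0.10]),
    np.array([0.40, 0.30, 0.20, 0.10]),
    np.array([0.20, 0.20, 0.20, 0.20, 0.10, 0.10]),
]

def matching_probability(p):
    """Probability that two random individuals share a genotype at one locus, assuming HWE."""
    homo = np.sum(p**4)                                    # homozygote genotype frequencies squared
    het = sum((2 * p[i] * p[j])**2                         # heterozygote genotype frequencies squared
              for i in range(len(p)) for j in range(i + 1, len(p)))
    return homo + het

pm_per_locus = [matching_probability(p) for p in loci]
print("per-locus matching probabilities:", np.round(pm_per_locus, 4))
print("combined probability of matching:", np.prod(pm_per_locus))
```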
The Supermarket Model with Bounded Queue Lengths in Equilibrium
NASA Astrophysics Data System (ADS)
Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.
2018-04-01
In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^{−α} and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^{−α+(k−1)β}, and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
H2-rich interstellar grain mantles: An equilibrium description
NASA Technical Reports Server (NTRS)
Dissly, Richard W.; Allen, Mark; Anicich, Vincent G.
1994-01-01
Experiments simulating the codeposition of molecular hydrogen and water ice on interstellar grains demonstrate that amorphous water ice at 12 K can incorporate a substantial amount of H2, up to a mole ratio of H2/H2O = 0.53. We find that the physical behavior of approximately 80% of the hydrogen can be explained satisfactorily in terms of an equilibrium population, thermodynamically governed by a wide distribution of binding site energies. Such a description predicts that gas phase accretion could lead to mole fractions of H2 in interstellar grain mantles of nearly 0.3; for the probable conditions of WL5 in the rho Ophiuchi cloud, an H2 mole fraction of between 0.05 and 0.3 is predicted, in possible agreement with the observed abundance reported by Sandford, Allamandola, & Geballe. Accretion of gas phase H2 onto grain mantles, rather than photochemical production of H2 within the ice, could be a general explanation for frozen H2 in interstellar ices. We speculate on the implications of such a composition for grain mantle chemistry and physics.
Light-induced electronic non-equilibrium in plasmonic particles.
Kornbluth, Mordechai; Nitzan, Abraham; Seideman, Tamar
2013-05-07
We consider the transient non-equilibrium electronic distribution that is created in a metal nanoparticle upon plasmon excitation. Following light absorption, the created plasmons decohere within a few femtoseconds, producing uncorrelated electron-hole pairs. The corresponding non-thermal electronic distribution evolves in response to the photo-exciting pulse and to subsequent relaxation processes. First, on the femtosecond timescale, the electronic subsystem relaxes to a Fermi-Dirac distribution characterized by an electronic temperature. Next, within picoseconds, thermalization with the underlying lattice phonons leads to a hot particle in internal equilibrium that subsequently equilibrates with the environment. Here we focus on the early stage of this multistep relaxation process, and on the properties of the ensuing non-equilibrium electronic distribution. We consider the form of this distribution as derived from the balance between the optical absorption and the subsequent relaxation processes, and discuss its implication for (a) heating of illuminated plasmonic particles, (b) the possibility to optically induce current in junctions, and (c) the prospect for experimental observation of such light-driven transport phenomena.
Jet-conversion photons from an anisotropic quark-gluon plasma
NASA Astrophysics Data System (ADS)
Bhattacharya, Lusaka; Roy, Pradip
2010-10-01
We calculate the pT distributions of jet-conversion photons from a quark-gluon plasma with pre-equilibrium momentum-space anisotropy. A phenomenological model has been used for the time evolution of the hard momentum scale phard(τ) and the anisotropy parameter ξ(τ). As a result of pre-equilibrium momentum-space anisotropy, we find significant modification of the jet-conversion photon pT distribution. For example, for pre-equilibrium anisotropy with fixed initial conditions, we predict a significant enhancement of the jet-conversion photon pT distribution over the entire pT range, whereas for pre-equilibrium anisotropy with fixed final multiplicity (FFM), suppression of the jet-conversion photon pT distribution is observed. The results with FFM (the most realistic situation) have been compared with high-pT PHENIX photon data. It is found that the data are reproduced well if the isotropization time lies within 1.5 fm/c.
Kusaba, Akira; Li, Guanchen; von Spakovsky, Michael R; Kangawa, Yoshihiro; Kakimoto, Koichi
2017-08-15
Clearly understanding elementary growth processes that depend on surface reconstruction is essential to controlling vapor-phase epitaxy more precisely. In this study, ammonia chemical adsorption on GaN(0001) reconstructed surfaces under metalorganic vapor phase epitaxy (MOVPE) conditions (3Ga-H and Nad-H + Ga-H on a 2 × 2 unit cell) is investigated using steepest-entropy-ascent quantum thermodynamics (SEAQT). SEAQT is a thermodynamic-ensemble based, first-principles framework that can predict the behavior of non-equilibrium processes, even those far from equilibrium where the state evolution is a combination of reversible and irreversible dynamics. SEAQT is an ideal choice to handle this problem on a first-principles basis since the chemical adsorption process starts from a highly non-equilibrium state. A result of the analysis shows that the probability of adsorption on 3Ga-H is significantly higher than that on Nad-H + Ga-H. Additionally, the growth temperature dependence of these adsorption probabilities and the temperature increase due to the heat of reaction is determined. The non-equilibrium thermodynamic modeling applied can lead to better control of the MOVPE process through the selection of preferable reconstructed surfaces. The modeling also demonstrates the efficacy of DFT-SEAQT coupling for determining detailed non-equilibrium process characteristics with a much smaller computational burden than would be entailed with mechanics-based, microscopic-mesoscopic approaches.
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8 = 0.375 calculated for two-earner families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
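A minimal simulation sketch of the conserved-money picture described above, using one simple exchange rule (random pairwise pooling and re-splitting) that relaxes to the exponential Boltzmann-Gibbs form; population size and step count are illustrative:

```python
import random

# Sketch: N agents, conserved total money; at each step a random pair pools its
# money and splits it at a uniformly random fraction. Parameters are illustrative.
N, steps = 1000, 400000
money = [100.0] * N                      # everyone starts with the average, so T = 100

for _ in range(steps):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    total = money[i] + money[j]
    eps = random.random()
    money[i], money[j] = eps * total, (1.0 - eps) * total

# In equilibrium P(m) ~ exp(-m/T), with effective temperature T = average money.
money.sort()
ratio = money[N // 2] / (sum(money) / N)
print(f"median / mean = {ratio:.3f}  (exponential distribution gives ln 2 ~ 0.693)")
```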
Neigel, J E; Avise, J C
1993-12-01
In rapidly evolving molecules, such as animal mitochondrial DNA, mutations that delineate specific lineages may not be dispersed at sufficient rates to attain an equilibrium between genetic drift and gene flow. Here we predict conditions that lead to nonequilibrium geographic distributions of mtDNA lineages, test the robustness of these predictions and examine mtDNA data sets for consistency with our model. Under a simple isolation-by-distance model, the variance of an mtDNA lineage's geographic distribution is expected to be proportional to its age. Simulation results indicated that this relationship is fairly robust. Analysis of mtDNA data from natural populations revealed three qualitative distributional patterns: (1) significant departure of lineage structure from equilibrium geographic distributions, a pattern exhibited in three rodent species with limited dispersal; (2) nonsignificant departure from equilibrium expectations, exhibited by two avian and two marine fish species with potentials for relatively long-distance dispersal; and (3) a progression from nonequilibrium distributions for younger lineages to equilibrium distributions for older lineages, a condition displayed by one surveyed avian species. These results demonstrate the advantages of considering mutation and genealogy in the interpretation of mtDNA geographic variation.
Polak, Micha; Rubinovich, Leonid
2011-10-06
Nanoconfinement entropic effects on chemical equilibrium involving a small number of molecules, which we term NCECE, are revealed by two widely diverse types of reactions. Employing statistical-mechanical principles, we show how the NCECE effect stabilizes nucleotide dimerization observed within self-assembled molecular cages. Furthermore, the effect provides the basis for dimerization even under an aqueous environment inside the nanocage. Likewise, the NCECE effect is pertinent to a longstanding issue in astrochemistry, namely the extra deuteration commonly observed for molecules reacting on interstellar dust grain surfaces. The origin of the NCECE effect is elucidated by means of the probability distributions of the reaction extent and related variations in the reactant-product mixing entropy. Theoretical modelling beyond our previous preliminary work highlights the role of the nanospace size in addition to that of the nanosystem size, namely the limited amount of molecules in the reaction mixture. Furthermore, the NCECE effect can depend also on the reaction mechanism, and on deviations from stoichiometry. The NCECE effect, leading to enhanced, greatly variable equilibrium "constants", constitutes a unique physical-chemical phenomenon, distinguished from the usual thermodynamical properties of macroscopically large systems. Being significant particularly for weakly exothermic reactions, the effects should stabilize products in other closed nanoscale structures, and thus can have notable implications for the growing nanotechnological utilization of chemical syntheses conducted within confined nanoreactors.
Norris, Peter M; da Silva, Arlindo M
2016-07-01
A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
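As a bare-bones illustration of the MCMC machinery referred to above, here is a random-walk Metropolis sketch for a generic one-dimensional posterior; the target density, proposal width and step counts are invented for illustration and have nothing to do with the paper's cloud/moisture model or MODIS observables:

```python
import math
import random

# Stand-in log-posterior (a Gaussian), used only to demonstrate the sampler.
def log_post(x):
    return -0.5 * ((x - 1.0) / 0.3) ** 2

x, chain = 0.0, []
for _ in range(10000):
    prop = x + random.gauss(0.0, 0.2)                       # symmetric random-walk proposal
    log_alpha = min(0.0, log_post(prop) - log_post(x))      # Metropolis acceptance ratio
    if random.random() < math.exp(log_alpha):
        x = prop                                            # accept the move
    chain.append(x)                                         # store the current state

burn = chain[2000:]                                         # discard burn-in
print(f"posterior mean estimate ~ {sum(burn) / len(burn):.3f}")
```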
An equilibrium-conserving taxation scheme for income from capital
NASA Astrophysics Data System (ADS)
Tempere, Jacques
2018-02-01
Under conditions of market equilibrium, the distribution of capital income follows a Pareto power law, with an exponent that characterizes the given equilibrium. Here, a simple taxation scheme is proposed such that the post-tax capital income distribution remains an equilibrium distribution, albeit with a different exponent. This taxation scheme is shown to be progressive, and its parameters can be simply derived from (i) the total amount of tax that will be levied, (ii) the threshold selected above which capital income will be taxed and (iii) the total amount of capital income. The latter can be obtained either by using Piketty's estimates of the capital/labor income ratio or by fitting the initial Pareto exponent. Both ways moreover provide a check on the amount of declared income from capital.
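A rough sketch of the kind of Pareto-tail fitting mentioned above: synthetic capital incomes are drawn from a Pareto law above a threshold, and the exponent is recovered by maximum likelihood (Hill estimator). The threshold, exponent and sample size are invented, and the taxation scheme itself is not reproduced:

```python
import math
import random

# Illustrative Pareto tail: P(X > x) = (x / x_min)^(-a) for x >= x_min.
x_min, a, n = 1.0e4, 1.8, 50000
incomes = [x_min * (1.0 - random.random()) ** (-1.0 / a) for _ in range(n)]  # inverse-CDF sampling

# Maximum-likelihood (Hill) estimator of the Pareto exponent above x_min.
a_hat = n / sum(math.log(x / x_min) for x in incomes)
total = sum(incomes)

print(f"fitted Pareto exponent ~ {a_hat:.3f}, total capital income = {total:.3e}")
```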
NASA Astrophysics Data System (ADS)
Akıner, Tolga; Mason, Jeremy; Ertürk, Hakan
2017-11-01
The thermal properties of the TIP3P and TIP5P water models are investigated using equilibrium and non-equilibrium molecular dynamics techniques in the presence of solid surfaces. The performance of the non-equilibrium technique for rigid molecules is found to depend significantly on the distribution of atomic degrees of freedom. An improved approach to distribute atomic degrees of freedom is proposed for which the thermal conductivity of the TIP5P model agrees more closely with equilibrium molecular dynamics and experimental results than the existing state of the art.
Budget Allocation in a Competitive Communication Spectrum Economy
NASA Astrophysics Data System (ADS)
Lin, Ming-Hua; Tsai, Jung-Fa; Ye, Yinyu
2009-12-01
This study discusses how to adjust "monetary budget" to meet each user's physical power demand, or balance all individual utilities in a competitive "spectrum market" of a communication system. In the market, multiple users share a common frequency or tone band and each of them uses the budget to purchase its own transmit power spectra (taking others as given) in maximizing its Shannon utility or pay-off function that includes the effect of interferences. A market equilibrium is a budget allocation, price spectrum, and tone power distribution that independently and simultaneously maximizes each user's utility. The equilibrium conditions of the market are formulated and analyzed, and the existence of an equilibrium is proved. Computational results and comparisons between the competitive equilibrium and Nash equilibrium solutions are also presented, which show that the competitive market equilibrium solution often provides more efficient power distribution.
Collocation of equilibria in gravitational field of triangular body via mass redistribution
NASA Astrophysics Data System (ADS)
Burov, Alexander A.; Guerman, Anna D.; Nikonov, Vasily I.
2018-05-01
We consider a gravitating system with triangular mass distribution that can be used as an approximation of the gravitational field of small irregular celestial bodies. In such a system, the locations of equilibrium points, that is, the points where the gravitational forces are balanced, are analyzed. The goal is to find the mass distribution which provides equilibrium in a pre-assigned location near the triangular system, and to study the stability of this equilibrium.
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Yamamoto, Eiji
2016-12-01
Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
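As a minimal illustration of the time-averaged MSD observable discussed above, the following sketch computes it for a single ordinary Brownian trajectory; the heterogeneous-diffusivity (annealed transit time) dynamics of the paper is not implemented here, and all parameters are illustrative:

```python
import random

# One 1-D Brownian trajectory with homogeneous diffusion coefficient D (illustrative).
T, dt, D = 10000, 1.0, 0.5
x = [0.0]
for _ in range(T):
    x.append(x[-1] + random.gauss(0.0, (2 * D * dt) ** 0.5))

def ta_msd(traj, lag):
    """Time-averaged mean square displacement at a given lag."""
    n = len(traj) - lag
    return sum((traj[i + lag] - traj[i]) ** 2 for i in range(n)) / n

for lag in (10, 100, 1000):
    print(f"lag {lag:5d}: TA-MSD = {ta_msd(x, lag):9.2f}  (ensemble expectation ~ {2 * D * lag * dt:.1f})")
```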
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide.
Global behavior analysis for stochastic system of 1,3-PD continuous fermentation
NASA Astrophysics Data System (ADS)
Zhu, Xi; Kliemann, Wolfgang; Li, Chunfa; Feng, Enmin; Xiu, Zhilong
2017-12-01
The global behavior of a stochastic system for continuous fermentation of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae is analyzed in this paper. This bioprocess cannot avoid the stochastic perturbations caused by internal and external disturbances, which are reflected in the growth rate. These negative factors can limit and degrade the achievable performance of controlled systems. Based on multiplicity phenomena, the equilibria and bifurcations of the deterministic system are analyzed. Then, a stochastic model is presented as a bounded Markov diffusion process. In order to analyze the global behavior, we compute the control sets for the associated control system. The probability distributions of the relative supports are also computed. The simulation results indicate how the disturbed biosystem tends toward stationary behavior globally.
Combinatoric analysis of heterogeneous stochastic self-assembly.
D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom
2013-09-28
We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte-Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations and the discrepancies between the two approaches analyzed.
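For readers unfamiliar with the kinetic Monte-Carlo machinery mentioned above, here is a minimal Gillespie-type sketch of monomers attaching to and detaching from a fixed number of seeds; the rate constants, seed number and capacities are invented, and the scheme is a simplification of the paper's heterogeneous nucleation model:

```python
import math
import random

# Illustrative parameters: Ns seeds, each binding at most N particles, M particles total.
Ns, N, M = 10, 8, 60
k_on, k_off = 1.0, 0.05
sizes, free = [0] * Ns, M
t, t_end = 0.0, 50.0

while t < t_end:
    rates = []
    for s in range(Ns):                          # attachment channel for each unfilled seed
        rates.append(k_on * free if sizes[s] < N else 0.0)
    for s in range(Ns):                          # detachment channel for each occupied seed
        rates.append(k_off * sizes[s])
    total = sum(rates)
    if total == 0.0:
        break
    t += -math.log(random.random()) / total      # exponentially distributed waiting time
    r, acc, idx = random.random() * total, 0.0, 0
    while acc + rates[idx] < r:                  # pick the reaction channel
        acc += rates[idx]
        idx += 1
    if idx < Ns:
        sizes[idx] += 1; free -= 1               # attachment event
    else:
        sizes[idx - Ns] -= 1; free += 1          # detachment event

print("final cluster sizes:", sorted(sizes), " free monomers:", free)
```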
Charge state distribution of 86Kr in hydrogen and helium gas charge strippers at 2.7 MeV/nucleon
NASA Astrophysics Data System (ADS)
Kuboki, H.; Okuno, H.; Hasebe, H.; Fukunishi, N.; Ikezawa, E.; Imao, H.; Kamigaito, O.; Kase, M.
2014-12-01
The charge state distributions of krypton (86Kr) with an energy of 2.7 MeV/nucleon were measured using hydrogen (H2) and helium (He) gas charge strippers. A differential pumping system was constructed to confine H2 and He gases to a thickness sufficient for the charge state distributions to attain equilibrium. The mean charge states of 86Kr in H2 and He gases attained equilibrium at 25.1 and 23.2, respectively, whereas the mean charge state in N2 gas at equilibrium was estimated to be less than 20. The charge distributions are successfully reproduced by the cross sections of ionization and electron capture processes optimized by a fitting procedure.
Defense Strategies for Asymmetric Networked Systems with Discrete Components.
Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun
2018-05-03
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
Twin tubular pinch effect in curving confined flows
Clime, Liviu; Morton, Keith J.; Hoa, Xuyen D.; Veres, Teodor
2015-01-01
Colloidal suspensions of buoyancy-neutral particles flowing in circular pipes focus into narrow distributions near the wall due to lateral migration effects associated with fluid inertia. In curving flows, these distributions are altered by Dean currents, and the interplay between Reynolds and Dean numbers is used to predict equilibrium positions. Here, we propose a new description of inertial lateral migration in curving flows that expands current understanding of both focusing dynamics and equilibrium distributions. We find that at low Reynolds numbers, the ratio δ between lateral inertial migration and Dean forces scales simply with the particle radius, coil curvature and pipe radius. A critical value δc = 0.148 of this parameter is identified, along with two related inertial focusing mechanisms. In the regime below δc, coined subcritical, Dean forces generate permanently circulating, twinned annuli, each with intricate equilibrium particle distributions including eyes and trailing arms. At δ > δc (supercritical regime), inertial lateral migration forces are dominant and particles focus to a single stable equilibrium position.
Markov Processes: Exploring the Use of Dynamic Visualizations to Enhance Student Understanding
ERIC Educational Resources Information Center
Pfannkuch, Maxine; Budgett, Stephanie
2016-01-01
Finding ways to enhance introductory students' understanding of probability ideas and theory is a goal of many first-year probability courses. In this article, we explore the potential of a prototype tool for Markov processes using dynamic visualizations to develop in students a deeper understanding of the equilibrium and hitting times…
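A compact illustration of the two quantities the tool targets, computed for a small, arbitrarily chosen three-state Markov chain (the transition matrix is not from the article):

```python
import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.3, 0.5]])

# Equilibrium (stationary) distribution: left eigenvector of P with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Expected hitting times of state 2: h_2 = 0 and h_i = 1 + sum_j P_ij h_j for i != 2,
# which reduces to solving (I - Q) h = 1 over the non-target states.
target = 2
idx = [i for i in range(3) if i != target]
Q = P[np.ix_(idx, idx)]
h = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))

print("stationary distribution:", np.round(pi, 4))
print("expected hitting times of state 2 from states 0 and 1:", np.round(h, 3))
```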
MINTEQA2 is an equilibrium speciation model that can be used to calculate the equilibrium composition of dilute aqueous solutions in the laboratory or in natural aqueous systems. The model is useful for calculating the equilibrium mass distribution among dissolved species, adsorb...
Ozone chemical equilibrium in the extended mesopause under the nighttime conditions
NASA Astrophysics Data System (ADS)
Belikovich, M. V.; Kulikov, M. Yu.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.
2018-01-01
For the retrieval of atomic oxygen and atomic hydrogen via ozone observations in the extended mesopause region (∼70-100 km) under nighttime conditions, an assumption of photochemical equilibrium of ozone is often used. In this work, the assumption of chemical equilibrium of ozone near the mesopause region during nighttime is tested. We examine annual calculations of a 3D chemistry-transport model (CTM) and determine the ratio between the correct (modeled) distributions of the O3 density and its equilibrium values depending on altitude, latitude, and season. The results show that the retrieval of atomic oxygen and atomic hydrogen distributions using the assumption of ozone chemical equilibrium may lead to large errors below ∼81-87 km. We give a simple and clear semi-empirical criterion for the practical determination of the lower boundary of the region of ozone chemical equilibrium near the mesopause.
Changes in the distribution of radiocesium in the wood of Japanese cedar trees from 2011 to 2013.
Ogawa, Hideki; Hirano, Yurika; Igei, Shigemitsu; Yokota, Kahori; Arai, Shio; Ito, Hirohisa; Kumata, Atsushi; Yoshida, Hirohisa
2016-09-01
The changes in the distribution of (137)Cs in the wood of Japanese cedar (Cryptomeria japonica) trunks within three years after the Fukushima Dai-ichi Nuclear Power Plant (FDNP) accident in 2011 were investigated. Thirteen trees were felled to collect samples at 6 forests in 2 regions of the Fukushima prefecture. The radial distribution of (137)Cs in the wood was measured at different heights. Profiles of (137)Cs distribution in the wood changed considerably from 2011 to 2013, and the process of this change was clarified. From 2011 to 2012, the active transportation from sapwood to heartwood and the radial diffusion in heartwood proceeded quickly, and the radial (137)Cs distribution differed according to the vertical position in the trees. From 2012 to 2013, vertical diffusion of (137)Cs from the treetop to the ground, probably caused by the gradient of (137)Cs concentration in the trunk, was observed. Eventually, the radial (137)Cs distributions were nearly identical at all vertical positions in 2013. Our results suggest that the active transportation from sapwood to heartwood and the vertical and radial diffusion in heartwood proceeded according to the vertical position in the tree, and that the (137)Cs distribution in the wood approached the equilibrium state within three years after the accident.
NASA Astrophysics Data System (ADS)
Gastis, P.; Perdikakis, G.; Robertson, D.; Almus, R.; Anderson, T.; Bauder, W.; Collon, P.; Lu, W.; Ostdiek, K.; Skulski, M.
2016-04-01
Equilibrium charge state distributions of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm thick Mo foil were measured at beam energies of 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u, respectively. A 1-D position-sensitive Parallel Grid Avalanche Counter (PGAC) detector was used at the exit of a spectrograph magnet, enabling us to measure the intensity of several charge states simultaneously. The number of charge states measured for each beam constituted more than 99% of the total equilibrium charge state distribution for that element. Currently, little experimental data exists for equilibrium charge state distributions of heavy ions with 19 ≲ Zp, Zt ≲ 54 (Zp and Zt are the projectile's and target's atomic numbers, respectively). Hence, the success of semi-empirical models in predicting typical characteristics of equilibrium CSDs (mean charge states and distribution widths) has not been thoroughly tested in the energy region of interest. A number of semi-empirical models from the literature were evaluated in this study regarding their ability to reproduce the characteristics of the measured charge state distributions. The evaluated models were selected from the literature based on whether they are suitable for the given range of atomic numbers and on their frequent use by the nuclear physics community. Finally, an attempt was made to combine model predictions for the mean charge state, the distribution width and the distribution shape, to come up with a more reliable model. We discuss this new "combinatorial" prescription and compare its results with our experimental data and with calculations using the other semi-empirical models studied in this work.
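One common way to summarize an equilibrium CSD is a Gaussian parametrization in the charge state, given a mean charge and a width. The sketch below illustrates that bookkeeping with invented values; it is not one of the semi-empirical models evaluated in the study:

```python
import math

# Illustrative values only: projectile atomic number, mean charge and width.
Zp = 28                     # e.g. a Ni-like projectile, bounding the allowed charge range
q_mean, width = 19.0, 1.8

qs = range(0, Zp + 1)
weights = [math.exp(-0.5 * ((q - q_mean) / width) ** 2) for q in qs]
norm = sum(weights)
fractions = [w / norm for w in weights]

for q, f in zip(qs, fractions):
    if f > 0.01:                         # print only the populated charge states
        print(f"q = {q:2d}+ : fraction = {f:.3f}")
```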
The Distributive Issue in Latin America.
ERIC Educational Resources Information Center
Figueroa, Adolfo
1996-01-01
Presents the central features of an economic theory of social equilibrium based on the theory of distributive equilibrium. Uses the situation in Latin America in the 1980s and 1990s to test the validity of the theory. Argues that excessive inequality cripples sustained growth and democratic movements. (MJP)
Diffusion and Localization of Relative Strategy Scores in The Minority Game
NASA Astrophysics Data System (ADS)
Granath, Mats; Perez-Diaz, Alvaro
2016-10-01
We study the equilibrium distribution of relative strategy scores of agents in the asymmetric phase (α ≡ P/N ≳ 1) of the basic Minority Game using sign-payoff, with N agents holding two strategies over P histories. We formulate a statistical model that makes use of the gauge freedom with respect to the ordering of an agent's strategies to quantify the correlation between the attendance and the distribution of strategies. The relative score x ∈ ℤ of the two strategies of an agent is described in terms of a one dimensional random walk with asymmetric jump probabilities, leading either to a static and asymmetric exponential distribution centered at x = 0 for fickle agents or to diffusion with a positive or negative drift for frozen agents. In terms of scaled coordinates x/√N and t/N the distributions are uniquely given by α and in quantitative agreement with direct simulations of the game. As the model avoids the reformulation in terms of a constrained minimization problem it can be used for arbitrary payoff functions with little calculational effort and provides a transparent and simple formulation of the dynamics of the basic Minority Game in the asymmetric phase.
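A minimal simulation sketch of the biased one-dimensional random walk picture used above for fickle agents; the jump probabilities are invented rather than derived from the Minority Game itself:

```python
import random
from collections import Counter

# Illustrative walk: from any x != 0 the step goes toward 0 with probability p_toward
# (> 1/2), so the walk is recurrent and settles into an exponential profile around x = 0.
p_toward = 0.55
x, visits = 0, Counter()

for _ in range(200000):
    if x == 0:
        x += random.choice((-1, 1))
    else:
        step_toward = random.random() < p_toward
        x += -1 if (x > 0) == step_toward else 1
    visits[x] += 1

total = sum(visits.values())
for k in range(0, 6):
    print(f"P(x = {k}) ~ {visits[k] / total:.4f}")
```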
NASA Astrophysics Data System (ADS)
Chodera, John D.; Noé, Frank
2010-09-01
Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
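As a small illustration of how an equilibrium observable statistic follows from a discrete-state Markov model, the sketch below evaluates the equilibrium autocorrelation C(n) = Σ_ij π_i a_i [Tⁿ]_ij a_j for an arbitrary two-state model; it does not implement the Bayesian uncertainty machinery described above:

```python
import numpy as np

# Illustrative two-state transition matrix and per-state observable values.
T = np.array([[0.90, 0.10],
              [0.05, 0.95]])
a = np.array([1.0, 0.0])

# Stationary distribution pi (left eigenvector of T with eigenvalue 1).
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Equilibrium autocorrelation of the observable at lag n.
Tn = np.eye(2)
for n in range(6):
    C = float(np.einsum("i,i,ij,j->", pi, a, Tn, a))
    print(f"C({n}) = {C:.4f}")
    Tn = Tn @ T
```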
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demontis, Pierfranco; Suffritti, Giuseppe B., E-mail: pino@uniss.it
2016-09-07
As an attempt to explain some of the many anomalies and unresolved problems which have been reported about the dynamic behavior of particles and molecules absorbed in crystalline solids, the "reverse Mössbauer effect" (RME) is proposed. RME theory posits that a particle in a non-equilibrium state with respect to a crystal (colliding with the crystal or absorbed in it, but set out of thermal equilibrium by some external cause) is scattered by the whole crystal with a momentum proportional to a vector representing a reciprocal lattice point. The scattering is expected to occur with a well-defined probability, and the momentum transferable to the particle is expected to follow a predictable distribution. The RME theory, in practice, is an extension of the Bragg–von Laue scattering law to high-energy colliding particles in general, and can be applied to any particle or molecule colliding with the surface of a crystalline solid or absorbed in it, but not in thermal equilibrium with the crystal lattice. We verified the RME theory by considering a well-defined unresolved problem. In an experimental study of methane adsorbed in the zeolite Na-ZSM-5 [H. Jobic, Chem. Phys. Lett. 170, 217 (1990)] reporting neutron inelastic-scattering spectra (recoiled bands) at 10 K, the translational kinetic energy of methane turned out to be much higher than the expected equilibrium value, namely, about 85 K (or 7.3 meV). The author concluded that "the interpretation of this unusual behavior has yet to be found." In the present study, on the basis of the RME, an explanation of this behavior is put forward.
Pearlstein, Robert A; McKay, Daniel J J; Hornak, Viktor; Dickson, Callum; Golosov, Andrei; Harrison, Tyler; Velez-Vega, Camilo; Duca, José
2017-01-01
Cellular drug targets exist within networked function-generating systems whose constituent molecular species undergo dynamic interdependent non-equilibrium state transitions in response to specific perturbations (i.e., inputs). Cellular phenotypic behaviors are manifested through the integrated behaviors of such networks. However, in vitro data are frequently measured and/or interpreted with empirical equilibrium or steady state models (e.g. Hill, Michaelis-Menten, Briggs-Haldane) relevant to isolated target populations. We propose that cells act as analog computers, "solving" sets of coupled "molecular differential equations" (i.e. represented by populations of interacting species) via "integration" of the dynamic state probability distributions among those populations. Disconnects between biochemical and functional/phenotypic assays (cellular/in vivo) may arise with target-containing systems that operate far from equilibrium, and/or when coupled contributions (including target-cognate partner binding and drug pharmacokinetics) are neglected in the analysis of biochemical results. The transformation of drug discovery from a trial-and-error endeavor to one based on reliable design criteria depends on improved understanding of the dynamic mechanisms powering cellular function/dysfunction at the systems level. Here, we address the general mechanisms of molecular and cellular function and pharmacological modulation thereof. We outline a first-principles theory on the mechanisms by which free energy is stored and transduced into biological function, and by which biological function is modulated by drug-target binding. We propose that cellular function depends on dynamic counter-balanced molecular systems necessitated by the exponential behavior of molecular state transitions under non-equilibrium conditions, including positive versus negative mass action kinetics and solute-induced perturbations to the hydrogen bonds of solvating water versus kT.
Stochastic thermodynamics, fluctuation theorems and molecular machines.
Seifert, Udo
2012-12-01
Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation-dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
Alpha-Fair Resource Allocation under Incomplete Information and Presence of a Jammer
NASA Astrophysics Data System (ADS)
Altman, Eitan; Avrachenkov, Konstantin; Garnaev, Andrey
In the present work we deal with the concept of alpha-fair resource allocation in the situation where the decision maker (in our case, the base station) does not have complete information about the environment. Namely, we develop a concept of α-fairness under uncertainty to allocate power in the presence of a jammer under two types of uncertainty: (a) the decision maker does not have complete knowledge about the parameters of the environment, but knows only their distribution; (b) the jammer can come into the environment with some probability, bringing extra background noise. The goal of the decision maker is to maximize the α-fairness utility function with respect to the SNIR (signal to noise-plus-interference ratio). Here we consider a concept of the expected α-fairness utility function (short-term fairness) as well as fairness of expectation (long-term fairness). In the scenario with unknown parameters of the environment the most adequate approach is a zero-sum game, since it can also be viewed as a minimax problem for the decision maker playing against nature, where the decision maker has to apply the best allocation under the worst circumstances. In the scenario with uncertainty about whether the jammer is present in the system, the Nash equilibrium concept is employed since the agents have non-zero-sum payoffs: the decision maker would like to maximize either the expected fairness or the fairness of expectation, while the jammer would like to minimize the fairness if he comes on the scene. For all the scenarios considered, the equilibrium strategies are found in closed form. We have shown that for all the scenarios the equilibrium has to be constructed in two steps. In the first step the equilibrium jamming strategy has to be constructed based on a solution of the corresponding modification of the water-filling equation. In the second step the decision maker's equilibrium strategy has to be constructed by equalizing the background noise induced by the jammer.
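For orientation, the standard single-user water-filling allocation that the modified water-filling equation above generalizes can be sketched as follows; the noise levels and power budget are invented, and the jammer's modification is not reproduced:

```python
# Standard water-filling: maximize sum_i log(1 + p_i / n_i) subject to
# sum_i p_i = P_total and p_i >= 0, solved by bisection on the water level mu.
noise = [0.5, 1.0, 2.0, 4.0]     # illustrative per-channel noise levels
P_total = 4.0                    # illustrative total power budget

def waterfill(noise, P_total, iters=100):
    lo, hi = 0.0, max(noise) + P_total       # bounds on the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > P_total:
            hi = mu                           # level too high, lower it
        else:
            lo = mu                           # level too low, raise it
    mu = 0.5 * (lo + hi)
    return [max(mu - n, 0.0) for n in noise]

powers = waterfill(noise, P_total)
print("power allocation:", [round(p, 3) for p in powers])
```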
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deffner, Sebastian; Zurek, Wojciech H.
Envariance—entanglement assisted invariance—is a recently discovered symmetry of composite quantum systems. Here, we show that thermodynamic equilibrium states are fully characterized by their envariance. In particular, the microcanonical equilibrium of a system S with Hamiltonian H_S is a fully energetically degenerate quantum state envariant under every unitary transformation. A representation of the canonical equilibrium then follows from simply counting degenerate energy states. Finally, our conceptually novel approach is free of mathematically ambiguous notions such as ensemble, randomness, etc., and, while it does not even rely on probability, it helps to understand its role in the quantum world.
Equilibrium Conditions of Sediment Suspending Flows on Earth, Mars and Titan
NASA Astrophysics Data System (ADS)
Amy, L. A.; Dorrell, R. M.
2016-12-01
Sediment entrainment, erosion and deposition by liquid water on Earth is one of the key processes controlling planetary surface evolution. Similar modification of planetary surfaces by liquids associated with a volatile cycle is also inferred to have occurred on other planets (e.g., water on Mars and methane-ethane on Titan). Here we explore conditions for equilibrium flow - the threshold between net sediment erosion and deposition - on different planets. We use a new theoretical model for particle erosion-suspension-deposition: this model shows a better fit to empirical data than comparable suspension criteria (e.g., the Rouse number) since it takes into account both flow competence and capacity, as well as particle size distribution effects. Shear stresses required to initially entrain sediment and maintain equilibrium flow vary significantly, being several times lower on Mars and more than ten times lower on Titan, resulting principally from the lower gravities. On all planets it is harder to maintain equilibrium flow as sediment mixtures become more poorly sorted (higher shear stresses are needed as the standard deviation increases). In comparison to the large differences in critical shear stresses, critical slopes for equilibrium flow are similar across planets. Compared to Earth, equilibrium slopes on Mars should be slightly lower, whilst those on Titan will be higher or lower for organic and ice particle systems, respectively. Particle size distribution has a similar, order-of-magnitude effect on the equilibrium slope on each planet. The results highlight that whilst the reduced gravity on Titan and Mars significantly decreases the bed shear stress required for particle transport, it also proportionally affects the bed shear stress of the moving fluid, such that similar slope gradients are required for equilibrium flow; minor variations in equilibrium slopes are related to differences in the particle-fluid density contrasts as well as fluid viscosities. These results help explain why planetary surfaces share striking similarities in their present or past landscapes and show that particle size distribution is critical to sediment transport dynamics. Interestingly, particle size distribution may vary between planets depending on the particle compositions and weathering regimes, imposing differences in equilibrium conditions.
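A crude illustration of the gravity scaling discussed above, using a Shields-type threshold shear stress τ_c = θ_c (ρ_s − ρ_f) g D; this is not the paper's erosion-suspension-deposition model (which also handles capacity and size-distribution effects), and the densities and Shields parameter are illustrative:

```python
# Threshold shear stress tau_c = theta_c * (rho_s - rho_f) * g * D for one grain size,
# evaluated for Earth, Mars and Titan gravities with illustrative material densities.
theta_c = 0.045                 # critical Shields parameter (illustrative)
D = 1e-3                        # grain diameter in metres (illustrative)

cases = {
    "Earth (quartz in water)  ": (9.81, 2650.0, 1000.0),
    "Mars  (basalt in water)  ": (3.71, 3000.0, 1000.0),
    "Titan (ice in methane)   ": (1.35, 940.0, 450.0),
}
for name, (g, rho_s, rho_f) in cases.items():
    tau_c = theta_c * (rho_s - rho_f) * g * D
    print(f"{name}: tau_c ~ {tau_c:.3f} Pa")
```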
NASA Astrophysics Data System (ADS)
Shuai, Yanhua; Douglas, Peter M. J.; Zhang, Shuichang; Stolper, Daniel A.; Ellis, Geoffrey S.; Lawson, Michael; Lewan, Michael D.; Formolo, Michael; Mi, Jingkui; He, Kun; Hu, Guoyi; Eiler, John M.
2018-02-01
Multiply isotopically substituted molecules ('clumped' isotopologues) can be used as geothermometers because their proportions at isotopic equilibrium relative to a random distribution of isotopes amongst all isotopologues are functions of temperature. This has allowed measurements of clumped-isotope abundances to be used to constrain formation temperatures of several natural materials. However, kinetic processes during generation, modification, or transport of natural materials can also affect their clumped-isotope compositions. Herein, we show that methane generated experimentally by closed-system hydrous pyrolysis of shale or nonhydrous pyrolysis of coal yields clumped-isotope compositions consistent with an equilibrium distribution of isotopologues under some experimental conditions (temperature-time conditions corresponding to 'low,' 'mature,' and 'over-mature' stages of catagenesis), but can have non-equilibrium (i.e., kinetically controlled) distributions under other experimental conditions ('high' to 'over-mature' stages), particularly for pyrolysis of coal. Non-equilibrium compositions, when present, lead the measured proportions of clumped species to be lower than expected for equilibrium at the experimental temperature, and in some cases to be lower than a random distribution of isotopes (i.e., negative Δ18 values). We propose that the consistency with equilibrium for methane formed by relatively low temperature pyrolysis reflects local reversibility of isotope exchange reactions involving a reactant or transition state species during demethylation of one or more components of kerogen. Non-equilibrium clumped-isotope compositions occur under conditions where 'secondary' cracking of retained oil in shale or wet gas hydrocarbons (C2-5, especially ethane) in coal is prominent. We suggest these non-equilibrium isotopic compositions are the result of the expression of kinetic isotope effects during the irreversible generation of methane from an alkyl precursor. Other interpretations are also explored. These findings provide new insights into the chemistry of thermogenic methane generation, and may provide an explanation of the elevated apparent temperatures recorded by the methane clumped-isotope thermometer in some natural gases. However, it remains unknown if the laboratory experiments capture the processes that occur at the longer time and lower temperatures of natural gas formation.
Modeling a distribution of point defects as misfitting inclusions in stressed solids
NASA Astrophysics Data System (ADS)
Cai, W.; Sills, R. B.; Barnett, D. M.; Nix, W. D.
2014-05-01
The chemical equilibrium distribution of point defects modeled as non-overlapping, spherical inclusions with purely positive dilatational eigenstrain in an isotropically elastic solid is derived. The compressive self-stress inside existing inclusions must be excluded from the stress dependence of the equilibrium concentration of the point defects, because it does no work when a new inclusion is introduced. On the other hand, a tensile image stress field must be included to satisfy the boundary conditions in a finite solid. Through the image stress, existing inclusions promote the introduction of additional inclusions. This is contrary to the prevailing approach in the literature in which the equilibrium point defect concentration depends on a homogenized stress field that includes the compressive self-stress. The shear stress field generated by the equilibrium distribution of such inclusions is proved to be proportional to the pre-existing stress field in the solid, provided that the magnitude of the latter is small, so that a solid containing an equilibrium concentration of point defects can be described by a set of effective elastic constants in the small-stress limit.
Ensemble theory for slightly deformable granular matter.
Tejada, Ignacio G
2014-09-01
Given a granular system of slightly deformable particles, it is possible to obtain different static and jammed packings subjected to the same macroscopic constraints. These microstates can be compared in a mathematical space defined by the components of the force-moment tensor (i.e. the product of the equivalent stress by the volume of the Voronoi cell). In order to explain the statistical distributions observed there, an athermal ensemble theory can be used. This work proposes a formalism (based on developments of the original theory of Edwards and collaborators) that considers both the internal and the external constraints of the problem. The former give the density of states of the points of this space, and the latter give their statistical weight. The internal constraints are those caused by the intrinsic features of the system (e.g. size distribution, friction, cohesion). They, together with the force-balance condition, determine which the possible local states of equilibrium of a particle are. Under the principle of equal a priori probabilities, and when no other constraints are imposed, it can be assumed that particles are equally likely to be found in any one of these local states of equilibrium. Then a flat sampling over all these local states turns into a non-uniform distribution in the force-moment space that can be represented with density of states functions. Although these functions can be measured, some of their features are explored in this paper. The external constraints are those macroscopic quantities that define the ensemble and are fixed by the protocol. The force-moment, the volume, the elastic potential energy and the stress are some examples of quantities that can be expressed as functions of the force-moment. The associated ensembles are included in the formalism presented here.
Remarks on the chemical Fokker-Planck and Langevin equations: Nonphysical currents at equilibrium.
Ceccato, Alessandro; Frezzato, Diego
2018-02-14
The chemical Langevin equation and the associated chemical Fokker-Planck equation are well-known continuous approximations of the discrete stochastic evolution of reaction networks. In this work, we show that these approximations suffer from a physical inconsistency, namely, the presence of nonphysical probability currents at the thermal equilibrium even for closed and fully detailed-balanced kinetic schemes. An illustration is given for a model case.
Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia
2017-11-23
The traditional replicator dynamics model and the corresponding concept of an evolutionarily stable strategy (ESS) only take into account whether the system can return to equilibrium after being subjected to a small disturbance. In the real world, due to continuous noise, the ESS of the system may not be stochastically stable. In this paper, a model of the voluntary public goods game with punishment is studied in a stochastic setting. Unlike the existing model, we describe the evolutionary process of strategies in the population as a generalized quasi-birth-and-death process, and we investigate the stochastically stable equilibrium (SSE) instead. By numerical experiments, we obtain all possible SSEs of the system for any combination of parameters, and investigate the influence of the parameters on the probability that the system selects different equilibria. It is found that in the stochastic setting, the introduction of the punishment and non-participation strategies can change the evolutionary dynamics of the system and the equilibrium of the game. There is a large range of parameters over which the system selects the cooperative states as its SSE with high probability. This result provides insight into, and a control method for, the evolution of cooperation in the public goods game in stochastic situations.
NASA Technical Reports Server (NTRS)
Paquette, John A.; Nuth, Joseph A., III
2011-01-01
Classical nucleation theory has been used in models of dust nucleation in circumstellar outflows around oxygen-rich asymptotic giant branch stars. One objection to the application of classical nucleation theory (CNT) to astrophysical systems of this sort is that an equilibrium distribution of clusters (assumed by CNT) is unlikely to exist in such conditions due to a low collision rate of condensable species. A model of silicate grain nucleation and growth was modified to evaluate the effect of a nucleation flux orders of magnitude below the equilibrium value. The results show that a lack of chemical equilibrium has only a small effect on the ultimate grain distribution.
NASA Astrophysics Data System (ADS)
Crisanti, A.; Sarracino, A.; Zannetti, M.
2017-05-01
We study analytically the probability distribution of the heat released by an ensemble of harmonic oscillators to the thermal bath, in the nonequilibrium relaxation process following a temperature quench. We focus on the asymmetry properties of the heat distribution in the nonstationary dynamics, in order to study the forms taken by the fluctuation theorem as the number of degrees of freedom is varied. After analyzing in great detail the cases of one and two oscillators, we consider the limit of a large number of oscillators, where the behavior of fluctuations is enriched by a condensation transition with a nontrivial phase diagram, characterized by reentrant behavior. Numerical simulations confirm our analytical findings. We also discuss and highlight how concepts borrowed from the study of fluctuations in equilibrium under symmetry-breaking conditions [Gaspard, J. Stat. Mech. (2012) P08021, 10.1088/1742-5468/2012/08/P08021] turn out to be quite useful in understanding the deviations from the standard fluctuation theorem.
Power-law decay exponents: A dynamical criterion for predicting thermalization
NASA Astrophysics Data System (ADS)
Távora, Marco; Torres-Herrera, E. J.; Santos, Lea F.
2017-01-01
From the analysis of the relaxation process of isolated lattice many-body quantum systems quenched far from equilibrium, we deduce a criterion for predicting when they are certain to thermalize. It is based on the algebraic behavior ∝ t^{-γ} of the survival probability at long times. We show that the value of the power-law exponent γ depends on the shape and filling of the weighted energy distribution of the initial state. Two scenarios are explored in detail: γ ≥ 2 and γ < 1. Exponents γ ≥ 2 imply that the energy distribution of the initial state is ergodically filled and the eigenstates are uncorrelated, so thermalization is guaranteed to happen. In this case, the power-law behavior is caused by bounds in the energy spectrum. Decays with γ < 1 emerge when the energy eigenstates are correlated and signal lack of ergodicity. They are typical of systems undergoing localization due to strong onsite disorder and are found also in clean integrable systems.
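For readers who want to apply such a criterion to their own data, a minimal sketch of how the exponent γ could be extracted from the long-time tail of a survival probability is given below; the synthetic data, parameter values and variable names are ours, not taken from the paper.

import numpy as np

# Synthetic long-time tail of a survival probability, SP(t) ~ C * t**(-gamma)
# (illustrative only; in practice SP(t) would come from the quench dynamics).
rng = np.random.default_rng(0)
t = np.logspace(1, 3, 200)
gamma_true = 2.3
sp = 0.5 * t**(-gamma_true) * np.exp(0.05 * rng.standard_normal(t.size))

# Least-squares fit of log SP versus log t; the slope estimates -gamma.
slope, intercept = np.polyfit(np.log(t), np.log(sp), 1)
gamma_est = -slope
print(f"estimated gamma = {gamma_est:.2f}")
# gamma >= 2 would indicate an ergodically filled energy distribution
# (thermalization expected); gamma < 1 would signal correlated eigenstates.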
The effects of distributed life cycles on the dynamics of viral infections.
Campos, Daniel; Méndez, Vicenç; Fedotov, Sergei
2008-09-21
We explore the role of cellular life cycles for viruses and host cells in an infection process. For this purpose, we derive a generalized version of the basic model of virus dynamics (Nowak, M.A., Bangham, C.R.M., 1996. Population dynamics of immune responses to persistent viruses. Science 272, 74-79) from a mesoscopic description. In its final form the model can be written as a set of Volterra integrodifferential equations. We consider the role of distributed lifespans and an intracellular (eclipse) phase. These processes are implemented by means of probability distribution functions. The basic reproductive ratio R(0) of the infection is properly defined in terms of such distributions by using an analysis of the equilibrium states and their stability. It is concluded that the introduction of distributed delays can strongly modify both the value of R(0) and the predictions for the virus loads, so the effects on the infection dynamics are of major importance. We also show how the model presented here can be applied to some simple situations where direct comparison with experiments is possible. Specifically, phage-bacteria interactions are analyzed. The dynamics of the eclipse phase for phages is characterized analytically, which allows us to compare the performance of three different fittings proposed before for the one-step growth curve.
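For orientation, the undelayed "basic model of virus dynamics" that this work generalizes can be integrated in a few lines; the sketch below uses placeholder parameter values and our own notation, and omits the distributed-delay (Volterra integrodifferential) structure that is the paper's actual contribution.

import numpy as np
from scipy.integrate import solve_ivp

# Basic model of virus dynamics (Nowak & Bangham 1996):
#   dT/dt = lam - d*T - beta*T*V   (uninfected target cells)
#   dI/dt = beta*T*V - a*I         (infected cells)
#   dV/dt = k*I - u*V              (free virus)
lam, d, beta, a, k, u = 10.0, 0.1, 2e-3, 0.5, 50.0, 5.0

def rhs(t, y):
    T, I, V = y
    return [lam - d*T - beta*T*V,
            beta*T*V - a*I,
            k*I - u*V]

R0 = beta * k * lam / (d * a * u)   # basic reproductive ratio of the undelayed model
sol = solve_ivp(rhs, (0.0, 200.0), [lam/d, 0.0, 1e-3], max_step=0.1)
print(f"R0 = {R0:.2f}, final virus load V = {sol.y[2, -1]:.3g}")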
Exploring sensitivity of a multistate occupancy model to inform management decisions
Green, A.W.; Bailey, L.L.; Nichols, J.D.
2011-01-01
Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first example involves a three-state dynamic model involving occupied states with and without successful reproduction (California spotted owl Strix occidentalis occidentalis), and the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog Lithobates sylvatica breeding and metamorphosis). In many ways, multistate sensitivity metrics behave similarly to standard occupancy sensitivities. When equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate framework. Assuming the system is near equilibrium, our sensitivity analyses illustrate how to investigate the sensitivity of the system-specific equilibrium state(s) to changes in transition rates. Because management will typically act on these transition rates, sensitivity analyses can provide valuable information about the potential influence of different actions and when it may be prudent to shift the focus of management among the various transition rates. © 2011 The Authors. Journal of Applied Ecology © 2011 British Ecological Society.
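For reference, in the standard single-state dynamic occupancy model with colonisation probability γ and local extinction probability ε, the equilibrium occupancy and its sensitivities have the simple closed form below; this is our illustration of the kind of quantity the multistate expressions generalize, not a formula quoted from the paper.

\[
\psi^{*}=\frac{\gamma}{\gamma+\epsilon},\qquad
\frac{\partial\psi^{*}}{\partial\gamma}=\frac{\epsilon}{(\gamma+\epsilon)^{2}}=\frac{1-\psi^{*}}{\gamma+\epsilon},\qquad
\frac{\partial\psi^{*}}{\partial(1-\epsilon)}=\frac{\gamma}{(\gamma+\epsilon)^{2}}=\frac{\psi^{*}}{\gamma+\epsilon},
\]

so sensitivity to colonisation dominates when equilibrium occupancy is low and sensitivity to persistence dominates when it is high, consistent with the multistate pattern described above.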
SN 1987A - The evolution from red to blue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuchman, Y.; Wheeler, J.C.
1989-11-01
Envelope models in thermal and dynamic equilibrium are used to explore the nature of the transition of SK -69 deg 202, the progenitor of SN 1987A, from the Hayashi track to its final blue position in the H-R diagram. Loci of possible thermal equilibrium solutions are presented as a function of Teff and M(C/O), the mass of the carbon/oxygen core interior to the helium burning shell. It is found that uniform helium enrichment of the envelope results in red-blue evolution but that the resulting blue solution is much hotter than SK -69 deg 202. Solutions in which the only change is to redistribute the portion of the envelope enriched in helium during main-sequence convective core contraction into a step function with Y of about 0.5 at a mass cut of about 10 solar masses give a natural transition from red to blue and a final value of Teff in agreement with observations. It is argued that SK -69 deg 202 probably fell on a post-Hayashi track sequence at moderate Teff. The possible connection of this sequence to the step distribution in the H-R diagram of the LMC is discussed.
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
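For comparison, the standard rejection-free kinetic Monte Carlo step that such a scheme is benchmarked against can be sketched in a few lines; the rates are generic placeholders, and this is the conventional algorithm (with its assumed exponential waiting-time distribution), not the new event-driven algorithm described above.

import numpy as np

rng = np.random.default_rng(1)

def kmc_step(rates, t):
    """One rejection-free kinetic Monte Carlo step: 'rates' are the transition
    rates out of the current configuration; returns the chosen move and the
    advanced time."""
    total = rates.sum()
    dt = rng.exponential(1.0 / total)              # assumed exponential waiting time
    move = rng.choice(rates.size, p=rates / total) # move chosen proportionally to its rate
    return move, t + dt

# Example: three possible transitions out of the current configuration
rates = np.array([0.2, 1.5, 0.05])
move, t_new = kmc_step(rates, t=0.0)
print(move, t_new)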
Kinetics of autocatalysis in small systems
NASA Astrophysics Data System (ADS)
Arslan, Erdem; Laurenzi, Ian J.
2008-01-01
Autocatalysis is a ubiquitous chemical process that drives a plethora of biological phenomena, including the self-propagation of prions etiological to the Creutzfeldt-Jakob disease and bovine spongiform encephalopathy. To explain the dynamics of these systems, we have solved the chemical master equation for the irreversible autocatalytic reaction A +B→2A. This solution comprises the first closed form expression describing the probabilistic time evolution of the populations of autocatalytic and noncatalytic molecules from an arbitrary initial state. Grand probability distributions are likewise presented for autocatalysis in the equilibrium limit (A+B⇌2A), allowing for the first mechanistic comparison of this process with chemical isomerization (B⇌A) in small systems. Although the average population of autocatalytic (i.e., prion) molecules largely conforms to the predictions of the classical "rate law" approach in time and the law of mass action at equilibrium, thermodynamic differences between the entropies of isomerization and autocatalysis are revealed, suggesting a "mechanism dependence" of state variables for chemical reaction processes. These results demonstrate the importance of chemical mechanism and molecularity in the development of stochastic processes for chemical systems and the relationship between the stochastic approach to chemical kinetics and nonequilibrium thermodynamics.
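Closed-form master-equation results such as these are conveniently checked against direct stochastic simulation; a minimal Gillespie-type sketch for the irreversible channel A + B → 2A (rate constant and initial populations are placeholders) is:

import numpy as np

rng = np.random.default_rng(2)

def ssa_autocatalysis(nA, nB, k, t_end):
    """Gillespie simulation of the irreversible reaction A + B -> 2A."""
    t, traj = 0.0, [(0.0, nA)]
    while nB > 0:
        a = k * nA * nB              # propensity of A + B -> 2A
        t += rng.exponential(1.0 / a)
        if t > t_end:
            break
        nA, nB = nA + 1, nB - 1      # one B molecule converted into A
        traj.append((t, nA))
    return traj

traj = ssa_autocatalysis(nA=5, nB=95, k=0.01, t_end=50.0)
print(f"final A population: {traj[-1][1]}")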
Correlated Fluctuations in Strongly Coupled Binary Networks Beyond Equilibrium
NASA Astrophysics Data System (ADS)
Dahmen, David; Bos, Hannah; Helias, Moritz
2016-07-01
Randomly coupled Ising spins constitute the classical model of collective phenomena in disordered systems, with applications covering glassy magnetism and frustration, combinatorial optimization, protein folding, stock market dynamics, and social dynamics. The phase diagram of these systems is obtained in the thermodynamic limit by averaging over the quenched randomness of the couplings. However, many applications require the statistics of activity for a single realization of the possibly asymmetric couplings in finite-sized networks. Examples include reconstruction of couplings from the observed dynamics, representation of probability distributions for sampling-based inference, and learning in the central nervous system based on the dynamic and correlation-dependent modification of synaptic connections. The systematic cumulant expansion for kinetic binary (Ising) threshold units with strong, random, and asymmetric couplings presented here goes beyond mean-field theory and is applicable outside thermodynamic equilibrium; a system of approximate nonlinear equations predicts average activities and pairwise covariances in quantitative agreement with full simulations down to hundreds of units. The linearized theory yields an expansion of the correlation and response functions in collective eigenmodes, leads to an efficient algorithm solving the inverse problem, and shows that correlations are invariant under scaling of the interaction strengths.
Nonequilibrium thermodynamics and information theory: basic concepts and relaxing dynamics
NASA Astrophysics Data System (ADS)
Altaner, Bernhard
2017-11-01
Thermodynamics is based on the notions of energy and entropy. While energy is the elementary quantity governing physical dynamics, entropy is the fundamental concept in information theory. In this work, starting from first principles, we give a detailed didactic account of the relations between energy and entropy and thus physics and information theory. We show that thermodynamic process inequalities, like the second law, are equivalent to the requirement that an effective description for physical dynamics is strongly relaxing. From the perspective of information theory, strongly relaxing dynamics govern the irreversible convergence of a statistical ensemble towards the maximally non-committal probability distribution that is compatible with thermodynamic equilibrium parameters. In particular, Markov processes that converge to a thermodynamic equilibrium state are strongly relaxing. Our framework generalizes previous results to arbitrary open and driven systems, yielding novel thermodynamic bounds for idealized and real processes.
The physicist's companion to current fluctuations: one-dimensional bulk-driven lattice gases
NASA Astrophysics Data System (ADS)
Lazarescu, Alexandre
2015-12-01
One of the main features of statistical systems out of equilibrium is the currents they exhibit in their stationary state: microscopic currents of probability between configurations, which translate into macroscopic currents of mass, charge, etc. Understanding the general behaviour of these currents is an important step towards building a universal framework for non-equilibrium steady states akin to the Gibbs-Boltzmann distribution for equilibrium systems. In this review, we consider one-dimensional bulk-driven particle gases, and in particular the asymmetric simple exclusion process (ASEP) with open boundaries, which is one of the most popular models of one-dimensional transport. We focus, in particular, on the current of particles flowing through the system in its steady state, and on its fluctuations. We show how one can obtain the complete statistics of that current, through its large deviation function, by combining results from various methods: exact calculation of the cumulants of the current, using the integrability of the model; direct diagonalization of a biased process in the limits of very high or low current; hydrodynamic description of the model in the continuous limit using the macroscopic fluctuation theory. We give a pedagogical account of these techniques, starting with a quick introduction to the necessary mathematical tools, as well as a short overview of the existing works relating to the ASEP. We conclude by drawing the complete dynamical phase diagram of the current. We also remark on a few possible generalizations of these results.
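For readers new to the model, a bare-bones random-sequential simulation of the open boundary-driven exclusion process, measuring a quantity proportional to the stationary current, might look as follows; the rates alpha, beta, p are generic placeholders, and this brute-force sketch is not one of the exact or biased-ensemble methods reviewed above.

import numpy as np

rng = np.random.default_rng(3)

def tasep_current(L=50, alpha=0.6, beta=0.6, p=1.0, sweeps=20000):
    """Open-boundary exclusion process with random-sequential updates:
    injection with probability alpha, forward hops with probability p,
    extraction with probability beta. Returns the number of particles
    extracted per sweep, proportional to the stationary current."""
    sites = np.zeros(L, dtype=int)
    exits = 0
    for _ in range(sweeps):
        for _ in range(L):
            i = rng.integers(-1, L)          # -1 plays the role of the entry reservoir
            if i == -1:
                if sites[0] == 0 and rng.random() < alpha:
                    sites[0] = 1
            elif i == L - 1:
                if sites[L - 1] == 1 and rng.random() < beta:
                    sites[L - 1] = 0
                    exits += 1
            elif sites[i] == 1 and sites[i + 1] == 0 and rng.random() < p:
                sites[i], sites[i + 1] = 0, 1
    return exits / sweeps

print(tasep_current(L=20, sweeps=5000))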
The concept of temperature in space plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2017-12-01
Independently of the initial distribution function, once the system is thermalized, its particles are stabilized into a specific distribution function parametrized by a temperature. Classical particle systems in thermal equilibrium have their phase-space distribution stabilized into a Maxwell-Boltzmann function. In contrast, space plasmas are particle systems frequently described by stationary states out of thermal equilibrium, namely, their distribution is stabilized into a function that is typically described by kappa distributions. The temperature is well-defined for systems at thermal equilibrium or stationary states described by kappa distributions. This is based on the equivalence of the two fundamental definitions of temperature, that is (i) the kinetic definition of Maxwell (1866) and (ii) the thermodynamic definition of Clausius (1862). This equivalence holds either for Maxwellians or kappa distributions, leading also to the equipartition theorem. The temperature and kappa index (together with density) are globally independent parameters characterizing the kappa distribution. While there is no equation of state or any universal relation connecting these parameters, various local relations may exist along the streamlines of space plasmas. Observations revealed several types of such local relations among plasma thermal parameters.
Assessment of Stable Isotope Distribution in Complex Systems
NASA Astrophysics Data System (ADS)
He, Y.; Cao, X.; Wang, J.; Bao, H.
2017-12-01
Biomolecules in living organisms have the potential to approach chemical steady state and even apparent isotope equilibrium because enzymatic reactions are intrinsically reversible. If an apparent local equilibrium can be identified, enzymatic reversibility and its controlling factors may be quantified, which helps to understand complex biochemical processes. Earlier research on isotope fractionation tends to focus on specific processes and mostly compares two different chemical species. Using linear regression, "thermodynamic order", which refers to correlated δ13C and 13β values, has been proposed by Galimov et al. to be present among many biomolecules. However, the concept of "thermodynamic order" they proposed and the approach they used have been questioned. Here, we propose that the deviation of a complex system from its equilibrium state can be rigorously described as a graph problem, as treated in discrete mathematics. The deviation of the isotope distribution from the equilibrium state and apparent local isotope equilibrium among a subset of biomolecules can be assessed using an apparent fractionation difference matrix (|Δα|). Applying the |Δα| matrix analysis to earlier published data on amino acids, we show the existence of apparent local equilibrium among different amino acids in potato and a kind of green alga. The existence of apparent local equilibrium is in turn consistent with the notion that enzymatic reactions can be reversible even in living systems. The result also implies that previous emphasis on external carbon source intake may be misplaced when studying isotope distribution in physiology. In addition to the identification of local equilibrium among biomolecules, the difference matrix approach has the potential to explore chemical or isotope equilibrium states in extraterrestrial bodies, to distinguish living from non-living systems, and to classify living species. This approach will benefit from large amounts of systematic data and advanced pattern recognition techniques.
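A sketch of how such an apparent fractionation difference matrix could be assembled is given below; the pairwise definition used here (observed fractionations from measured δ13C values compared against equilibrium fractionations from 13β factors) is our reading of the approach, and all numbers are invented placeholders.

import numpy as np

# delta13C values (permil) of a few biomolecules and the corresponding
# equilibrium 13C beta factors at the temperature of interest
# (all numbers are invented placeholders).
delta = np.array([-25.0, -22.5, -27.1, -24.0])
beta = np.array([1.110, 1.112, 1.108, 1.111])

# Observed and equilibrium pairwise fractionation factors alpha_ij
alpha_obs = (1000.0 + delta[:, None]) / (1000.0 + delta[None, :])
alpha_eq = beta[:, None] / beta[None, :]

# Apparent fractionation difference matrix: small entries flag pairs of
# compounds that are close to apparent local isotope equilibrium.
dalpha = np.abs(alpha_obs - alpha_eq)
print(np.round(1000 * dalpha, 2))   # in permil-equivalent units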
Modelling non-equilibrium thermodynamic systems from the speed-gradient principle.
Khantuleva, Tatiana A; Shalymov, Dmitry S
2017-03-06
The application of the speed-gradient (SG) principle to the non-equilibrium distribution systems far away from thermodynamic equilibrium is investigated. The options for applying the SG principle to describe the non-equilibrium transport processes in real-world environments are discussed. Investigation of a non-equilibrium system's evolution at different scale levels via the SG principle allows for a fresh look at the thermodynamics problems associated with the behaviour of the system entropy. Generalized dynamic equations for finite and infinite number of constraints are proposed. It is shown that the stationary solution to the equations, resulting from the SG principle, entirely coincides with the locally equilibrium distribution function obtained by Zubarev. A new approach to describe time evolution of systems far from equilibrium is proposed based on application of the SG principle at the intermediate scale level of the system's internal structure. The problem of the high-rate shear flow of viscous fluid near the rigid plane plate is discussed. It is shown that the SG principle allows closed mathematical models of non-equilibrium processes to be constructed. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
Chapman, Brian
2017-06-01
This paper seeks to develop a more thermodynamically sound pedagogy for students of biological transport than is currently available from either of the competing schools of linear non-equilibrium thermodynamics (LNET) or Michaelis-Menten kinetics (MMK). To this end, a minimal model of facilitated diffusion was constructed comprising four reversible steps: cis-substrate binding, cis → trans bound enzyme shuttling, trans-substrate dissociation and trans → cis free enzyme shuttling. All model parameters were subject to the second law constraint of the probability isotherm, which determined the unidirectional and net rates for each step and for the overall reaction through the law of mass action. Rapid equilibration scenarios require sensitive 'tuning' of the thermodynamic binding parameters to the equilibrium substrate concentration. All non-equilibrium scenarios show sigmoidal force-flux relations, with only a minority of cases having their quasi-linear portions close to equilibrium. Few cases fulfil the expectations of MMK relating reaction rates to enzyme saturation. This new approach illuminates and extends the concept of rate-limiting steps by focusing on the free energy dissipation associated with each reaction step and thereby deducing its respective relative chemical impedance. The crucial importance of an enzyme's being thermodynamically 'tuned' to its particular task, dependent on the cis- and trans-substrate concentrations with which it deals, is consistent with the occurrence of numerous isoforms for enzymes that transport a given substrate in physiologically different circumstances. This approach to kinetic modelling, being aligned with neither MMK nor LNET, is best described as intuitive non-equilibrium thermodynamics, and is recommended as a useful adjunct to the design and interpretation of experiments in biotransport.
Schlecht, William; Dong, Wen-Ji
2017-10-18
Several studies have suggested that conformational dynamics are important in the regulation of thin filament activation in cardiac troponin C (cTnC); however, little direct evidence has been offered to support these claims. In this study, a dye homodimerization approach is developed and implemented that allows the determination of the dynamic equilibrium between open and closed conformations in cTnC's hydrophobic cleft. Modulation of this equilibrium by Ca2+, cardiac troponin I (cTnI), cardiac troponin T (cTnT), Ca2+-sensitizers, and a Ca2+-desensitizing phosphomimic of cTnT (cTnT(T204E)) is characterized. Isolated cTnC contained a small open conformation population in the absence of Ca2+ that increased significantly upon the addition of saturating levels of Ca2+. This suggests that the Ca2+-induced activation of the thin filament arises from an increase in the probability of hydrophobic cleft opening. The inclusion of cTnI increased the population of open cTnC, and the inclusion of cTnT had the opposite effect. Samples containing the Ca2+-desensitizing cTnT(T204E) showed a slight but insignificant decrease in open conformation probability compared to samples with wild-type cardiac troponin T [cTnT(wt)], while Ca2+-sensitizer-treated samples generally increased open conformation probability. These findings show that the equilibrium between the open and closed conformations of cTnC's hydrophobic cleft plays a significant role in tuning the Ca2+ sensitivity of the heart.
NASA Astrophysics Data System (ADS)
Centrella, Stephen; Vrijmoed, Johannes C.; Putnis, Andrew; Austrheim, Håkon
2017-04-01
The importance of heterogeneous stress and pressure distribution within a rock has been established over the last decades (see review in Tajčmanová et al., 2015). During a hydration reaction, depending on whether the system is open to mass transfer, the volume changes of the reaction may be accommodated by removing material into the fluid phase that leaves the system (Centrella et al., 2015; Centrella et al., 2016). The magnitudes and the spatial distribution of stress and pressure that evolve during such processes are largely unknown. We present here a natural example from the Bergen Arcs in Norway where a granulite is hydrated at amphibolite facies conditions. Granulitic garnet is associated with kyanite and quartz on one side, and amphibole-biotite on the other side. The first pair replaces the plagioclase of the granulite matrix, whereas the second replaces the garnet. We use electron probe microanalysis (EPMA) and X-ray mapping to investigate the spatial and possible temporal relationships between these two parageneses. Gresens' analysis has been used to determine the mass balance and the local volume changes associated with the two reactions. The reaction to kyanite+quartz induces a loss in volume compared to the original plagioclase, whereas the second reaction, to amphibole+biotite, gains volume compared to the original garnet. The specific mass evolution associated with both reactions suggests a local mass balance probably associated with a single hydration event. Using the methodology of Vrijmoed & Podladchikov (2015) we test whether the microstructure may be partly related to the local stress heterogeneity around the garnet inclusion. We evaluate the phase assemblage and distribution at chemical equilibrium under a given input pressure field that can be computed with the Thermolab software. By varying the input pressure field using the Finite Element Method and comparing the resulting equilibrium assemblage to the real data, an estimate of the local stress and pressure distribution around the garnet inclusion is obtained. The differences between the equilibrium model and the observations are discussed. References: Centrella, S., Austrheim, H., and Putnis, A., 2015, Coupled mass transfer through a fluid phase and volume preservation during the hydration of granulite: An example from the Bergen Arcs, Norway: Lithos, 236-237, p. 245-255, doi: 10.1016/j.lithos.2015.09.010. Centrella, S., Austrheim, H., and Putnis, A., 2016, Mass transfer and trace element redistribution during hydration of granulites in the Bergen Arcs, Norway: Lithos, v. 262, p. 1-10, doi: 10.1016/j.lithos.2016.06.019. Tajčmanová, L., Vrijmoed, J., and Moulas, E., 2015, Grain-scale pressure variations in metamorphic rocks: implications for the interpretation of petrographic observations: Lithos, 216-217, p. 338-351, doi: 10.1016/j.lithos.2015.01.006. Vrijmoed, J.C., and Podladchikov, Y.Y., 2015, Thermodynamic equilibrium at heterogeneous pressure: Contributions to Mineralogy and Petrology, v. 170, no. 1, doi: 10.1007/s00410-015-1156-1.
Derivation of the chemical-equilibrium rate coefficient using scattering theory
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1977-01-01
Scattering theory is applied to derive the equilibrium rate coefficient for a general homogeneous chemical reaction involving ideal gases. The reaction rate is expressed in terms of the product of a number of normalized momentum distribution functions, the product of the number of molecules with a given internal energy state, and the spin-averaged T-matrix elements. An expression for momentum distribution at equilibrium for an arbitrary molecule is presented, and the number of molecules with a given internal-energy state is represented by an expression which includes the partition function.
NASA Astrophysics Data System (ADS)
Polotto, Franciele; Drigo Filho, Elso; Chahine, Jorge; Oliveira, Ronaldo Junio de
2018-03-01
This work developed analytical methods to explore the kinetics of the time-dependent probability distributions over thermodynamic free energy profiles of protein folding and compared the results with simulation. The Fokker-Planck equation is mapped onto a Schrödinger-type equation due to the well-known solutions of the latter. Through a semi-analytical description, the supersymmetric quantum mechanics formalism is invoked and the time-dependent probability distributions are obtained with numerical calculations by using the variational method. A coarse-grained structure-based model of the two-state protein Tm CSP was simulated at a Cα level of resolution and the thermodynamics and kinetics were fully characterized. Analytical solutions from non-equilibrium conditions were obtained with the simulated double-well free energy potential and kinetic folding times were calculated. It was found that analytical folding time as a function of temperature agrees, quantitatively, with simulations and experiments from the literature of Tm CSP having the well-known 'U' shape of the Chevron Plots. The simple analytical model developed in this study has a potential to be used by theoreticians and experimentalists willing to explore, quantitatively, rates and the kinetic behavior of their system by informing the thermally activated barrier. The theory developed describes a stochastic process and, therefore, can be applied to a variety of biological as well as condensed-phase two-state systems.
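For orientation, the standard form of the mapping invoked here, written for one-dimensional overdamped diffusion of a reaction coordinate Q on a free-energy profile F(Q) with diffusion coefficient D and inverse temperature β (our notation, not necessarily that of the paper), is

\[
\partial_t P(Q,t)=D\,\partial_Q\!\left[\partial_Q P+\beta P\,\partial_Q F\right],\qquad
P(Q,t)=e^{-\beta F(Q)/2}\,\psi(Q,t),
\]

which turns the Fokker-Planck equation into the Schrödinger-type (imaginary-time) equation

\[
\partial_t \psi = D\,\partial_Q^{2}\psi - V_s(Q)\,\psi,\qquad
V_s(Q)=D\left[\tfrac{1}{4}\beta^{2}(\partial_Q F)^{2}-\tfrac{1}{2}\beta\,\partial_Q^{2}F\right],
\]

whose eigenfunction expansion yields the time-dependent probability distributions and the associated relaxation (folding) times.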
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
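The core operation described, multiplying the two likelihood functions on a common grid of S and applying a single noninformative prior to the product, can be sketched in a few lines; the Gaussian likelihood shapes and the simple 1/S prior below are illustrative stand-ins, not the three-parameter forms or the computed noninformative prior used in the study.

import numpy as np

# Grid of equilibrium climate sensitivity values (K)
S = np.linspace(0.5, 10.0, 2000)

def gaussian_like(S, mode, sigma):
    """Illustrative stand-in for a likelihood function of S."""
    return np.exp(-0.5 * ((S - mode) / sigma) ** 2)

L_instr = gaussian_like(S, 2.0, 0.8)    # instrumental-period evidence (placeholder)
L_paleo = gaussian_like(S, 3.0, 1.5)    # paleoclimate evidence (placeholder)

L_comb = L_instr * L_paleo              # combined likelihood (independent evidence)
prior = 1.0 / S                         # simple noninformative stand-in prior
post = L_comb * prior
post /= np.trapz(post, S)               # normalized posterior PDF

cdf = np.cumsum(post) * (S[1] - S[0])
lo, med, hi = np.interp([0.05, 0.5, 0.95], cdf, S)
print(f"5-95% range: {lo:.2f}-{hi:.2f} K, median {med:.2f} K")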
Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke
2007-01-21
Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy for incorporating organ motion, but the application of such an approach depends highly upon the reproducibility of the PDF. In this paper, we investigated the dependence of the PDF reproducibility on the imaging acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 frames s-1, and the experiments were repeated with an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked and the dependence of their displacement PDF reproducibility was evaluated as a function of scan time and frame rate. As a result, the PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of the frame rate; however, the PDF reproducibility showed less sensitivity to frame rate, presumably because the randomness of breathing dominates the effects. As the key component of PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDFs of structures in the lung.
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, the computations are interpreted as the outcome arising out of the interaction of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements, so that the elements evolve toward an equilibrium, unstable or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. Also, it can handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
NASA Astrophysics Data System (ADS)
Jacobs, Verne L.
2017-06-01
This investigation has been devoted to the theoretical description and computer modeling of atomic processes giving rise to radiative emission in energetic electron and ion beam interactions and in laboratory plasmas. We are also interested in the effects of directed electron and ion collisions and of anisotropic electric and magnetic fields. In the kinetic-theory description, we treat excitation, de-excitation, ionization, and recombination in electron and ion encounters with partially ionized atomic systems, including the indirect contributions from processes involving autoionizing resonances. These fundamental collisional and electromagnetic interactions also provide particle and photon transport mechanisms. From the spectral perspective, the analysis of atomic radiative emission can reveal detailed information on the physical properties in the plasma environment, such as non-equilibrium electron and charge-state distributions as well as electric and magnetic field distributions. In this investigation, a reduced-density-matrix formulation is developed for the microscopic description of atomic electromagnetic interactions in the presence of environmental (collisional and radiative) relaxation and decoherence processes. Our central objective is a fundamental microscopic description of atomic electromagnetic processes, in which both bound-state and autoionization-resonance phenomena can be treated in a unified and self-consistent manner. The time-domain (equation-of-motion) and frequency-domain (resolvent-operator) formulations of the reduced-density-matrix approach are developed in a unified and self-consistent manner. This is necessary for our ultimate goal of a systematic and self-consistent treatment of non-equilibrium (possibly coherent) atomic-state kinetics and high-resolution (possibly overlapping) spectral-line shapes. We thereby propose the introduction of a generalized collisional-radiative atomic-state kinetics model based on a reduced-density-matrix formulation. It will become apparent that the full atomic data needs for the precise modeling of extreme non-equilibrium plasma environments extend beyond the conventional radiative-transition-probability and collisional-cross-section data sets.
Bhowmick, Amiya Ranjan; Bandyopadhyay, Subhadip; Rana, Sourav; Bhattacharya, Sabyasachi
2016-01-01
The stochastic versions of the logistic and extended logistic growth models are applied successfully to explain many real-life population dynamics and share a central body of literature in stochastic modeling of ecological systems. To understand the randomness in the population dynamics of the underlying processes completely, it is important to have a clear idea about the quasi-equilibrium distribution and its moments. Bartlett et al. (1960) made a pioneering attempt at estimating the moments of the quasi-equilibrium distribution of the stochastic logistic model. Matis and Kiffe (1996) obtained a set of more accurate and elegant approximations for the mean, variance and skewness of the quasi-equilibrium distribution of the same model using the cumulant truncation method. The method was extended to the stochastic power-law logistic family by the same and several other authors (Nasell, 2003; Singh and Hespanha, 2007). Cumulant truncation and some alternative methods (e.g. saddle point approximation, derivative matching) can be applied if the powers involved in the extended logistic set-up are integers, although plenty of evidence is available for non-integer powers in many practical situations (Sibly et al., 2005). In this paper, we develop a set of new approximations for the mean, variance and skewness of the quasi-equilibrium distribution under a more general family of growth curves, which is applicable for both integer and non-integer powers. The deterministic counterpart of this family of models captures both monotonic and non-monotonic behavior of the per capita growth rate, of which the theta-logistic is a special case. The approximations accurately estimate the first three moments of the quasi-equilibrium distribution. The proposed method is illustrated with simulated data and real data from the global population dynamics database. Copyright © 2015 Elsevier Inc. All rights reserved.
Stochastic sensitivity of a bistable energy model for visual perception
NASA Astrophysics Data System (ADS)
Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev
2017-01-01
Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensity and system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
Optimal power and efficiency of quantum Stirling heat engines
NASA Astrophysics Data System (ADS)
Yin, Yong; Chen, Lingen; Wu, Feng
2017-01-01
A quantum Stirling heat engine model is established in this paper in which imperfect regeneration and heat leakage are considered. A single particle confined in a one-dimensional infinite potential well is studied, and the system consists of countless replicas. Each particle is confined in its own potential well, whose occupation probabilities can be expressed by the thermal equilibrium Gibbs distributions. Based on the Schrödinger equation, the expressions of power output and efficiency for the engine are obtained. Effects of imperfect regeneration and heat leakage on the optimal performance are discussed. The optimal performance region and the optimal values of important parameters of the engine cycle are obtained. The results obtained can provide some guidelines for the design of a quantum Stirling heat engine.
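A minimal sketch of the equilibrium ingredient of such a cycle, the Gibbs occupation probabilities of a single particle in a one-dimensional infinite well, is given below in dimensionless units (placeholder parameters, not the paper's working values).

import numpy as np

def occupation_probabilities(kT, L=1.0, n_max=200):
    """Gibbs occupation probabilities p_n for a particle in a 1D infinite well.
    Energy levels E_n = n^2 * pi^2 / (2 L^2) in units with hbar = m = 1."""
    n = np.arange(1, n_max + 1)
    E = (n * np.pi) ** 2 / (2.0 * L ** 2)
    w = np.exp(-(E - E[0]) / kT)          # shift by E_1 for numerical stability
    p = w / w.sum()
    return E, p

E, p = occupation_probabilities(kT=50.0)
print(f"mean energy <E> = {np.sum(p * E):.3f}")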
NASA Astrophysics Data System (ADS)
Dasmeh, Pouria; Searles, Debra J.; Ajloo, Davood; Evans, Denis J.; Williams, Stephen R.
2009-12-01
Le Chatelier's principle states that when a system is disturbed, it will shift its equilibrium to counteract the disturbance. However, for a chemical reaction in a small, confined system, the probability of observing it proceed in the opposite direction to that predicted by Le Chatelier's principle can be significant. This work gives a molecular level proof of Le Chatelier's principle for the case of a temperature change. Moreover, a new, exact mathematical expression is derived that is valid for arbitrary system sizes and gives the relative probability that a single experiment will proceed in the endothermic or exothermic direction, in terms of a microscopic phase function. We show that the average of the time integral of this function is the maximum possible value of the purely irreversible entropy production for the thermal relaxation process. Our result is tested against computer simulations of the unfolding of a polypeptide. We prove that any equilibrium reaction mixture on average responds to a temperature increase by shifting its point of equilibrium in the endothermic direction.
NASA Astrophysics Data System (ADS)
Forest, C. E.; Libardoni, A. G.; Sokolov, A. P.; Monier, E.
2017-12-01
We use the updated MIT Earth System Model (MESM) to derive the joint probability distribution function for equilibrium climate sensitivity (S), an effective heat diffusivity (Kv), and the net aerosol forcing (Faer). Using a new 1800-member ensemble of MESM runs, we derive PDFs by comparing model outputs against historical observations of surface temperature and global mean ocean heat content. We focus on how changes in (i) the MESM model, (ii) recent surface temperature and ocean heat content observations, and (iii) estimates of internal climate variability will all contribute to uncertainties. We show that estimates of S increase and Faer is less negative. These shifts result partly from new model forcing inputs but also from including recent temperature records that lead to higher values of S and Kv. We show that the parameter distributions are sensitive to the internal variability in the climate system. When considering these factors, we derive our best estimate for the joint probability distribution for the climate system properties. We estimate the 90-percent confidence intervals for climate sensitivity as 2.7-5.4 °C with a mode of 3.5 °C, for Kv as 1.9-23.0 cm^2 s^-1 with a mode of 4.41 cm^2 s^-1, and for Faer as -0.4 to -0.04 W m^-2 with a mode of -0.25 W m^-2. Lastly, we estimate TCR to be between 1.4 and 2.1 °C with a mode of 1.8 °C.
Foundations of statistical mechanics from symmetries of entanglement
Deffner, Sebastian; Zurek, Wojciech H.
2016-06-09
Envariance—entanglement assisted invariance—is a recently discovered symmetry of composite quantum systems. Here, we show that thermodynamic equilibrium states are fully characterized by their envariance. In particular, the microcanonical equilibrium of a system S with Hamiltonian H_S is a fully energetically degenerate quantum state envariant under every unitary transformation. A representation of the canonical equilibrium then follows from simply counting degenerate energy states. Finally, our conceptually novel approach is free of mathematically ambiguous notions such as ensemble, randomness, etc., and, while it does not even rely on probability, it helps to understand its role in the quantum world.
Nighttime Ozone Chemical Equilibrium in the Mesopause Region
NASA Astrophysics Data System (ADS)
Kulikov, M. Yu.; Belikovich, M. V.; Grygalashvyly, M.; Sonnemann, G. R.; Ermakova, T. S.; Nechaev, A. A.; Feigin, A. M.
2018-03-01
We examine the applicability of the assumption that nighttime ozone is in photochemical equilibrium. The analysis is based on calculations with a 3-D chemical transport model. These data are used to determine the ratio of correct (calculated) O3 density to its equilibrium value for the conditions of the nighttime mesosphere depending on the altitude, latitude, and month in the annual cycle. The results obtained demonstrate that the retrieval of O and H distributions using the assumption of photochemical ozone equilibrium may lead to a significant error below 81-87 km depending on season. Possible modifications of the currently used approach that allow improving the quality of retrieval of O and H mesospheric distributions from satellite-based observations are discussed.
Improved Simulation of the Pre-equilibrium Triton Emission in Nuclear Reactions Induced by Nucleons
NASA Astrophysics Data System (ADS)
Konobeyev, A. Yu.; Fischer, U.; Pereslavtsev, P. E.; Blann, M.
2014-04-01
A new approach is proposed for the calculation of non-equilibrium triton energy distributions in nuclear reactions induced by nucleons of intermediate energies. It combines models describing the nucleon pick-up, the coalescence and the triton knock-out processes. Emission and absorption rates for excited particles are represented by the pre-equilibrium hybrid model. The model of Sato, Iwamoto, Harada is used to describe the nucleon pick-up and the coalescence of nucleons from exciton configurations starting from (2p,1h) states. The contribution of the direct nucleon pick-up is described phenomenologically. Multiple pre-equilibrium emission of tritons is accounted for. The calculated triton energy distributions are compared with available experimental data.
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally, from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence and which was implemented computationally to obtain numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators and a problem of creating probabilistic scenarios for renewable energy estimation. In Chapter 7 we revisited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied within the scope of hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold, and we provide sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.
Modular reweighting software for statistical mechanical analysis of biased equilibrium data
NASA Astrophysics Data System (ADS)
Sindhikara, Daniel J.
2012-07-01
Here a simple, useful, modular approach and software suite designed for statistical reweighting and analysis of equilibrium ensembles is presented. Statistical reweighting is useful and sometimes necessary for analysis of equilibrium enhanced sampling methods, such as umbrella sampling or replica exchange, and also in experimental cases where biasing factors are explicitly known. Essentially, statistical reweighting allows extrapolation of data from one or more equilibrium ensembles to another. Here, the fundamental separable steps of statistical reweighting are broken up into modules - allowing for application to the general case and avoiding the black-box nature of some "all-inclusive" reweighting programs. Additionally, the programs included are, by design, written with few dependencies. The compilers required are either pre-installed on most systems, or freely available for download with minimal trouble. Examples of the use of this suite applied to umbrella sampling and replica exchange molecular dynamics simulations will be shown along with advice on how to apply it in the general case.
New version program summary
Program title: Modular reweighting version 2
Catalogue identifier: AEJH_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJH_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 179 118
No. of bytes in distributed program, including test data, etc.: 8 518 178
Distribution format: tar.gz
Programming language: C++, Python 2.6+, Perl 5+
Computer: Any
Operating system: Any
RAM: 50-500 MB
Supplementary material: An updated version of the original manuscript (Comput. Phys. Commun. 182 (2011) 2227) is available
Classification: 4.13
Catalogue identifier of previous version: AEJH_v1_0
Journal reference of previous version: Comput. Phys. Commun. 182 (2011) 2227
Does the new version supersede the previous version?: Yes
Nature of problem: While equilibrium reweighting is ubiquitous, there are no public programs available to perform the reweighting in the general case. Further, specific programs often suffer from many library dependencies and numerical instability.
Solution method: This package is written in a modular format that allows for easy applicability of reweighting in the general case. Modules are small, numerically stable, and require minimal libraries.
Reasons for new version: Some minor bugs, some upgrades needed, error analysis added. analyzeweight.py/analyzeweight.py2 has been replaced by "multihist.py". This new program performs all the functions of its predecessor while being versatile enough to handle other types of histograms and probability analysis. "bootstrap.py" was added. This script performs basic bootstrap resampling allowing for error analysis of data. "avg_dev_distribution.py" was added. This program computes the averages and standard deviations of multiple distributions, making error analysis (e.g. from bootstrap resampling) easier to visualize. WRE.cpp was slightly modified purely for cosmetic reasons. The manual was updated for clarity and to reflect version updates. Examples were removed from the manual in favor of online tutorials (packaged examples remain). Examples were updated to reflect the new format. An additional example is included to demonstrate error analysis.
Running time: Preprocessing scripts 1-5 minutes, WHAM engine <1 minute, postprocess script ∼1-5 minutes.
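The elementary operation behind any such tool, extrapolating an equilibrium average from the sampled ensemble to a nearby one via Boltzmann-type weights, can be sketched as follows; this is the generic single-ensemble case in our own notation, not a reproduction of the package's modular WHAM-style engine.

import numpy as np

rng = np.random.default_rng(4)

def reweight(energies, observable, beta_sim, beta_target):
    """Reweight an equilibrium average from inverse temperature beta_sim to
    beta_target using weights w_i = exp(-(beta_target - beta_sim) * E_i)."""
    logw = -(beta_target - beta_sim) * energies
    logw -= logw.max()                  # avoid overflow
    w = np.exp(logw)
    return np.sum(w * observable) / np.sum(w)

# Toy data: per-frame energies and an observable (placeholders)
E = rng.normal(10.0, 2.0, size=100000)
A = E ** 2
print(reweight(E, A, beta_sim=1.0, beta_target=1.05))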
A chemical equilibrium code was improved and used to show that calcium and magnesium have a large yet different effect on the aerosol size distribution in different regions of Los Angeles. In the code, a new technique of solving individual equilibrium equation...
The effect of alumina in slag on manganese and silicon distributions in silicomanganese smelting
NASA Astrophysics Data System (ADS)
Swinbourne, D. R.; Rankin, W. J.; Eric, R. H.
1995-02-01
The distribution ratios of manganese and silicon between silicomanganese alloy and slag, in equilibrium with carbon, were investigated at 1500 °C. The alumina content of the slag was varied from about 9 to 32 pct. Both distribution ratios decreased as Al2O3 increased to about 20 pct and, thereafter, remained constant. The value of the “apparent equilibrium constant” displayed a maximum at about 24 pct Al2O3, mainly because of the variation in the values of the activity coefficients of SiO2 and MnO. It was concluded that the slag and silicomanganese alloy in a submerged arc furnace are at, or at least close to, equilibrium.
Stress-induced electric current fluctuations in rocks: a superstatistical model
NASA Astrophysics Data System (ADS)
Cartwright-Taylor, Alexis; Vallianatos, Filippos; Sammonds, Peter
2017-04-01
We recorded spontaneous electric current flow in non-piezoelectric Carrara marble samples during triaxial deformation. Mechanical data, ultrasonic velocities and acoustic emissions were acquired simultaneously with electric current to constrain the relationship between electric current flow, differential stress and damage. Under strain-controlled loading, spontaneous electric current signals (nA) were generated and sustained under all conditions tested. In dry samples, a detectable electric current arises only during dilatancy and the overall signal is correlated with the damage induced by microcracking. Our results show that fracture plays a key role in the generation of electric currents in deforming rocks (Cartwright-Taylor et al., in prep). We also analysed the high-frequency fluctuations of these electric current signals and found that they are not normally distributed - they exhibit power-law tails (Cartwright-Taylor et al., 2014). We modelled these distributions with q-Gaussian statistics, derived by maximising the Tsallis entropy. This definition of entropy is particularly applicable to systems which are strongly correlated and far from equilibrium. Good agreement, at all experimental conditions, between the distributions of electric current fluctuations and the q-Gaussian function with q-values far from one illustrates the highly correlated, fractal nature of the electric source network within the samples and provides further evidence that the source of the electric signals is the developing fractal network of cracks. It has been shown (Beck, 2001) that q-Gaussian distributions can arise from the superposition of local relaxations in the presence of a slowly varying driving force, thus providing a dynamic reason for the appearance of Tsallis statistics in systems with a fluctuating energy dissipation rate. So, the probability distribution for a dynamic variable u, under some external slow forcing β, can be obtained as a superposition of temporary local equilibrium processes whose variance fluctuates over time. The appearance of q-Gaussian statistics is caused by the fluctuating β parameter, which effectively models the fluctuating energy dissipation rate in the system. This concept is known as superstatistics and is physically relevant for modelling driven non-equilibrium systems where the environmental conditions fluctuate on a large scale. The idea is that the environmental variable, such as temperature or pressure, changes so slowly that a rapidly fluctuating variable within that environment has time to relax back to equilibrium between each change in the environment. The application of superstatistical techniques to our experimental electric current fluctuations shows that they can indeed be described, to good approximation, by the superposition of local Gaussian processes with fluctuating variance. We conclude, then, that the measured electric current fluctuates in response to intermittent energy dissipation and is driven to varying temporary local equilibria during deformation by the variations in stress intensity. The advantage of this technique is that, once the model has been established to be a good description of the system in question, the average β parameter (a measure of the average energy dissipation rate) for the system can be obtained simply from the macroscopic q-Gaussian distribution parameters.
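A minimal numerical sketch of the superstatistical construction invoked here: locally Gaussian fluctuations whose inverse variance β is itself χ²-distributed give a q-Gaussian marginal with q = 1 + 2/(n + 1) for n degrees of freedom (Beck, 2001). All parameters are illustrative and unrelated to the experimental data.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Superstatistics: within each "cell" the fluctuation u is Gaussian with a
# local inverse variance beta; beta itself varies slowly from cell to cell.
# For beta chi^2-distributed with n degrees of freedom, the marginal of u is
# a q-Gaussian with q = 1 + 2/(n + 1) (Beck 2001).
n_dof, mean_beta = 4, 1.0
n_cells, per_cell = 20_000, 50

beta = mean_beta * rng.chisquare(n_dof, size=n_cells) / n_dof
u = rng.normal(size=(n_cells, per_cell)) / np.sqrt(beta[:, None])
u = u.ravel()

q = 1.0 + 2.0 / (n_dof + 1.0)
sigma = u.std()
tail = np.mean(np.abs(u) > 4.0 * sigma)
gauss_tail = math.erfc(4.0 / math.sqrt(2.0))   # same 4-sigma threshold for a pure Gaussian
print(f"q = {q:.2f}")
print(f"P(|u| > 4 sigma): superstatistical data = {tail:.2e}, Gaussian = {gauss_tail:.2e}")
```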
DEPARTURE OF HIGH-TEMPERATURE IRON LINES FROM THE EQUILIBRIUM STATE IN FLARING SOLAR PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawate, T.; Keenan, F. P.; Jess, D. B., E-mail: t.kawate@qub.ac.uk
2016-07-20
The aim of this study is to clarify if the assumption of ionization equilibrium and a Maxwellian electron energy distribution is valid in flaring solar plasmas. We analyze the 2014 December 20 X1.8 flare, in which the Fe xxi 187 Å, Fe xxii 253 Å, Fe xxiii 263 Å, and Fe xxiv 255 Å emission lines were simultaneously observed by the EUV Imaging Spectrometer on board the Hinode satellite. Intensity ratios among these high-temperature Fe lines are compared and departures from isothermal conditions and ionization equilibrium are examined. Temperatures derived from intensity ratios involving these four lines show significant discrepancies at the flare footpoints in the impulsive phase, and at the looptop in the gradual phase. Among these, the temperature derived from the Fe xxii/Fe xxiv intensity ratio is the lowest, which cannot be explained if we assume a Maxwellian electron distribution and ionization equilibrium, even in the case of a multithermal structure. This result suggests that the assumption of ionization equilibrium and/or a Maxwellian electron energy distribution can be violated in evaporating solar plasma around 10 MK.
Shuai, Yanhua; Douglas, Peter M.J.; Zhang, Shuichang; Stolper, Daniel A.; Ellis, Geoffrey S.; Lawson, Michael; Lewan, Michael; Formolo, Michael; Mi, Jingkui; He, Kun; Hu, Guoyi; Eiler, John M.
2018-01-01
Multiply isotopically substituted molecules (‘clumped’ isotopologues) can be used as geothermometers because their proportions at isotopic equilibrium relative to a random distribution of isotopes amongst all isotopologues are functions of temperature. This has allowed measurements of clumped-isotope abundances to be used to constrain formation temperatures of several natural materials. However, kinetic processes during generation, modification, or transport of natural materials can also affect their clumped-isotope compositions. Herein, we show that methane generated experimentally by closed-system hydrous pyrolysis of shale or nonhydrous pyrolysis of coal yields clumped-isotope compositions consistent with an equilibrium distribution of isotopologues under some experimental conditions (temperature–time conditions corresponding to ‘low,’ ‘mature,’ and ‘over-mature’ stages of catagenesis), but can have non-equilibrium (i.e., kinetically controlled) distributions under other experimental conditions (‘high’ to ‘over-mature’ stages), particularly for pyrolysis of coal. Non-equilibrium compositions, when present, lead the measured proportions of clumped species to be lower than expected for equilibrium at the experimental temperature, and in some cases to be lower than a random distribution of isotopes (i.e., negative Δ18 values). We propose that the consistency with equilibrium for methane formed by relatively low temperature pyrolysis reflects local reversibility of isotope exchange reactions involving a reactant or transition state species during demethylation of one or more components of kerogen. Non-equilibrium clumped-isotope compositions occur under conditions where ‘secondary’ cracking of retained oil in shale or wet gas hydrocarbons (C2-5, especially ethane) in coal is prominent. We suggest these non-equilibrium isotopic compositions are the result of the expression of kinetic isotope effects during the irreversible generation of methane from an alkyl precursor. Other interpretations are also explored. These findings provide new insights into the chemistry of thermogenic methane generation, and may provide an explanation of the elevated apparent temperatures recorded by the methane clumped-isotope thermometer in some natural gases. However, it remains unknown if the laboratory experiments capture the processes that occur at the longer time and lower temperatures of natural gas formation.
Kalogerakis, Konstantinos S.; Matsiev, Daniel; Cosby, Philip C.; Dodd, James A.; Falcinelli, Stefano; Hedin, Jonas; Kutepov, Alexander A.; Noll, Stefan; Panka, Peter A.; Romanescu, Constantin; Thiebaud, Jérôme E.
2018-01-01
The question of whether mesospheric OH(υ) rotational population distributions are in equilibrium with the local kinetic temperature has been debated over several decades. Despite several indications for the existence of non-equilibrium effects, the general consensus has been that emissions originating from low rotational levels are thermalized. Sky spectra simultaneously observing several vibrational levels demonstrated reproducible trends in the extracted OH(υ) rotational temperatures as a function of vibrational excitation. Laboratory experiments provided information on rotational energy transfer and direct evidence for fast multi-quantum OH(high-υ) vibrational relaxation by O atoms. We examine the relationship of the new relaxation pathways with the behavior exhibited by OH(υ) rotational population distributions. Rapid OH(high-υ) + O multi-quantum vibrational relaxation connects high and low vibrational levels and enhances the hot tail of the OH(low-υ) rotational distributions. The effective rotational temperatures of mesospheric OH(υ) are found to deviate from local thermodynamic equilibrium for all observed vibrational levels. PMID:29503514
Wills, Peter R; Scott, David J; Winzor, Donald J
2012-03-01
This reexamination of a high-speed sedimentation equilibrium distribution for α-chymotrypsin under slightly acidic conditions (pH 4.1, I(M) 0.05) has provided experimental support for the adequacy of nearest-neighbor considerations in the allowance for effects of thermodynamic nonideality in the characterization of protein self-association over a moderate concentration range (up to 8 mg/mL). A widely held but previously untested notion about allowance for thermodynamic nonideality effects is thereby verified experimentally. However, it has also been shown that a greater obstacle to better characterization of protein self-association is likely to be the lack of a reliable estimate of monomer net charge, a parameter that has a far more profound effect on the magnitude of the measured equilibrium constant than any deficiency in current procedures for incorporating the effects of thermodynamic nonideality into the analysis of sedimentation equilibrium distributions reflecting reversible protein self-association. Copyright © 2011 Elsevier Inc. All rights reserved.
Statistical approach to partial equilibrium analysis
NASA Astrophysics Data System (ADS)
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, termed the willingness price, is highlighted and forms the basis of the whole theory. The supply and demand functions are formulated as the distributions of the corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of the excess demand function are analyzed, and the necessary conditions for the existence and uniqueness of the equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can rigorously prove that a market is efficient in the state of equilibrium.
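A minimal numerical sketch of this construction, assuming illustrative lognormal willingness-price distributions for buyers and sellers (none of the numbers are from the paper): demand at a price p is the fraction of buyers whose willingness price is at least p, supply is the fraction of sellers whose willingness price is at most p, and the equilibrium price is located where the excess demand vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Willingness prices: buyers buy below theirs, sellers sell above theirs.
buyer_wp = rng.lognormal(mean=np.log(12.0), sigma=0.4, size=100_000)
seller_wp = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=100_000)

def demand(p):          # fraction of buyers still willing to buy at price p
    return np.mean(buyer_wp >= p)

def supply(p):          # fraction of sellers willing to sell at price p
    return np.mean(seller_wp <= p)

def excess_demand(p):
    return demand(p) - supply(p)

# Bisection on the monotonically decreasing excess-demand function.
lo, hi = 0.01, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excess_demand(mid) > 0:
        lo = mid
    else:
        hi = mid
p_eq = 0.5 * (lo + hi)
print(f"equilibrium price ~ {p_eq:.2f}, traded fraction ~ {demand(p_eq):.3f}")
```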
NASA Astrophysics Data System (ADS)
Hardwick, Robert J.; Vennin, Vincent; Byrnes, Christian T.; Torrado, Jesús; Wands, David
2017-10-01
We study the stochastic distribution of spectator fields predicted in different slow-roll inflation backgrounds. Spectator fields have a negligible energy density during inflation but may play an important dynamical role later, even giving rise to primordial density perturbations within our observational horizon today. During de-Sitter expansion there is an equilibrium solution for the spectator field which is often used to estimate the stochastic distribution during slow-roll inflation. However slow roll only requires that the Hubble rate varies slowly compared to the Hubble time, while the time taken for the stochastic distribution to evolve to the de-Sitter equilibrium solution can be much longer than a Hubble time. We study both chaotic (monomial) and plateau inflaton potentials, with quadratic, quartic and axionic spectator fields. We give an adiabaticity condition for the spectator field distribution to relax to the de-Sitter equilibrium, and find that the de-Sitter approximation is never a reliable estimate for the typical distribution at the end of inflation for a quadratic spectator during monomial inflation. The existence of an adiabatic regime at early times can erase the dependence on initial conditions of the final distribution of field values. In these cases, spectator fields acquire sub-Planckian expectation values. Otherwise spectator fields may acquire much larger field displacements than suggested by the de-Sitter equilibrium solution. We quantify the information about initial conditions that can be obtained from the final field distribution. Our results may have important consequences for the viability of spectator models for the origin of structure, such as the simplest curvaton models.
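The slow-relaxation argument can be reproduced with a short Langevin simulation in e-folds for a quadratic spectator in exact de Sitter. The sketch below assumes units with the reduced Planck mass set to one, illustrative values of H and m, and the standard stochastic noise amplitude H/2π per e-fold; it is not the paper's calculation, but it shows the variance approaching the de Sitter equilibrium value 3H^4/(8π^2 m^2) only over of order 3H^2/m^2 e-folds.

```python
import numpy as np

# Stochastic evolution of a quadratic spectator phi in de Sitter, in e-folds N:
#   dphi/dN = -(m^2 / 3H^2) phi + (H / 2pi) xi(N),  <xi(N) xi(N')> = delta(N - N')
H, m = 1e-5, 1e-6                       # illustrative values (reduced Planck units)
n_fields, n_efolds, dN = 20_000, 200, 0.1
rng = np.random.default_rng(0)

phi = np.zeros(n_fields)                # all realisations start at phi = 0
for _ in range(int(n_efolds / dN)):
    drift = -(m**2 / (3.0 * H**2)) * phi * dN
    noise = (H / (2.0 * np.pi)) * np.sqrt(dN) * rng.normal(size=n_fields)
    phi += drift + noise

var_eq = 3.0 * H**4 / (8.0 * np.pi**2 * m**2)   # de Sitter equilibrium variance
print("variance after %d e-folds    : %.3e" % (n_efolds, phi.var()))
print("de Sitter equilibrium variance: %.3e" % var_eq)
print("relaxation scale ~ 3H^2/m^2   = %.0f e-folds" % (3.0 * H**2 / m**2))
```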
Nash equilibrium and evolutionary dynamics in semifinalists' dilemma.
Baek, Seung Ki; Son, Seung-Woo; Jeong, Hyeong-Chai
2015-04-01
We consider a tournament among four equally strong semifinalists. The players have to decide how much stamina to use in the semifinals, provided that the rest is available in the final and the third-place playoff. We investigate optimal strategies for allocating stamina to the successive matches when players' prizes (payoffs) are given according to the tournament results. From the basic assumption that the probability to win a match follows a nondecreasing function of stamina difference, we present symmetric Nash equilibria for general payoff structures. We find three different phases of the Nash equilibria in the payoff space. First, when the champion wins a much bigger payoff than the others, any pure strategy can constitute a Nash equilibrium as long as all four players adopt it in common. Second, when the first two places are much more valuable than the other two, the only Nash equilibrium is such that everyone uses a pure strategy investing all stamina in the semifinal. Third, when the payoff for last place is much smaller than the others, a Nash equilibrium is formed when every player adopts a mixed strategy of using all or none of its stamina in the semifinals. In a limiting case that only last place pays the penalty, this mixed-strategy profile can be proved to be a unique symmetric Nash equilibrium, at least when the winning probability follows a Heaviside step function. Moreover, by using this Heaviside step function, we study the tournament by using evolutionary replicator dynamics to obtain analytic solutions, which reproduces the corresponding Nash equilibria on the population level and gives information on dynamic aspects.
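As a population-level illustration of the evolutionary replicator dynamics invoked at the end of the abstract, the sketch below iterates standard two-strategy replicator dynamics. The payoff matrix is purely hypothetical (it is not the tournament's payoff structure), so the block only demonstrates the machinery, not the paper's results.

```python
import numpy as np

# Replicator dynamics for a symmetric two-strategy game:
#   x_i' = x_i [ (A x)_i - x.A x ]
# Strategy 0 = "spend all stamina in the semifinal", 1 = "save it all".
# The payoff matrix is illustrative only.
A = np.array([[2.0, 3.0],
              [1.0, 2.0]])

def replicator_step(x, A, dt=0.01):
    fitness = A @ x
    return x + dt * x * (fitness - x @ fitness)

x = np.array([0.5, 0.5])
for _ in range(5000):
    x = replicator_step(x, A)
    x = np.clip(x, 0.0, None)
    x /= x.sum()
print("population state:", x)   # converges to the dominant pure strategy for this matrix
```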
A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems
NASA Astrophysics Data System (ADS)
Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.
2010-09-01
We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. The residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates markedly from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusion of the energy and the phase of the oscillations of the reaction coordinate. Rapid diffusion of the energy and the phase generally gives rise to an exponential decay of the residence time distribution, while slow diffusion gives rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.
Astumian, R D
2018-01-11
In the absence of input energy, a chemical reaction in a closed system ineluctably relaxes toward an equilibrium state governed by a Boltzmann distribution. The addition of a catalyst to the system provides a way for more rapid equilibration toward this distribution, but the catalyst can never, in and of itself, drive the system away from equilibrium. In the presence of external fluctuations, however, a macromolecular catalyst (e.g., an enzyme) can absorb energy and drive the formation of a steady state between reactant and product that is not determined solely by their relative energies. Due to the ubiquity of non-equilibrium steady states in living systems, the development of a theory for the effects of external fluctuations on chemical systems has been a longstanding focus of non-equilibrium thermodynamics. The theory of stochastic pumping has provided insight into how a non-equilibrium steady-state can be formed and maintained in the presence of dissipation and kinetic asymmetry. This effort has been greatly enhanced by a confluence of experimental and theoretical work on synthetic molecular machines designed explicitly to harness external energy to drive non-equilibrium transport and self-assembly.
Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints
NASA Astrophysics Data System (ADS)
Sun, Chao; Hu, Guoqiang
2018-05-01
In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.
Income and poverty in a developing economy
NASA Astrophysics Data System (ADS)
Chattopadhyay, A. K.; Ackland, G. J.; Mallick, S. K.
2010-09-01
We present a stochastic agent-based model for the distribution of personal incomes in a developing economy. We start with the assumption that incomes are determined both by individual labour and by stochastic effects of trading and investment. The income from personal effort alone is distributed about a mean, while the income from trade, which may be positive or negative, is proportional to the trader's income. These assumptions lead to a Langevin model with multiplicative noise, from which we derive a Fokker-Planck (FP) equation for the income probability density function (IPDF) and its variation in time. We find that high earners have a power-law income distribution while the low-income groups have a Lévy IPDF. Comparing our analysis with the Indian survey data (obtained from the World Bank website: http://go.worldbank.org/SWGZB45DN0) taken over many years, we obtain a near-perfect data collapse onto our model's equilibrium IPDF. Using survey data to relate the IPDF to actual food consumption, we define a poverty index (Sen A. K., Econometrica, 44 (1976) 219; Kakwani N. C., Econometrica, 48 (1980) 437), which is consistent with traditional indices, but independent of an arbitrarily chosen "poverty line" and therefore less susceptible to manipulation.
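A minimal Euler-Maruyama sketch of a Langevin equation of this general type, an additive "labour" term plus multiplicative "trading" noise, with a linear relaxation term included so that a stationary state exists; the functional form and every parameter are illustrative rather than the paper's calibrated model. The stationary law of this toy SDE has a power-law upper tail with probability-density exponent 2 + 2b/c^2, which the Hill-type estimate at the end should roughly recover.

```python
import numpy as np

# dI = (a - b I) dt + c I dW, integrated by Euler-Maruyama (all values illustrative).
rng = np.random.default_rng(0)
n_agents, n_steps, dt = 20_000, 20_000, 0.01
a, b, c = 1.0, 0.05, 0.3

I = np.full(n_agents, 10.0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.normal(size=n_agents)
    I += (a - b * I) * dt + c * I * dW
    I = np.maximum(I, 1e-6)              # incomes kept positive

# Hill-type estimate of the probability-density tail exponent from the top 10%.
tail = np.sort(I)[-2000:]
alpha = 1.0 + len(tail) / np.sum(np.log(tail / tail[0]))
print("analytic tail exponent 2 + 2b/c^2 =", round(2.0 + 2.0 * b / c**2, 2))
print("estimated tail exponent           =", round(alpha, 2))
```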
Incremental viscosity by non-equilibrium molecular dynamics and the Eyring model
NASA Astrophysics Data System (ADS)
Heyes, D. M.; Dini, D.; Smith, E. R.
2018-05-01
The viscoelastic behavior of sheared fluids is calculated by Non-Equilibrium Molecular Dynamics (NEMD) simulation, and complementary analytic solutions of a time-dependent extension of Eyring's model (EM) for shear thinning are derived. It is argued that an "incremental viscosity," ηi, or IV, which is the derivative of the steady-state stress with respect to the shear rate, is a better measure of the physical state of the system than the conventional definition of the shear-rate-dependent viscosity (i.e., the shear stress divided by the strain rate). The stress relaxation function, Ci(t), associated with ηi is consistent with Boltzmann's superposition principle and is computed by NEMD and the EM. The IV of the Eyring model is shown to be a special case of the Carreau formula for shear thinning. An analytic solution for the transient time correlation function for the EM is derived. An extension of the EM to allow for significant local shear stress fluctuations on a molecular level, represented by a Gaussian distribution, is shown to have the same analytic form as the original EM but with the EM stress replaced by its time and spatial average. Even at high shear rates and on small scales, the probability distribution function is almost Gaussian (apart from in the wings) with the peak shifted by the shear. The Eyring formula approximately satisfies the Fluctuation Theorem, which may in part explain its success in representing the shear thinning curves of a wide range of different types of chemical systems.
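The distinction drawn here can be illustrated numerically: generate a flow curve from a Carreau-type expression (the abstract notes that the EM's incremental viscosity is a special case of the Carreau formula) and compare the conventional viscosity, stress divided by shear rate, with the incremental viscosity, the derivative of stress with respect to shear rate. All parameter values are illustrative.

```python
import numpy as np

# Flow curve sigma(gamma_dot) from a Carreau-type shear-thinning viscosity.
eta0, eta_inf, lam, n = 1.0, 0.01, 10.0, 0.5          # illustrative parameters
gdot = np.logspace(-3, 2, 200)
eta_carreau = eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1) / 2)
sigma = eta_carreau * gdot

eta_conventional = sigma / gdot                        # shear stress / strain rate
eta_incremental = np.gradient(sigma, gdot)             # d(sigma)/d(gamma_dot)

for i in (0, 100, 199):
    print(f"gamma_dot = {gdot[i]:8.3g}   eta = {eta_conventional[i]:.4f}   eta_i = {eta_incremental[i]:.4f}")
```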
Ricard, Jacques
2010-01-01
The present article discusses the possibility that catalysed chemical networks can evolve. Even simple enzyme-catalysed chemical reactions can display this property. The example studied is that of a two-substrate proteinoid, or enzyme, reaction displaying random binding of its substrates A and B. The fundamental property of such a system is to display either emergence or integration depending on the respective values of the probabilities that the enzyme has bound one of its substrates regardless of whether it has bound the other substrate, or, specifically, after it has bound the other substrate. There is emergence of information if p(A)>p(AB) and p(B)>p(BA). Conversely, if p(A)
Competitive Cyber-Insurance and Internet Security
NASA Astrophysics Data System (ADS)
Shetty, Nikhil; Schwartz, Galina; Felegyhazi, Mark; Walrand, Jean
This paper investigates how competitive cyber-insurers affect network security and welfare of the networked society. In our model, a user's probability to incur damage (from being attacked) depends on both his security and the network security, with the latter taken by individual users as given. First, we consider cyberinsurers who cannot observe (and thus, affect) individual user security. This asymmetric information causes moral hazard. Then, for most parameters, no equilibrium exists: the insurance market is missing. Even if an equilibrium exists, the insurance contract covers only a minor fraction of the damage; network security worsens relative to the no-insurance equilibrium. Second, we consider insurers with perfect information about their users' security. Here, user security is perfectly enforceable (zero cost); each insurance contract stipulates the required user security. The unique equilibrium contract covers the entire user damage. Still, for most parameters, network security worsens relative to the no-insurance equilibrium. Although cyber-insurance improves user welfare, in general, competitive cyber-insurers fail to improve network security.
NASA Astrophysics Data System (ADS)
Dzifčáková, E.; Dudík, J.; Mackovjak, Š.
2016-05-01
Context. Coronal heating is currently thought to proceed via the mechanism of nanoflares, small-scale and possibly recurring heating events that release magnetic energy. Aims: We investigate the effects of a periodic high-energy electron beam on the synthetic spectra of coronal Fe ions. Methods: Initially, the coronal plasma is assumed to be Maxwellian with a temperature of 1 MK. The high-energy beam, described by a κ-distribution, is then switched on every period P for the duration of P/2. The periods are on the order of several tens of seconds, similar to exposure times or cadences of space-borne spectrometers. Ionization, recombination, and excitation rates for the respective distributions are used to calculate the resulting non-equilibrium ionization state of Fe and the instantaneous and period-averaged synthetic spectra. Results: In the presence of the periodic electron beam, the plasma is out of ionization equilibrium at all times. The resulting spectra averaged over one period are almost always multithermal if interpreted in terms of ionization equilibrium for either a Maxwellian or a κ-distribution. Exceptions occur, however; the EM-loci curves appear to have a nearly isothermal crossing-point for some values of κ. The instantaneous spectra show fast changes in intensities of some lines, especially those formed outside of the peak of the respective EM(T) distributions if the ionization equilibrium is assumed. Movies 1-5 are available in electronic form at http://www.aanda.org
Acid precipitation effects on soil pH and base saturation of exchange sites
W. W. McFee; J. M. Kelly; R. H. Beck
1976-01-01
The typical values and probable ranges of acid precipitation are evaluated in terms of their theoretical effects on pH and cation exchange equilibrium of soils characteristic of the humid temperate region. The extent of probable change in soil pH and the time required to cause such a change are calculated for a range of common soils. Hydrogen ion input by acid...
Physical Premium Principle: A New Way for Insurance Pricing
NASA Astrophysics Data System (ADS)
Darooneh, Amir H.
2005-03-01
In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company was assumed to be distributed according to canonical ensemble theory. The Esscher premium principle appeared as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.
A Bayesian perspective on Markovian dynamics and the fluctuation theorem
NASA Astrophysics Data System (ADS)
Virgo, Nathaniel
2013-08-01
One of E. T. Jaynes' most important achievements was to derive statistical mechanics from the maximum entropy (MaxEnt) method. I re-examine a relatively new result in statistical mechanics, the Evans-Searles fluctuation theorem, from a MaxEnt perspective. This is done in the belief that interpreting such results in Bayesian terms will lead to new advances in statistical physics. The version of the fluctuation theorem that I will discuss applies to discrete, stochastic systems that begin in a non-equilibrium state and relax toward equilibrium. I will show that for such systems the fluctuation theorem can be seen as a consequence of the fact that the equilibrium distribution must obey the property of detailed balance. Although the principle of detailed balance applies only to equilibrium ensembles, it puts constraints on the form of non-equilibrium trajectories. This will be made clear by taking a novel kind of Bayesian perspective, in which the equilibrium distribution is seen as a prior over the system's set of possible trajectories. Non-equilibrium ensembles are calculated from this prior using Bayes' theorem, with the initial conditions playing the role of the data. I will also comment on the implications of this perspective for the question of how to derive the second law.
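A small numerical check of the integral consequence of this relation, ⟨exp(−Ω)⟩ = 1, for a system of the kind discussed here: a discrete Markov chain built by a Metropolis rule to satisfy detailed balance with respect to an equilibrium distribution is started from a non-equilibrium initial distribution, and the trajectory-level dissipation function Ω is accumulated along forward trajectories. The chain, the distributions and the trajectory length are my own illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equilibrium distribution and a Metropolis-type transition matrix that
# satisfies detailed balance with respect to it (small illustrative construction).
pi_eq = np.array([0.5, 0.3, 0.2])
n = len(pi_eq)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = min(1.0, pi_eq[j] / pi_eq[i]) / (n - 1)
    W[i, i] = 1.0 - W[i].sum()
assert np.allclose(pi_eq[:, None] * W, (pi_eq[:, None] * W).T)   # detailed balance

# Non-equilibrium initial distribution relaxing toward pi_eq.
p0 = np.array([0.05, 0.05, 0.90])
T_steps, n_traj = 10, 100_000
cumW = W.cumsum(axis=1)

x = rng.choice(n, size=n_traj, p=p0)
omega = np.log(p0[x])                         # ln p0(x_0)
log_rev = np.zeros(n_traj)                    # ln prob. of the time-reversed path
for _ in range(T_steps):
    u = rng.random(n_traj)
    x_new = np.minimum((u[:, None] > cumW[x]).sum(axis=1), n - 1)
    omega += np.log(W[x, x_new])              # forward transition factors
    log_rev += np.log(W[x_new, x])            # reversed transition factors
    x = x_new
omega -= np.log(p0[x]) + log_rev              # Omega = ln P(path) - ln P(reversed path)

print("<Omega>       =", round(omega.mean(), 4))           # non-negative on average
print("<exp(-Omega)> =", round(np.exp(-omega).mean(), 4))  # ~ 1 (integral fluctuation relation)
```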
A non-equilibrium neutral model for analysing cultural change.
Kandler, Anne; Shennan, Stephen
2013-08-07
Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication of whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed, we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
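A minimal sketch of the neutral null model (random copying plus innovation), allowing the population size and mutation rate to vary in time as in the non-equilibrium formulation; the trajectories passed in at the bottom are illustrative, not the archaeological data.

```python
import numpy as np

def neutral_copying(pop_sizes, mut_rates, seed=0):
    """Neutral cultural transmission: each individual copies a random member
    of the previous generation, except innovators who introduce new variants.

    pop_sizes : population size N_t at each generation (may vary in time)
    mut_rates : innovation probability mu_t at each generation
    Returns the number of distinct variants present at each generation.
    """
    rng = np.random.default_rng(seed)
    pop = np.zeros(pop_sizes[0], dtype=int)        # everyone starts with variant 0
    next_label = 1
    richness = []
    for N, mu in zip(pop_sizes, mut_rates):
        pop = rng.choice(pop, size=N, replace=True)        # random copying
        innovators = rng.random(N) < mu
        k = int(innovators.sum())
        pop[innovators] = np.arange(next_label, next_label + k)  # brand-new variants
        next_label += k
        richness.append(len(np.unique(pop)))
    return richness

# Illustrative run: growing population, constant innovation rate.
N_t = np.linspace(200, 1000, 300).astype(int)
mu_t = np.full(300, 0.01)
print(neutral_copying(N_t, mu_t)[-5:])
```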
de Tudela, Ricardo Pérez; Barragán, Patricia; Prosmiti, Rita; Villarreal, Pablo; Delgado-Barrio, Gerardo
2011-03-31
Classical and path integral Monte Carlo (CMC, PIMC) "on the fly" calculations are carried out to investigate anharmonic quantum effects on the thermal equilibrium structure of the H5(+) cluster. Our computational approach is based on combining the above-mentioned nuclear classical and quantum statistical methods with first-principles density functional (DFT) electronic structure calculations. The interaction energies are computed within the DFT framework using the B3(H) hybrid functional, specially designed for hydrogen-only systems. The global minimum of the potential is predicted to be a nonplanar configuration of C(2v) symmetry, while the next three low-lying stationary points on the surface correspond to extremely low-energy barriers for the internal proton transfer and to the rotation of the H2 molecules, around the C2 axis of H5(+), connecting the symmetric C(2v) minima in the planar and nonplanar orientations. On the basis of full-dimensional converged PIMC calculations, results on the quantum vibrational zero-point energy (ZPE) and state of H5(+) are reported at a low temperature of 10 K, and the influence of the above-mentioned topological features of the surface on its probability distributions is clearly demonstrated.
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
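A minimal sketch of the propagator-likelihood idea for a toy parametrized process (a biased hopper on a short chain, which is not one of the models treated in the paper): the empirical distribution of steady-state snapshots is propagated one fictitious step with the candidate transition matrix, the snapshots are scored against the propagated distribution, and the score is maximized over the parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10

def transition_matrix(theta):
    """Biased nearest-neighbour hopper on sites 0..L-1 with reflecting ends."""
    T = np.zeros((L, L))
    for i in range(L):
        T[i, min(i + 1, L - 1)] += theta        # step right (or stay at the right wall)
        T[i, max(i - 1, 0)] += 1.0 - theta      # step left  (or stay at the left wall)
    return T

def steady_state(T, iters=5000):
    p = np.full(L, 1.0 / L)
    for _ in range(iters):
        p = p @ T
    return p

# Independent "snapshots" drawn from the steady state of the true process.
theta_true = 0.7
samples = rng.choice(L, size=5000, p=steady_state(transition_matrix(theta_true)))
p_emp = np.bincount(samples, minlength=L) / len(samples)

def propagator_likelihood(theta):
    # propagate the empirical distribution one fictitious step forward,
    # then score the observed configurations against the propagated distribution
    q = p_emp @ transition_matrix(theta)
    return np.sum(p_emp * np.log(q + 1e-300))

thetas = np.linspace(0.05, 0.95, 181)
theta_hat = thetas[np.argmax([propagator_likelihood(t) for t in thetas])]
print("true theta:", theta_true, " inferred theta:", round(float(theta_hat), 3))
```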
Social Interactions under Incomplete Information: Games, Equilibria, and Expectations
NASA Astrophysics Data System (ADS)
Yang, Chao
My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information in not only the idiosyncratic shocks but also some exogenous covariates. For example, when analyzing peer effects in class performances, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash Equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which is unique when the parameters are within a reasonable range according to the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and the nested fixed point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the mis-specified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects. The second chapter, "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation when agents' behaviors are subjected to some binding restrictions. In an interesting empirical analysis of property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among nearby municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work about tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions such as that agents with the same observable characteristics play the same strategy.
This paper relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has been an open question for a long time. By the use of differential topology and functional analysis, it is found that when all exogenous characteristics are public information, there are a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and be approximated by a finite set. As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. From Monte Carlo experiments on two types of binary choice models, it is found that assuming equilibrium uniqueness can introduce estimation biases when the true value of interaction intensity is large and there are multiple equilibria in the data generating process.
NASA Technical Reports Server (NTRS)
Grams, G. W.; SHARDANAND
1972-01-01
The inherent errors of applying terrestrial atmospheric ozone distribution studies to the atmosphere of other planets are discussed. Limitations associated with some of the earlier treatments of photochemical equilibrium distributions of ozone in planetary atmospheres are described. A technique having more universal application is presented. Ozone concentration profiles for the Martian atmosphere based on the results of the Mariner 4 radio occultation experiment and the more recent results with Mariner 6 and Mariner 7 have been calculated using this approach.
A Linear City Model with Asymmetric Consumer Distribution
Azar, Ofer H.
2015-01-01
The article analyzes a linear-city model where the consumer distribution can be asymmetric, which is important because in real markets this distribution is often asymmetric. The model yields equilibrium price differences, even though the firms’ costs are equal and their locations are symmetric (at the two endpoints of the city). The equilibrium price difference is proportional to the transportation cost parameter and does not depend on the good's cost. The firms' markups are also proportional to the transportation cost. The two firms’ prices will be equal in equilibrium if and only if half of the consumers are located to the left of the city’s midpoint, even if other characteristics of the consumer distribution are highly asymmetric. An extension analyzes what happens when the firms have different costs and how the two sources of asymmetry – the consumer distribution and the cost per unit – interact together. The model can be useful as a tool for further development by other researchers interested in applying this simple yet flexible framework for the analysis of various topics. PMID:26034984
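A numerical sketch of a Hotelling-type linear city with an asymmetric consumer density, solving for equilibrium prices by iterated best responses on a price grid; the density, transport cost and unit cost below are illustrative stand-ins, not the article's specification.

```python
import numpy as np

# Linear city on [0, 1]: firm 1 at x = 0, firm 2 at x = 1, linear transport
# cost t, common unit cost c, asymmetric consumer density (all illustrative).
t, c = 1.0, 0.5
grid = np.linspace(0.0, 1.0, 2001)
raw = 0.5 + grid                               # density increasing toward firm 2
cum = np.concatenate(([0.0], np.cumsum(0.5 * (raw[1:] + raw[:-1]) * np.diff(grid))))
cdf = cum / cum[-1]

def F(x):                                      # consumer CDF on [0, 1]
    return np.interp(x, grid, cdf)

def demands(p1, p2):
    # consumer at x buys from firm 1 iff p1 + t*x <= p2 + t*(1 - x)
    x_hat = np.clip((p2 - p1 + t) / (2.0 * t), 0.0, 1.0)
    d1 = F(x_hat)
    return d1, 1.0 - d1

prices = np.linspace(c, c + 3.0 * t, 601)

def best_response(p_other, firm):
    if firm == 1:
        profit = [(p - c) * demands(p, p_other)[0] for p in prices]
    else:
        profit = [(p - c) * demands(p_other, p)[1] for p in prices]
    return prices[int(np.argmax(profit))]

p1, p2 = c + t, c + t
for _ in range(100):
    p1 = best_response(p2, firm=1)
    p2 = best_response(p1, firm=2)
print("equilibrium prices:", round(float(p1), 3), round(float(p2), 3))
```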
NASA Technical Reports Server (NTRS)
Sagan, C.
1973-01-01
Analysis of non-gray radiative equilibrium and gray convective equilibrium on Titan suggests that a massive molecular-hydrogen greenhouse effect may be responsible for the disagreement between the observed IR temperatures and the equilibrium temperature of an atmosphereless Titan. Calculations of convection indicate a probable minimum optical depth of 14 which corresponds to a molecular hydrogen shell of substantial thickness with total pressures of about 0.1 bar. It is suggested that there is an equilibrium between outgassing and blow-off on the one hand and accretion from the protons trapped in a hypothetical Saturnian magnetic field on the other, in the present atmosphere of Titan. It is believed that an outgassing equivalent to the volatilization of a few kilometers of subsurface ice is required to maintain the present blow-off rate without compensation for all geological time. The presence of an extensive hydrogen corona around Titan is postulated, with surface temperatures up to 200 K.
Brocklehurst, K
1979-01-01
To facilitate mechanistic interpretation of the kinetics of time-dependent inhibition of enzymes and of similar protein modification reactions, it is important to know when the equilibrium assumption may be applied to the model: formula: (see text). The conventional criterion of quasi-equilibrium, k+2 less than k-1, is not always easy to assess, particularly when k+2 cannot be separately determined. It is demonstrated that the condition k+2 less than k-1 is necessarily true, however, when the value of the apparent second-order rate constant for the modification reaction is much smaller than the value of k+1. Since k+1 is commonly at least 10^7 M^-1 s^-1 for substrates, it is probable that the equilibrium assumption may be properly applied to most irreversible inhibitions and modification reactions. PMID:518556
Discretized kinetic theory on scale-free networks
NASA Astrophysics Data System (ADS)
Bertotti, Maria Letizia; Modanese, Giovanni
2016-10-01
The network of interpersonal connections is one of the possible heterogeneous factors which affect the income distribution emerging from micro-to-macro economic models. In this paper we equip our model discussed in [1, 2] with a network structure. The model is based on a system of n differential equations of the kinetic discretized-Boltzmann kind. The network structure is incorporated in a probabilistic way, through the introduction of a link density P(α) and of correlation coefficients P(β|α), which give the conditional probability that an individual with α links is connected to one with β links. We study the properties of the equations and give analytical results concerning the existence, normalization and positivity of the solutions. For a fixed network with P(α) = c/α^q, we investigate numerically the dependence of the detailed and marginal equilibrium distributions on the initial conditions and on the exponent q. Our results are compatible with those obtained from the Bouchaud-Mézard model and from agent-based simulations, and provide additional information about the dependence of the individual income on the level of connectivity.
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.
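As a toy illustration of why efficiency acquires a broad distribution at finite time (not the paper's two-state engine or its large-deviation analysis): draw correlated Gaussian work and heat values and look at the statistics of their ratio. All moments below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over a finite cycle the extracted work W and absorbed heat Q fluctuate,
# so eta = W/Q is itself a random variable with heavy tails.
n = 1_000_000
mean = np.array([0.3, 1.0])                  # <W>, <Q> (illustrative)
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])
W, Q = rng.multivariate_normal(mean, cov, size=n).T
eta = W / Q

print("macroscopic efficiency <W>/<Q>:", mean[0] / mean[1])
print("median(eta):", round(float(np.median(eta)), 3))
print("1st / 99th percentiles of eta:", np.percentile(eta, [1, 99]).round(3))
print("fraction outside [0, 1]:", float(np.mean((eta < 0) | (eta > 1))))
```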
Model for calorimetric measurements in an open quantum system
NASA Astrophysics Data System (ADS)
Donvil, Brecht; Muratore-Ginanneschi, Paolo; Pekola, Jukka P.; Schwieger, Kay
2018-05-01
We investigate the experimental setup proposed in New J. Phys. 15, 115006 (2013), 10.1088/1367-2630/15/11/115006 for calorimetric measurements of thermodynamic indicators in an open quantum system. As a theoretical model we consider a periodically driven qubit coupled with a large yet finite electron reservoir, the calorimeter. The calorimeter is initially at equilibrium with an infinite phonon bath. As time elapses, the temperature of the calorimeter varies as a consequence of energy exchanges with the qubit and the phonon bath. We show how, under weak-coupling assumptions, the evolution of the qubit-calorimeter system can be described by a generalized quantum jump process including the temperature of the calorimeter as a dynamical variable. We study the jump process by numerical and analytical methods. Asymptotically with the duration of the drive, the qubit-calorimeter attains a steady state. In this same limit, we use multiscale perturbation theory to derive a Fokker-Planck equation governing the calorimeter temperature distribution. We investigate the properties of the temperature probability distribution close to and at the steady state. In particular, we predict the behavior of measurable statistical indicators versus the qubit-calorimeter coupling constant.
Economic inequality and mobility in kinetic models for social sciences
NASA Astrophysics Data System (ADS)
Letizia Bertotti, Maria; Modanese, Giovanni
2016-10-01
Statistical evaluations of the economic mobility of a society are more difficult than measurements of the income distribution, because they require following the evolution of the individuals' income for at least one or two generations. In micro-to-macro theoretical models of economic exchanges based on kinetic equations, the income distribution depends only on the asymptotic equilibrium solutions, while mobility estimates also involve the detailed structure of the transition probabilities of the model, and are thus an important tool for assessing its validity. Empirical data show a remarkably general negative correlation between economic inequality and mobility, whose explanation is still unclear. It is therefore particularly interesting to study this correlation in analytical models. In previous work we investigated the behavior of the Gini inequality index in kinetic models as a function of several parameters which define the binary interactions and the taxation and redistribution processes: saving propensity, taxation rates gap, tax evasion rate, welfare means-testing etc. Here, we check the correlation of mobility with inequality by analyzing the dependence of mobility on the same parameters. According to several numerical solutions, the correlation is confirmed to be negative.
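Since the Gini index is the inequality measure referred to here, a short self-contained implementation may be useful; the exponential test case (Gini close to 0.5) is a standard check, not a result from the paper.

```python
import numpy as np

def gini(incomes, weights=None):
    """Gini inequality index of a (possibly weighted) income sample,
    computed as 1 - 2 * (area under the Lorenz curve)."""
    x = np.asarray(incomes, dtype=float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(x)
    x, w = x[order], w[order]
    p = np.concatenate(([0.0], np.cumsum(w) / w.sum()))                 # population share
    lorenz = np.concatenate(([0.0], np.cumsum(x * w) / np.sum(x * w)))  # income share
    area = np.sum(0.5 * (lorenz[1:] + lorenz[:-1]) * np.diff(p))
    return 1.0 - 2.0 * area

rng = np.random.default_rng(0)
print(round(gini(rng.exponential(size=200_000)), 3))   # ~0.5 for an exponential income law
```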
Condition of Mechanical Equilibrium at the Phase Interface with Arbitrary Geometry
NASA Astrophysics Data System (ADS)
Zubkov, V. V.; Zubkova, A. V.
2017-09-01
The authors derived an expression for the mechanical equilibrium condition at the phase interface within the force definition of surface tension. This equilibrium condition is the most general one from the mathematical standpoint and takes into account the three-dimensional aspect of surface tension. Furthermore, the derived formula allows equilibrium to be described on the fractal surface of the interface. The authors used the fractional integral model of fractal distribution and took the fractional-order integrals over Euclidean space instead of integrating over the fractal set.
To predict the niche, model colonization and extinction
Yackulic, Charles B.; Nichols, James D.; Reid, Janice; Der, Ricky
2015-01-01
Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both because of ongoing invasions and because the distribution of suitable environmental conditions is always changing. This mismatch between the equilibrium assumptions inherent in many analyses and the disequilibrium conditions in the real world leads to inaccurate predictions of species' geographic distributions and suggests the need for theory and analytical tools that avoid equilibrium assumptions. Here, we develop a general theory of environmental associations during periods of transient dynamics. We show that time-invariant relationships between environmental conditions and rates of local colonization and extinction can produce substantial temporal variation in occupancy–environment relationships. We then estimate occupancy–environment relationships during three avian invasions. Changes in occupancy–environment relationships over time differ among species but are predicted by dynamic occupancy models. Since estimates of the occupancy–environment relationships themselves are frequently poor predictors of future occupancy patterns, research should increasingly focus on characterizing how rates of local colonization and extinction vary with environmental conditions.
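A minimal sketch of this point, assuming illustrative logistic links between a single environmental covariate and the colonization and extinction probabilities (none of the coefficients come from the study): iterating the occupancy recursion from a post-invasion starting state shows the apparent occupancy-environment relationship changing over time even though the colonization and extinction relationships are fixed.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sites along a standardized environmental gradient (values illustrative).
env = np.linspace(-2.0, 2.0, 201)

# Time-invariant colonization and extinction probabilities as functions of env.
gamma = logistic(-2.0 + 1.5 * env)        # colonization rises with env
eps = logistic(-1.0 - 1.0 * env)          # extinction falls with env

# Start far from equilibrium (recent invasion, almost all sites empty) and
# iterate the occupancy recursion psi' = psi (1 - eps) + (1 - psi) gamma.
psi = np.full_like(env, 0.01)
for year in range(1, 31):
    psi = psi * (1.0 - eps) + (1.0 - psi) * gamma
    if year in (1, 5, 30):
        slope = np.polyfit(env, np.log(psi / (1.0 - psi)), 1)[0]
        print(f"year {year:2d}: logit-occupancy slope along the gradient = {slope:.2f}")

# Equilibrium occupancy implied by the same rates: psi* = gamma / (gamma + eps),
# i.e. logit(psi*) = ln(gamma/eps); the transient slopes approach this value.
eq_slope = np.polyfit(env, np.log(gamma / eps), 1)[0]
print(f"equilibrium slope: {eq_slope:.2f}")
```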
How psychological framing affects economic market prices in the lab and field.
Sonnemann, Ulrich; Camerer, Colin F; Fox, Craig R; Langer, Thomas
2013-07-16
A fundamental debate in social sciences concerns how individual judgments and choices, resulting from psychological mechanisms, are manifested in collective economic behavior. Economists emphasize the capacity of markets to aggregate information distributed among traders into rational equilibrium prices. However, psychologists have identified pervasive and systematic biases in individual judgment that they generally assume will affect collective behavior. In particular, recent studies have found that judged likelihoods of possible events vary systematically with the way the entire event space is partitioned, with probabilities of each of N partitioned events biased toward 1/N. Thus, combining events into a common partition lowers perceived probability, and unpacking events into separate partitions increases their perceived probability. We look for evidence of such bias in various prediction markets, in which prices can be interpreted as probabilities of upcoming events. In two highly controlled experimental studies, we find clear evidence of partition dependence in a 2-h laboratory experiment and a field experiment on National Basketball Association (NBA) and Federation Internationale de Football Association (FIFA World Cup) sports events spanning several weeks. We also find evidence consistent with partition dependence in nonexperimental field data from prediction markets for economic derivatives (guessing the values of important macroeconomic statistics) and horse races. Results in any one of the studies might be explained by a specialized alternative theory, but no alternative theories can explain the results of all four studies. We conclude that psychological biases in individual judgment can affect market prices, and understanding those effects requires combining a variety of methods from psychology and economics.
Pore Size Distributions Inferred from Modified Inversion Percolation Modeling of Drainage Curves
NASA Astrophysics Data System (ADS)
Dralus, D. E.; Wang, H. F.; Strand, T. E.; Glass, R. J.; Detwiler, R. L.
2005-12-01
Experiments have been conducted on drainage in sand packs. At equilibrium, the interface between the fluids forms a saturation transition fringe where the saturation decreases monotonically with height. This behavior was observed in a 1-inch thick pack of 20-30 sand contained front and back within two thin, 12-inch-by-24-inch glass plates. The translucent chamber was illuminated from behind by a bank of fluorescent bulbs. Acquired data were in the form of images captured by a CCD camera with resolution on the grain scale. The measured intensity of the transmitted light was used to calculate the average saturation at each point in the chamber. This study used a modified invasion percolation (MIP) model to simulate the drainage experiments to evaluate the relationship between the saturation-versus-height curve at equilibrium and the pore size distribution associated with the granular medium. The simplest interpretation of a drainage curve is in terms of a distribution of capillary tubes whose radii reproduce the observed distribution of rise heights. However, this apparent radius distribution obtained from direct inversion of the saturation profile did not yield the assumed radius distribution. Further investigation demonstrated that the equilibrium height distribution is controlled primarily by the Bond number (ratio of gravity to capillary forces) with some influence from the width of the pore radius distribution. The width of the equilibrium fringe is quantified in terms of the ratio of Bond number to the standard deviation of the pore throat distribution. The normalized saturation-vs-height curves exhibit a power-law scaling behavior consistent with both Brooks-Corey and Van Genuchten type curves. Fundamental tenets of percolation theory were used to quantify the relationship between the apparent and actual radius distributions as a function of the mean coordination number and of the ratio of Bond number to standard deviation, which was supported by both MIP simulations and corresponding drainage experiments.
Krop, H B; van Noort, P C; Govers, H A
2001-01-01
Literature data on the equilibrium constant for distribution between dissolved organic carbon (DOC) and water (Kdoc) for strongly hydrophobic organic contaminants were collected and critically analyzed. About 900 Kdoc entries for experimental values were retrieved and tabulated, including those factors that can influence them. In addition, quantitative structure-activity relationship (QSAR) prediction equations were retrieved and tabulated. Whether a partition or association process between the contaminant and DOC takes place could not be fully established, but indications toward an association process are strong in several cases. Equilibrium between a contaminant and DOC in solution was shown to be achieved within a minute. When the equilibrium shifted in time, this was caused by either a physical or chemical change of the DOC, affecting the lighter fractions most. Adsorption isotherms turned out to be linear in the contaminant concentration for the relevant DOC concentration up to 100 mg of C/L. Eighteen experimental methods have been developed for the determination of the pertinent distribution constant. Experimental Kdoc values revealed the expected high correlation with partition coefficients over n-octanol and water (Kow) for all experimental methods, except for the HPLC and apparent solubility (AS) method. Only fluorescence quenching (FQ) and solid-phase microextraction (SPME) methods could quantify fast equilibration. Only 21% of the experimental values had a 95% confidence interval, which was statistically significantly different from zero. Variation in Kdoc values was shown to be high, caused mainly by the large variation of DOC in water samples. Even DOC from one sample gave different equilibrium constants for different DOC fractions. Measured Kdoc values should, therefore, be regarded as average values. Kdoc was shown to increase on increasing molecular mass, indicating that the molecular mass distribution is a proper normalization function for the average Kdoc at the current state of knowledge. The weakly bound fraction could easily be desorbed when other adsorbing media, such as a SepPak column or living organism, are present. The amount that moves from the DOC to the other medium will depend, among other reasons, on the size of the labile DOC fraction and the equilibrium constant of the other medium. Variation of Kdoc with temperature turned out to be small, probably caused by a small enthalpy of transfer from water to DOC. Ionic strength turned out to be more important, leading to changes of a factor of 2-5. The direction of this effect depends on the type of ion. With respect to QSAR relationships between Kdoc and macroscopic or molecular descriptors, it was concluded that only a small number of equations are available in the literature, for apolar compounds only, and with poor statistics and predictive power. Therefore, a first requirement is the improvement of the availability and quality of experimental data. Along with this, theoretical (mechanistic) models for the relationship between DOC plus contaminant descriptors on the one side and Kdoc on the other should be further developed. Correlations between Kdoc and Kow and those between the soil-water partition constant (Koc) and Kow were significantly different only in the case of natural aquatic DOC, pointing at substantial differences between these two types of organic material and at a high correspondence for other types of commercial and natural DOC.
Thermodynamic modeling using BINGO-ANTIDOTE: A new strategy to investigate metamorphic rocks
NASA Astrophysics Data System (ADS)
Lanari, Pierre; Duesterhoeft, Erik
2016-04-01
BINGO-ANTIDOTE is a new program, combining the achievements of the two petrological software packages XMAPTOOLS[1] and THERIAK-DOMINO[2]. XMAPTOOLS affords information about compositional zoning in minerals and the local bulk composition of domains at the thin-section scale. THERIAK-DOMINO calculates equilibrium phase assemblages from a given bulk rock composition, temperature T and pressure P. Primarily, BINGO-ANTIDOTE can be described as an inverse THERIAK-DOMINO, because it uses the information provided by XMAPTOOLS to calculate the probable P-T equilibrium conditions of metamorphic rocks. Consequently, the introduced program combines the strengths of forward Gibbs free energy minimization models with the intuitive output of inverse thermobarometry models. In order to get "best" P-T equilibrium conditions of a metamorphic rock sample and thus estimate the degree of agreement between the observed and calculated mineral assemblage, it is critical to define a reliable scoring strategy. BINGO uses the THERIAKD ADD-ON[3] (Duesterhoeft and de Capitani, 2013) and is a flexible model scorer with 3+1 evaluation criteria. These criteria are the statistical agreement between the observed and calculated mineral-assemblage, -proportions (vol%) and -composition (mol). Additionally, a total likelihood, consisting of the first three criteria, allows the user an evaluation of the most probable equilibrium P-T condition. ANTIDOTE is an interactive user interface, displaying the 3+1 evaluation criteria as probability P-T-maps. It can be used with and without XMAPTOOLS. As a stand-alone program, the user is able to give the program macroscopic observations (i.e., mineral names and proportions), which ANTIDOTE converts to a readable BINGO input. In this manner, the use of BINGO-ANTIDOTE opens up thermodynamics to students and people with only a basic knowledge of phase diagrams and thermodynamic modeling techniques. This presentation introduces BINGO-ANTIDOTE and includes typical examples of its functionality, such as the determination of P-T conditions of high-grade rocks. BINGO-ANTIDOTE is still under development and will soon be freely available online. References: [1] Lanari P., Vidal O., De Andrade V., Dubacq B., Lewin E., Grosch E. G. and Schwartz S. (2013) XMapTools: a MATLAB©-based program for electron microprobe X-ray image processing and geothermobarometry. Comput. Geosci. 62, 227-240. [2] de Capitani C. and Petrakakis K. (2010) The computation of equilibrium assemblage diagrams with Theriak/Domino software. Am. Mineral. 95, 1006-1016. [3] Duesterhoeft E. and de Capitani C. (2013) Theriak_D: An add-on to implement equilibrium computations in geodynamic models. Geochem. Geophys. Geosyst. 14, 4962-4967.
Non-equilibrium thermionic electron emission for metals at high temperatures
NASA Astrophysics Data System (ADS)
Domenech-Garret, J. L.; Tierno, S. P.; Conde, L.
2015-08-01
Stationary thermionic electron emission currents from heated metals are compared against an analytical expression derived using a non-equilibrium quantum kappa energy distribution for the electrons. The latter depends on the parameter κ(T), which decreases with increasing temperature, can be estimated from raw experimental data, and characterizes the departure of the electron energy spectrum from equilibrium Fermi-Dirac statistics. The calculations accurately predict the measured thermionic emission currents for both high and moderate temperature ranges. The Richardson-Dushman law governs electron emission for large values of kappa or, equivalently, moderate metal temperatures. The high energy tail in the electron energy distribution function that develops at higher temperatures or lower kappa values increases the emission currents well over the predictions of the classical expression. This also permits the quantitative estimation of the departure of the metal electrons from the equilibrium Fermi-Dirac statistics.
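For reference, the classical Richardson-Dushman law that the kappa-based expression reduces to at large κ (moderate metal temperatures) can be written as below; W denotes the work function and A_0 the Richardson constant, standard symbols quoted from general knowledge rather than from the paper:
    J_{\mathrm{RD}} = A_0\,T^2 \exp\!\left(-\frac{W}{k_B T}\right), \qquad A_0 = \frac{4\pi m_e e k_B^2}{h^3} \approx 1.2\times 10^{6}\ \mathrm{A\,m^{-2}\,K^{-2}}.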
Jeffrey, P D; Nichol, L W; Smith, G D
1975-01-25
A method is presented by which an experimental record of total concentration as a function of radial distance, obtained in a sedimentation equilibrium experiment conducted with a noninteracting mixture in the absence of a density gradient, may be analyzed to obtain the unimodal distributions of molecular weight and of partial molar volume when these vary concomitantly and continuously. Particular attention is given to the characterization of classes of lipoproteins exhibiting Gaussian distributions of these quantities, although the analysis is applicable to other types of unimodal distribution. Equations are also formulated permitting the definition of the corresponding distributions of partial specific volume and of density. The analysis procedure is based on a method (employing Laplace transforms) developed previously, but differs from it in that it avoids the necessity of differentiating experimental results, which introduces error. The method offers certain advantages over other procedures used to characterize and compare lipoprotein samples (exhibiting unimodal distributions) with regard to the duration of the experiment, economy of the sample, and, particularly, the ability to define in principle all of the relevant distributions from one sedimentation equilibrium experiment and an external measurement of the weight average partial specific volume. These points and the steps in the analysis procedure are illustrated with experimental results obtained in the sedimentation equilibrium of a sample of human serum low density lipoprotein. The experimental parameters (such as solution density, column height, and angular velocity) used in the conduct of these experiments were selected on the basis of computer-simulated examples, which are also presented. These provide a guide for other workers interested in characterizing lipoproteins of this class.
Identifying apparent local stable isotope equilibrium in a complex non-equilibrium system.
He, Yuyang; Cao, Xiaobin; Wang, Jianwei; Bao, Huiming
2018-02-28
Although being out of equilibrium, biomolecules in organisms have the potential to approach isotope equilibrium locally because enzymatic reactions are intrinsically reversible. A rigorous approach that can describe isotope distribution among biomolecules and their apparent deviation from equilibrium state is lacking, however. Applying the concept of distance matrix in graph theory, we propose that apparent local isotope equilibrium among a subset of biomolecules can be assessed using an apparent fractionation difference (|Δα|) matrix, in which the differences between the observed isotope composition (δ') and the calculated equilibrium fractionation factor (1000lnβ) can be more rigorously evaluated than by using a previous approach for multiple biomolecules. We tested our |Δα| matrix approach by re-analyzing published data of different amino acids (AAs) in potato and in green alga. Our re-analysis shows that biosynthesis pathways could be the reason for an apparently close-to-equilibrium relationship inside AA families in potato leaves. Different biosynthesis/degradation pathways in tubers may have led to the observed isotope distribution difference between potato leaves and tubers. The analysis of data from green algae does not support the conclusion that AAs are further from equilibrium in glucose-cultured green algae than in the autotrophic ones. Application of the |Δα| matrix can help us to locate potential reversible reactions or reaction networks in a complex system such as a metabolic system. The same approach can be broadly applied to all complex systems that have multiple components, e.g. geochemical or atmospheric systems of early Earth or other planets. Copyright © 2017 John Wiley & Sons, Ltd.
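A minimal sketch of the apparent-fractionation-difference matrix described above, assuming |Δα| is taken pairwise as the absolute difference between the observed δ' offset and the calculated equilibrium 1000lnβ offset for each pair of compounds; the exact sign convention and per-mil scaling are assumptions, not taken from the paper.
    import numpy as np

    def delta_alpha_matrix(delta_prime, ln_beta_1000):
        """Pairwise |Δα| values for a set of compounds.

        delta_prime  : observed isotope compositions δ' (per mil)
        ln_beta_1000 : calculated equilibrium 1000*ln(β) values (per mil)
        Small entries suggest an apparently close-to-equilibrium pair,
        large entries a clear departure from equilibrium.
        """
        d = np.asarray(delta_prime, dtype=float)
        b = np.asarray(ln_beta_1000, dtype=float)
        observed = d[:, None] - d[None, :]   # observed pairwise offsets
        equilib = b[:, None] - b[None, :]    # equilibrium pairwise offsets
        return np.abs(observed - equilib)

    # toy example with three hypothetical amino acids
    print(delta_alpha_matrix([10.0, 12.5, 30.0], [20.0, 22.0, 25.0]))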
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data in the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide design parameters close to the actual sea state for ocean platform design.
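A minimal sketch of the bivariate Gumbel-Hougaard construction mentioned above, using hypothetical Weibull marginals for wave height and wind speed; the parameter values and marginals are illustrative only (the paper selects Pearson Type III), and the trivariate extension follows the same pattern.
    import numpy as np
    from scipy import stats

    def gumbel_hougaard_cdf(u, v, theta):
        """Bivariate Gumbel-Hougaard copula C(u, v; theta), theta >= 1."""
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    # hypothetical marginals, purely to keep the sketch short
    hs = stats.weibull_min(c=1.8, scale=2.5)   # significant wave height (m)
    ws = stats.weibull_min(c=2.2, scale=9.0)   # wind speed (m/s)

    h, w = 4.0, 15.0                           # a candidate design combination
    joint_cdf = gumbel_hougaard_cdf(hs.cdf(h), ws.cdf(w), theta=2.0)
    joint_exceedance = 1.0 - hs.cdf(h) - ws.cdf(w) + joint_cdf   # P(H > h and W > w)
    print(joint_cdf, joint_exceedance)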
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional distributions are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.
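For independent components, the joint probability generating function factorizes into the one-dimensional generating functions, which is presumably the identity such constructions rely on (a standard relation, not a claim about the paper's specific distributions):
    G_{\mathbf{X}}(z_1,\dots,z_k) = \mathbb{E}\!\left[\prod_{i=1}^{k} z_i^{X_i}\right] = \prod_{i=1}^{k} G_{X_i}(z_i), \qquad X_1,\dots,X_k\ \text{independent}.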
NASA Astrophysics Data System (ADS)
Rydalevskaya, Maria A.; Voroshilova, Yulia N.
2018-05-01
Vibrationally non-equilibrium flows of chemically homogeneous diatomic gases are considered under the conditions that the distribution of the molecules over vibrational levels differs significantly from the Boltzmann distribution. In such flows, molecular collisions can be divided into two groups: the first group corresponds to "rapid" microscopic processes whereas the second one corresponds to "slow" microscopic processes (their rate is comparable to or larger than the rate of variation of the gasdynamic parameters). The collisions of the first group form quasi-stationary vibrationally non-equilibrium distribution functions. The model kinetic equations are used to study the transport processes under these conditions. In these equations, the BGK-type approximation is used to model only the collision operators of the first group. This allows us to simplify the derivation of the transport fluxes and the calculation of the kinetic coefficients. Special attention is given to the connection between the formulae for the bulk viscosity coefficient and the square of the sound velocity.
NASA Astrophysics Data System (ADS)
Xu, Dazhi; Cao, Jianshu
2016-08-01
The concept of the polaron, which emerged from condensed matter physics, describes the dynamical interaction of a moving particle with its surrounding bosonic modes. This concept has been developed into a useful method to treat open quantum systems with a complete range of system-bath coupling strength. In particular, the polaron transformation approach shows its validity in the intermediate coupling regime, in which the Redfield equation or Fermi's golden rule will fail. In the polaron frame, the equilibrium distribution obtained by perturbative expansion presents a deviation from the canonical distribution, which is beyond the usual weak coupling assumption in thermodynamics. A polaron transformed Redfield equation (PTRE) not only reproduces the dissipative quantum dynamics but also provides an accurate and efficient way to calculate the non-equilibrium steady states. Applications of the PTRE approach to problems such as exciton diffusion, heat transport and light-harvesting energy transfer are presented.
Evidence for a Late Reducing Event in IAB-Silicate Inclusions
NASA Astrophysics Data System (ADS)
Seckendorff, V. V.; O'Neill, H. St. C.; Zipfel, J.; Palme, H.
1992-07-01
Coexisting orthopyroxene (opx) and olivine (ol) in silicate inclusions of IAB-iron meteorites have different Fe/(Fe+Mg) ratios. Ferrosilite (fs) contents of opx are higher than fayalite contents (fa) of ol (e.g., Bunch and Keil 1970). Non-ideal solid solution of fs in opx and/or fa in ol is generally assumed. We reinvestigated the equilibrium Fe-Mg distribution between coexisting ol+opx in the system MgO-FeO-SiO2 (von Seckendorff and O'Neill 1992). Reversal experiments at high-Mg compositions were performed from 900 to 1600 degrees C at 16 and 20 kbar using a barium borosilicate flux. The data could be fitted to a simple thermodynamic model with ol and opx treated as regular solutions and this model was found to describe satisfactorily the literature data extending down to 700 degrees C. For Fe/(Fe+Mg) between 0.05 and 0.15 we find KD^ol-opx close to one from 1600 to 700 degrees C, virtually independent of pressure and temperature. Fig. 1 shows experimental results at the Mg-rich end. Error bars mark 1-sigma standard deviations. Ol is in all cases more Fe-rich than coexisting opx, except for a single run at 1000 degrees C that probably did not reach equilibrium because of slow reaction kinetics. Two calculated distribution curves (1300, 700 degrees C at 16 kbar) lie close together indicating the absence of any significant temperature dependence of the exchange reaction at the Mg-rich end of the system. IAB-silicate inclusions plot outside the range of experimental data (Fig. 1). Although some previous models for Fe-Mg exchange between ol and opx (e.g., Sack 1980) extrapolate to KD<1 at temperatures near 500 degrees C, such models reproduce the experimental data (700 to 1600 degrees C) less well than our updated model. In addition, temperatures at 500 degrees C are probably too low to allow Fe diffusion in opx. Two-pyroxene equilibration temperatures of IAB-silicate inclusions are around 900-1000 degrees C suggesting a similar closure temperature for Fe diffusion in opx. Because of this and because of the essentially temperature-independent Fe-Mg distribution between ol and opx from 1600 to 700 degrees C, we conclude that the Fe-Mg distribution between ol and opx in IAB-silicate inclusions does not reflect thermodynamic equilibrium. As Fe-diffusion in ol is faster than in opx, redistribution of Fe in ol should have occurred at a temperature below the closure temperature for Fe-diffusion in opx. We suggest that FeO in ol was reduced to Fe metal by some species such as C, P, S, etc. A lower limit for the temperature of the reducing event is provided by Ca-zoning in ol, which develops below 650 degrees C (Kohler et al. 1991). Since strong FeO zoning in ol is absent, reduction of FeO in ol should have occurred above 650 degrees C, assuming similar diffusion coefficients for Ca and Fe in ol. References: Bunch T.E. and Keil K. (1970) Contrib. Mineral. Petrol. 25, 297-340. Kohler T., Palme H. and Brey G. (1991) N. Jb. Miner. Mh. 9, 423-431. Sack R.O. (1980) Contrib. Mineral. Petrol. 71, 257-269. v. Seckendorff V. and O'Neill H.St.C. (1992) Contr. Min. Petrol. (submitted).
NASA Astrophysics Data System (ADS)
Tovbin, Yu. K.
2018-06-01
An analysis is presented of one of the key concepts of physical chemistry of condensed phases: the self-consistency of the theory in describing the rates of elementary stages of reversible processes and the equilibrium distribution of components in a reaction mixture. It posits that, by equating the rates of the forward and backward reactions, we must obtain the same equation for the equilibrium distribution of reaction mixture components that follows directly from equilibrium theory. Ideal reaction systems always have this property, since the theory is of a one-particle character. Problems arise in considering interparticle interactions responsible for the nonideal behavior of real systems. The Eyring and Temkin approaches to describing nonideal reaction systems are compared. Conditions for the self-consistency of the theory for mono- and bimolecular processes for different types of interparticle potentials, the degree of deviation from the equilibrium state, allowing for the internal motions of molecules in condensed phases, and the electronic polarization of the reagent environment are considered within the lattice gas model. The inapplicability of the concept of an activated complex coefficient for reaching self-consistency is demonstrated. It is also shown that one-particle approximations for considering intermolecular interactions do not provide a self-consistent theory for condensed phases. We must at a minimum consider short-range order correlations.
Triplet correlation in sheared suspensions of Brownian particles
NASA Astrophysics Data System (ADS)
Yurkovetsky, Yevgeny; Morris, Jeffrey F.
2006-05-01
Triplet microstructure of sheared concentrated suspensions of Brownian monodisperse spherical particles is studied by sampling realizations of a three-dimensional unit cell subject to periodic boundary conditions obtained in accelerated Stokesian dynamics simulations. Triplets are regarded as a bridge between particle pairs and many-particle clusters thought responsible for shear thickening. Triplet-correlation data for weakly sheared near-equilibrium systems display an excluded volume effect of accumulated correlation for equilateral contacting triplets. As the Péclet number increases, there is a change in the preferred contacting isosceles triplet configuration, away from the "closed" triplet where the particles lie at the vertices of an equilateral triangle and toward the fully extended rod-like linear arrangement termed the "open" triplet. This transition is most pronounced for triplets lying in the plane of shear, where the open triplets' angular orientation with respect to the flow is very similar to that of a contacting pair. The correlation of suspension rheology to observed structure signals onset of larger clusters. An investigation of the predictive ability of Kirkwood's superposition approximation (KSA) provides valuable insights into the relationship between the pair and triplet probability distributions and helps achieve a better and more detailed understanding of the interplay of the pair and triplet dynamics. The KSA is seen more successfully to predict the shape of isosceles contacting triplet nonequilibrium distributions in the plane of shear than for similar configurations in equilibrium hard-sphere systems; in the sheared case, the discrepancies in magnitudes of distribution peaks are attributable to two interaction effects when pair average trajectories and locations of particles change in response to real, or "hard," and probabilistically favored ("soft") neighboring excluded volumes and, in the case of open triplets, due to changes in the correlation of the farthest separated pair caused by the fixed presence of the particle in the middle.
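For reference, the Kirkwood superposition approximation tested above writes the triplet distribution as a product of pair distributions (standard form; the simulation-specific normalization is not reproduced here):
    g^{(3)}(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3) \approx g^{(2)}(r_{12})\, g^{(2)}(r_{13})\, g^{(2)}(r_{23}).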
James, O.B.; Floss, C.; McGee, J.J.
2002-01-01
We present results of a secondary ion mass spectrometry study of the rare earth elements (REEs) in the minerals of two samples of lunar ferroan anorthosite, and the results are applicable to studies of REEs in all igneous rocks, no matter what their planet of origin. Our pyroxene analyses are used to determine solid-solid REE distribution coefficients (D = CREE in low-Ca pyroxene/CREE in augite) in orthopyroxene-augite pairs derived by inversion of pigeonite. Our data and predictions from crystal-chemical considerations indicate that as primary pigeonite inverts to orthopyroxene plus augite and subsolidus reequilibration proceeds, the solid-solid Ds for orthopyroxene-augite pairs progressively decrease for all REEs; the decrease is greatest for the LREEs. The REE pattern of solid-solid Ds for inversion-derived pyroxene pairs is close to a straight line for Sm-Lu and turns upward for REEs lighter than Sm; the shape of this pattern is predicted by the shapes of the REE patterns for the individual minerals. Equilibrium liquids calculated for one sample from the compositions of primary phases, using measured or experimentally determined solid-liquid Ds, have chondrite-normalized REE patterns that are very slightly enriched in LREEs. The plagioclase equilibrium liquid is overall less rich in REEs than pyroxene equilibrium liquids, and the discrepancy probably arises because the calculated plagioclase equilibrium liquid represents a liquid earlier in the fractionation sequence than the pyroxene equilibrium liquids. "Equilibrium" liquids calculated from the compositions of inversion-derived pyroxenes or orthopyroxene derived by reaction of olivine are LREE depleted (in some cases substantially) in comparison with equilibrium liquids calculated from the compositions of primary phases. These discrepancies arise because the inversion-derived and reaction-derived pyroxenes did not crystallize directly from liquid, and the use of solid-liquid Ds is inappropriate. The LREE depletion of the calculated liquids is a relic of formation of these phases from primary LREE-depleted minerals. Thus, if one attempts to calculate the compositions of equilibrium liquids from pyroxene compositions, it is important to establish that the pyroxenes are primary. In addition, our data suggest that experimental studies have underestimated solid-liquid Ds for REEs in pigeonite and that REE contents of liquids calculated using these Ds are overestimates. Our results have implications for Sm-Nd age studies. Our work shows that if pigeonite inversion and/or subsolidus reequilibration between augite and orthopyroxene occurred significantly after crystallization, and if pyroxene separates isolated for Sm-Nd studies do not have the bulk composition of the primary pyroxenes, then the Sm-Nd isochron age and εNd will be in error. Copyright © 2002 Elsevier Science Ltd.
Non-equilibrium many-body dynamics following a quantum quench
NASA Astrophysics Data System (ADS)
Vyas, Manan
2017-12-01
We study analytically and numerically the non-equilibrium dynamics of an isolated interacting many-body quantum system following a random quench. We model the system Hamiltonian by Embedded Gaussian Orthogonal Ensemble (EGOE) of random matrices with one plus few-body interactions for fermions. EGOE are paradigmatic models to study the crossover from integrability to chaos in interacting many-body quantum systems. We obtain a generic formulation, based on spectral variances, for describing relaxation dynamics of survival probabilities as a function of rank of interactions. Our analytical results are in good agreement with numerics.
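The relaxation dynamics above is tracked through the survival probability of the initial state after the quench; in standard notation (an assumption about the paper's conventions, with ħ = 1) it reads
    F(t) = \bigl|\langle \psi(0)\,|\,e^{-iHt}\,|\,\psi(0)\rangle\bigr|^{2} = \Bigl|\sum_{\alpha} |c_\alpha|^{2}\, e^{-iE_\alpha t}\Bigr|^{2}, \qquad c_\alpha = \langle \alpha|\psi(0)\rangle,
where H is the post-quench Hamiltonian and |α⟩ are its eigenstates with energies E_α.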
NASA Astrophysics Data System (ADS)
Polunin, Pavel M.
In this work we consider several nonlinearity-based and/or noise-related phenomena that have been recently observed in micro-electromechanical vibratory systems. The main goals are to closely examine these phenomena, develop an understanding of their underlying physics, derive techniques for characterizing parameters in relevant mathematical models, and determine ways to improve the performance of specific classes of micro-electromechanical systems (MEMS) used in applications. The general perspective of this work is based on the fact that nonlinearity and noise represent integral parts of the models needed to describe the response of these systems, and the focus is on situations where these generally undesirable features can be utilized or accounted for in design. We consider three different, but related, topics in this general area. The first topic uses the slowly varying states in a rotating frame of reference where we analyze the stationary probability distribution of a nonlinear parametrically-driven resonator subjected to Poisson pulses and thermal noise. We show that Poisson pulses with low pulse rates, as compared with the resonator decay rate, cause a power-law divergence of the probability density at the resonator equilibrium in the underdamped (overdamped) regime, in which the response does (does not) spiral in the rotating frame. We have also found that the shape of the probability distribution away from the equilibrium position is qualitatively different for the overdamped and underdamped cases. In particular, in the overdamped regime, the form of the secondary singularity in the probability distribution depends strongly on the reference phase of the resonator response and the pulse modulation phase, while in the underdamped regime several singular peaks occur in the distribution, and their locations are determined by the resonator frequency and decay rate in the rotating frame. Finally, we show that even weak Gaussian noise smoothens out the singular peaks in the probability distribution. The theoretical results are successfully compared with experimental results obtained from collaborators at the Hong Kong University of Science and Technology. Second, we discuss a time-domain technique for characterizing parameters for models that describe the response of a single vibrational mode of micromechanical resonators with symmetric restoring and damping forces. These parameters include coefficients of conservative and dissipative linear and nonlinear terms, as well as the strengths of various noise sources acting on the mode of interest. The method relies on measurements taken during a ringdown response, that is, free vibration, in which the nonlinearities result in an amplitude-dependent frequency and a non-exponential decay of the amplitude, while noise sources cause fluctuations in the resonator amplitude and phase. Analysis of the amplitude of the ringdown response allows one to estimate the quality factor and the dissipative nonlinearity, and the zero-crossing points in the ringdown measurement can be used to characterize the linear natural frequency and the cubic and quintic nonlinearities of the vibrational mode, which typically arise from a combination of mechanical and electrostatic effects. Additionally, we develop and demonstrate a statistical analysis of the zero-crossing points in the resonator response that allows one to separate the effects of additive, multiplicative, and measurement noises and estimate their corresponding intensities.
These characterization methods are demonstrated using experimental measurements obtained from collaborators at Stanford University. Finally, we examine the problem of self-induced parametric amplification in ring/disk resonating gyroscopes. We model the dynamics of these gyroscopes by considering flexural (elliptical) vibrations of a thin elastic ring subjected to electrostatic transduction and show that the parametric amplification arises naturally from nonlinear intermodal coupling between the drive and sense modes of the gyroscope. Analysis shows that this coupling results in a substantial increase in the sensitivity of the gyroscope to the external angular rate. This improvement in the gyroscope performance depends strongly on both the modal coupling strength and the operating point of the gyroscope, features which depend on details of nonlinear kinematics of, and forces acting on, the ring. Using the results from this model, we explore ways to enhance the amplification effect by changing the shape of the resonator body and attendant electrodes, and by electrostatic tuning. These results suggest new designs for ring gyros, and a general approach for other geometries, such as disk-resonator-gyros (DRGs), that should offer significant improvements in device sensitivity.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
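A minimal sketch contrasting the classical Kolmogorov-Smirnov statistic with the small-probability idea described above: draws are flagged as suspicious when little probability mass lies below their own density value. The reference density, threshold, and Monte Carlo estimate are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    draws = rng.normal(loc=0.0, scale=1.0, size=500)

    # classical Kolmogorov-Smirnov test against a hypothesized distribution
    hypo = stats.norm(loc=0.0, scale=1.3)
    res = stats.kstest(draws, hypo.cdf)

    # density-based check: how much probability mass has density below that of
    # the least likely draw?  (Monte Carlo estimate of P[f(X) <= f(x_min)])
    least_likely = draws[np.argmin(hypo.pdf(draws))]
    reference = hypo.rvs(size=200_000, random_state=1)
    mass_below = np.mean(hypo.pdf(reference) <= hypo.pdf(least_likely))

    print(res.statistic, res.pvalue, mass_below)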
Thermal equilibrium and statistical thermometers in special relativity.
Cubero, David; Casado-Pascual, Jesús; Dunkel, Jörn; Talkner, Peter; Hänggi, Peter
2007-10-26
There is an intense debate in the recent literature about the correct generalization of Maxwell's velocity distribution in special relativity. The most frequently discussed candidate distributions include the Jüttner function as well as modifications thereof. Here we report results from fully relativistic one-dimensional molecular dynamics simulations that resolve the ambiguity. The numerical evidence unequivocally favors the Jüttner distribution. Moreover, our simulations illustrate that the concept of "thermal equilibrium" extends naturally to special relativity only if a many-particle system is spatially confined. They make evident that "temperature" can be statistically defined and measured in an observer frame independent way.
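For reference, the one-dimensional Jüttner momentum distribution favored by these simulations has the form below; the normalization via the modified Bessel function K_1 is the standard textbook result, quoted from general knowledge rather than from the paper:
    f(p) = \frac{1}{2\,m c\,K_1(mc^2/k_B T)}\,\exp\!\left(-\frac{\sqrt{p^2 c^2 + m^2 c^4}}{k_B T}\right) \quad \text{(one spatial dimension)}.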
NASA Astrophysics Data System (ADS)
Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.
2016-12-01
Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor-field-effect-transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli-blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate the otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical Si FinFETs despite higher thermal velocities in In0.53Ga0.47As. It also may be possible to extend these basic uses of QCPs, however calculated, to still more computationally efficient drift-diffusion and hydrodynamic simulations, and the basic concepts even to compact device modeling.
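A minimal sketch of the final-state Pauli-blocking step described above, assuming the local occupation probability has already been tabulated on an (energy bin, valley) grid from the sampled carrier population; the names and the acceptance rule "accept with probability 1 - f" are illustrative, not the simulator's actual interface.
    import numpy as np

    rng = np.random.default_rng(42)

    def pauli_block(final_energy, final_valley, f_occ, e_edges):
        """Accept or reject a scattering event based on final-state occupancy.

        f_occ   : array of shape (n_valleys, n_energy_bins) with local
                  occupation probabilities estimated from the carrier ensemble
        e_edges : energy-bin edges (eV) used to build f_occ
        Returns True if the transition is accepted (not Pauli blocked).
        """
        bin_index = np.searchsorted(e_edges, final_energy) - 1
        bin_index = np.clip(bin_index, 0, f_occ.shape[1] - 1)
        return rng.random() < 1.0 - f_occ[final_valley, bin_index]

    # toy occupation table: two valleys, ten 50-meV bins, nearly full at low energy
    e_edges = np.linspace(0.0, 0.5, 11)
    f_occ = np.vstack([np.linspace(0.95, 0.05, 10), np.linspace(0.6, 0.01, 10)])
    print(pauli_block(0.07, 0, f_occ, e_edges), pauli_block(0.45, 1, f_occ, e_edges))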
High-order regularization in lattice-Boltzmann equations
NASA Astrophysics Data System (ADS)
Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.
2017-04-01
A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.
Theoretical Transport Studies of Non-equilibrium Carriers Driven by High Electric Fields
2012-04-25
for two different types of confinement. Motivated by our desire to understand scattering processes in quantum wires in a simple way, in the final...Π's are probability propagators. The probability propagators can be found, for example, by solving a Master equation if the motion is fully incoherent...shown that when the transport is coherent (i.e. there are no phase-breaking scattering processes), the current in the conductor is related to the
Non-equilibrium populations of hydrogen in high-redshift galaxies
NASA Astrophysics Data System (ADS)
Pomerantz, Brian B.; Redmond, Kayla; Strelnitski, Vladimir
2014-07-01
We investigate the possibility of maser amplification in hydrogen recombination lines from galaxies of the first generation, at z ≲ 30. Combining analytical and computational approaches, we show that the transitions between the hydrogen Rydberg energy levels induced by the radiation from the ionizing star and by the (warmer than at present) cosmic microwave background can produce noticeable differences in the population distribution, as compared with previous computations for contemporary H+ regions, most of which ignored the processes induced by the ionizing star's radiation. In particular, the low (n ≲ 30) α-transitions show an increased tendency towards population inversion when ionization of the H+ region is caused by a very hot star at high redshift. The resulting maser/laser amplification can increase the brightness of the emitted lines and make them detectable. However, the limiting effects of maser saturation will probably not allow maser gains to exceed one or two orders of magnitude.
Dissipative open systems theory as a foundation for the thermodynamics of linear systems.
Delvenne, Jean-Charles; Sandberg, Henrik
2017-03-06
In this paper, we advocate the use of open dynamical systems, i.e. systems sharing input and output variables with their environment, and the dissipativity theory initiated by Jan Willems as models of thermodynamical systems, at the microscopic and macroscopic level alike. We take linear systems as a study case, where we show how to derive a global Lyapunov function to analyse networks of interconnected systems. We define a suitable notion of dynamic non-equilibrium temperature that allows us to derive a discrete Fourier law ruling the exchange of heat between lumped, discrete-space systems, enriched with the Maxwell-Cattaneo correction. We complete these results by briefly recalling the steps that allow a complete derivation of dissipation and fluctuation in macroscopic systems (i.e. at the level of probability distributions) from lossless and deterministic systems. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
Self-Consistent Field Lattice Model for Polymer Networks.
Tito, Nicholas B; Storm, Cornelis; Ellenbroek, Wouter G
2017-12-26
A lattice model based on polymer self-consistent field theory is developed to predict the equilibrium statistics of arbitrary polymer networks. For a given network topology, our approach uses moment propagators on a lattice to self-consistently construct the ensemble of polymer conformations and cross-link spatial probability distributions. Remarkably, the calculation can be performed "in the dark", without any prior knowledge on preferred chain conformations or cross-link positions. Numerical results from the model for a test network exhibit close agreement with molecular dynamics simulations, including when the network is strongly sheared. Our model captures nonaffine deformation, mean-field monomer interactions, cross-link fluctuations, and finite extensibility of chains, yielding predictions that differ markedly from classical rubber elasticity theory for polymer networks. By examining polymer networks with different degrees of interconnectivity, we gain insight into cross-link entropy, an important quantity in the macroscopic behavior of gels and self-healing materials as they are deformed.
On the self-organizing process of large scale shear flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newton, Andrew P. L.; Kim, Eun-jin; Liu, Han-Li
2013-09-15
Self organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of finite correlated temporal forcing. This bimodal PDF is then shown to be reproduced in nonlinear simulation of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with linear profile. Similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.
Entropy factor for randomness quantification in neuronal data.
Rajdl, K; Lansky, P; Kostal, L
2017-11-01
A novel measure of neural spike train randomness, an entropy factor, is proposed. It is based on the Shannon entropy of the number of spikes in a time window and can be seen as an analogy to the Fano factor. Theoretical properties of the new measure are studied for equilibrium renewal processes and further illustrated on gamma and inverse Gaussian probability distributions of interspike intervals. Finally, the entropy factor is evaluated from the experimental records of spontaneous activity in macaque primary visual cortex and compared to its theoretical behavior deduced for the renewal process models. Both theoretical and experimental results show substantial differences between the Fano and entropy factors. Rather paradoxically, an increase in the variability of spike count is often accompanied by an increase of its predictability, as evidenced by the entropy factor. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
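A minimal sketch of the two statistics compared above, computed from spike counts in a fixed window; the entropy factor is written here simply as the Shannon entropy of the empirical count distribution relative to that of a Poisson process with the same mean, which is an assumed normalization rather than the paper's exact definition.
    import numpy as np
    from scipy import stats

    def fano_factor(counts):
        counts = np.asarray(counts, dtype=float)
        return counts.var(ddof=1) / counts.mean()

    def entropy_factor(counts, n_max=200):
        """Entropy of the empirical spike-count distribution divided by the
        entropy of a Poisson distribution with the same mean (assumed
        normalization; the published definition may differ)."""
        counts = np.asarray(counts, dtype=int)
        emp = np.bincount(counts, minlength=n_max + 1) / counts.size
        h_emp = -np.sum(emp[emp > 0] * np.log(emp[emp > 0]))
        pois = stats.poisson(mu=counts.mean()).pmf(np.arange(n_max + 1))
        h_pois = -np.sum(pois[pois > 0] * np.log(pois[pois > 0]))
        return h_emp / h_pois

    rng = np.random.default_rng(3)
    counts = rng.poisson(lam=8.0, size=5000)      # toy spike counts per window
    print(fano_factor(counts), entropy_factor(counts))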
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
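For orientation, the classical Ewens sampling formula that the paper generalizes gives the probability of observing a_j allelic types represented exactly j times in a sample of size n under neutrality with mutation parameter θ (standard form, not the paper's generalized expression):
    P(a_1,\dots,a_n) = \frac{n!}{\theta(\theta+1)\cdots(\theta+n-1)}\ \prod_{j=1}^{n}\frac{\theta^{a_j}}{j^{a_j}\,a_j!}, \qquad \sum_{j=1}^{n} j\,a_j = n.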
Testing the criterion for correct convergence in the complex Langevin method
NASA Astrophysics Data System (ADS)
Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji
2018-05-01
Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, is that in some parameter regions it can yield wrong results even if the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, which were studied by the CLM before. Our criterion can indeed tell the parameter regions in which the CLM gives correct results.
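A minimal sketch of the drift-based check described above: the magnitude of the drift term is recorded along the Langevin trajectory and its histogram is inspected for an exponential (or faster) fall-off, which the authors propose as the signal of correct convergence. The tail test coded here (fitting the log-histogram of the tail) is an illustrative reading of that criterion, not the paper's exact prescription.
    import numpy as np

    def drift_tail_slope(drift_magnitudes, tail_fraction=0.05):
        """Fit log-probability of the histogram tail against drift magnitude.

        A steeply negative, roughly linear decay of log P(u) is consistent with
        the exponential fall-off associated with correct convergence; a flat,
        power-law-like tail signals trouble.
        """
        u = np.sort(np.asarray(drift_magnitudes, dtype=float))
        tail = u[int((1.0 - tail_fraction) * u.size):]
        hist, edges = np.histogram(tail, bins=20, density=True)
        centers = 0.5 * (edges[1:] + edges[:-1])
        mask = hist > 0
        slope, _ = np.polyfit(centers[mask], np.log(hist[mask]), 1)
        return slope

    rng = np.random.default_rng(7)
    print(drift_tail_slope(rng.exponential(scale=1.0, size=100_000)))  # steep, negative
    print(drift_tail_slope(rng.pareto(a=2.0, size=100_000)))           # much flatter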
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harstad, E. N.; Harlow, Francis Harvey; Schreyer, H. L.
Our goal is to develop constitutive relations for the behavior of a solid polymer during high-strain-rate deformations. In contrast to the classic thermodynamic techniques for deriving stress-strain response in static (equilibrium) circumstances, we employ a statistical-mechanics approach, in which we evolve a probability distribution function (PDF) for the velocity fluctuations of the repeating units of the chain. We use a Langevin description for the dynamics of a single repeating unit and a Liouville equation to describe the variations of the PDF. Moments of the PDF give the conservation equations for a single polymer chain embedded in other similar chains. To extract single-chain analytical constitutive relations, these equations have been solved for representative loading paths. By this process we discover that a measure of nonuniform chain link displacement serves this purpose very well. We then derive an evolution equation for the descriptor function, with the result being a history-dependent constitutive relation.
A B-B-G-K-Y framework for fluid turbulence
NASA Technical Reports Server (NTRS)
Montgomery, D.
1975-01-01
A kinetic theory for fluid turbulence is developed from the Liouville equation and the associated BBGKY hierarchy. Real and imaginary parts of Fourier coefficients of fluid variables play the roles of particles. Closure is achieved by the assumption of negligible five-coefficient correlation functions and probability distributions of Fourier coefficients are the basic variables of the theory. An additional approximation leads to a closed-moment description similar to the so-called eddy-damped Markovian approximation. A kinetic equation is derived for which conservation laws and an H-theorem can be rigorously established, the H-theorem implying relaxation of the absolute equilibrium of Kraichnan. The equation can be cast in the Fokker-Planck form, and relaxation times estimated from its friction and diffusion coefficients. An undetermined parameter in the theory is the free decay time for triplet correlations. Some attention is given to the inclusion of viscous damping and external driving forces.
Bayesian tomography and integrated data analysis in fusion diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong, E-mail: lid@swip.ac.cn; Dong, Y. B.; Deng, Wei
2016-11-15
In this article, a Bayesian tomography method using a non-stationary Gaussian process for a prior has been introduced. The Bayesian formalism allows quantities which bear uncertainty to be expressed in the probabilistic form so that the uncertainty of a final solution can be fully resolved from the confidence interval of a posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data are reasonably within an assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process that can adapt to the varying smoothness of the emission distribution. The implementation of this method for a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large size inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
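A minimal sketch of the linear-Gaussian core of such a Bayesian tomography step: with a linear line-of-sight forward matrix, Gaussian data errors, and a Gaussian-process prior on the emission profile, the posterior mean and covariance are available in closed form. The kernel, geometry, and noise level below are illustrative assumptions; the paper's non-stationary kernel adds a spatially varying length scale on top of this structure.
    import numpy as np

    rng = np.random.default_rng(1)

    n_pix, n_chords = 60, 25
    x = np.linspace(0.0, 1.0, n_pix)
    A = rng.random((n_chords, n_pix))              # toy chord-weight geometry
    true_emission = np.exp(-((x - 0.5) / 0.15) ** 2)
    sigma_d = 0.05
    data = A @ true_emission + sigma_d * rng.standard_normal(n_chords)

    # stationary squared-exponential GP prior on the emission profile
    ell, amp = 0.1, 1.0
    K = amp**2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
    K += 1e-8 * np.eye(n_pix)                      # jitter for numerical stability

    # Gaussian posterior: mean = K A^T (A K A^T + sigma^2 I)^{-1} d
    S = A @ K @ A.T + sigma_d**2 * np.eye(n_chords)
    post_mean = K @ A.T @ np.linalg.solve(S, data)
    post_cov = K - K @ A.T @ np.linalg.solve(S, A @ K)
    print(post_mean[:5], np.sqrt(np.diag(post_cov))[:5])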
NASA Astrophysics Data System (ADS)
Rau, Uwe; Brendel, Rolf
1998-12-01
It is shown that a recently described general relationship between the local collection efficiency of solar cells and the dark carrier concentration (reciprocity theorem) directly follows from the principle of detailed balance. We derive the relationship for situations where transport of charge carriers occurs between discrete states as well as for the situation where electronic transport is described in terms of continuous functions. Combining both situations allows us to extend the range of applicability of the reciprocity theorem to all types of solar cells, including, e.g., metal-insulator-semiconductor-type and electrochemical solar cells, as well as the inclusion of the impurity photovoltaic effect. We generalize the theorem further to situations where the occupation probability of electronic states is governed by Fermi-Dirac statistics instead of the Boltzmann statistics underlying preceding work. In such a situation the reciprocity theorem is restricted to small departures from equilibrium.
Self Organized Criticality as a new paradigm of sleep regulation
NASA Astrophysics Data System (ADS)
Ivanov, Plamen Ch.; Bartsch, Ronny P.
2012-02-01
Humans and animals often exhibit brief awakenings from sleep (arousals), which are traditionally viewed as random disruptions of sleep caused by external stimuli or pathologic perturbations. However, our recent findings show that arousals exhibit complex temporal organization and scale-invariant behavior, characterized by a power-law probability distribution for their durations, while sleep stage durations exhibit exponential behavior. The co-existence of both scale-invariant and exponential processes generated by a single regulatory mechanism has not been observed in physiological systems until now. Such co-existence resembles the dynamical features of non-equilibrium systems exhibiting self-organized criticality (SOC). Our empirical analysis and modeling approaches based on modern concepts from statistical physics indicate that arousals are an integral part of sleep regulation and may be necessary to maintain and regulate healthy sleep by releasing accumulated excitations in the regulatory neuronal networks, following a SOC-type temporal organization.
NASA Astrophysics Data System (ADS)
Sarkisyan, M. A.
1989-02-01
An analysis is made of the interaction of a three-level "cascade" atomic system with a resonant laser field. An investigation is made of the dynamics of the populations of the quasienergy states and of the atomic levels over times greater than the spontaneous transition times. In the steady-state regime the distribution of atoms over various quasienergy states is obtained under two-photon resonance conditions and for the case when all the resonances are strong. It is found that a suitable selection of the interaction parameters can establish an inversion between the quasienergy states and also due to atomic transitions. The total probability of spontaneous scattering is calculated. It is shown that, under two-photon resonance conditions, the scattering intensity increases sharply due to a self-induced resonance.
Inflation with a graceful exit in a random landscape
NASA Astrophysics Data System (ADS)
Pedro, F. G.; Westphal, A.
2017-03-01
We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.
Nonequilibrium Phase Transitions in Supercooled Water
NASA Astrophysics Data System (ADS)
Limmer, David; Chandler, David
2012-02-01
We present results of a simulation study of water driven out of equilibrium. Using transition path sampling, we can probe stationary path distributions parameterized by order parameters that are extensive in space and time. We find that by coupling external fields to these parameters, we can drive water through a first order dynamical phase transition into amorphous ice. By varying the initial equilibrium distributions we can probe pathways for the creation of amorphous ices of low and high densities.
Bak, Lasse K; Schousboe, Arne
2017-11-01
Lactate dehydrogenase (LDH) catalyzes the interconversion of pyruvate and lactate involving the coenzyme NAD + . Part of the foundation for the proposed shuttling of lactate from astrocytes to neurons during brain activation is the differential distribution of LDH isoenzymes between the two cell types. In this short review, we outline the basic kinetic properties of the LDH isoenzymes expressed in neurons and astrocytes, and argue that the distribution of LDH isoenzymes does not in any way govern directional flow of lactate between the two cellular compartments. The two main points are as follows. First, in line with the general concept of chemical catalysis, enzymes do not influence the thermodynamic equilibrium of a chemical reaction but merely the speed at which equilibrium is obtained. Thus, differential distribution of LDH isoenzymes with different kinetic parameters does not predict which cells are producing and which are consuming lactate. Second, the thermodynamic equilibrium of the reaction is toward the reduced substrate (i.e., lactate), which is reflected in the concentrations measured in brain tissue, suggesting that the reaction is at near-equilibrium at steady state. To conclude, the cellular distribution of LDH isoenzymes is of little if any consequence in determining any directional flow of lactate between neurons and astrocytes. © 2017 Wiley Periodicals, Inc.
Freed, Karl F
2014-10-14
A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, "The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition" [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.
Study and modeling of finite rate chemistry effects in turbulent non-premixed flames
NASA Technical Reports Server (NTRS)
Vervisch, Luc
1993-01-01
The development of numerical models that reflect some of the most important features of turbulent reacting flows requires information about the behavior of key quantities in well defined combustion regimes. In turbulent flames, the coupling between turbulent and chemical processes is so strong that it is extremely difficult to isolate the role played by one individual physical phenomenon. Direct numerical simulation (hereafter DNS) allows us to study in detail the turbulence-chemistry interaction in some restricted but completely defined situations. Globally, non-premixed flames are controlled by two limiting regimes: the fast chemistry case, where the turbulent flame can be pictured as a random distribution of local chemical equilibrium problems; and the slow chemistry case, where the chemistry integrates in time the turbulent fluctuations. The Damkoehler number, the ratio of a mechanical time scale to a chemical time scale, is used to distinguish between these regimes. Today most industrial computer codes are able to perform predictions under the hypothesis of local equilibrium chemistry using a presumed shape for the probability density function (pdf) of the conserved scalar. However, the finite rate chemistry situation is of great interest because industrial burners usually generate regimes in which, at some points, the flame is undergoing local extinction or at least non-equilibrium situations. Moreover, this variety of situations strongly influences the production of pollutants. To quantify finite rate chemistry effects, the interaction between a non-premixed flame and freely decaying turbulence is studied using DNS. Attention is focused on the dynamics of extinction, and an attempt is made to quantify the effect of the reaction on the small scale mixing process. The unequal diffusivity effect is also addressed. Finally, a simple turbulent combustion model based on the DNS observations and tractable in real flow configurations is proposed.
A general methodology for population analysis
NASA Astrophysics Data System (ADS)
Lazov, Petar; Lazov, Igor
2014-12-01
For a given population with N (current) and M (maximum) number of entities, modeled by a Birth-Death Process (BDP) with size M+1, we introduce the utilization parameter ρ, the ratio of the primary birth and death rates in that BDP, which physically determines the (equilibrium) macrostates of the population, and the information parameter ν, which has an interpretation as population information stiffness. The BDP modeling the population is in state n, n=0,1,…,M, if N=n. Given these two key metrics, and applying the continuity law, the equilibrium balance equations for the probability distribution pn, n=0,1,…,M, of the quantity N (pn=Prob{N=n} in equilibrium), and the conservation law, and relying on the fundamental concepts of population information and population entropy, we develop a general methodology for population analysis; here, by definition, population entropy is the uncertainty related to the population. In this approach, and this is its essential contribution, the population information consists of three basic parts: an elastic (Hooke's) or absorption/emission part, a synchronization or inelastic part and a null part; the first two parts, which uniquely determine the null part (the null part connects them), are the two basic components of the Information Spectrum of the population. Population entropy, as the mean value of population information, follows this division of the information. A given population can function in an information elastic, antielastic or inelastic regime. In an information linear population, the synchronization part of the information and entropy is absent. The population size, M+1, is the third key metric in this methodology. Indeed, if a population of infinite size is assumed instead, most of the key quantities and results for finite-size populations that emerge in this methodology vanish.
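As a concrete anchor for the equilibrium balance equations mentioned above, here is a minimal Python sketch that computes the equilibrium distribution p_n of a finite birth-death process from detailed balance, together with its entropy. The constant-rate example with utilization ρ = 0.8 is an illustrative assumption, not a case taken from the paper.

```python
import numpy as np

def bdp_equilibrium(birth, death):
    """Equilibrium distribution p_n of a finite birth-death process.

    birth[n] is the birth rate out of state n (n = 0..M-1),
    death[n-1] is the death rate out of state n (n = 1..M).
    Detailed balance gives p_n proportional to prod_{k=1..n} birth[k-1]/death[k-1].
    """
    M = len(birth)
    w = np.ones(M + 1)
    for n in range(1, M + 1):
        w[n] = w[n - 1] * birth[n - 1] / death[n - 1]
    return w / w.sum()

def population_entropy(p):
    """Shannon entropy of the equilibrium distribution (mean information)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Example: M = 10, constant rates with utilization rho = birth/death = 0.8
M, rho = 10, 0.8
p = bdp_equilibrium(birth=[rho] * M, death=[1.0] * M)
print(p, population_entropy(p))
```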
N-Player Quantum Games in an EPR Setting
Chappell, James M.; Iqbal, Azhar; Abbott, Derek
2012-01-01
The N-player quantum games are analyzed that use an Einstein-Podolsky-Rosen (EPR) experiment as the underlying physical setup. In this setup, a player's strategies are not unitary transformations as in alternate quantum game-theoretic frameworks, but a classical choice between two directions along which spin or polarization measurements are made. The players' strategies thus remain identical to their strategies in the mixed-strategy version of the classical game. In the EPR setting the quantum game reduces itself to the corresponding classical game when the shared quantum state reaches zero entanglement. We find the relations for the probability distribution for N-qubit GHZ and W-type states, subject to general measurement directions, from which the expressions for the players' payoffs and mixed Nash equilibrium are determined. Players' payoff matrices are then defined using linear functions so that common two-player games can be easily extended to the N-player case and permit analytic expressions for the Nash equilibrium. As a specific example, we solve the Prisoners' Dilemma game for general N. We find a new property of the game: for an even number of players the payoffs at the Nash equilibrium are equal, whereas for an odd number of players the cooperating players receive higher payoffs. By dispensing with the standard unitary transformations on state vectors in Hilbert space and using instead rotors and multivectors, based on Clifford's geometric algebra (GA), it is shown how the N-player case becomes tractable. The new mathematical approach presented here has wide implications in the areas of quantum information and quantum complexity, as it opens up a powerful way to tractably analyze N-partite qubit interactions. PMID:22606258
Topography at the inner core boundary
NASA Astrophysics Data System (ADS)
Lasbleis, M.; Forquenot, Q.; Deguen, R.
2017-12-01
Topography at the inner core boundary has been proposed to explain surprising seismic observations of some regional studies. Such observations are still debated, and numerical values of possible inner core topography have been proposed ranging from no topography to "inner core mountains" (10 km height over lengthscales of 20 km, as in Dai et al. 2012). The inner core boundary is a peculiar boundary, as it is the place where the iron alloy constituting the core freezes. The existence of a significant topography on such a boundary is possible, but unlikely. At thermodynamic equilibrium, no topography is expected, as any material above the equilibrium radius would have melted and any below would have frozen. However, mechanical forcing may push the system out of equilibrium. Dynamical topography could be forced by convective flows in the inner core or by outer core heterogeneities. A topography induced by outer core convection would be short-lived when compared to geodynamical processes in the bulk of the inner core (τ ≈ 10-100 Myears), but long-lived compared to observations. Here, we would like to give a geodynamical perspective on inner core topography. We constrain plausible amplitudes of inner core topography, and discuss the implications for seismic observations. We consider topography created by viscous flows in the bulk of the inner core and by variations of growth rate on regional lengthscales due to outer core convection. This approach allows us to consider both internal and external forcings on the topography. We treat topography forcings as stochastic processes, and calculate the probability of observing a given topography. Based on preliminary results, the high values for observed topography cannot be interpreted as normal behavior of core dynamics. If confirmed, the regions are likely to be anomalous and to originate from outliers in the distribution of stochastic processes.
Meng, X Flora; Baetica, Ania-Ariadna; Singhal, Vipul; Murray, Richard M
2017-05-01
Noise is often indispensable to key cellular activities, such as gene expression, necessitating the use of stochastic models to capture its dynamics. The chemical master equation (CME) is a commonly used stochastic model of Kolmogorov forward equations that describe how the probability distribution of a chemically reacting system varies with time. Finding analytic solutions to the CME can have benefits, such as expediting simulations of multiscale biochemical reaction networks and aiding the design of distributional responses. However, analytic solutions are rarely known. A recent method of computing analytic stationary solutions relies on gluing simple state spaces together recursively at one or two states. We explore the capabilities of this method and introduce algorithms to derive analytic stationary solutions to the CME. We first formally characterize state spaces that can be constructed by performing single-state gluing of paths, cycles or both sequentially. We then study stochastic biochemical reaction networks that consist of reversible, elementary reactions with two-dimensional state spaces. We also discuss extending the method to infinite state spaces and designing the stationary behaviour of stochastic biochemical reaction networks. Finally, we illustrate the aforementioned ideas using examples that include two interconnected transcriptional components and biochemical reactions with two-dimensional state spaces. © 2017 The Author(s).
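This is not the gluing construction described above, but as a reminder of what a stationary CME solution is, here is a minimal Python sketch that builds the generator of the reversible isomerization A ⇌ B with N molecules and solves p·Q = 0 directly. The choice of reaction and the rate values are illustrative assumptions only.

```python
import numpy as np

def cme_stationary(N, kf, kr):
    """Stationary CME distribution for A <-> B with N total molecules.

    States are labelled by the copy number a of A (b = N - a).
    A -> B fires with propensity kf*a, B -> A with propensity kr*(N - a).
    """
    Q = np.zeros((N + 1, N + 1))              # generator matrix
    for a in range(N + 1):
        if a > 0:
            Q[a, a - 1] = kf * a              # A -> B
        if a < N:
            Q[a, a + 1] = kr * (N - a)        # B -> A
        Q[a, a] = -Q[a].sum()
    # stationary p solves p @ Q = 0 with p summing to one
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

print(cme_stationary(N=10, kf=1.0, kr=2.0))   # binomial with p = kr/(kf + kr)
```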
Supernova Driving. II. Compressive Ratio in Molecular-cloud Turbulence
NASA Astrophysics Data System (ADS)
Pan, Liubin; Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke
2016-07-01
The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response theory for small deviations from equilibrium, in which a general framework is constructed from the analysis of non-equilibrium states close to equilibrium. In a next step, Prigogine and others developed linear irreversible thermodynamics, which establishes relations between transport coefficients and entropy production on a phenomenological level in terms of thermodynamic forces and fluxes. However, beyond the realm of linear response no general theoretical results were available for quite a long time. This situation has changed drastically over the last 20 years with the development of stochastic thermodynamics, revealing that the range of validity of thermodynamic statements can indeed be extended deep into the non-equilibrium regime. Early developments in that direction trace back to the observations of symmetry relations between the probabilities for entropy production and entropy annihilation in non-equilibrium steady states [5-8] (nowadays categorized in the class of so-called detailed fluctuation theorems), and the derivations of the Bochkov-Kuzovlev [9, 10] and Jarzynski relations [11] (which are now classified as so-called integral fluctuation theorems). Apart from its fundamental theoretical interest, the developments in stochastic thermodynamics have experienced an additional boost from the recent experimental progress in fabricating, manipulating, controlling and observing systems on the micro- and nano-scale. 
These advances are not only of formidable use for probing and monitoring biological processes on the cellular, sub-cellular and molecular level, but even include the realization of a microscopic thermodynamic heat engine [12] or the experimental verification of Landauer's principle in a colloidal system [13]. The scientific program Stochastic Thermodynamics held between 4 and 15 March 2013, and hosted by The Nordic Institute for Theoretical Physics (Nordita), was attended by more than 50 scientists from the Nordic countries and elsewhere, amongst them many leading experts in the field. During the program, the most recent developments, open questions and new ideas in stochastic thermodynamics were presented and discussed. From the talks and debates, the notion of information in stochastic thermodynamics, the fundamental properties of entropy production (rate) in non-equilibrium, the efficiency of small thermodynamic machines and the characteristics of optimal protocols for the applied (cyclic) forces were crystallizing as main themes. Surprisingly, the long-studied adiabatic piston, its peculiarities and its relation to stochastic thermodynamics were also the subject of intense discussions. The comment on the Nordita program Stochastic Thermodynamics published in this issue of Physica Scripta exploits the Jarzynski relation for determining free energy differences in the adiabatic piston. This scientific program and the contribution presented here were made possible by the financial and administrative support of The Nordic Institute for Theoretical Physics.
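For orientation, the integral fluctuation theorem referred to above (the Jarzynski relation) reads, in standard notation,

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{\mathrm B} T},
```

where W is the work performed along an individual trajectory of the driven process started from equilibrium, ΔF is the equilibrium free energy difference between the initial and final values of the control parameter, and the average runs over repetitions of the same driving protocol.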
Implications of the dependence of the elastic properties of DNA on nucleotide sequence.
Olson, Wilma K; Swigon, David; Coleman, Bernard D
2004-07-15
Recent advances in structural biochemistry have provided evidence that not only the geometric properties but also the elastic moduli of duplex DNA are strongly dependent on nucleotide sequence in a way that is not accounted for by classical rod models of the Kirchhoff type. A theory of sequence-dependent DNA elasticity is employed here to calculate the dependence of the equilibrium configurations of circular DNA on the binding of ligands that can induce changes in intrinsic twist at a single base-pair step. Calculations are presented of the influence on configurations of the assumed values and distribution along the DNA of intrinsic roll and twist and a modulus coupling roll to twist. Among the results obtained are the following. For minicircles formed from intrinsically straight DNA, the distribution of roll-twist coupling strongly affects the dependence of the total elastic energy Psi on the amount alpha of imposed untwisting, and that dependence can be far from quadratic. (In fact, for a periodic distribution of roll-twist coupling with a period equal to the intrinsic helical repeat length, Psi can be essentially independent of alpha for -90 degrees < alpha <90 degrees.) When the minicircle is homogeneous and without roll-twist coupling, but with uniform positive intrinsic roll, the point at which Psi attains its minimum value shifts towards negative values of alpha. It is remarked that there are cases in which one can relate graphs of Psi versus alpha to the 'effective values' of bending and twisting moduli and helical repeat length obtained from measurements of equilibrium distributions of topoisomers and probabilities of ring closure. For a minicircle formed from DNA that has an 'S' shape when stress-free, the graphs of Psi versus alpha have maxima at alpha = 0. As the binding of a twisting agent to such a minicircle results in a net decrease in Psi, the affinity of the twisting agent for binding to the minicircle is greater than its affinity for binding to unconstrained DNA with the same sequence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.
2016-03-28
Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher is the rate. This results in the correspondingly higher apparent equilibrium constant. A quite opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large volume systems, being lower the smaller the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large volume system is dispersed, the resulting droplet system is already at equilibrium and no changes of proportions of droplets differing in reactant compositions can be observed upon prolongation of the reaction time.
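To make the single-droplet statistics tangible, here is a minimal Gillespie-type Python sketch for A + B ⇌ C in one droplet. The rate constants, copy numbers and the simple volume scaling of the association propensity are illustrative assumptions, not parameters from the paper.

```python
import random

def gillespie_droplet(nA, nB, nC, k_assoc, k_dissoc, t_end, volume=1.0):
    """Stochastic (Gillespie) trajectory of A + B <-> C in one droplet.

    With only a handful of molecules, the ratio nC/(nA*nB) fluctuates and
    its time average need not match the macroscopic equilibrium constant,
    which is the small-number effect discussed above.
    """
    t = 0.0
    while t < t_end:
        a1 = k_assoc * nA * nB / volume    # association propensity
        a2 = k_dissoc * nC                 # dissociation propensity
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += random.expovariate(a0)        # time to next reaction
        if random.random() < a1 / a0:
            nA, nB, nC = nA - 1, nB - 1, nC + 1
        else:
            nA, nB, nC = nA + 1, nB + 1, nC - 1
    return nA, nB, nC

print(gillespie_droplet(nA=5, nB=5, nC=0, k_assoc=1.0, k_dissoc=0.5, t_end=100.0))
```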
NASA Astrophysics Data System (ADS)
Andreev, Pavel A.; Kuz'menkov, L. S.
2017-11-01
A consideration of waves propagating parallel to the external magnetic field is presented. The dielectric permeability tensor is derived from the quantum kinetic equations with non-trivial equilibrium spin-distribution functions in the linear approximation in the amplitude of wave perturbations. It is possible to consider equilibrium spin-distribution functions with a nonzero z-projection proportional to the difference of the Fermi steps of electrons with the chosen spin direction, while the x- and y-projections are equal to zero; this is called the trivial equilibrium spin-distribution function. In the general case, the x- and y-projections of the spin-distribution functions are nonzero, which is called the non-trivial regime. A corresponding equilibrium solution is found in Andreev [Phys. Plasmas 23, 062103 (2016)]. The contribution of the nontrivial part of the spin-distribution function appears in the dielectric permeability tensor in additive form and is explicitly found here. A corresponding modification in the dispersion equation for the transverse waves is derived. The contribution of the nontrivial part of the spin-distribution function to the spectrum of transverse waves is calculated numerically. It is found that the term caused by the nontrivial part of the spin-distribution function can be comparable with the classic terms for relatively small wave vectors and frequencies above the cyclotron frequency. In the majority of regimes, the extra spin-caused term dominates over the spin term found earlier, except in the small frequency regime, where their contributions to the whistler spectrum are comparable. A decrease of the left-hand circularly polarized wave frequency, an increase of the high-frequency right-hand circularly polarized wave frequency, and, for the whistler, a decrease of frequency that changes to an increase of frequency as the wave vector grows are found. A considerable decrease of the spin wave frequency is found as well; it results in an increase of the modulus of the negative group velocity of the spin wave. The dispersion equations found are used to obtain an effective quantum hydrodynamic model reproducing these results. This generalization requires the introduction of the corresponding equation of state for the thermal part of the spin current in the spin evolution equation.
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of equipment of vacuum insulation, a study was carried out to clarify breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage below which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant, irrespective of gap length. This indicates that the scatter of breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupter's sharing voltage is taken into account.
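The three-parameter Weibull description used above can be written as a small Python helper. The voltage, scale and location values in the example are made up for illustration; only the shape-parameter ranges quoted in the comments come from the abstract.

```python
import numpy as np

def weibull_breakdown_probability(v, location, scale, shape):
    """Cumulative breakdown probability F(v) = 1 - exp(-((v - v0)/eta)^k).

    `location` (v0) is the voltage below which breakdown probability is zero;
    per the abstract, `shape` (k) is roughly 10-14 for a single vacuum gap
    and 6-8.5 after no-load switching.
    """
    v = np.asarray(v, dtype=float)
    z = np.clip((v - location) / scale, 0.0, None)   # zero below the location
    return 1.0 - np.exp(-z ** shape)

# Illustrative numbers only (not from the paper):
voltages = np.array([40.0, 60.0, 80.0, 100.0])
print(weibull_breakdown_probability(voltages, location=50.0, scale=30.0, shape=12.0))
```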
Applicability of Donnan equilibrium theory at nanochannel-reservoir interfaces.
Tian, Huanhuan; Zhang, Li; Wang, Moran
2015-08-15
Understanding ionic transport in nanochannels has attracted broad attention from various areas in energy and environmental fields. In most previous research, Donnan equilibrium has been applied widely to nanofluidic systems to obtain the ionic concentration and electrical potential at channel-reservoir interfaces; however, as is well known, Donnan equilibrium is derived from classical thermodynamic theories under equilibrium assumptions. Therefore the applicability of the Donnan equilibrium may be questionable when the transport at the nanochannel-reservoir interface is strongly non-equilibrium. In this work, the Poisson-Nernst-Planck model for ion transport is numerically solved to obtain the exact distributions of ionic concentration and electrical potential. The numerical results are quantitatively compared with the Donnan equilibrium predictions. The applicability of Donnan equilibrium is then examined by changing channel length, reservoir ionic concentration, surface charge density and channel height. The results indicate that the Donnan equilibrium is not applicable for short nanochannels, large concentration differences and wide openings. A non-dimensional parameter, the Q factor, is proposed to measure the extent of non-equilibrium, and the relation between Q and the working conditions is studied in detail. Copyright © 2015 Elsevier Inc. All rights reserved.
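For reference, the classical Donnan prediction that the PNP results are compared against can be evaluated in a few lines. This sketch assumes a 1:1 electrolyte and a slit channel whose two charged walls are smeared into a volumetric fixed-charge concentration; all numerical values are illustrative, not taken from the paper.

```python
import numpy as np

def donnan_interface(c0, sigma, h, T=298.15):
    """Classical Donnan estimate inside a charged slit nanochannel.

    c0    : reservoir salt concentration of a 1:1 electrolyte [mol/m^3]
    sigma : (negative) surface charge density [C/m^2]
    h     : channel height [m]; the two walls' charge is smeared over the volume.
    Returns (c_plus, c_minus, donnan_potential) inside the channel.
    """
    F, R = 96485.33, 8.314
    cX = 2.0 * abs(sigma) / (F * h)                  # volumetric fixed-charge concentration
    c_plus = cX / 2.0 + np.sqrt((cX / 2.0) ** 2 + c0 ** 2)   # electroneutrality
    c_minus = c0 ** 2 / c_plus                        # Donnan product c+ c- = c0^2
    phi = -(R * T / F) * np.log(c_plus / c0)          # Donnan potential [V]
    return c_plus, c_minus, phi

print(donnan_interface(c0=1.0, sigma=-1e-3, h=20e-9))  # 1 mM salt, 20 nm channel
```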
NASA Astrophysics Data System (ADS)
Morales, Roberto; Barriga-Carrasco, Manuel D.; Casas, David
2017-04-01
The instantaneous charge state of uranium ions traveling through a fully ionized hydrogen plasma has been theoretically studied and compared with one of the first energy loss experiments in plasmas, carried out at GSI-Darmstadt by Hoffmann et al. in the 1990s. For this purpose, two different methods to estimate the instantaneous charge state of the projectile have been employed: (1) rate equations using ionization and recombination cross sections and (2) equilibrium charge state formulas for plasmas. Also, the equilibrium charge state has been obtained using these ionization and recombination cross sections and compared with the former equilibrium formulas. The equilibrium charge state of projectiles in plasmas is not always reached, and it depends mainly on the projectile velocity and the plasma density. Therefore, a non-equilibrium or instantaneous description of the projectile charge is necessary. The charge state of projectile ions cannot be measured, except after exiting the target, and experimental data remain very scarce. Thus, the validity of our charge state model is checked by comparing the theoretical predictions with an energy loss experiment, as the energy loss has a generally quadratic dependence on the projectile charge state. The dielectric formalism has been used to calculate the plasma stopping power, including the Brandt-Kitagawa (BK) model to describe the charge distribution of the projectile. In this charge distribution, the instantaneous number of bound electrons instead of the equilibrium number has been taken into account. Comparing our theoretical predictions with experiments, we show the necessity of including the instantaneous charge state and the BK charge distribution for a correct energy loss estimation. The results also show that the initial charge state has a strong influence on the estimated energy loss of the uranium ions.
Free energy landscape from path-sampling: application to the structural transition in LJ38
NASA Astrophysics Data System (ADS)
Adjanor, G.; Athènes, M.; Calvo, F.
2006-09-01
We introduce a path-sampling scheme that allows equilibrium state-ensemble averages to be computed by means of a biased distribution of non-equilibrium paths. This non-equilibrium method is applied to the case of the 38-atom Lennard-Jones atomic cluster, which has a double-funnel energy landscape. We calculate the free energy profile along the Q4 bond orientational order parameter. At high or moderate temperature the results obtained using the non-equilibrium approach are consistent with those obtained using conventional equilibrium methods, including parallel tempering and Wang-Landau Monte Carlo simulations. At lower temperatures, the non-equilibrium approach becomes more efficient in exploring the relevant inherent structures. In particular, the free energy agrees with the predictions of the harmonic superposition approximation.
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta (1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, a + active days , β + inactive days) assuming that p is randomly distributed as Beta(a, β) where the parameters a and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
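A minimal numerical sketch of the estimator described above, assuming SciPy's betabinom parameterization. The sample of active-day counts and the example person with 5 active days out of 7 assessed are made up for illustration, not survey data.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

# Made-up counts of active days out of 7 assessed days, one entry per person.
active_days = np.array([2, 5, 7, 4, 6, 7, 3, 5])
days_assessed = 7

def neg_log_lik(params):
    """Negative Beta-binomial log-likelihood; log-parameters keep a, b positive."""
    a, b = np.exp(params)
    return -betabinom.logpmf(active_days, days_assessed, a, b).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0])       # maximum likelihood for Beta(a, b)
a_hat, b_hat = np.exp(res.x)

# Conditional probability of an active day on all 7 days for a person
# observed active on 5 of 7 assessed days: Betabinomial(7, a + 5, b + 2).
p_7of7 = betabinom.pmf(7, 7, a_hat + 5, b_hat + 2)
print(a_hat, b_hat, p_7of7)
```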
Inertial migration of deformable droplets in a microchannel
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Xue, Chundong; Zhang, Li; Hu, Guoqing; Jiang, Xingyu; Sun, Jiashu
2014-11-01
The microfluidic inertial effect is an effective way of focusing and sorting droplets suspended in a carrier fluid in microchannels. To understand the flow dynamics of microscale droplet migration, we conduct numerical simulations on the droplet motion and deformation in a straight microchannel. The results are compared with preliminary experiments and theoretical analysis. In contrast to most existing literature, the present simulations are three-dimensional and full length in the streamwise direction and consider the confinement effects for a rectangular cross section. To thoroughly examine the effect of the velocity distribution, the release positions of single droplets are varied in a quarter of the channel cross section based on the geometrical symmetries. The migration dynamics and equilibrium positions of the droplets are obtained for different fluid velocities and droplet sizes. Droplets with diameters larger than half of the channel height migrate to the centerline in the height direction and two equilibrium positions are observed between the centerline and the wall in the width direction. In addition to the well-known Segré-Silberberg equilibrium positions, new equilibrium positions closer to the centerline are observed. This finding is validated by preliminary experiments that are designed to introduce droplets at different initial lateral positions. Small droplets also migrate to two equilibrium positions in the quarter of the channel cross section, but the coordinates in the width direction are between the centerline and the wall. The equilibrium positions move toward the centerlines with increasing Reynolds number due to increasing deformations of the droplets. The distributions of the lift forces, angular velocities, and the deformation parameters of droplets along the two confinement directions are investigated in detail. Comparisons are made with theoretical predictions to determine the fundamentals of droplet migration in microchannels. In addition, the existence of the inner equilibrium position is linked to the quartic velocity distribution in the width direction through a simple model for the slip angular velocities of droplets.
Effects of elastic strain energy on the antisite defect of D0_22-Ni_3V phase
NASA Astrophysics Data System (ADS)
Zhang, Jing; Chen, Zheng; Wang, Yong Xin; Lu, Yan Li
2010-01-01
A time-dependent phase field microelasticity model of an elastically anisotropic Ni-Al-V solid is employed for a D0_22-Ni_3V antisite defect application. The elastic strain energy (ESE), caused by a coherent misfit, changes the behavior of the temporal evolution of the occupancy probability (OP), slows down the phase transformation, and eventually leads to directional coarsening of coherent microstructures. In particular, for the antisite defects (Ni_V, V_Ni) and ternary alloying elements (Al_Ni, Al_V), ESE is responsible for the decrease in the calculated equilibrium values of Ni_V, Al_Ni, and Al_V, as well as the increase in the equilibrium value of V_Ni. The gap between Ni_V and V_Ni, and between Al_Ni and Al_V, is narrowed in the system involving ESE, but the calculated equilibrium magnitude of Ni_V is still greater than that of V_Ni. The calculated equilibrium magnitude of Al_Ni was always greater than that of Al_V in this study.
Kinetic Dissection of the Pre-existing Conformational Equilibrium in the Trypsin Fold*
Vogt, Austin D.; Chakraborty, Pradipta; Di Cera, Enrico
2015-01-01
Structural biology has recently documented the conformational plasticity of the trypsin fold for both the protease and zymogen in terms of a pre-existing equilibrium between closed (E*) and open (E) forms of the active site region. How such plasticity is manifested in solution and affects ligand recognition by the protease and zymogen is poorly understood in quantitative terms. Here we dissect the E*-E equilibrium with stopped-flow kinetics in the presence of excess ligand or macromolecule. Using the clotting protease thrombin and its zymogen precursor prethrombin-2 as relevant models we resolve the relative distribution of the E* and E forms and the underlying kinetic rates for their interconversion. In the case of thrombin, the E* and E forms are distributed in a 1:4 ratio and interconvert on a time scale of 45 ms. In the case of prethrombin-2, the equilibrium is shifted strongly (10:1 ratio) in favor of the closed E* form and unfolds over a faster time scale of 4.5 ms. The distribution of E* and E forms observed for thrombin and prethrombin-2 indicates that zymogen activation is linked to a significant shift in the pre-existing equilibrium between closed and open conformations that facilitates ligand binding to the active site. These findings broaden our mechanistic understanding of how conformational transitions control ligand recognition by thrombin and its zymogen precursor prethrombin-2 and have direct relevance to other members of the trypsin fold. PMID:26216877
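As a back-of-the-envelope check on the numbers quoted above, a two-state scheme E* ⇌ E with relaxation time τ satisfies 1/τ = k₊ + k₋ and K = [E]/[E*] = k₊/k₋. The short Python sketch below backs out the two rate constants, assuming (my simplification) that the quoted time scales can be read directly as relaxation times.

```python
def two_state_rates(ratio_open_to_closed, relax_time):
    """Rates k_forward (E* -> E) and k_reverse (E -> E*) from a two-state relaxation.

    Uses 1/tau = k_forward + k_reverse and K = [E]/[E*] = k_forward/k_reverse.
    Returns rates in s^-1 when relax_time is given in seconds.
    """
    k_total = 1.0 / relax_time
    k_forward = k_total * ratio_open_to_closed / (1.0 + ratio_open_to_closed)
    k_reverse = k_total / (1.0 + ratio_open_to_closed)
    return k_forward, k_reverse

print(two_state_rates(4.0, 0.045))    # thrombin: E*:E about 1:4, ~45 ms
print(two_state_rates(0.1, 0.0045))   # prethrombin-2: E*:E about 10:1, ~4.5 ms
```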
Procacci, Piero
2016-06-01
In this contribution I critically revise the alchemical reversible approach in the context of the statistical mechanics theory of non-covalent bonding in drug-receptor systems. I show that most of the pitfalls and entanglements for the binding free energy evaluation in computer simulations are rooted in the equilibrium assumption that is implicit in the reversible method. These critical issues can be resolved by using a non-equilibrium variant of the alchemical method in molecular dynamics simulations, relying on the production of many independent trajectories with a continuous dynamical evolution of an externally driven alchemical coordinate, completing the decoupling of the ligand in a matter of a few tens of picoseconds rather than nanoseconds. The absolute binding free energy can be recovered from the annihilation work distributions by applying an unbiased unidirectional free energy estimate, on the assumption that any observed work distribution is given by a mixture of normal distributions, whose components are identical in either direction of the non-equilibrium process, with weights regulated by the Crooks theorem. I finally show that the inherent reliability and accuracy of the unidirectional estimate of the decoupling free energies, based on the production of a few hundreds of non-equilibrium independent sub-nanosecond unrestrained alchemical annihilation processes, is a direct consequence of the funnel-like shape of the free energy surface in molecular recognition. An application of the technique to a real drug-receptor system is presented in the companion paper.
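For orientation, the Crooks theorem invoked above relates the forward and reverse work distributions; in the special case of a single Gaussian work component it fixes the free energy in terms of the mean and variance of the work (the estimator described above generalizes this to a mixture of such components):

```latex
\frac{P_F(W)}{P_R(-W)} = e^{\beta\,(W - \Delta F)},
\qquad
\Delta F = \mu_W - \tfrac{1}{2}\,\beta\,\sigma_W^{2}
\quad \text{(single Gaussian component)}.
```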
Zheng, Xiliang; Wang, Jin
2015-01-01
We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453
Treatment of Chemical Equilibrium without Using Thermodynamics or Statistical Mechanics.
ERIC Educational Resources Information Center
Nelson, P. G.
1986-01-01
Discusses the conventional approaches to teaching about chemical equilibrium in advanced physical chemistry courses. Presents an alternative approach to the treatment of this concept by using Boltzmann's distribution law. Lists five advantages to using this method as compared with the other approaches. (TW)
Perspective: Maximum caliber is a general variational principle for dynamical systems
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.
2018-01-01
We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
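In generic notation (Γ labels a path, the A_i are path observables constrained on average, and the λ_i are Lagrange multipliers, with the sign convention a matter of choice), the Max Cal prescription sketched above amounts to

```latex
\max_{\{p_\Gamma\}} \Big[-\sum_\Gamma p_\Gamma \ln p_\Gamma\Big]
\;\;\text{s.t.}\;\; \sum_\Gamma p_\Gamma A_i(\Gamma)=\langle A_i\rangle,\;\; \sum_\Gamma p_\Gamma = 1
\;\;\Longrightarrow\;\;
p_\Gamma = \frac{e^{-\sum_i \lambda_i A_i(\Gamma)}}{Z(\{\lambda_i\})},
```

where Z is the path partition function (the "caliber") that normalizes the trajectory weights.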
Turning Passive Brownian Motion Into Active Motion
NASA Astrophysics Data System (ADS)
Sevilla, Francisco J.; Vásquez-Arzola, Alejandro; Puga-Cital, Enrique
We consider out-of-equilibrium phenomena, specifically the pattern of motion of active particles. These particles absorb energy from the environment and transform it into self-locomotion, generally through complex mechanisms. Though the out-of-equilibrium nature of the motion of these systems is well recognized, it is generally difficult to pinpoint how far from equilibrium these systems are. In this work we elucidate the out-of-equilibrium nature of non-interacting, trapped, active particles whose pattern of motion is described by run-and-tumble dynamics. We show that the stationary distribution of these run-and-tumble particles, moving under the effects of an external potential, is equivalent to the stationary distribution of non-interacting, passive Brownian particles moving in the same potential but with an inhomogeneous source of heat. Interest in this topic has recently been renewed by the experimental possibility of designing man-made active particles that emulate the ones that exist in the biological realm. F.J.S. kindly acknowledges support from Grant UNAM-DGAPA-PAPIIT-IN113114.
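A minimal one-dimensional sketch of the trapped run-and-tumble dynamics described above; the overdamped Euler discretization and all parameter values are my own illustrative choices, not taken from the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_and_tumble_trap(v0=1.0, alpha=1.0, k=0.5, dt=1e-3, n_steps=100_000):
    """1D run-and-tumble particle in a harmonic trap (overdamped sketch).

    dx/dt = -k*x + v0*s(t), where the self-propulsion direction s = +/-1
    flips at rate alpha. The histogram of the returned positions is the
    stationary distribution whose non-Boltzmann shape signals how far the
    dynamics is from equilibrium.
    """
    x, s = 0.0, 1
    xs = np.empty(n_steps)
    for i in range(n_steps):
        if rng.random() < alpha * dt:      # tumble: flip propulsion direction
            s = -s
        x += (-k * x + v0 * s) * dt
        xs[i] = x
    return xs

positions = run_and_tumble_trap()
print(positions.mean(), positions.std())
```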
An exact collisionless equilibrium for the Force-Free Harris Sheet with low plasma beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allanson, O., E-mail: oliver.allanson@st-andrews.ac.uk; Neukirch, T., E-mail: tn3@st-andrews.ac.uk; Wilson, F., E-mail: fw237@st-andrews.ac.uk
We present a first discussion and analysis of the physical properties of a new exact collisionless equilibrium for a one-dimensional nonlinear force-free magnetic field, namely, the force-free Harris sheet. The solution allows any value of the plasma beta, crucially including values below unity, which previous nonlinear force-free collisionless equilibria could not attain. The distribution function involves infinite series of Hermite polynomials in the canonical momenta, of which the important mathematical properties of convergence and non-negativity have recently been proven. Plots of the distribution function are presented for the plasma beta modestly below unity, and we compare the shape of the distribution function in two of the velocity directions to a Maxwellian distribution.
NASA Astrophysics Data System (ADS)
Andreev, Pavel A.
2017-02-01
The dielectric permeability tensor for spin polarized plasmas is derived in terms of the spin-1/2 quantum kinetic model in six-dimensional phase space. Expressions for the distribution function and the spin distribution function are derived in the linear approximation in the course of deriving the dielectric permeability tensor. The dielectric permeability tensor is derived for the spin-polarized degenerate electron gas. It is also discussed in the finite temperature regime, where the equilibrium distribution function is given by the spin-polarized Fermi-Dirac distribution. Consideration of the spin-polarized equilibrium states opens possibilities for the kinetic modeling of the thermal spin current contribution to the plasma dynamics.
Costa, Luciano T; Ribeiro, Mauro C C
2006-05-14
Molecular dynamics (MD) simulations have been performed for prototype models of polymer electrolytes in which the salt is an ionic liquid based on 1-alkyl-3-methylimidazolium cations and the polymer is poly(ethylene oxide), PEO. The MD simulations were performed by combining the previously proposed models for pure ionic liquids and polymer electrolytes containing simple inorganic ions. A systematic investigation of the effects of ionic liquid concentration, temperature, and 1-alkyl chain length ([1,3-dimethylimidazolium]PF6 and [1-butyl-3-methylimidazolium]PF6) on the resulting equilibrium structure is provided. It is shown that the ionic liquid is dispersed in the polymeric matrix, but ionic pairs remain in the polymer electrolyte. Imidazolium cations are coordinated by both the anions and the oxygen atoms of PEO chains. Probability density maps of occurrences of nearest neighbors around imidazolium cations give a detailed physical picture of the environment experienced by the cations. Conformational changes in PEO chains upon addition of the ionic liquid are identified. The equilibrium structure of the simulated systems is also analyzed in reciprocal space by using the static structure factor, S(k). The calculated S(k) displays a low wave-vector peak, indicating that spatial correlations with extended-range order prevail in the ionic liquid polymer electrolytes. Long-range correlations are assigned to a nonuniform distribution of ionic species within the simulation box.
Praveen, Prashant; Loh, Kai-Chee
2016-06-01
Trioctylphosphine oxide based extractant impregnated membranes (EIM) were used for extraction of phenol and its methyl, hydroxyl and chloride substituted derivatives. The distribution coefficients of the phenols varied from 2 to 234, in the order 1-naphthol > p-chlorophenol > m-cresol > p-cresol > o-cresol > phenol > catechol > pyrogallol > hydroquinone, when the initial phenol loading was varied over 100-2000 mg/L. An extraction model, based on the law of mass action, was formulated to predict the equilibrium distribution of the phenols. The model was in excellent agreement (R² > 0.97) with the experimental results at low phenol concentrations (<800 mg/L). At higher phenol loadings, though, the Langmuir isotherm was better suited for equilibrium prediction (R² > 0.95), which signified high mass transfer resistance in the EIMs. Examination of the effects of ring substitution on equilibrium, and bivariate statistical analysis between the amounts of phenols extracted into the EIMs and factors affecting phenol interaction with TOPO, indicated the dominant role of hydrophobicity in determining the equilibrium. These results improve understanding of the solid/liquid equilibrium process between the phenols and the EIMs, and will be useful in designing a phenol recovery process from wastewater. Copyright © 2016 Elsevier Ltd. All rights reserved.
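As a sketch of the high-loading description mentioned above, the following Python snippet fits a Langmuir isotherm q = q_max K C / (1 + K C) to made-up equilibrium data; the concentrations, loadings and initial guesses are illustrative, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, K):
    """Langmuir isotherm: membrane loading q as a function of the
    equilibrium aqueous concentration c_eq."""
    return q_max * K * c_eq / (1.0 + K * c_eq)

# Illustrative data only (mg/L and mg/g); not taken from the paper.
c_eq = np.array([50.0, 150.0, 400.0, 800.0, 1500.0])
q = np.array([20.0, 48.0, 80.0, 105.0, 120.0])

(q_max, K), _ = curve_fit(langmuir, c_eq, q, p0=[150.0, 0.01])
print(q_max, K)   # fitted capacity and affinity constant
```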
Geologic map of the Agnesi quadrangle (V-45), Venus
Hansen, Vicki L.; Tharalson, Erik R.
2014-01-01
Two general classes of hypotheses have emerged to address the near random spatial distribution of ~970 apparently pristine impact craters across the surface of Venus: (1) catastrophic/episodic resurfacing and (2) equilibrium/evolutionary resurfacing. Catastrophic/episodic hypotheses propose that a global-scale, temporally punctuated event or events dominated Venus’ evolution and that the generally uniform impact crater distribution (Schaber and others, 1992; Phillips and others, 1992; Herrick and others, 1997) reflects craters that accumulated during relative global quiescence since that event (for example, Strom and others, 1994; Herrick, 1994; Turcotte and others, 1999). Equilibrium/evolutionary hypotheses suggest instead that the near random crater distribution results from relatively continuous, but spatially localized, resurfacing in which volcanic and (or) tectonic processes occur across the planet through time, although the style of operative processes may have varied temporally and spatially (for example, Phillips and others, 1992; Guest and Stofan, 1999; Hansen and Young, 2007). Geologic relations within the map area allow us to test the catastrophic/episodic versus equilibrium/evolutionary resurfacing hypotheses.
NASA Astrophysics Data System (ADS)
Gurevich, Boris M.; Tempel'man, Arcady A.
2010-05-01
For a dynamical system τ with 'time' Z^d and compact phase space X, we introduce three subsets of the space R^m related to a continuous function f: X → R^m: the set of time means of f and two sets of space means of f, namely those corresponding to all τ-invariant probability measures and those corresponding to some equilibrium measures on X. The main results concern topological properties of these sets of means and their mutual position. Bibliography: 18 titles.
Diagnostic criteria for Menière's disease.
Lopez-Escamez, Jose A; Carey, John; Chung, Won-Ho; Goebel, Joel A; Magnusson, Måns; Mandalà, Marco; Newman-Toker, David E; Strupp, Michael; Suzuki, Mamoru; Trabalzini, Franco; Bisdorff, Alexandre
2015-01-01
This paper presents diagnostic criteria for Menière's disease jointly formulated by the Classification Committee of the Bárány Society, The Japan Society for Equilibrium Research, the European Academy of Otology and Neurotology (EAONO), the Equilibrium Committee of the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) and the Korean Balance Society. The classification includes two categories: definite Menière's disease and probable Menière's disease. The diagnosis of definite Menière's disease is based on clinical criteria and requires the observation of an episodic vertigo syndrome associated with low- to medium-frequency sensorineural hearing loss and fluctuating aural symptoms (hearing, tinnitus and/or fullness) in the affected ear. Duration of vertigo episodes is limited to a period between 20 minutes and 12 hours. Probable Menière's disease is a broader concept defined by episodic vestibular symptoms (vertigo or dizziness) associated with fluctuating aural symptoms occurring in a period from 20 minutes to 24 hours.
Phonon Mapping in Flowing Equilibrium
NASA Astrophysics Data System (ADS)
Ruff, J. P. C.
2015-03-01
When a material conducts heat, a modification of the phonon population occurs. The equilibrium Bose-Einstein distribution is perturbed towards flowing-equilibrium, for which the distribution function is not analytically known. Here I argue that the altered phonon population can be efficiently mapped over broad regions of reciprocal space, via diffuse x-ray scattering or time-of-flight neutron scattering, while a thermal gradient is applied across a single crystal sample. When compared to traditional transport measurements, this technique offers a superior, information-rich new perspective on lattice thermal conductivity, wherein the band and momentum dependences of the phonon thermal current are directly resolved. The proposed method is benchmarked using x-ray thermal diffuse scattering measurements of single crystal diamond under transport conditions. CHESS is supported by the NSF & NIH/NIGMS via NSF Award DMR-1332208.
Vilar, Vítor J P; Botelho, Cidália M S; Boaventura, Rui A R
2007-05-08
Pb(II) biosorption onto the algae Gelidium, algal waste from the agar extraction industry and a composite material was studied. Discrete and continuous site distribution models were used to describe the biosorption equilibrium at different pH (5.3, 4 and 3), considering competition between Pb(II) ions and protons. The affinity distribution function of Pb(II) on the active sites was calculated by the Sips distribution. The Langmuir equilibrium constant was compared with the apparent affinity calculated by the discrete model, showing higher affinity for lead ions at higher pH values. Kinetic experiments were conducted at initial Pb(II) concentrations of 29-104 mg l⁻¹ and the data were fitted to the Lagergren pseudo-first-order and pseudo-second-order models. The adsorptive behaviour of biosorbent particles was modelled using a batch mass transfer kinetic model, which successfully predicts Pb(II) concentration profiles at different initial lead concentrations and pH, and provides significant insights into the biosorbents' performance. Average values of the homogeneous diffusivity, D_h, are 3.6 × 10⁻⁸, 6.1 × 10⁻⁸ and 2.4 × 10⁻⁸ cm² s⁻¹, respectively, for Gelidium, algal waste and the composite material. The concentration of lead inside biosorbent particles follows a parabolic profile that becomes linear near equilibrium.
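For the kinetic part, the pseudo-second-order model mentioned above is often fitted in its linearized form t/q_t = 1/(k₂·q_e²) + t/q_e. The sketch below does exactly that with invented uptake data; the numbers are placeholders, not the paper's measurements.

```python
import numpy as np

# hypothetical uptake data: time (min) and Pb(II) adsorbed per unit biosorbent (mg/g)
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0, 180.0])
q_t = np.array([18.0, 27.0, 35.0, 41.0, 43.5, 45.5, 46.0])

# linearized pseudo-second-order model: t/q_t = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q_t, 1)
q_e = 1.0 / slope                     # equilibrium uptake (mg/g)
k_2 = 1.0 / (intercept * q_e ** 2)    # rate constant (g mg^-1 min^-1)
print(f"qe = {q_e:.1f} mg/g, k2 = {k_2:.4f} g/(mg min)")
```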
Inference of directional selection and mutation parameters assuming equilibrium.
Vogl, Claus; Bergman, Juraj
2015-12-01
In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
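As a sketch of the equilibrium distribution referred to above, the snippet evaluates one common parameterization of Wright's stationary density of the allelic proportion, phi(x) ∝ exp(γx)·x^(θβ−1)·(1−x)^(θ(1−β)−1), with scaled mutation rate θ, mutation bias β and scaled directional selection γ; the scaling conventions may differ from the paper's, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def wright_density(theta, beta, gamma):
    """Unnormalized stationary density phi(x) of the allelic proportion under
    scaled mutation rate theta, mutation bias beta and scaled selection gamma,
    together with its numerically computed normalizing constant."""
    def phi(x):
        return np.exp(gamma * x) * x ** (theta * beta - 1.0) \
               * (1.0 - x) ** (theta * (1.0 - beta) - 1.0)
    norm, _ = quad(phi, 0.0, 1.0, limit=200)
    return phi, norm

phi, z = wright_density(theta=0.1, beta=0.5, gamma=2.0)
print(f"equilibrium density at x = 0.9: {phi(0.9) / z:.4f}")
```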
Development of a methodology to evaluate material accountability in pyroprocess
NASA Astrophysics Data System (ADS)
Woo, Seungmin
This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), an Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are built to understand the axial non-uniformity of the nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution consists of the neutron flux, cross section and time. The axial concentration distribution for a nuclide with a small cross section becomes steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes and on decoupling the axial distributions from the predetermined representative radial distributions by matching the axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are appropriately modified to represent processing of materials in the head-end process of the pyroprocess, that is, chopping, voloxidation and granulation. The expectation and standard deviation of the Pu-to-244Cm ratio from single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Then, uncertainty propagation through the key-pyroprocess is conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process, in the system. The random variable LOPu is defined for evaluating the non-detection probability at each Key Measurement Point (KMP) as the original Pu mass minus the Pu mass after a missing scenario. The number of assemblies required for the LOPu to reach 8 kg is considered in this calculation. The probability of detection for the 8 kg LOPu is evaluated with respect to the granule and powder sizes using event tree analysis and hypothesis testing. There are cases in which the probability of detection for the 8 kg LOPu falls below 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key-pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, it is observed that the probability of detection increases significantly when larger granule sample sizes are used to evaluate the Pu-to-244Cm ratio before the key-pyroprocess.
Based on these observations, even though Pu material accountability in the pyroprocess is affected by the non-uniformity of the nuclide composition when the Pu-to-244Cm ratio method is applied, this can be surmounted by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.).
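The Geary-Hinkley transformation used above maps a ratio of two (approximately) normal measurements to an approximately standard normal variable, which is convenient for setting detection thresholds. A minimal sketch, with invented placeholder values rather than the dissertation's accountancy data, follows.

```python
import numpy as np

def geary_hinkley(w, mu_x, mu_y, sig_x, sig_y, rho=0.0):
    """Map the ratio w = x/y of two (approximately) normal quantities to a
    variable that is approximately standard normal; valid when y is very
    unlikely to be near zero."""
    num = mu_y * w - mu_x
    den = np.sqrt(sig_y ** 2 * w ** 2 - 2.0 * rho * sig_x * sig_y * w + sig_x ** 2)
    return num / den

# placeholder values: declared Pu and 244Cm masses (with uncertainties) and a
# measured ratio slightly above the declared value
t_stat = geary_hinkley(w=95.0, mu_x=4.6, mu_y=0.05, sig_x=0.05, sig_y=0.001)
print(f"approximately standard-normal test statistic: {t_stat:.2f}")
```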
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2017-08-01
Distributed radar network systems have been shown to have many unique features. Due to their advantage of signal and spatial diversities, radar networks are attractive for target detection. In practice, the netted radars in radar networks are supposed to maximize their transmit power to achieve better detection performance, which may be in contradiction with low probability of intercept (LPI). Therefore, this paper investigates the problem of adaptive power allocation for radar networks in a cooperative game-theoretic framework such that the LPI performance can be improved. Taking into consideration both the transmit power constraints and the minimum signal to interference plus noise ratio (SINR) requirement of each radar, a cooperative Nash bargaining power allocation game based on LPI is formulated, whose objective is to minimize the total transmit power by optimizing the power allocation in radar networks. First, a novel SINR-based network utility function is defined and utilized as a metric to evaluate power allocation. Then, with the well-designed network utility function, the existence and uniqueness of the Nash bargaining solution are proved analytically. Finally, an iterative Nash bargaining algorithm is developed that converges quickly to a Pareto optimal equilibrium for the cooperative game. Numerical simulations and theoretic analysis are provided to evaluate the effectiveness of the proposed algorithm.
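The paper's method is a cooperative Nash bargaining game; as a much simpler point of reference for SINR-constrained power minimization, the sketch below runs a classic distributed power-control iteration (Foschini-Miljanic style) that rescales each transmitter's power to just meet its SINR floor. The channel gains, noise levels and SINR targets are invented, and this is not the algorithm proposed in the paper.

```python
import numpy as np

def sinr_power_control(gain, noise, sinr_min, p0, n_iter=100):
    """Foschini-Miljanic-style iteration: each node rescales its power so that
    its SINR just meets the target, p_i <- (sinr_min_i / sinr_i) * p_i.
    gain[i, j] is the channel gain from transmitter j to receiver i."""
    p = p0.copy()
    for _ in range(n_iter):
        signal = np.diag(gain) * p
        interference = gain @ p - signal + noise
        p = p * sinr_min * interference / signal
    return p

gain = np.array([[1.00, 0.08, 0.05],
                 [0.07, 1.00, 0.06],
                 [0.04, 0.09, 1.00]])
noise = np.full(3, 0.1)
p = sinr_power_control(gain, noise, sinr_min=np.full(3, 2.0), p0=np.ones(3))
signal = np.diag(gain) * p
print("powers:", np.round(p, 3), "SINRs:", np.round(signal / (gain @ p - signal + noise), 2))
```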
Bellomo, A; Inbar, G
1997-01-01
One of the theories of human motor control is the gamma Equilibrium Point Hypothesis. It is an attractive theory since it offers an easy control scheme where the planned trajectory shifts monotonically from an initial to a final equilibrium state. The feasibility of this model was tested by reconstructing the virtual trajectory and the stiffness profiles for movements performed with different inertial loads and examining them. Three types of movements were tested: passive movements, targeted movements, and repetitive movements. Each of the movements was performed with five different inertial loads. Plausible virtual trajectories and stiffness profiles were reconstructed based on the gamma Equilibrium Point Hypothesis for the three different types of movements performed with different inertial loads. However, the simple control strategy supported by the model, where the planned trajectory shifts monotonically from an initial to a final equilibrium state, could not be supported for targeted movements performed with added inertial load. To test the feasibility of the model further, we must examine the probability that the human motor control system would choose a control trajectory more complicated than the actual trajectory.
Far-from-Equilibrium Route to Superthermal Light in Bimodal Nanolasers
NASA Astrophysics Data System (ADS)
Marconi, Mathias; Javaloyes, Julien; Hamel, Philippe; Raineri, Fabrice; Levenson, Ariel; Yacomotti, Alejandro M.
2018-02-01
Microscale and nanoscale lasers inherently exhibit rich photon statistics due to complex light-matter interaction in a strong spontaneous emission noise background. It is well known that they may display superthermal fluctuations—photon superbunching—in specific situations due to either gain competition, leading to mode-switching instabilities, or carrier-carrier coupling in superradiant microcavities. Here we show a generic route to superbunching in bimodal nanolasers by preparing the system far from equilibrium through a parameter quench. We demonstrate, both theoretically and experimentally, that transient dynamics after a short-pump-pulse-induced quench leads to heavy-tailed superthermal statistics when projected onto the weak mode. We implement a simple experimental technique to access the probability density functions that further enables quantifying the distance from thermal equilibrium via the thermodynamic entropy. The universality of this mechanism relies on the far-from-equilibrium dynamical scenario, which can be mapped to a fast cooling process of a suspension of Brownian particles in a liquid. Our results open up new avenues to mold photon statistics in multimode optical systems and may constitute a test bed to investigate out-of-equilibrium thermodynamics using micro or nanocavity arrays.
Equilibrium problems for Raney densities
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Liu, Dang-Zheng; Zinn-Justin, Paul
2015-07-01
The Raney numbers are a class of combinatorial numbers generalising the Fuss-Catalan numbers. They are indexed by a pair of positive real numbers (p, r) with p > 1 and 0 < r ⩽ p, and form the moments of a probability density function. For certain (p, r) the latter has the interpretation as the density of squared singular values for certain random matrix ensembles, and in this context equilibrium problems characterising the Raney densities for (p, r) = (θ + 1, 1) and (θ/2 + 1, 1/2) have recently been proposed. Using two different techniques—one based on the Wiener-Hopf method for the solution of integral equations and the other on an analysis of the algebraic equation satisfied by the Green's function—we establish the validity of the equilibrium problems for general θ > 0 and similarly use both methods to identify the equilibrium problem for (p, r) = (θ/q + 1, 1/q), θ > 0 and q ∈ Z⁺. The Wiener-Hopf method is used to extend the latter to parameters (p, r) = (θ/q + 1, m + 1/q) for m a non-negative integer, and also to identify the equilibrium problem for a family of densities with moments given by certain binomial coefficients.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
Dynamical behaviors of inter-out-of-equilibrium state intervals in Korean futures exchange markets
NASA Astrophysics Data System (ADS)
Lim, Gyuchang; Kim, SooYong; Kim, Kyungsik; Lee, Dong-In; Scalas, Enrico
2008-05-01
A recently discovered feature of financial markets, the two-phase phenomenon, is utilized to categorize a financial time series into two phases, namely equilibrium and out-of-equilibrium states. For out-of-equilibrium states, we analyze the time intervals at which the state is revisited. The power-law distribution of inter-out-of-equilibrium state intervals is shown and we present an analogy with discrete-time heat bath dynamics, similar to random Ising systems. In the mean-field approximation, this model reduces to a one-dimensional multiplicative process. By varying global and local model parameters, the relationship between volatilities in financial markets and the interaction strengths between agents in the Ising model is investigated and discussed.
NASA Astrophysics Data System (ADS)
Rafkin, Scot C. R.; Soto, Alejandro; Michaels, Timothy I.
2016-10-01
A newly developed general circulation model (GCM) for Pluto is used to investigate the impact of a heterogeneous distribution of nitrogen surface ice and large-scale topography on Pluto's atmospheric circulation. The GCM is based on the GFDL Flexible Modeling System (FMS). Physics include a gray-model radiative-conductive scheme, subsurface conduction, and a nitrogen volatile cycle. The radiative-conductive model takes into account the 2.3, 3.3 and 7.8 μm bands of CH4 and CO, including non-local thermodynamic equilibrium effects. The nitrogen volatile cycle is based on a vapor pressure equilibrium assumption between the atmosphere and surface. Prior to the arrival of the New Horizons spacecraft, the expectation was that the volatile ice distribution on the surface of Pluto would be strongly controlled by the latitudinal temperature gradient. If this were the case, then Pluto would have broad latitudinal bands of both ice-covered surface and ice-free surface, as dictated by the season. Further, the circulation, and thus the transport of volatiles, was thought to be driven almost exclusively by sublimation and deposition flows associated with the volatile cycle. In contrast to expectations, images from New Horizons showed an extremely complex, heterogeneous distribution of surface ices draped over substantial and variable topography. To produce such an ice distribution, the atmospheric circulation and volatile transport must be more complex than previously envisioned. Simulations in which topography, surface ice distributions, and volatile cycle physics are added individually and in various combinations are used to quantify the separate importance of the general circulation, topography, surface ice distributions, and condensation flows. It is shown that even regional patches of ice or large craters can have global impacts on the atmospheric circulation, the volatile cycle, and hence, the distribution of surface ices. The work demonstrates that explaining Pluto's volatile cycle and the expression of that cycle in the surface ice distributions requires consideration of atmospheric processes beyond simple vapor pressure equilibrium arguments.
Electrostatic turbulence intermittence driven by biasing in Texas Helimak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toufen, D. L.; Institute of Physics, University of São Paulo, 05315-970 São Paulo, São Paulo; Pereira, F. A. C.
We investigate changes in the intermittent sequence of bursts in the electrostatic turbulence due to an imposed positive bias voltage applied to control the plasma radial electric field in the Texas Helimak [K. W. Gentle and H. He, Plasma Sci. Technol. 10, 284 (2008)]—a toroidal plasma device with a one-dimensional equilibrium, magnetic curvature, and shear. We identify the burst characteristics by analyzing ion saturation current fluctuations collected with a large set of Langmuir probes. The number of bursts increases with positive biasing, giving rise to a long-tailed, skewed turbulence probability distribution function. The burst shape does not change much with the applied bias voltage, while the burst vertical velocity increases monotonically. For high values of bias voltage, the bursts propagate mainly in the vertical direction, which is perpendicular to the radial density gradient and the toroidal magnetic field. Moreover, in contrast with the bursts in tokamaks, the burst velocity agrees with the phase velocity of the overall turbulence in both the vertical and radial directions. For a fixed bias voltage, the time intervals between bursts and their amplitudes follow exponential distributions. Altogether, these burst characteristics indicate that burst production can be modelled by a stochastic process.
Evidence for non-synchronous rotation of Europa. Galileo Imaging Team.
Geissler, P E; Greenberg, R; Hoppa, G; Helfenstein, P; McEwen, A; Pappalardo, R; Tufts, R; Ockert-Bell, M; Sullivan, R; Greeley, R; Belton, M J; Denk, T; Clark, B; Burns, J; Veverka, J
1998-01-22
Non-synchronous rotation of Europa was predicted on theoretical grounds, by considering the orbitally averaged torque exerted by Jupiter on the satellite's tidal bulges. If Europa's orbit were circular, or the satellite were comprised of a frictionless fluid without tidal dissipation, this torque would average to zero. However, Europa has a small forced eccentricity e ≈ 0.01, generated by its dynamical interaction with Io and Ganymede, which should cause the equilibrium spin rate of the satellite to be slightly faster than synchronous. Recent gravity data suggest that there may be a permanent asymmetry in Europa's interior mass distribution which is large enough to offset the tidal torque; hence, if non-synchronous rotation is observed, the surface is probably decoupled from the interior by a subsurface layer of liquid or ductile ice. Non-synchronous rotation was invoked to explain Europa's global system of lineaments and an equatorial region of rifting seen in Voyager images. Here we report an analysis of the orientation and distribution of these surface features, based on initial observations made by the Galileo spacecraft. We find evidence that Europa spins faster than the synchronous rate (or did so in the past), consistent with the possibility of a global subsurface ocean.
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: as far back as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction to probability, we will review the concept of the "probability distribution", that is, a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
Transition probability, dynamic regimes, and the critical point of financial crisis
NASA Astrophysics Data System (ADS)
Tang, Yinan; Chen, Ping
2015-07-01
An empirical and theoretical analysis of financial crises is conducted based on statistical mechanics in non-equilibrium physics. The transition probability provides a new tool for diagnosing a changing market. Both calm and turbulent markets can be described by the birth-death process for price movements driven by identical agents. The transition probability in a time window can be estimated from stock market indexes. Positive and negative feedback trading behaviors can be revealed by the upper and lower curves in transition probability. Three dynamic regimes are discovered from two time periods including linear, quasi-linear, and nonlinear patterns. There is a clear link between liberalization policy and market nonlinearity. Numerical estimation of a market turning point is close to the historical event of the US 2008 financial crisis.
ERIC Educational Resources Information Center
Cwikel, Dori; And Others
1986-01-01
Discusses the use of the separatory cylinder in student laboratory experiments for investigating the equilibrium distribution of a solute between immiscible phases. Describes the procedures for four sets of experiments of this nature. Lists of materials needed and quantities of reagents are provided. (TW)
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
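In the same spirit as the thought experiment, a minimal Ehrenfest-style urn simulation is sketched below: at each step a marble is drawn at random and moved to the other urn, and the occupation of one urn relaxes to, and then fluctuates about, the maximum-entropy (binomial) equilibrium. The marble and step counts are arbitrary, and this is an illustration rather than the article's own lottery.

```python
import numpy as np

rng = np.random.default_rng(1)
n_marbles, n_steps = 100, 20_000

in_urn_a = np.ones(n_marbles, dtype=bool)    # start with every marble in urn A
occupancy = np.empty(n_steps, dtype=int)
for step in range(n_steps):
    marble = rng.integers(n_marbles)         # the "lottery": pick a marble at random
    in_urn_a[marble] = ~in_urn_a[marble]     # move it to the other urn
    occupancy[step] = in_urn_a.sum()

# after a transient the occupation of urn A fluctuates about n/2, the most
# probable (maximum-entropy, binomial) macrostate
print("mean occupation of urn A over the second half:", occupancy[n_steps // 2:].mean())
```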
The U.S. Environmental Protection Agency used insights and methods from its water quality criteria program to develop ESGs. The discovery that freely-dissolved contaminants were the toxic form led to equilibrium partitioning being chosen to model the distribution of contaminants...
NASA Astrophysics Data System (ADS)
Bonnaventure, P. P.; Lewkowicz, A. G.
2008-12-01
Spatial models of permafrost probability for three study areas in northwest Canada between 59°N and 61°N were perturbed to investigate climate change impacts. The models are empirical-statistical in nature, based on basal temperature of snow (BTS) measurements in winter, and summer ground-truthing of the presence or absence of frozen ground. Predictions of BTS values are made using independent variables of elevation and potential incoming solar radiation (PISR), both derived from a 30 m DEM. These are then transformed into the probability of the presence or absence of permafrost through logistic regression. Under present climate conditions, permafrost percentages in the study areas are 44% for Haines Summit, British Columbia, 38% for Wolf Creek, Yukon, and 69% for part of the Ruby Range, Yukon (Bonnaventure and Lewkowicz, 2008; Lewkowicz and Bonaventure, 2008). Scenarios of air temperature change from -2K (approximating Neoglacial conditions) to +5K (possible within the next century according to the IPCC) were examined for the three sites. Manipulations were carried out by lowering or raising the terrain within the DEM assuming a mean environmental lapse rate of 6.5K/km. Under a -2K scenario, permafrost extent increased by 22-43% in the three study areas. Under a +5K warming, permafrost essentially disappeared in Haines Summit and Wolf Creek, while in the Ruby Range less than 12% of the area remained perennially frozen. It should be emphasized that these model predictions are for equilibrium conditions which might not be attained for several decades or longer in areas of cold permafrost. Cloud cover changes of -10% to +10% were examined through adjusting the partitioning of direct beam and diffuse radiation in the PISR input field. Changes to permafrost extent were small, ranging from -2% to -4% for greater cloudiness with changes of the opposite magnitude for less cloud. The results show that air temperature change has a much greater potential to affect mountain permafrost distribution in the long-term than the probable range of cloud cover changes. Modelled results for the individual areas respond according to the hypsometry of the terrain and the relative strength of elevation and PISR in the regression models. This study indicates that significant changes to the distribution and extent of mountain permafrost in northwest Canada can be expected in the next few decades. References Bonnaventure, P.P. and Lewkowicz, A.G. (2008). Mountain permafrost probability mapping using the BTS method in two climatically dissimilar locations, northwest Canada. Canadian Journal of Earth Sciences, 45, 443-455. Lewkowicz, A.G. and Bonnaventure, P.P. (2008). Interchangeability of local mountain permafrost probability models, northwest Canada. Permafrost and Periglacial Processes, 19, 49-62.
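As a schematic of the modelling chain described above (BTS predicted from elevation and PISR, converted to a permafrost probability by logistic regression, with climate scenarios applied by shifting the terrain through a 6.5 K/km lapse rate), the sketch below uses invented regression coefficients and DEM values; it is not the published model.

```python
import numpy as np

# placeholder regression coefficients (not the published model)
BTS_COEF = {"intercept": 2.0, "elev": -0.004, "pisr": 0.002}   # BTS in deg C
LOGIT_COEF = {"intercept": -1.0, "bts": -1.5}                  # P(permafrost | BTS)
LAPSE_RATE = 6.5e-3                                            # K per metre

def permafrost_probability(elev_m, pisr, d_temp_k=0.0):
    """Predict BTS from elevation and PISR, then map BTS to a permafrost
    probability with a logistic model; a climate perturbation d_temp_k is
    applied by shifting the terrain through the environmental lapse rate."""
    eff_elev = elev_m - d_temp_k / LAPSE_RATE
    bts = BTS_COEF["intercept"] + BTS_COEF["elev"] * eff_elev + BTS_COEF["pisr"] * pisr
    logit = LOGIT_COEF["intercept"] + LOGIT_COEF["bts"] * bts
    return 1.0 / (1.0 + np.exp(-logit))

elev = np.array([900.0, 1400.0, 1900.0])    # m a.s.l. (placeholder DEM cells)
pisr = np.array([600.0, 550.0, 500.0])      # placeholder radiation index
for dt in (-2.0, 0.0, 5.0):
    print(f"dT = {dt:+.0f} K:", np.round(permafrost_probability(elev, pisr, dt), 2))
```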
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
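A minimal sketch of the two-step recipe described above follows: white Gaussian noise is filtered in the Fourier domain to impose a target power spectrum, and the resulting colored Gaussian field is pushed through the Gaussian CDF and then the inverse CDF of the desired amplitude distribution. The power-law spectrum and exponential target PDF are arbitrary illustrative choices, and, as the abstract notes, the final spectrum is only approximately preserved by the memoryless transform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# step 1: colored Gaussian field with a prescribed (here: power-law) spectrum
kx = np.fft.fftfreq(n)
ky = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
amplitude = np.zeros_like(k)
amplitude[k > 0] = k[k > 0] ** -1.5                    # assumed |F|^2 ~ k^-3
white = np.fft.fft2(rng.standard_normal((n, n)))
colored = np.real(np.fft.ifft2(white * amplitude))
colored = (colored - colored.mean()) / colored.std()   # zero-mean, unit-variance Gaussian

# step 2: memoryless transform to the desired amplitude PDF (here: exponential)
uniform = stats.norm.cdf(colored)                      # Gaussian -> uniform
field = stats.expon.ppf(uniform, scale=1.0)            # uniform -> target PDF
print("mean and std of the transformed field:", field.mean(), field.std())
```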
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduced significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
NASA Astrophysics Data System (ADS)
Budaev, Bair V.; Bogy, David B.
2018-06-01
We extend the statistical analysis of equilibrium systems to systems with a constant heat flux. This extension leads to natural generalizations of Maxwell-Boltzmann's and Planck's equilibrium energy distributions to energy distributions of systems with a net heat flux. This development provides a long needed foundation for addressing problems of nanoscale heat transport by a systematic method based on a few fundamental principles. As an example, we consider the computation of the radiative heat flux between narrowly spaced half-spaces maintained at different temperatures.
Statics and dynamics of DNA knotting
NASA Astrophysics Data System (ADS)
Orlandini, Enzo
2018-02-01
Knots and entanglement in polymers and biopolymers such as DNA and proteins constitute a timely topic that spans various scientific disciplines ranging from physics to chemistry, biology and mathematics. Although in the past many advancements have been made in understanding the equilibrium knotting probability and knot complexity of long polymer chains in solutions, many questions have been addressed in recent years by both experimental and theoretical means—for instance, how the knotting probability depends on the quality of the solvent, the elastic properties of the molecule and its degree of confinement. How knots form, evolve and eventually disappear in a fluctuating chain. Are the equilibrium and non-equilibrium properties of knotted molecules affected by the knot swelling/shrinking dynamics? Moreover, thanks to the great advance in nanotechnology and micromanipulation techniques, nowadays knots can be ‘manually’ tied in a single DNA molecule, followed during their motion along the chains, forced to pass through nanopores, or stretched by external forces or elongational flows. All these experimental approaches allow access to new information on the interplay of topology and polymer physics, and this has opened new perspectives in the field. Here, we provide an overview of the current knowledge of this topic, stressing the main results obtained, including the recent developments in experimental and computational approaches. Since almost all experiments on knotting involve DNA, the review will be mainly focused on the topological properties of this fascinating and biologically relevant molecule.
Lopez-Escamez, José A; Carey, John; Chung, Won-Ho; Goebel, Joel A; Magnusson, Måns; Mandalà, Marco; Newman-Toker, David E; Strupp, Michael; Suzuki, Mamoru; Trabalzini, Franco; Bisdorff, Alexandre
2016-01-01
This paper presents diagnostic criteria for Menière's disease jointly formulated by the Classification Committee of the Bárány Society, The Japan Society for Equilibrium Research, the European Academy of Otology and Neurotology (EAONO), the Equilibrium Committee of the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) and the Korean Balance Society. The classification includes two categories: definite Menière's disease and probable Menière's disease. The diagnosis of definite Menière's disease is based on clinical criteria and requires the observation of an episodic vertigo syndrome associated with low- to medium-frequency sensorineural hearing loss and fluctuating aural symptoms (hearing, tinnitus and/or fullness) in the affected ear. Duration of vertigo episodes is limited to a period between 20 min and 12 h. Probable Menière's disease is a broader concept defined by episodic vestibular symptoms (vertigo or dizziness) associated with fluctuating aural symptoms occurring in a period from 20 min to 24 h. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Mamikhin, S V; Manakhov, D V; Shcheglov, A I
2014-01-01
A further study of the distribution of the radioactive isotopes of caesium and strontium and their chemical analogues in the above-ground components of pine was carried out for the period remote from the accident. The results of the research confirmed the existence of an analogy in the distribution of these elements among the components of this type of woody vegetation in the quasi-equilibrium (with respect to the radionuclides) state. The possibility of selectively using data on the ash content of the components of pine and oak forest stands as an informational analogue is also shown.
Diagnostic modeling of trace metal partitioning in south San Francisco Bay
Wood, T. W.; Baptista, A. M.; Kuwabara, J.S.; Flegal, A.R.
1995-01-01
The numerical results indicate that aqueous speciation will control basin-scale spatial variations in the apparent distribution coefficient, Kda, if the system is close to equilibrium. However, basin-scale spatial variations in Kda are determined by the location of the sources of metal and the suspended solids concentration of the receiving water if the system is far from equilibrium. The overall spatial variability in Kda also increases as the system moves away from equilibrium.
Integral Equation for the Equilibrium State of Colliding Electron Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warnock, Robert L.
2002-11-11
We study a nonlinear integral equation for the equilibrium phase distribution of stored colliding electron beams. It is analogous to the Haissinski equation, being derived from Vlasov-Fokker-Planck theory, but is quite different in form. We prove existence of a unique solution, thus the existence of a unique equilibrium state, for sufficiently small current. This is done for the Chao-Ruth model of the beam-beam interaction in one degree of freedom. We expect no difficulty in generalizing the argument to more realistic models.
A structural model for the in vivo human cornea including collagen-swelling interaction
Cheng, Xi; Petsche, Steven J.; Pinsky, Peter M.
2015-01-01
A structural model of the in vivo cornea, which accounts for tissue swelling behaviour, for the three-dimensional organization of stromal fibres and for collagen-swelling interaction, is proposed. Modelled as a binary electrolyte gel in thermodynamic equilibrium, the stromal electrostatic free energy is based on the mean-field approximation. To account for active endothelial ionic transport in the in vivo cornea, which modulates osmotic pressure and hydration, stromal mobile ions are shown to satisfy a modified Boltzmann distribution. The elasticity of the stromal collagen network is modelled based on three-dimensional collagen orientation probability distributions for every point in the stroma obtained by synthesizing X-ray diffraction data for azimuthal angle distributions and second harmonic-generated image processing for inclination angle distributions. The model is implemented in a finite-element framework and employed to predict free and confined swelling of stroma in an ionic bath. For the in vivo cornea, the model is used to predict corneal swelling due to increasing intraocular pressure (IOP) and is adapted to model swelling in Fuchs' corneal dystrophy. The biomechanical response of the in vivo cornea to a typical LASIK surgery for myopia is analysed, including tissue fluid pressure and swelling responses. The model provides a new interpretation of the corneal active hydration control (pump-leak) mechanism based on osmotic pressure modulation. The results also illustrate the structural necessity of fibre inclination in stabilizing the corneal refractive surface with respect to changes in tissue hydration and IOP. PMID:26156299
Kinetic Features Observed in the Solar Wind Electron Distributions
NASA Astrophysics Data System (ADS)
Pierrard, V.; Lazar, M.; Poedts, S.
2016-12-01
More than 120,000 velocity distributions measured by Helios, Cluster and Ulysses in the ecliptic have been analyzed within an extended range of heliocentric distances from 0.3 to over 4 AU. The velocity distributions of electrons reveal a dual structure with a thermal (Maxwellian) core and a suprathermal (Kappa) halo. A detailed observational analysis of these two components provides estimations of their temperatures and temperature anisotropies, and we decode any potential interdependence that their properties may indicate. The core temperature is found to decrease with the radial distance, while the halo temperature slightly increases, clarifying an apparent contradiction in previous observational analyses and providing valuable clues about the temperature of the Kappa-distributed populations. For low values of the power-index kappa, these two components manifest a clear tendency to deviate from isotropy in the same direction, which seems to confirm the existence of mechanisms with similar effects on both components, e.g., the solar wind expansion, or the particle heating by the fluctuations. However, the existence of plasma states with anti-correlated anisotropies of the core and halo populations, and the increase of their number for high values of the power-index kappa, suggest a dynamic interplay of these components, mediated most probably by the anisotropy-driven instabilities. Estimating the temperature of the solar wind particles and their anisotropies is particularly important for understanding the origin of these deviations from thermal equilibrium as well as their effects.
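For reference, the sketch below evaluates a Maxwellian core together with one common isotropic (Olbert-type) parameterization of the Kappa halo; the densities, temperatures and kappa index are illustrative values rather than the fitted Helios/Cluster/Ulysses parameters, and conventions for the Kappa distribution vary in the literature.

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.constants import k as k_B, m_e

def maxwellian(v, n, T):
    """Isotropic Maxwellian phase-space density for electrons (SI units)."""
    vth2 = 2.0 * k_B * T / m_e
    return n * (np.pi * vth2) ** -1.5 * np.exp(-v ** 2 / vth2)

def kappa_dist(v, n, T, kappa):
    """One common isotropic (Olbert-type) Kappa distribution with true temperature T."""
    theta2 = (2.0 * kappa - 3.0) / kappa * k_B * T / m_e
    norm = n * (np.pi * kappa * theta2) ** -1.5 * Gamma(kappa + 1.0) / Gamma(kappa - 0.5)
    return norm * (1.0 + v ** 2 / (kappa * theta2)) ** -(kappa + 1.0)

v = np.linspace(0.0, 2.0e7, 5)                        # electron speeds (m/s)
core = maxwellian(v, n=5.0e6, T=1.0e5)                # dense, cooler Maxwellian core
halo = kappa_dist(v, n=2.5e5, T=5.0e5, kappa=3.0)     # tenuous, hotter Kappa halo
print(np.column_stack([v, core, halo]))
```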
Power Laws are Disguised Boltzmann Laws
NASA Astrophysics Data System (ADS)
Richmond, Peter; Solomon, Sorin
Using a previously introduced model on generalized Lotka-Volterra dynamics together with some recent results for the solution of generalized Langevin equations, we derive analytically the equilibrium mean field solution for the probability distribution of wealth and show that it has two characteristic regimes. For large values of wealth, it takes the form of a Pareto style power law. For small values of wealth, w ≤ w_m, the distribution function tends sharply to zero. The origin of this law lies in the random multiplicative process built into the model. Whilst such results have been known since the time of Gibrat, the present framework allows for a stable power law in an arbitrary and irregular global dynamics, so long as the market is "fair", i.e., there is no net advantage to any particular group or individual. We further show that the dynamics of relative wealth is independent of the specific nature of the agent interactions and exhibits a universal character even though the total wealth may follow an arbitrary and complicated dynamics. In developing the theory, we draw parallels with conventional thermodynamics and derive for the system some new relations for the "thermodynamics" associated with the Generalized Lotka-Volterra type of stochastic dynamics. The power law that arises in the distribution function is identified with new additional logarithmic terms in the familiar Boltzmann distribution function for the system. These are a direct consequence of the multiplicative stochastic dynamics and are absent for the usual additive stochastic processes.
Stability of equations with a distributed delay, monotone production and nonlinear mortality
NASA Astrophysics Data System (ADS)
Berezansky, Leonid; Braverman, Elena
2013-10-01
We consider population dynamics models dN/dt = f(N(t − τ)) − d(N(t)) with an increasing fecundity function f and any mortality function d which can be quadratic, as in the logistic equation, or have a different form provided that the equation has at most one positive equilibrium. Here the delay in the production term can be distributed and unbounded. It is demonstrated that the positive equilibrium is globally attractive if it exists, otherwise all positive solutions tend to zero. Moreover, we demonstrate that solutions of the equation are intrinsically non-oscillatory: once the initial function is less/greater than the equilibrium K > 0, so is the solution for any positive time value. The assumptions on f, d and the delay are rather nonrestrictive, and several examples demonstrate that none of them can be omitted.
Equilibrium statistical mechanics of self-consistent wave-particle system
NASA Astrophysics Data System (ADS)
Elskens, Yves
2005-10-01
The equilibrium distribution of N particles and M waves (e.g. Langmuir) is analysed in the weak-coupling limit for the self-consistent Hamiltonian model H = ∑_r p_r²/(2m) + ∑_j ω_j I_j + ε ∑_{r,j} (β_j/k_j) cos(k_j x_r − θ_j) [1]. In the canonical ensemble, with temperature T and reservoir velocity v < ω_j/k_j, the wave intensities are almost independent and exponentially distributed, with expectation …
Multi-Group Maximum Entropy Model for Translational Non-Equilibrium
NASA Technical Reports Server (NTRS)
Jayaraman, Vegnesh; Liu, Yen; Panesi, Marco
2017-01-01
The aim of the current work is to describe a new model for flows in translational non-equilibrium. Starting from the statistical description of a gas proposed by Boltzmann, the model relies on a domain decomposition technique in velocity space. Using the maximum entropy principle, the logarithm of the distribution function in each velocity sub-domain (group) is expressed with a power series in molecular velocity. New governing equations are obtained using the method of weighted residuals by taking the velocity moments of the Boltzmann equation. The model is applied to a spatially homogeneous Boltzmann equation with a Bhatnagar-Gross-Krook (BGK) model collision operator and the relaxation of an initial non-equilibrium distribution to a Maxwellian is studied using the model. In addition, numerical results obtained using the model for a 1D shock tube problem are also reported.
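The relaxation problem mentioned above can be illustrated with a bare-bones, single-group BGK computation, df/dt = (f_M − f)/τ, on a 1-D velocity grid, where f_M is the Maxwellian sharing the density, bulk velocity and temperature of f. The grid, relaxation time and two-beam initial condition below are arbitrary, and this is not the multi-group maximum-entropy scheme of the paper.

```python
import numpy as np

v = np.linspace(-8.0, 8.0, 401)    # velocity grid in thermal units
dv = v[1] - v[0]
tau = 1.0                          # relaxation time
# two-beam (non-equilibrium) initial condition with unit density
f = 0.5 * (np.exp(-(v - 2.0) ** 2) + np.exp(-(v + 2.0) ** 2)) / np.sqrt(np.pi)

def local_maxwellian(f):
    """Maxwellian with the same density, bulk velocity and temperature as f (k_B = m = 1)."""
    n = f.sum() * dv
    u = (v * f).sum() * dv / n
    T = ((v - u) ** 2 * f).sum() * dv / n
    return n / np.sqrt(2.0 * np.pi * T) * np.exp(-(v - u) ** 2 / (2.0 * T))

dt, n_steps = 0.05, 200
for _ in range(n_steps):
    f += dt * (local_maxwellian(f) - f) / tau     # explicit BGK relaxation step

# after ~10 relaxation times the distribution is essentially Maxwellian
print("L1 distance to the local Maxwellian:", np.abs(f - local_maxwellian(f)).sum() * dv)
```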
An Integrated Approach to Thermodynamics in the Introductory Physics Course.
ERIC Educational Resources Information Center
Alonso, Marcelo; Finn, Edward J.
1995-01-01
Presents an approach to combine the empirical approach of classical thermodynamics with the structural approach of statistical mechanics. Topics covered include dynamical foundation of the first law; mechanical work, heat, radiation, and the first law; thermal equilibrium; thermal processes; thermodynamic probability; entropy; the second law;…
Open Markov Processes and Reaction Networks
ERIC Educational Resources Information Center
Swistock Pollard, Blake Stephen
2017-01-01
We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…
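A tiny numerical illustration of detailed balance for a continuous-time Markov process: the equilibrium distribution is obtained from the rate matrix and the pairwise probability flows π_i·Q_ij are checked for symmetry. The three-state rate matrix below is an invented, reversible example, not one from the thesis.

```python
import numpy as np

# rate matrix Q[i, j] = rate i -> j for a reversible 3-state chain (rows sum to 0)
Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.4, -0.6,  0.2],
              [ 0.2,  0.2, -0.4]])

# equilibrium distribution: pi Q = 0 with the components of pi summing to 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# detailed balance requires pi_i * Q_ij == pi_j * Q_ji for every pair i != j
flows = pi[:, None] * Q
print("equilibrium distribution:", np.round(pi, 4))
print("detailed balance holds:", np.allclose(flows, flows.T))
```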
NASA Astrophysics Data System (ADS)
Taitano, W. T.; Chacón, L.; Simakov, A. N.
2017-06-01
The Fokker-Planck collision operator is an advection-diffusion operator which describes dynamical systems such as weakly coupled plasmas [1,2], photonics in high-temperature environments [3,4], biological systems [5], and even social systems [6]. For plasmas in the continuum, the Fokker-Planck collision operator supports such important physical properties as conservation of number, momentum, and energy, as well as positivity. It also obeys Boltzmann's H-theorem [7-11], i.e., the operator increases the system entropy while simultaneously driving the distribution function towards a Maxwellian. In the discrete setting, when these properties are not ensured, numerical simulations can either fail catastrophically or suffer from significant numerical pollution [12,13]. There is strong emphasis in the literature on developing numerical techniques to solve the Fokker-Planck equation while preserving these properties [12-24]. In this short note, we focus on the analytical equilibrium preserving property, meaning that the Fokker-Planck collision operator vanishes when acting on an analytical Maxwellian distribution function. The equilibrium preservation property is especially important, for example, when one is attempting to capture subtle transport physics. Since transport arises from small O(ε) corrections to the equilibrium [25] (where ε is a small expansion parameter), numerical truncation error present in the equilibrium solution may dominate, overwhelming transport dynamics.
Non-equilibrium flow and sediment transport distribution over mobile river dunes
NASA Astrophysics Data System (ADS)
Hoitink, T.; Naqshband, S.; McElroy, B. J.
2017-12-01
Flow and sediment transport are key processes in the morphodynamics of river dunes. During floods in several rivers (e.g., the Elkhorn, Missouri, Niobrara, and Rio Grande), dunes are observed to grow rapidly as flow strength increases, undergoing an unstable transition regime, after which they are washed out in what is called upper stage plane bed. This morphological evolution of dunes to upper stage plane bed is the strongest bed-form adjustment during non-equilibrium flows and is associated with a significant change in hydraulic roughness and water levels. Detailed experimental investigations, however, have mostly focused on fixed dunes limited to equilibrium flow and bed conditions that are rare in natural channels. Our understanding of the underlying sedimentary processes that result in the washing out of dunes is therefore very limited. In the present study, using the Acoustic Concentration and Velocity Profiler (ACVP), we were able to quantify the flow structure and sediment transport distribution over mobile non-equilibrium dunes. Under these non-equilibrium flow conditions, average dune heights were decreasing while dune lengths were increasing. Preliminary results suggest that this morphological behaviour is due to a positive phase lag between the sediment transport maximum and the topographic maximum, leading to larger erosion on the dune stoss side compared to deposition on the dune lee side.
Intermittent many-body dynamics at equilibrium
NASA Astrophysics Data System (ADS)
Danieli, C.; Campbell, D. K.; Flach, S.
2017-06-01
The equilibrium value of an observable defines a manifold in the phase space of an ergodic and equipartitioned many-body system. A typical trajectory pierces that manifold infinitely often as time goes to infinity. We use these piercings to measure both the relaxation time of the lowest-frequency eigenmode of the Fermi-Pasta-Ulam chain and the fluctuations of the subsequent dynamics in equilibrium. The dynamics in equilibrium is characterized by a power-law distribution of excursion times far off equilibrium, with diverging variance. Long excursions arise from sticky dynamics close to q-breathers localized in normal mode space. Measuring the exponent allows one to predict the transition into nonergodic dynamics. We generalize our method to Klein-Gordon lattices, where the sticky dynamics is due to discrete breathers localized in real space.
Toward a Parastatistics in Quantum Nonextensive Statistical Mechanics
NASA Astrophysics Data System (ADS)
Zaripov, R. G.
2018-05-01
On the basis of Bose quantum states in parastatistics the equations for the equilibrium distribution of quantum additive and nonextensive systems are determined. The fluctuations and variances of physical quantities for the equilibrium system are found. The Abelian group of microscopic entropies is determined for the composition law with a quadratic nonlinearity.
Fixed and equilibrium endpoint problems in uneven-aged stand management
Robert G. Haight; Wayne M. Getz
1987-01-01
Studies in uneven-aged management have concentrated on the determination of optimal steady-state diameter distribution harvest policies for single and mixed species stands. To find optimal transition harvests for irregular stands, either fixed endpoint or equilibrium endpoint constraints can be imposed after finite transition periods. Penalty function and gradient...
The Approach to Equilibrium: Detailed Balance and the Master Equation
ERIC Educational Resources Information Center
Alexander, Millard H.; Hall, Gregory E.; Dagdigian, Paul J.
2011-01-01
The approach to the equilibrium (Boltzmann) distribution of populations of internal states of a molecule is governed by inelastic collisions in the gas phase and with surfaces. The set of differential equations governing the time evolution of the internal state populations is commonly called the master equation. An analytic solution to the master…
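As a concrete, hedged illustration of the relaxation the abstract describes (my own toy example, not the article's): the sketch below builds a three-state rate matrix whose rates satisfy detailed balance with respect to a Boltzmann distribution and integrates the master equation dp/dt = Kp; the energies and the Metropolis-like rate choice are assumptions made purely for the demonstration.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-level system with energies E (arbitrary units), kT = 1.
E = np.array([0.0, 0.5, 1.2])
p_eq = np.exp(-E) / np.exp(-E).sum()          # Boltzmann distribution

# Off-diagonal rates chosen to satisfy detailed balance: K[i,j]*p_eq[j] = K[j,i]*p_eq[i].
K = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            K[i, j] = min(1.0, p_eq[i] / p_eq[j])   # Metropolis-like choice
K -= np.diag(K.sum(axis=0))                    # columns sum to zero: probability conserved

p0 = np.array([1.0, 0.0, 0.0])                 # start entirely in the lowest level
for t in (0.0, 1.0, 5.0, 50.0):
    print(t, expm(K * t) @ p0)                 # approaches p_eq as t grows
print("Boltzmann:", p_eq)
```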
Dynamics and Tolerance of Superionics in Extreme Environment
NASA Astrophysics Data System (ADS)
Annamareddy, Venkata Ajay Krishna Choudary
Superionic conductors are multi-component solid-state systems in which one sub-lattice exhibits exceptional ionic conductivity, comparable to that of the molten state; among other things, the high ionic conductivity facilitates their use as solid-state electrolytes. Uranium dioxide (UO2)--the material of choice for fuel in most nuclear reactors--also shows superionic behavior, although very little is currently understood about fast ion transport in UO2 and its implications. This dissertation aims to provide a better understanding of the dynamical characteristics of superionic conductors under both equilibrium and non-equilibrium thermodynamic conditions. In the first part, the emphasis is on equilibrium fluctuations and associated properties of Type II superionic conductors. Using atomistic simulations as well as available neutron and x-ray scattering data, the order-disorder transition, or onset of the superionic state, for Type II conductors at a certain characteristic temperature (T_alpha) is first revealed. T_alpha marks a structural and kinetic crossover from a crystalline state to a semi-ordered state and is clearly different from the well-known thermodynamic superionic transition (T_lambda). Though not favored by entropic forces, collective and cooperative dynamical effects, reminiscent of glassy states, are manifested in the temperature range spanned by T_alpha and T_lambda. Using atomistic simulations, dynamical heterogeneity (DH)--the presence of clustered mobile and immobile regions in a statically homogeneous system--a ubiquitous feature of supercooled liquids and glassy states, is shown to germinate at T_alpha. Using reliable metrics, the DH is shown to strengthen with increasing temperature, peak at an intermediate temperature between T_alpha and T_lambda, and then recede. This manifestation of DH in superionics markedly differs from that in supercooled liquids through its initial growth against the destabilizing entropic barriers. Atomistic simulations further show that DH in superionics arises from facilitated dynamics, or the phenomenon of dynamic facilitation (DF). Using the mobility transfer function, which gives the probability of a neighbor of a mobile ion becoming mobile relative to that of a random ion becoming mobile, it is shown that mobility propagates continuously to neighboring ions, with the strength of the DF increasing at the order-disorder temperature (T_alpha), exhibiting a maximum at an intermediate temperature, and then decreasing as the temperature approaches T_lambda. This waxing and waning behavior with temperature is nearly identical to the variation of DH. Thus the close correspondence between DH and DF strongly indicates that DF underpins the heterogeneous dynamics in Type II superionic conductors. In a dynamically facilitated system, a jammed region can become unjammed only if it is physically adjacent to a mobile region. Remarkably, a string-like displacement of ions, the quintessential mode of particle mobility in jammed systems, is shown to operate in Type II superionics as well. The probability distribution of string lengths is shown to be exponential, identical to that observed in supercooled and jammed states. Thus the demonstration of DH, DF and string-like cooperative ionic displacements in superionics that closely parallel the dynamic characteristics of supercooled liquids and glassy states significantly augments the already existing but scant list of phenomenological similarities between these two distinct types of materials.
The second part of this dissertation deals with non-equilibrium displacement-cascade simulations of UO2, which is used as a nuclear fuel. UO2 is known to resist amorphization even when subjected to intense nuclear radiation; analysis based on structure and energy explains this behavior from a thermodynamic perspective. Radiation is inherently dynamic (non-equilibrium), and thus it is pertinent to understand the dynamics of the displaced ions during the annealing process. In this dissertation, the mechanism of dynamic recovery following a radiation knock is investigated at the atomistic level. It is shown that oxygen ions following a radiation perturbation exhibit correlated motion, similar to that in the high-temperature superionic state. Quite remarkably, the displaced oxygen ions also undergo fast recovery to their native lattice sites through collective string-like displacements that show an exponential distribution. Thus the superionic characteristics of UO2 under equilibrium conditions are also instrumental in fast defect recovery following a radiation perturbation.
SUPERNOVA DRIVING. II. COMPRESSIVE RATIO IN MOLECULAR-CLOUD TURBULENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Liubin; Padoan, Paolo; Haugbølle, Troels
2016-07-01
The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.
Peller, L
1977-02-08
The free-energy change of phosphodiester bond formation from nucleoside triphosphates is more favorable than with nucleoside diphosphates as substrates. Base-stacking interactions can make significant contributions to both ΔG°′ values. Pyrophosphate hydrolysis when it accompanies the former reaction dominates all thermodynamic considerations. Three experimental situations are discussed in which high-molecular-weight polynucleotides are synthesized without a strong driving force for covalent bond formation. For one of these, a kinetic scheme is presented which encompasses an early narrow Poisson distribution of chain lengths with ultimate passage to a disperse equilibrium population of chain sizes. Hydrolytic removal of pyrophosphate expands the time scale for this undesirable process by a factor of 10^9, while it enormously elevates the thermodynamic ceiling for the average degrees of polymerization in the other two examples. The electron micrographically revealed broad size population from an early study of partial replication of a T7 DNA template is found to adhere (fortuitously) to a disperse most probable representation. Some possible origins are examined for the branched structures in this product, as well as in a later investigation of replication of this nucleic acid. The achievement of both very high molecular weights and sharply peaked size distributions in polynucleotides synthesized in vitro will require coupling to inorganic pyrophosphatase action as in vivo.
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
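A small numerical illustration of the consistency requirement (my own toy example, not taken from the paper): infer a 2x2 joint distribution from its two marginals alone. Maximizing the Boltzmann-Gibbs entropy returns the product of the marginals, as the multiplication rule demands, whereas maximizing a Tsallis entropy with q = 2 under the same constraints introduces a coupling the data never supplied. The marginal values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Toy inference: 2x2 joint p(x, y) constrained only by P(x=1)=0.7 and P(y=1)=0.4,
# i.e. data that carry no information about coupling between x and y.
px, py = 0.7, 0.4
cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p[2] + p[3] - px},   # P(x=1)
    {"type": "eq", "fun": lambda p: p[1] + p[3] - py},   # P(y=1)
]
bounds = [(1e-9, 1.0)] * 4
p0 = np.full(4, 0.25)

def neg_shannon(p):                 # minimize -S_BG
    return np.sum(p * np.log(p))

def neg_tsallis(p, q=2.0):          # minimize -S_q (Tsallis)
    return -(1.0 - np.sum(p ** q)) / (q - 1.0)

product = np.outer([1 - px, px], [1 - py, py])   # uncoupled reference distribution
for name, obj in [("Boltzmann-Gibbs", neg_shannon), ("Tsallis q=2", neg_tsallis)]:
    res = minimize(obj, p0, bounds=bounds, constraints=cons)
    p = res.x.reshape(2, 2)                      # rows: x=0,1; columns: y=0,1
    print(name, "max deviation from product form:", np.abs(p - product).max())
```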
Punctuated equilibrium dynamics in human communications
NASA Astrophysics Data System (ADS)
Peng, Dan; Han, Xiao-Pu; Wei, Zong-Wen; Wang, Bing-Hong
2015-10-01
A minimal network-based model incorporating individual interactions is proposed to study the non-Poisson statistical properties of human behavior: individuals in the system interact with their neighbors, the probability of an individual acting correlates with its activity, and all individuals involved in an action change their activities randomly. The model reproduces a variety of spatial-temporal patterns observed in empirical studies of human daily communications, providing insight into various human activities and embracing a range of realistic interacting social systems, in particular an intriguing bimodal phenomenon. This model bridges priority queueing theory and punctuated equilibrium dynamics, and our modeling and analysis are likely to shed light on non-Poisson phenomena in many complex systems.
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially in the delayed case: travelers prefer the route with the best reported condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between two routes is less than BR, the routes are chosen with equal probability, as sketched below. Bounded rationality helps improve efficiency in terms of capacity, oscillations and the gap from the system equilibrium.
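A minimal sketch of the boundedly rational choice rule just described, with hypothetical reported travel times; the function name and the numbers are illustrative, not taken from the paper.

```python
import random

def choose_route(reported_time_a, reported_time_b, br):
    """Boundedly rational route choice: indifferent below the threshold BR."""
    if abs(reported_time_a - reported_time_b) < br:
        return random.choice(("A", "B"))        # routes treated as equivalent
    return "A" if reported_time_a < reported_time_b else "B"

# Hypothetical reported (possibly delayed) travel times in minutes.
random.seed(1)
counts = {"A": 0, "B": 0}
for _ in range(10000):
    counts[choose_route(21.0, 23.0, br=5.0)] += 1
print(counts)   # roughly 50/50, since the 2-minute gap is below BR = 5
```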
Dynamics of seismogenic volcanic extrusion at Mount St Helens in 2004-05
Iverson, R.M.; Dzurisin, D.; Gardner, C.A.; Gerlach, T.M.; LaHusen, R.G.; Lisowski, M.; Major, J.J.; Malone, S.D.; Messerich, J.A.; Moran, S.C.; Pallister, J.S.; Qamar, A.I.; Schilling, S.P.; Vallance, J.W.
2006-01-01
The 2004-05 eruption of Mount St Helens exhibited sustained, near-equilibrium behaviour characterized by relatively steady extrusion of a solid dacite plug and nearly periodic shallow earthquakes. Here we present a diverse data set to support our hypothesis that these earthquakes resulted from stick-slip motion along the margins of the plug as it was forced incrementally upwards by ascending, solidifying, gas-poor magma. We formalize this hypothesis with a dynamical model that reveals a strong analogy between behaviour of the magma-plug system and that of a variably damped oscillator. Modelled stick-slip oscillations have properties that help constrain the balance of forces governing the earthquakes and eruption, and they imply that magma pressure never deviated much from the steady equilibrium pressure. We infer that the volcano was probably poised in a near-eruptive equilibrium state long before the onset of the 2004-05 eruption. ©2006 Nature Publishing Group.
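For intuition only, the sketch below runs a textbook spring-slider with static and dynamic friction under steady loading, the simplest caricature of a plug driven by steady magma ascent; it is not the authors' dynamical model, and every parameter value is a hypothetical placeholder.

```python
import numpy as np

# Minimal spring-slider sketch: a plug loaded at constant rate sticks until the
# spring force reaches the static friction F_s, then slips until the force drops
# to the dynamic friction F_d, producing nearly periodic slip events.
k = 2.0e9        # effective stiffness, N/m            (hypothetical value)
v_load = 1.0e-6  # steady loading (magma ascent) rate, m/s   (hypothetical)
F_s, F_d = 5.0e7, 4.0e7   # static / dynamic friction, N      (hypothetical)

t, x_load, x_plug = 0.0, 0.0, 0.0
dt = 10.0
events = []
while t < 2.0e5:
    t += dt
    x_load += v_load * dt
    force = k * (x_load - x_plug)
    if force >= F_s:                      # slip: plug jumps until force = F_d
        x_plug += (F_s - F_d) / k
        events.append(t)

intervals = np.diff(events)
print("mean recurrence interval:", intervals.mean(), "s")
print("predicted (F_s - F_d)/(k v_load):", (F_s - F_d) / (k * v_load), "s")
```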
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta
2017-02-15
Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one-dimensional projections, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probabilistic distribution methods for determining the most suitable model in probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the time period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution method was more suitable than the other distribution methods in this region.
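The fitting-and-screening step described above can be sketched with open tools in place of EasyFit/Matlab. The snippet below fits two- and three-parameter Weibull and Fréchet models to synthetic inter-occurrence times and applies the K-S test with scipy; the synthetic data and parameter values are assumptions, not the catalogue used in the study.

```python
import numpy as np
from scipy import stats

# Synthetic data standing in for M >= 6.0 inter-occurrence times (years).
rng = np.random.default_rng(0)
data = stats.weibull_min.rvs(c=1.3, scale=8.0, size=60, random_state=rng)

candidates = {
    "Weibull (2-p)": stats.weibull_min,
    "Frechet": stats.invweibull,          # Fréchet = inverted Weibull in scipy
    "Weibull (3-p)": stats.weibull_min,   # with a fitted location (shift) parameter
}
for name, dist in candidates.items():
    if name == "Weibull (2-p)":
        params = dist.fit(data, floc=0.0)        # fix location at zero
    else:
        params = dist.fit(data)
    d_stat, p_val = stats.kstest(data, dist.cdf, args=params)
    print(f"{name:14s} K-S D = {d_stat:.3f}, p = {p_val:.3f}")
```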
Acoustic equations of state for simple lattice Boltzmann velocity sets.
Viggen, Erlend Magnus
2014-07-01
The lattice Boltzmann (LB) method typically uses an isothermal equation of state. This is not sufficient to simulate a number of acoustic phenomena where the equation of state cannot be approximated as linear and constant. However, it is possible to implement variable equations of state by altering the LB equilibrium distribution. For simple velocity sets with velocity components ξ(iα)∈(-1,0,1) for all i, these equilibria necessarily cause error terms in the momentum equation. These error terms are shown to be either correctable or negligible at the cost of further weakening the compressibility. For the D1Q3 velocity set, such an equilibrium distribution is found and shown to be unique. Its sound propagation properties are found for both forced and free waves, with some generality beyond D1Q3. Finally, this equilibrium distribution is applied to a nonlinear acoustics simulation where both mechanisms of nonlinearity are simulated with good results. This represents an improvement on previous such simulations and proves that the compressibility of the method is still sufficiently strong even for nonlinear acoustics.
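For readers unfamiliar with LB equilibria, the sketch below evaluates the standard second-order isothermal D1Q3 equilibrium and verifies that its density and momentum moments are exact; it only illustrates the moment constraints the paper starts from, not the modified variable-equation-of-state equilibrium derived there.

```python
import numpy as np

# Standard second-order D1Q3 equilibrium (isothermal, c_s^2 = 1/3).
xi = np.array([-1.0, 0.0, 1.0])       # velocity set
w = np.array([1/6, 2/3, 1/6])         # weights
cs2 = 1.0 / 3.0

def f_eq(rho, u):
    return w * rho * (1 + xi * u / cs2 + (xi**2 - cs2) * u**2 / (2 * cs2**2))

rho, u = 1.2, 0.05
f = f_eq(rho, u)
print("density   :", f.sum(), "(target", rho, ")")
print("momentum  :", (xi * f).sum(), "(target", rho * u, ")")
print("2nd moment:", (xi**2 * f).sum(), "(rho*cs2 + rho*u^2 =", rho * cs2 + rho * u**2, ")")
```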
Temperature distribution and heat radiation of patterned surfaces at short wavelengths.
Emig, Thorsten
2017-05-01
We analyze the equilibrium spatial distribution of surface temperatures of patterned surfaces. The surface is exposed to a constant external heat flux and has a fixed internal temperature that is coupled to the outside heat fluxes by finite heat conductivity across the surface. It is assumed that the temperatures are sufficiently high so that the thermal wavelength (a few microns at room temperature) is short compared to all geometric length scales of the surface patterns. Hence the radiosity method can be employed. A recursive multiple scattering method is developed that enables rapid convergence to equilibrium temperatures. While the temperature distributions show distinct dependence on the detailed surface shapes (cuboids and cylinder are studied), we demonstrate robust universal relations between the mean and the standard deviation of the temperature distributions and quantities that characterize overall geometric features of the surface shape.
NASA Astrophysics Data System (ADS)
Sakata, Masahiro; Kurata, Masaki; Hijikata, Takatoshi; Inoue, Tadashi
1991-11-01
Distribution experiments for several rare earth elements (La, Ce, Pr, Nd and Y) between molten KCl-LiCl eutectic salt and liquid Cd were carried out at 450, 500 and 600°C. The material balance of the rare earth elements after reaching equilibrium, as well as their distribution and chemical states in a Cd sample frozen after the experiment, were examined. The results suggested the formation of solid intermetallic compounds at concentrations of rare earth metals dissolved in liquid Cd lower than the solubilities measured in the binary alloy system. The distribution coefficients of rare earth elements between the two phases (mole fraction in the Cd phase divided by mole fraction in the salt phase) were determined at each temperature. These distribution coefficients were explained satisfactorily by using the activity coefficients of chlorides and metals in the salt and Cd. Both the metal and chloride activity coefficients caused a much smaller distribution coefficient for Y relative to those of the other elements.
To predict the niche, model colonization and extinction
Charles B. Yackulic; James D. Nichols; Janice Reid; Ricky Der
2015-01-01
Ecologists frequently try to predict the future geographic distributions of species. Most studies assume that the current distribution of a species reflects its environmental requirements (i.e., the species' niche). However, the current distributions of many species are unlikely to be at equilibrium with the current distribution of environmental conditions, both...
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
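A hedged illustration of the point with synthetic data (the measured data of the paper are not reproduced here): fit both a Gaussian and a lognormal to a right-skewed sample of RMS roughness values and compare the implied most-probable value; the sample parameters are arbitrary.

```python
import numpy as np
from scipy import stats

# Synthetic right-skewed RMS roughness values (nm); stands in for measured data.
rng = np.random.default_rng(42)
roughness = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=500)

mu, sigma = stats.norm.fit(roughness)              # symmetric (Gaussian) treatment
shape, loc, scale = stats.lognorm.fit(roughness, floc=0)

gauss_mode = mu                                    # mode of a Gaussian = its mean
lognorm_mode = scale * np.exp(-shape**2)           # lognormal mode = exp(mu - sigma^2)

print("Gaussian 'most probable' RMS :", round(gauss_mode, 3), "nm")
print("Lognormal mode               :", round(lognorm_mode, 3), "nm")
print("Sample mean / median         :", round(roughness.mean(), 3),
      round(float(np.median(roughness)), 3), "nm")
```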
Hierarchical Bayesian calibration of tidal orbit decay rates among hot Jupiters
NASA Astrophysics Data System (ADS)
Collier Cameron, Andrew; Jardine, Moira
2018-05-01
Transiting hot Jupiters occupy a wedge-shaped region in the mass ratio-orbital separation diagram. Its upper boundary is eroded by tidal spiral-in of massive, close-in planets and is sensitive to the stellar tidal dissipation parameter Q'_s. We develop a simple generative model of the orbital separation distribution of the known population of transiting hot Jupiters, subject to tidal orbital decay, XUV-driven evaporation and observational selection bias. From the joint likelihood of the observed orbital separations of hot Jupiters discovered in ground-based wide-field transit surveys, measured with respect to the hyperparameters of the underlying population model, we recover narrow posterior probability distributions for Q'_s in two different tidal forcing frequency regimes. We validate the method using mock samples of transiting planets with known tidal parameters. We find that Q'_s and its temperature dependence are retrieved reliably over five orders of magnitude in Q'_s. A large sample of hot Jupiters from small-aperture ground-based surveys yields log10 Q'_s = 8.26 ± 0.14 for 223 systems in the equilibrium-tide regime. We detect no significant dependence of Q'_s on stellar effective temperature. A further 19 systems in the dynamical-tide regime yield log10 Q'_s = 7.3 ± 0.4, indicating stronger coupling. Detection probabilities for transiting planets at a given orbital separation scale inversely with the increase in their tidal migration rates since birth. The resulting bias towards younger systems explains why the surface gravities of hot Jupiters correlate with their host stars' chromospheric emission fluxes. We predict departures from a linear transit-timing ephemeris of less than 4 s for WASP-18 over a 20-yr baseline.
Is the Milky Way's hot halo convectively unstable?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henley, David B.; Shelton, Robin L., E-mail: dbh@physast.uga.edu
2014-03-20
We investigate the convective stability of two popular types of model of the gas distribution in the hot Galactic halo. We first consider models in which the halo density and temperature decrease exponentially with height above the disk. These halo models were created to account for the fact that, on some sight lines, the halo's X-ray emission lines and absorption lines yield different temperatures, implying that the halo is non-isothermal. We show that the hot gas in these exponential models is convectively unstable if γ < 3/2, where γ is the ratio of the temperature and density scale heights. Using published measurements of γ and its uncertainty, we use Bayes' theorem to infer posterior probability distributions for γ, and hence the probability that the halo is convectively unstable for different sight lines. We find that, if these exponential models are good descriptions of the hot halo gas, at least in the first few kiloparsecs from the plane, the hot halo is reasonably likely to be convectively unstable on two of the three sight lines for which scale height information is available. We also consider more extended models of the halo. While isothermal halo models are convectively stable if the density decreases with distance from the Galaxy, a model of an extended adiabatic halo in hydrostatic equilibrium with the Galaxy's dark matter is on the boundary between stability and instability. However, we find that radiative cooling may perturb this model in the direction of convective instability. If the Galactic halo is indeed convectively unstable, this would argue in favor of supernova activity in the Galactic disk contributing to the heating of the hot halo gas.
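The final inference step reduces, in the simplest approximation, to the posterior mass of γ below 3/2. The sketch below assumes an approximately Gaussian posterior for each sight line and uses hypothetical measured values; the paper itself works with full Bayesian posteriors derived from published measurements.

```python
from scipy.stats import norm

# Hypothetical sight-line measurements gamma_hat +/- sigma (not the paper's values).
sight_lines = {"LOS 1": (1.35, 0.15), "LOS 2": (1.60, 0.10), "LOS 3": (1.45, 0.20)}

for name, (gamma_hat, sigma) in sight_lines.items():
    # Treat the posterior for gamma as approximately Gaussian; instability if gamma < 3/2.
    p_unstable = norm.cdf((1.5 - gamma_hat) / sigma)
    print(f"{name}: P(convectively unstable) = {p_unstable:.2f}")
```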
The Schrödinger Equation, the Zero-Point Electromagnetic Radiation, and the Photoelectric Effect
NASA Astrophysics Data System (ADS)
França, H. M.; Kamimura, A.; Barreto, G. A.
2016-04-01
A Schrödinger-type equation for a mathematical probability amplitude Ψ(x, t) is derived from the generalized phase space Liouville equation valid for the motion of a microscopic particle, with mass M and charge e, moving in a potential V(x). The particle phase space probability density is denoted Q(x, p, t), and the entire system is immersed in the "vacuum" zero-point electromagnetic radiation. We show, in the first part of the paper, that the generalized Liouville equation is reduced to a simpler Liouville equation in the equilibrium limit where the small radiative corrections cancel each other approximately. This leads us to a simpler Liouville equation that will facilitate the calculations in the second part of the paper. Within this second part, we address ourselves to the following task: since the Schrödinger equation depends on ħ, and the zero-point electromagnetic spectral distribution, given by ρ₀(ω) = ħω³/(2π²c³), also depends on ħ, it is interesting to verify the possible dynamical connection between ρ₀(ω) and the Schrödinger equation. We shall prove that the Planck constant, present in the momentum operator of the Schrödinger equation, is deeply related to the ubiquitous zero-point electromagnetic radiation with spectral distribution ρ₀(ω). For simplicity, we do not use the hypothesis of the existence of the L. de Broglie matter-waves. The implications of our study for the standard interpretation of the photoelectric effect are discussed by considering the main characteristics of the phenomenon. We also mention, briefly, the effects of the zero-point radiation in the tunneling phenomenon and the Compton effect.
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
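The key construction, the equilibrium (stationary-excess) distribution F_e(t) = (1/μ)∫₀ᵗ(1−F(x))dx of a fault-detection time distribution F with mean μ, is easy to compute numerically. The sketch below does so for an assumed Weibull F and forms a mean value function m(t) = a·F_e(t) in the spirit of NHPP SRMs; the Weibull choice, the parameter values, and this specific use of F_e are illustrative assumptions rather than the models proposed in the paper.

```python
import numpy as np
from scipy import stats, integrate

# Equilibrium (stationary-excess) distribution of a fault-detection time cdf F:
#   F_e(t) = (1 / mu) * integral_0^t (1 - F(x)) dx,   mu = mean of F.
F = stats.weibull_min(c=1.5, scale=10.0)      # hypothetical detection-time distribution
mu = F.mean()
a = 100.0                                     # hypothetical total fault content

t = np.linspace(0.0, 60.0, 601)
survival = 1.0 - F.cdf(t)
F_e = integrate.cumulative_trapezoid(survival, t, initial=0.0) / mu
m = a * F_e                                   # expected cumulative faults detected

for ti in (10, 20, 40, 60):
    idx = np.searchsorted(t, ti)
    print(f"t = {ti:3d}: F_e = {F_e[idx]:.3f},  m(t) = {m[idx]:.1f}")
```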
Nonequilibrium approach regarding metals from a linearised kappa distribution
NASA Astrophysics Data System (ADS)
Domenech-Garret, J. L.
2017-10-01
The widely used kappa distribution functions develop high-energy tails through an adjustable kappa parameter. The aim of this work is to show that such a parameter can itself be regarded as a function, which entangles information about the sources of disequilibrium. We first derive and analyse an expanded Fermi-Dirac kappa distribution. Later, we use this expanded form to obtain an explicit analytical expression for the kappa parameter of a heated metal on which an external electric field is applied. We show that such a kappa index causes departures from equilibrium depending on the physical magnitudes. Finally, we study the role of temperature and electric field on such a parameter, which characterises the electron population of a metal out of equilibrium.
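For orientation, the sketch below evaluates a standard non-degenerate kappa velocity distribution and shows how its high-energy tail collapses onto the Maxwellian as κ grows; this generic form is an assumption used for illustration and is not the expanded Fermi-Dirac kappa distribution derived in the paper.

```python
import numpy as np

# Generic kappa distribution of speeds (1D, unnormalized) with adjustable high-energy tail.
def kappa_dist(v, kappa, theta=1.0):
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

def maxwellian(v, theta=1.0):
    return np.exp(-v**2 / theta**2)

v, dv = np.linspace(0.0, 6.0, 601, retstep=True)
i4 = np.searchsorted(v, 4.0)                 # index of v = 4 theta
fm = maxwellian(v)
fm /= fm.sum() * dv                          # crude numerical normalization
for kappa in (2.0, 5.0, 50.0):
    fk = kappa_dist(v, kappa)
    fk /= fk.sum() * dv
    print(f"kappa = {kappa:5.1f}: f_kappa / f_Maxwell at v = 4*theta = {fk[i4] / fm[i4]:.1f}")
```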
Monte Carlo computer simulations of Venus equilibrium and global resurfacing models
NASA Technical Reports Server (NTRS)
Dawson, D. D.; Strom, R. G.; Schaber, G. G.
1992-01-01
Two models have been proposed for the resurfacing history of Venus: (1) equilibrium resurfacing and (2) global resurfacing. The equilibrium model consists of two cases: in case 1, areas less than or equal to 0.03 percent of the planet are spatially randomly resurfaced at intervals of less than or greater than 150,000 yr to produce the observed spatially random distribution of impact craters and average surface age of about 500 m.y.; and in case 2, areas greater than or equal to 10 percent of the planet are resurfaced at intervals of greater than or equal to 50 m.y. The global resurfacing model proposes that the entire planet was resurfaced about 500 m.y. ago, destroying the preexisting crater population and followed by significantly reduced volcanism and tectonism. The present crater population has accumulated since then with only 4 percent of the observed craters having been embayed by more recent lavas. To test the equilibrium resurfacing model we have run several Monte Carlo computer simulations for the two proposed cases. It is shown that the equilibrium resurfacing model is not a valid model for an explanation of the observed crater population characteristics or Venus' resurfacing history. The global resurfacing model is the most likely explanation for the characteristics of Venus' cratering record. The amount of resurfacing since that event, some 500 m.y. ago, can be estimated by a different type of Monte Carlo simulation. To date, our initial simulation has only considered the easiest case to implement. In this case, the volcanic events are randomly distributed across the entire planet and, therefore, contrary to observation, the flooded craters are also randomly distributed across the planet.
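The bookkeeping in such a simulation can be sketched very compactly. The toy below uses a flat unit-square "planet", a constant cratering rate, and randomly placed circular resurfacing patches at fixed intervals; every number and the geometry are illustrative assumptions, not the parameters of the study.

```python
import numpy as np

# Toy resurfacing Monte Carlo (flat unit-square "planet", illustrative parameters).
rng = np.random.default_rng(3)
crater_rate = 2.0          # new craters per Myr
patch_radius = 0.05        # radius of each resurfacing patch
patch_interval = 50.0      # Myr between resurfacing events
t_end = 500.0              # Myr simulated

craters = []                                # list of (x, y) surviving craters
t, next_patch = 0.0, patch_interval
while t < t_end:
    t += 1.0
    n_new = rng.poisson(crater_rate)
    craters.extend(rng.random((n_new, 2)))
    if t >= next_patch:                     # one randomly placed resurfacing patch
        cx, cy = rng.random(2)
        craters = [c for c in craters
                   if (c[0] - cx) ** 2 + (c[1] - cy) ** 2 > patch_radius ** 2]
        next_patch += patch_interval

print("surviving craters:", len(craters))
print("expected with no resurfacing:", int(crater_rate * t_end))
```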
Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann
2015-01-01
Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
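A hedged simulation counterpart to the infinite-allele results (not the closed-form expressions derived in the paper): a haploid Wright-Fisher population with new-allele mutation is run to stationarity, and the Shannon entropy and heterozygosity of the allele distribution are recorded; the classical heterozygosity expectation θ/(1+θ), with θ = 2Nu for this haploid setting, is printed for comparison. Population size and mutation rate are arbitrary.

```python
import numpy as np
from collections import Counter

# Haploid Wright-Fisher with infinite-allele mutation (minimal sketch).
rng = np.random.default_rng(7)
N, u, generations, burn_in = 500, 1e-3, 20000, 5000

pop = np.zeros(N, dtype=np.int64)    # everyone starts with allele 0
next_allele = 1
H_samples, het_samples = [], []
for g in range(generations):
    pop = rng.choice(pop, size=N, replace=True)           # random reproduction
    for i in np.flatnonzero(rng.random(N) < u):           # each mutant is a brand-new allele
        pop[i] = next_allele
        next_allele += 1
    if g >= burn_in and g % 50 == 0:
        freqs = np.array(list(Counter(pop).values())) / N
        H_samples.append(-np.sum(freqs * np.log(freqs)))
        het_samples.append(1.0 - np.sum(freqs ** 2))

theta = 2 * N * u
print("mean Shannon entropy H :", round(float(np.mean(H_samples)), 3))
print("mean heterozygosity    :", round(float(np.mean(het_samples)), 3),
      " classical theta/(1+theta) =", round(theta / (1 + theta), 3))
```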
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ball, W.P.
1990-01-01
Concepts for rate limitation of sorptive uptake of hydrophobic organic solutes by aquifer solids are reviewed, emphasizing physical diffusion models and in the context of effects on contaminant transport. Data for the sorption of tetrachloroethene (PCE) and 1,2,4,5-tetrachlorobenzene (TeCB) on Borden sand are presented, showing that equilibrium is attained very slowly, requiring equilibration times on the order of tens of days for PCE and hundreds of days for TeCB. The rate of approach to equilibrium decreased with increasing particle size and sorption distribution coefficient, in accordance with retarded intragranular diffusion models. Pulverization of the samples significantly decreased the required time to equilibrium without changing the sorption capacity of the solids. Batch sorption methodology was refined to allow accurate measurement of long-term distribution coefficients, using purified 14C-labelled solute spikes and sealed glass ampules. Sorption isotherms for PCE and TeCB were conducted with size fractions of Borden sand over four to five orders of magnitude in aqueous concentration, and were found to be slightly nonlinear (Freundlich exponent = 0.8). A concentrated set of data in the low concentration range (<50 ug/L) revealed that sorption in this range could be equally well described by a linear isotherm. Distribution coefficients of the two solutes with seven size fractions of Borden sand, measured at low concentration and at full equilibrium, were between seven and sixty times the value predicted on the basis of recent correlations with organic carbon content. Rate results for coarse size fractions support a simple pore diffusion model, with pore diffusion coefficients estimated to be approximately 3 × 10⁻⁸ cm²/sec, more than 200× lower than the aqueous diffusivities.
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
Behaviours and influence factors of radon progeny in three typical dwellings.
Li, Hongzhao; Zhang, Lei; Guo, Qiuju
2011-03-01
To investigate the behaviours and influence factors of radon progeny in rural dwellings in China, site measurements of radon equilibrium factor, unattached fraction and some important indoor environmental factors, such as aerosol concentration, aerosol size distribution and ventilation rate, were carried out in three typical types of dwellings, and a theoretical study was also performed synchronously. Good consistency between the results of site measurements and the theoretical calculation on equilibrium factor F and unattached fraction f(p) was achieved. Lower equilibrium factor and higher unattached fraction in mud or cave houses were found compared to those in brick houses, and it was suggested by the theoretical study that the smaller aerosol size distribution in mud or cave houses might be the main reason for what was observed. The dose conversion factor in the mud houses and the cave houses may be higher than that in brick houses.
Nucleation theory without Maxwell demons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, J.L.; Wiedersich, H.
1977-09-01
The equations for steady-state nucleation are derived from the rates of growth and decay of clusters with emphasis on a clear distinction between thermodynamic quantities and inherently kinetic quantities. It is shown that the emission rates of molecules from embryos can be related to the equilibrium size distribution of clusters in a saturated vapor. It is therefore not necessary to invoke the existence of an embryo size distribution constrained to be in equilibrium with a supersaturated vapor. The driving force for nucleation is shown to be a kinetic quantity called the condensation rate ratio, i.e., the ratio of the rates of acquisition of molecules by clusters in the supersaturated vapor to that in a saturated vapor at the same temperature, and not a thermodynamic quantity known as the supersaturation, i.e., the ratio of the actual pressure to the equilibrium vapor pressure.
NASA Astrophysics Data System (ADS)
Ni, Yong; He, Linghui; Khachaturyan, Armen G.
2010-07-01
A phase field method is proposed to determine the equilibrium fields of a magnetoelectroelastic multiferroic with arbitrarily distributed constitutive constants under applied loadings. This method is based on a developed generalized Eshelby's equivalency principle, in which the elastic strain, electrostatic, and magnetostatic fields at equilibrium in the original heterogeneous system are exactly the same as those in an equivalent homogeneous magnetoelectroelastic coupled or uncoupled system with properly chosen distributed effective eigenstrain, polarization, and magnetization fields. Finding these effective fields fully solves the equilibrium elasticity, electrostatics, and magnetostatics in the original heterogeneous multiferroic. The paper formulates a variational principle proving that the effective fields are minimizers of an appropriate closed-form energy functional. The proposed phase field approach produces the energy-minimizing effective fields (and thus solves the general multiferroic problem) as a result of an artificial relaxation process described by the Ginzburg-Landau-Khalatnikov kinetic equations.
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
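The descriptive statistics and the sub-90th-percentile lognormal fit described above are straightforward to reproduce on a synthetic record. The sketch below builds an artificial Ed(t) series with occasional wave-focusing flashes purely for illustration; it is not the field data analysed in the study.

```python
import numpy as np
from scipy import stats

# Synthetic near-surface irradiance fluctuations: lognormal body plus rare strong flashes.
rng = np.random.default_rng(11)
ed = rng.lognormal(mean=0.0, sigma=0.45, size=20000)
flashes = rng.random(ed.size) < 0.02
ed[flashes] *= rng.uniform(2.0, 6.0, flashes.sum())      # wave-focusing flashes
ed /= ed.mean()                                           # normalize by mean irradiance

print("skewness       :", round(float(stats.skew(ed)), 2))
print("excess kurtosis:", round(float(stats.kurtosis(ed)), 2))

body = ed[ed <= np.quantile(ed, 0.90)]                    # values below the 90th percentile
shape, loc, scale = stats.lognorm.fit(body, floc=0)
d_stat, p_val = stats.kstest(body, stats.lognorm.cdf, args=(shape, loc, scale))
print("lognormal fit to sub-90th-percentile data: K-S D =", round(d_stat, 3))
```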
Derivation of the Second Law of Thermodynamics from Boltzmann's Distribution Law.
ERIC Educational Resources Information Center
Nelson, P. G.
1988-01-01
Shows how the thermodynamic condition for equilibrium in an isolated system can be derived by the application of Boltzmann's law to a simple physical system. States that this derivation could be included in an introductory course on chemical equilibrium to help prepare students for a statistical mechanical treatment presented in the curriculum.…
Bubbles Are Departures from Equilibrium Housing Markets: Evidence from Singapore and Taiwan.
Tay, Darrell Jiajie; Chou, Chung-I; Li, Sai-Ping; Tee, Shang You; Cheong, Siew Ann
2016-01-01
The housing prices in many Asian cities have grown rapidly since mid-2000s, leading to many reports of bubbles. However, such reports remain controversial as there is no widely accepted definition for a housing bubble. Previous studies have focused on indices, or assumed that home prices are lognormally distributed. Recently, Ohnishi et al. showed that the tail-end of the distribution of (Japan/Tokyo) becomes fatter during years where bubbles are suspected, but stop short of using this feature as a rigorous definition of a housing bubble. In this study, we looked at housing transactions for Singapore (1995 to 2014) and Taiwan (2012 to 2014), and found strong evidence that the equilibrium home price distribution is a decaying exponential crossing over to a power law, after accounting for different housing types. We found positive deviations from the equilibrium distributions in Singapore condominiums and Zhu Zhai Da Lou in the Greater Taipei Area. These positive deviations are dragon kings, which thus provide us with an unambiguous and quantitative definition of housing bubbles. Also, the spatial-temporal dynamics show that the bubble in Singapore is driven by price pulses in two investment districts. This finding provides a valuable insight for policymakers on implementation and evaluation of cooling measures.
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
A computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbor) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the lattice. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual, at a small frequency. The main results show a critical probability of BSE transmission above which the disease is present in the population. This value is sensitive to possible spatial clustering of the population and also depends on the mechanism responsible for the disease onset, evolution and propagation. A threshold birth rate below which the population becomes extinct is seen. Above this threshold the population is disease-free at equilibrium until another birth rate value is reached, above which the disease is present in the population. For the typical model parameters used in the simulation, which may correspond to the mad cow disease, we are close to the BSE-free case.
Effect of electromagnetic field on Kordylewski clouds formation
NASA Astrophysics Data System (ADS)
Salnikova, Tatiana; Stepanov, Sergey
2018-05-01
In previous papers the authors suggested a clarification of the phenomenon of appearance-disappearance of the Kordylewski clouds - accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light perturbation of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of the particles surrounding each of the triangular libration points. Due to this fact we can consider a probabilistic model of dust cloud formation. These clouds move along periodic orbits, staying in a small vicinity of the periodic orbit. To continue this research we suggest a mathematical model to investigate also the electromagnetic influences that arise when charged dust particles are considered in the vicinity of the triangular libration points of the Earth-Moon system. In this model we take into consideration the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.
Effect of the spatial autocorrelation of empty sites on the evolution of cooperation
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Li; Hou, Dongshuang
2016-02-01
An evolutionary game model is constructed to investigate the effect of the spatial autocorrelation of empty sites on the evolution of cooperation. Each individual is assumed to imitate the strategy of the one who scores highest in its neighborhood, including itself. Simulation results illustrate that the evolutionary dynamics based on the Prisoner's Dilemma game (PD) depends strongly on the initial conditions, while the Snowdrift game (SD) is hardly affected by them. A high degree of autocorrelation of empty sites is beneficial for the evolution of cooperation in the PD, whereas in the SD its effect varies with the temptation-to-defect parameter. Moreover, for the repeated game with three strategies, 'always defect' (ALLD), 'tit-for-tat' (TFT), and 'always cooperate' (ALLC), simulations reveal that a striking evolutionary diversity appears as the temptation to defect and the probability of playing the next round of the game are varied. The spatial autocorrelation of empty sites can have profound effects on evolutionary dynamics (equilibrium and oscillation) and spatial distribution.
Interface structure and contact melting in AgCu eutectic. A molecular dynamics study
NASA Astrophysics Data System (ADS)
Bystrenko, O.; Kartuzov, V.
2017-12-01
Molecular dynamics simulations of the interface structure in the binary AgCu eutectic were performed using a realistic EAM potential. In the simulations, we examined the time dependence of the total energy during equilibration, the probability distributions, the composition profiles of the components, and the component diffusivities within the interface zone. It is shown that relaxation to equilibrium in the solid state is accompanied by the formation of a steady disordered diffusion zone at the boundary between the crystalline components. At higher temperatures, closer to the eutectic point, an increase in the width of the steady diffusion zone is observed. The particle diffusivities thereby grow to values typical of liquid metals. Above the eutectic point, the steady zone does not form; instead, complete contact melting of the system occurs. The results of the simulations indicate that, as the temperature increases, the phenomenon of contact melting is preceded by a similar process spatially localized in the vicinity of the interface.
Numerical simulation of submicron particles formation by condensation at coals burning
NASA Astrophysics Data System (ADS)
Kortsenshteyn, N. M.; Petrov, L. V.
2017-11-01
A thermodynamic analysis of the composition of the combustion products of 15 types of coals was carried out with consideration for the formation of potassium and sodium aluminosilicates and for solid and liquid slag removal. Based on the results of the analysis, approximating temperature dependences of the concentrations of the condensed components (potassium and sodium sulfates) were obtained for the cases of two-phase and single-phase equilibria; conclusions on the comparative influence of solid and liquid slag removal on the probability of the formation of submicron particles upon the combustion of coals were drawn. The dependences found make it possible to perform a numerical simulation of the bulk condensation of potassium and sodium sulfate vapors upon the cooling of coal combustion products in a process flow. The number concentration and size distribution of the formed particles have been determined. Agreement with experimental data on the fractional composition of particles has been reached at a reasonable value of a free parameter of the model.
Mapoma, Harold Wilson Tumwitike; Xie, Xianjun; Pi, Kunfu; Liu, Yaqing; Zhu, Yapeng
2016-03-01
This paper discusses the reactive transport and evolution of arsenic along a selected flow path in a study plot within the central part of the Datong basin. The simulation used the TOUGHREACT code. The spatial and temporal trends in hydrochemistry and mineral volume fraction along the flow path were observed. Furthermore, the initial simulation of major ions and pH fits closely to the measured data. The study shows that equilibrium conditions may be attained at different stress periods for each parameter simulated. It is noted that the variations in ionic chemistry have a greater impact on arsenic distribution, while reducing conditions drive the mobilization of arsenic. The study concluded that the reduction of Fe(III) and As(V), and probably SO4/HS cycling, are significant factors affecting localized mobilization of arsenic. Besides cation exchange and water-rock interaction, incongruent dissolution of silicates is also a significant control mechanism of the general chemistry of the Datong basin aquifer.
Flow Equation Approach to the Statistics of Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Marston, J. B.; Hastings, M. B.
2005-03-01
The probability distribution function of non-linear dynamical systems is governed by a linear framework that resembles quantum many-body theory, in which stochastic forcing and/or averaging over initial conditions play the role of a non-zero ℏ. Besides the well-known Fokker-Planck approach, there is a related Hopf functional method [Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, 1995), chapter 9.5]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we investigate the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)] (also known as the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)]), suitably generalized to the diagonalization of non-Hermitian matrices. Comparison to the more traditional cumulant expansion method is illustrated with low-dimensional attractors. The treatment of high-dimensional dynamical systems is also discussed.
Application of a Modular Particle-Continuum Method to Partially Rarefied, Hypersonic Flow
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Boyd, Iain D.
2011-05-01
The Modular Particle-Continuum (MPC) method is used to simulate partially-rarefied, hypersonic flow over a sting-mounted planetary probe configuration. This hybrid method uses computational fluid dynamics (CFD) to solve the Navier-Stokes equations in regions that are continuum, while using direct simulation Monte Carlo (DSMC) in portions of the flow that are rarefied. The MPC method uses state-based coupling to pass information between the two flow solvers and decouples both time-step and mesh densities required by each solver. It is parallelized for distributed memory systems using dynamic domain decomposition and internal energy modes can be consistently modeled to be out of equilibrium with the translational mode in both solvers. The MPC results are compared to both full DSMC and CFD predictions and available experimental measurements. By using DSMC in only regions where the flow is nonequilibrium, the MPC method is able to reproduce full DSMC results down to the level of velocity and rotational energy probability density functions while requiring a fraction of the computational time.
Energetics and solvation structure of a dihalogen dopant (I2) in ⁴He clusters.
Pérez de Tudela, Ricardo; Barragán, Patricia; Valdés, Álvaro; Prosmiti, Rita
2014-08-21
The energetics and structure of small He_N-I2 clusters are analyzed as the size of the system changes, with N up to 38. The full interaction between the I2 molecule and the He atoms is based on analytical ab initio He-I2 potentials plus the He-He interaction, obtained from first-principles calculations. The most stable structures, as a function of the number of solvent He atoms, are obtained by employing an evolutionary algorithm and compared with CCSD(T) and MP2 ab initio computations. Further, the classical description is completed by explicitly including thermal corrections and quantum features, such as zero-point-energy values and spatial delocalization. From quantum PIMC calculations, the binding energies and radial/angular probability density distributions of the thermal equilibrium state for selected-size clusters are computed at a low temperature. The sequential formation of regular shell structures is analyzed and discussed for both classical and quantum treatments.
NASA Astrophysics Data System (ADS)
Sardanyés, Josep; Simó, Carles; Martínez, Regina; Solé, Ricard V.; Elena, Santiago F.
2014-04-01
The distribution of mutational fitness effects (DMFE) is crucial to the evolutionary fate of quasispecies. In this article we analyze the effect of the DMFE on the dynamics of a large quasispecies by means of a phenotypic version of the classic Eigen model that incorporates beneficial, neutral, deleterious, and lethal mutations. By parameterizing the model with available experimental data on the DMFE of Vesicular stomatitis virus (VSV) and Tobacco etch virus (TEV), we found that increasing the mutation rate does not totally push the entire viral quasispecies towards deleterious or lethal regions of the phenotypic sequence space. The probability of finding regions in the parameter space of the general model that result in a quasispecies composed only of lethal phenotypes is extremely small, both at equilibrium and during transients. The implications of our findings can be extended to other scenarios, such as lethal mutagenesis or genomically unstable cancer, where increased mutagenesis has been suggested as a potential therapy.
NASA Astrophysics Data System (ADS)
Haque, S. E.; Johannesson, K. H.
2006-05-01
Arsenic (As) concentrations and speciation were determined in groundwaters along a flow-path in the Upper Floridan aquifer (UFA) to investigate the biogeochemical "evolution" of As in this relatively pristine aquifer. Dissolved inorganic As species were separated in the field using anion-exchange chromatography and subsequently analyzed by inductively coupled plasma mass spectrometry. Total As concentrations are higher in the recharge area groundwaters than in down-gradient portions of the UFA. Redox conditions vary from relatively oxic to anoxic along the flow-path. Mobilization of As species in UFA groundwaters is influenced by ferric iron reduction and subsequent dissolution, sulfate reduction, and probable pyrite precipitation, which are inferred from the data to occur along distinct regions of the flow-path. In general, the distribution of As species is consistent with equilibrium thermodynamics, such that arsenate dominates in more oxidizing waters near the recharge area, and arsenite predominates in the progressively reducing groundwaters beyond the recharge area.
Will Renewable Energy Save Our Planet?
NASA Astrophysics Data System (ADS)
Bojić, Milorad
2010-06-01
This paper discusses some important fundamental issues behind the application of renewable energy (RE) in order to evaluate its impact as a climate change mitigation technology. The issues discussed are the following: the definition of renewable energy, the concentration of RE by weight and volume, generation of electrical energy and its power per unit area, electrical energy demand per unit area, the lifetime approach vs. the layman approach, energy return time, energy return ratio, CO2 return time, the energy mix for RES production and use, the geographical distribution of RES use, the huge scale of the energy shift from RES to non-RES, the increase in energy consumption, the thermodynamic equilibrium of the Earth, and probable solutions for the energy future amid today's energy and environmental crisis. The future solution (one that would allow human civilization continued welfare and good living, but with a lower release of CO2 into the atmosphere) may not be RES alone. Rather, it will be an energy mix that may contain nuclear energy, non-nuclear renewable energy, or fossil energy with CO2 sequestration, together with efficient energy technologies, energy saving, and a decrease in energy consumption.
Statistics of Macroturbulence from Flow Equations
NASA Astrophysics Data System (ADS)
Marston, Brad; Iadecola, Thomas; Qi, Wanming
2012-02-01
Probability distribution functions of stochastically-driven and frictionally-damped fluids are governed by a linear framework that resembles quantum many-body theory. Besides the Fokker-Planck approach, there is a closely related Hopf functional method [Ookie Ma and J. B. Marston, J. Stat. Phys. Th. Exp. P10007 (2005)]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we generalize the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)] (also known as the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)]) to find the zero mode. We test the approach using a prototypical model of geophysical and astrophysical flows on a rotating sphere that spontaneously organizes into a coherent jet. Good agreement is found with low-order equal-time statistics accumulated by direct numerical simulation, the traditional method. Different choices for the generators of the continuous transformations, and for closure approximations of the operator algebra, are discussed.
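Wegner's flow equation, referenced in both abstracts above, evolves an operator continuously, dH/dl = [eta(l), H(l)] with the canonical generator eta = [diag(H), H], so that off-diagonal elements decay as the flow parameter l grows and eigenmodes (including zero modes) can be read off at the end of the flow. The following is a minimal numerical sketch of that idea on a toy symmetric matrix; it is an illustration only, since the work above generalizes the scheme to non-Hermitian operators and to operator algebras that require closure approximations.

```python
import numpy as np

def wegner_flow(H, l_max=20.0, dl=1e-3):
    """Integrate dH/dl = [eta, H] with the Wegner generator eta = [diag(H), H].

    Off-diagonal elements decay as the flow parameter l grows, so the final
    matrix is approximately diagonal and its entries approximate the
    eigenvalues of the original H.  Simple forward-Euler integration.
    """
    H = H.astype(float).copy()
    for _ in range(int(l_max / dl)):
        Hd = np.diag(np.diag(H))           # diagonal part of H
        eta = Hd @ H - H @ Hd              # Wegner generator [diag(H), H]
        H = H + dl * (eta @ H - H @ eta)   # dH/dl = [eta, H]
    return H

# Toy symmetric "Hamiltonian"
H0 = np.array([[1.0, 0.4, 0.1],
               [0.4, 2.0, 0.3],
               [0.1, 0.3, 3.0]])

H_final = wegner_flow(H0)
print("flow-diagonalized:", np.round(np.sort(np.diag(H_final)), 3))
print("exact eigenvalues:", np.round(np.sort(np.linalg.eigvalsh(H0)), 3))
```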
Information-geometric measures estimate neural interactions during oscillatory brain states
Nie, Yimin; Fellous, Jean-Marc; Tatsuno, Masami
2014-01-01
The characterization of functional network structures among multiple neurons is essential to understanding neural information processing. Information geometry (IG), a theory developed for investigating a space of probability distributions, has recently been applied to spike-train analysis and has provided robust estimations of neural interactions. Although neural firing in the equilibrium state is often assumed in these studies, in reality, neural activity is non-stationary. The brain exhibits various oscillations depending on cognitive demands or when an animal is asleep. Therefore, the investigation of the IG measures during oscillatory network states is important for testing how the IG method can be applied to real neural data. Using model networks of binary neurons or more realistic spiking neurons, we studied how the single- and pairwise-IG measures were influenced by oscillatory neural activity. Two general oscillatory mechanisms, externally driven oscillations and internally induced oscillations, were considered. In both mechanisms, we found that the single-IG measure was linearly related to the magnitude of the external input, and that the pairwise-IG measure was linearly related to the sum of connection strengths between two neurons. We also observed that the pairwise-IG measure was not dependent on the oscillation frequency. These results are consistent with the previous findings that were obtained under equilibrium conditions. Therefore, we demonstrate that the IG method provides useful insights into neural interactions under the oscillatory conditions that can often be observed in the real brain. PMID:24605089
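For a pair of binary neurons, the pairwise IG measure discussed in this abstract corresponds, in the log-linear expansion of the joint firing probabilities, to theta_ij = log(p11*p00/(p10*p01)). Below is a small sketch (my own illustration, not the authors' analysis code) that estimates it from simulated binary spike trains; the firing and copying probabilities are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_ig(x, y, eps=1e-9):
    """Pairwise information-geometric measure theta_ij for two binary trains.

    theta_ij = log( p11 * p00 / (p10 * p01) ), where pab is the joint
    probability of (x=a, y=b) estimated from the binned spike trains.
    """
    p11 = np.mean((x == 1) & (y == 1)) + eps
    p00 = np.mean((x == 0) & (y == 0)) + eps
    p10 = np.mean((x == 1) & (y == 0)) + eps
    p01 = np.mean((x == 0) & (y == 1)) + eps
    return np.log(p11 * p00 / (p10 * p01))

# Two correlated binary "neurons": y copies x on 30% of bins, else fires independently
n = 100_000
x = rng.random(n) < 0.2
copy = rng.random(n) < 0.3
y = np.where(copy, x, rng.random(n) < 0.2)

print("theta_ij (correlated pair):  %.3f" % pairwise_ig(x, y))
print("theta_ij (independent pair): %.3f" % pairwise_ig(x, rng.random(n) < 0.2))
```

An independent pair gives theta_ij near zero, while positive coupling pushes it above zero, which is the sense in which the measure tracks connection strength.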
NASA Astrophysics Data System (ADS)
Berkovich, Ronen; Klafter, Joseph; Urbakh, Michael
Free energy is one of the most fundamental thermodynamic functions, determining relative phase stability and serving as a generating function for other thermodynamic quantities. The calculation of free energies is a challenging enterprise. In equilibrium statistical mechanics, the free energy is related to the canonical partition function. The partition function itself involves integrations over all degrees of freedom in the system and, in most cases, cannot be easily calculated directly. In 1997, Jarzynski proved a remarkable equality that allows the equilibrium free-energy difference between two states to be computed from the probability distribution of the nonequilibrium work done on the system to switch between the two states. The Jarzynski equality provides a powerful free-energy difference estimator from a set of irreversible experiments. This method is closely related to the free-energy perturbation approach, which is also a computational technique for estimating free-energy differences. The ability to map potential profiles and topologies is of major significance to areas as diverse as biological recognition and nanoscale friction. This capability has been demonstrated for frictional studies where a force between the tip of the scanning force microscope and the surface is probed. The surface free-energy corrugation produces detectable friction forces. Thus, friction force microscopy (FFM) should be able to discriminate between energetically different areas on the probed surface. Here, we apply the Jarzynski equality to the analysis of FFM measurements and thus obtain the variation of the free energy along a surface.
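The Jarzynski estimator described above is Delta F = -kB*T * ln< exp(-W/(kB*T)) >, where the average runs over repeated nonequilibrium work measurements W for the same switching protocol. Below is a minimal sketch of the estimator on synthetic work values; the Gaussian work distribution and its parameters are assumptions for illustration, not FFM data.

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Estimate the free-energy difference from nonequilibrium work samples.

    Jarzynski equality:  exp(-dF/kT) = < exp(-W/kT) >.
    A log-sum-exp form is used for numerical stability.
    """
    w = np.asarray(work) / kT
    log_mean = -np.log(len(w)) + np.logaddexp.reduce(-w)
    return -kT * log_mean

# Synthetic "experiment": true dF = 2 kT, Gaussian work distribution with
# dissipation consistent with the fluctuation theorem (mean = dF + sigma^2 / 2kT).
rng = np.random.default_rng(1)
kT, dF, sigma = 1.0, 2.0, 1.0
work = rng.normal(loc=dF + sigma**2 / (2 * kT), scale=sigma, size=50_000)

print("Jarzynski estimate of dF: %.3f kT" % jarzynski_free_energy(work, kT))
print("Naive mean work:          %.3f kT" % work.mean())
```

The naive mean work overestimates Delta F by the dissipated part, while the exponential average recovers it; this is the property exploited when mapping free energy along a surface from FFM traces.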
NASA Astrophysics Data System (ADS)
Ma, Jing; Zhu, He
2018-06-01
In this study, we propose a novel rumor spreading model that accounts for individuals' subjective judgment and diverse characteristics. To reflect the diversity of individuals' characteristics, we introduce two probability distribution functions, which can be chosen arbitrarily or given by empirical data, to characterize individuals' degree of knowledge in the domain of a specific rumor and individuals' degree of rationality. Different from existing models, no two persons in our model are identical, and each individual judges the authenticity of information, e.g., rumors, according to his or her own distinctive characteristics. In addition, by means of the mean-field method, we establish the expression for the dynamics of rumor propagation in complex heterogeneous networks and derive the rumor spreading threshold. Through the theoretical analysis, we find that the threshold is independent of the forms of the two introduced functions. Furthermore, we prove the stability of the rumor-free equilibrium set E0: the rumor-free equilibrium set E0 is globally asymptotically stable if and only if R0 < 1. Finally, we conduct a series of numerical simulations to verify the theoretical results and comprehensively illustrate the evolution of the model. The simulation results show that, because of the diversity of individuals' characteristics, it becomes more difficult for the rumor to disseminate in the networks, and the higher the mean of knowledge and the mean of rationality are, the more time it takes for the model to evolve to the steady state.
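A heavily simplified simulation in the spirit of the model can show the qualitative effect reported here: each individual draws a knowledge level and a rationality level from two arbitrary distributions, and the chance of accepting a rumor on contact decreases with both. The sketch below is an assumption-laden illustration (homogeneous mixing, made-up acceptance rule and parameters), not the authors' mean-field equations or threshold derivation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ignorant/spreader/stifler rumor simulation with heterogeneous agents.
N, steps, contacts_per_step = 10_000, 60, 3
knowledge = rng.beta(2, 2, size=N)      # assumed distribution of knowledge
rationality = rng.beta(2, 2, size=N)    # assumed distribution of rationality
accept_prob = (1 - knowledge) * (1 - rationality)   # gullibility on contact (assumed rule)

state = np.zeros(N, dtype=int)          # 0 = ignorant, 1 = spreader, 2 = stifler
state[rng.choice(N, 10, replace=False)] = 1

for t in range(steps):
    spreaders = np.flatnonzero(state == 1)
    for _ in range(contacts_per_step):
        targets = rng.integers(0, N, size=spreaders.size)   # homogeneous mixing
        hit = (state[targets] == 0) & (rng.random(spreaders.size) < accept_prob[targets])
        state[targets[hit]] = 1
    # spreaders stop (become stiflers) with a fixed probability
    state[spreaders[rng.random(spreaders.size) < 0.2]] = 2

print("final fraction ever reached by the rumor:", np.mean(state != 0))
```

Raising the means of the knowledge and rationality distributions lowers the acceptance probabilities and slows the spread, consistent with the trend described in the abstract.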
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
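Under independence of the available and required friction, the single-integral form referred to above can be written as P(slip) = integral of f_required(mu) * F_available(mu) d(mu), which is convenient for trapezoidal integration. Here is a sketch with two assumed (and deliberately non-normal) distributions; the distribution choices and parameters are illustrative only.

```python
import numpy as np
from scipy import stats

def prob_slip(f_required_pdf, f_available_cdf, mu_grid):
    """P(slip) = P(available friction < required friction)
               = integral of f_required(mu) * F_available(mu) d(mu),
    evaluated with the trapezoidal rule on mu_grid (assumes independence)."""
    integrand = f_required_pdf(mu_grid) * f_available_cdf(mu_grid)
    return np.trapz(integrand, mu_grid)

# Assumed, illustrative distributions: lognormal required friction,
# normal available friction -- the point is that any combination works.
required = stats.lognorm(s=0.25, scale=0.22)     # required friction coefficient
available = stats.norm(loc=0.45, scale=0.08)     # available friction coefficient

mu = np.linspace(0.0, 1.2, 2001)
print("probability of slip per step: %.2e" % prob_slip(required.pdf, available.cdf, mu))

# Monte Carlo cross-check of the numerical integration
rng = np.random.default_rng(3)
mc = np.mean(available.rvs(1_000_000, random_state=rng)
             < required.rvs(1_000_000, random_state=rng))
print("Monte Carlo estimate:         %.2e" % mc)
```

A Monte Carlo draw from the same two distributions serves as a quick check on the trapezoidal result.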
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
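The "pipeline of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones" can be read as generating the output value one bit at a time, each bit drawn with a probability conditioned on the bits already produced. The software sketch below illustrates that bit-by-bit scheme for an assumed 8-bit target distribution; it is a reading of the idea, not a description of the actual circuit.

```python
import numpy as np

rng = np.random.default_rng(4)

def build_conditional_tables(pmf):
    """For an 8-bit target pmf, precompute P(bit_k = 1 | bits_0..k-1) for every
    prefix -- the software analogue of the memories in the hardware pipeline."""
    pmf = np.asarray(pmf, dtype=float)
    pmf = pmf / pmf.sum()
    tables = []
    for k in range(8):
        block = 2 ** (7 - k)                      # values sharing a given prefix+bit
        probs = {}
        for prefix in range(2 ** k):
            lo = prefix * 2 * block
            total = pmf[lo:lo + 2 * block].sum()
            ones = pmf[lo + block:lo + 2 * block].sum()
            probs[prefix] = ones / total if total > 0 else 0.0
        tables.append(probs)
    return tables

def draw(tables):
    """Emit one 8-bit sample: each stage compares a uniform random word
    against the stored conditional probability for the current prefix."""
    value = 0
    for k in range(8):
        bit = 1 if rng.random() < tables[k][value] else 0   # comparator stage
        value = (value << 1) | bit
    return value

# Example target: a discretized, nonuniform (triangular) distribution on 0..255
target = np.minimum(np.arange(256), 255 - np.arange(256)) + 1.0
tables = build_conditional_tables(target)
samples = np.array([draw(tables) for _ in range(200_000)])
print("sample mean %.1f vs target mean %.1f"
      % (samples.mean(), np.average(np.arange(256), weights=target / target.sum())))
```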
Asymptotics of small deviations of the Bogoliubov processes with respect to a quadratic norm
NASA Astrophysics Data System (ADS)
Pusev, R. S.
2010-10-01
We obtain results on small deviations of Bogoliubov’s Gaussian measure occurring in the theory of the statistical equilibrium of quantum systems. For some random processes related to Bogoliubov processes, we find the exact asymptotics of their small-deviation probabilities with respect to a Hilbert norm.
Atomic Spectroscopic Databases at NIST
NASA Technical Reports Server (NTRS)
Reader, J.; Kramida, A. E.; Ralchenko, Yu.
2006-01-01
We describe recent work at NIST to develop and maintain databases for spectra, transition probabilities, and energy levels of atoms that are astrophysically important. Our programs to critically compile these data as well as to develop a new database to compare plasma calculations for atoms that are not in local thermodynamic equilibrium are also summarized.
Lopez-Escamez, J A; Carey, J; Chung, W-H; Goebel, J A; Magnusson, M; Mandalà, M; Newman-Toker, D E; Strupp, M; Suzuki, M; Trabalzini, F; Bisdorff, A
2017-11-01
This paper presents diagnostic criteria for Menière's disease jointly formulated by the Classification Committee of the Bárány Society, The Japan Society for Equilibrium Research, the European Academy of Otology and Neurotology (EAONO), the Equilibrium Committee of the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) and the Korean Balance Society. The classification includes two categories: definite Menière's disease and probable Menière's disease. The diagnosis of definite Menière's disease is based on clinical criteria and requires the observation of an episodic vertigo syndrome associated with low- to medium-frequency sensorineural hearing loss and fluctuating aural symptoms (hearing, tinnitus and/or fullness) in the affected ear. Duration of vertigo episodes is limited to a period between 20 min and 12 h. Probable Menière's disease is a broader concept defined by episodic vestibular symptoms (vertigo or dizziness) associated with fluctuating aural symptoms occurring in a period from 20 min to 24 h.
NASA Astrophysics Data System (ADS)
Kallinger, Peter; Szymanski, Wladyslaw W.
2015-04-01
Three bipolar aerosol chargers, an AC-corona charger (Electrical Ionizer 1090, MSP Corp.), a soft X-ray charger (Advanced Aerosol Neutralizer 3087, TSI Inc.), and an α-radiation-based 241Am charger (tapcon & analysesysteme), were investigated with respect to their performance in charging airborne nanoparticles. The charging probabilities for negatively and positively charged particles and the particle size conservation were measured in the diameter range of 5-40 nm using sucrose nanoparticles. The chargers were operated under various flow conditions in the range of 0.6-5.0 liters per minute. For particular experimental conditions, some deviations from the chosen theoretical model were found for all chargers. For very small particle sizes, the AC-corona charger showed particle losses at low flow rates and did not reach steady-state charge equilibrium at high flow rates. However, for all chargers, operating conditions were identified under which the bipolar charge equilibrium was achieved. Practically, excellent particle size conservation was found for all three chargers.
Dynamics of isolated quantum systems: many-body localization and thermalization
NASA Astrophysics Data System (ADS)
Torres-Herrera, E. Jonathan; Tavora, Marco; Santos, Lea F.
2016-05-01
We show that the transition to a many-body localized phase and the onset of thermalization can be inferred from the analysis of the dynamics of isolated quantum systems taken out of equilibrium abruptly. The systems considered are described by one-dimensional spin-1/2 models with static random magnetic fields and by power-law band random matrices. We find that the short-time decay of the survival probability of the initial state is faster than exponential for sufficiently strong perturbations. This initial evolution does not depend on whether the system is integrable or chaotic, disordered or clean. At long times, the dynamics necessarily slows down and shows a power-law behavior. The value of the power-law exponent indicates whether the system will reach thermal equilibrium or not. We present how the properties of the spectrum, the structure of the initial state, and the number of particles that interact simultaneously affect the value of the power-law exponent. We also compare the results for the survival probability with those for few-body observables. EJTH acknowledges financial support from PRODEP-SEP and VIEP-BUAP, Mexico.
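The survival probability studied here is W(t) = |<Psi(0)|exp(-iHt)|Psi(0)>|^2, which in the eigenbasis of H reduces to |sum_alpha |c_alpha|^2 exp(-i E_alpha t)|^2, with c_alpha the overlaps of the initial state with the eigenstates. A short sketch for a random-matrix stand-in Hamiltonian follows; it is illustrative only, since the paper works with disordered spin-1/2 chains and power-law band random matrices.

```python
import numpy as np

rng = np.random.default_rng(5)

def survival_probability(H, psi0, times):
    """W(t) = |<psi0| exp(-iHt) |psi0>|^2, computed in the eigenbasis of H."""
    E, V = np.linalg.eigh(H)
    c = V.conj().T @ psi0                 # overlaps of the initial state with eigenstates
    amp = (np.abs(c) ** 2)[None, :] * np.exp(-1j * np.outer(times, E))
    return np.abs(amp.sum(axis=1)) ** 2

# Random symmetric matrix as a stand-in Hamiltonian; initial state = one basis vector
n = 500
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)
psi0 = np.zeros(n)
psi0[0] = 1.0

t = np.logspace(-1, 2, 200)
W = survival_probability(H, psi0, t)
print("W(t) at t = 0.1, 1, 10, 100:", np.round(W[[0, 66, 133, 199]], 4))
```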
Macro-economic assessment of flood risk in Italy under current and future climate
NASA Astrophysics Data System (ADS)
Carrera, Lorenzo; Koks, Elco; Mysiak, Jaroslav; Aerts, Jeroen; Standardi, Gabriele
2014-05-01
This paper explores an integrated methodology for assessing the direct and indirect costs of fluvial flooding in order to estimate current and future fluvial flood risk in Italy. Our methodology combines a Geographic Information System spatial approach with a general economic equilibrium approach using a downscaled, modified version of a Computable General Equilibrium model at NUTS2 scale. Given the level of uncertainty in the behavior of disaster-affected economies, the simulation considers a wide range of business recovery periods. We calculate expected annual losses for each NUTS2 region, and exceedance probability curves to determine probable maximum losses. Given a certain acceptable level of risk, we describe the conditions of flood protection and business recovery periods under which losses are contained within this limit. Because of the difference between direct costs, which are an overestimation of stock losses, and indirect costs, which represent the macro-economic effects, our results have different policy meanings. While the former are relevant for post-disaster recovery, the latter are more relevant for public policy issues, particularly for cost-benefit analysis and resilience assessment.
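Expected annual losses are commonly obtained by integrating the loss-exceedance curve over annual exceedance probability, EAL = integral of L(p) dp with p = 1/(return period). Below is a generic sketch with made-up return periods and losses (not the NUTS2 figures of this study); the simple trapezoid over the tabulated points ignores losses beyond the largest return period.

```python
import numpy as np

def expected_annual_loss(return_periods, losses):
    """Integrate the loss-exceedance curve: EAL = integral of L(p) dp, where
    p = 1 / return period is the annual exceedance probability."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    L = np.asarray(losses, dtype=float)
    order = np.argsort(p)                  # integrate over increasing probability
    return np.trapz(L[order], p[order])

# Illustrative exceedance points: (return period in years, direct loss in M EUR)
return_periods = [10, 25, 50, 100, 250, 500]
losses =         [50, 180, 400, 700, 1200, 1600]

print("expected annual loss: %.1f M EUR/yr" % expected_annual_loss(return_periods, losses))
```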
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, and joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed-circuit television experiment are included.
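A rectangular-region probability for the bivariate normal can be assembled from four evaluations of the joint CDF by inclusion-exclusion. The sketch below uses modern scipy routines rather than the programs described in this report; the means, covariance, and rectangle are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def rect_prob(mean, cov, lower, upper):
    """P(lower1 < X < upper1, lower2 < Y < upper2) for a bivariate normal,
    via inclusion-exclusion on the joint CDF."""
    mvn = multivariate_normal(mean=mean, cov=cov)
    x1, y1 = lower
    x2, y2 = upper
    return (mvn.cdf([x2, y2]) - mvn.cdf([x1, y2])
            - mvn.cdf([x2, y1]) + mvn.cdf([x1, y1]))

mean = [0.0, 0.0]
cov = [[1.0, 0.6], [0.6, 1.0]]          # correlation 0.6
p = rect_prob(mean, cov, lower=(-1.0, -1.0), upper=(1.0, 1.0))
print("P(-1<X<1, -1<Y<1) = %.4f" % p)

# Monte Carlo cross-check
rng = np.random.default_rng(6)
xy = rng.multivariate_normal(mean, cov, size=1_000_000)
print("Monte Carlo:        %.4f" % np.mean((np.abs(xy[:, 0]) < 1) & (np.abs(xy[:, 1]) < 1)))
```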
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
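Under the Poisson assumption mentioned above, probabilities from independent sources aggregate simply: the probability of at least one qualifying tsunami in T years is 1 - exp(-T * sum_i lambda_i). A short sketch with placeholder annual rates follows; the numbers are hypothetical and are not the source rates estimated in this study.

```python
import math

def aggregate_poisson_probability(annual_rates, horizon_years):
    """P(at least one event in the horizon) when each source i produces
    qualifying tsunamis as an independent Poisson process with rate lambda_i."""
    total_rate = sum(annual_rates)
    return 1.0 - math.exp(-total_rate * horizon_years)

# Placeholder annual rates for a few hypothetical source classes (per year)
rates = {
    "far-field plate-boundary earthquakes": 2.0e-3,
    "near-field continental-slope landslides": 5.0e-4,
    "other": 1.0e-4,
}

for T in (50, 100, 500):
    p = aggregate_poisson_probability(rates.values(), T)
    print(f"P(>=1 qualifying tsunami in {T:3d} yr) = {p:.3f}")
```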
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classification vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
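A small data-generating sketch makes the heterogeneity concrete: each sampling unit gets its own correct-classification probability drawn from a logit-normal distribution, and the naive pooled estimate of the category probabilities is then biased. All names and parameter values below are illustrative; this is not the authors' MCMC implementation.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(7)

# True categories are multinomial; each sampling unit has its own
# correct-classification probability drawn from a logit-normal distribution.
n_units, n_per_unit = 200, 50
true_pi = np.array([0.5, 0.3, 0.2])            # true multinomial category probabilities
mu_logit, sd_logit = 1.5, 0.8                  # assumed logit-normal hyperparameters

p_correct = expit(rng.normal(mu_logit, sd_logit, size=n_units))   # varies by unit

observed = np.zeros((n_units, 3), dtype=int)
for u in range(n_units):
    truth = rng.choice(3, size=n_per_unit, p=true_pi)
    correct = rng.random(n_per_unit) < p_correct[u]
    # misclassified observations land uniformly on one of the other two categories
    noise = (truth + rng.integers(1, 3, size=n_per_unit)) % 3
    obs = np.where(correct, truth, noise)
    observed[u] = np.bincount(obs, minlength=3)

naive = observed.sum(axis=0) / observed.sum()
print("true category probabilities   :", true_pi)
print("naive (error-ignoring) estimate:", np.round(naive, 3))
```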
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
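The reciprocal distribution singled out above has density p(x) proportional to 1/x on a finite support [a, b]. Its scale invariance is easy to check numerically: rescaling the variable and its support by the same factor leaves the probability of any rescaled interval unchanged. The following is a small check of that property (my own illustration, not the paper's construction).

```python
import numpy as np

def reciprocal_prob(lo, hi, a, b):
    """Probability that a reciprocal-distributed variable on [a, b] lies in [lo, hi]."""
    lo, hi = max(lo, a), min(hi, b)
    if hi <= lo:
        return 0.0
    return np.log(hi / lo) / np.log(b / a)

a, b = 1.0, 1000.0
c = 7.3   # arbitrary change of scale

# Probability of one decade, before and after rescaling everything by c:
print("P(10 <= X <= 100) on [a, b]              =", reciprocal_prob(10, 100, a, b))
print("P(10c <= X <= 100c) on [a*c, b*c] (scaled) =",
      reciprocal_prob(10 * c, 100 * c, a * c, b * c))
```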