Sample records for maximizing entropy yields

  1. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for an enzyme reaction in a steady state. Mass and Gibbs free energy conservation are imposed as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution over the enzyme states, which corresponds to maximal Shannon information entropy. Stability analysis further demonstrates that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
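
    The link between optimal rate constants and uniform state occupancies can be illustrated with a toy calculation: over all occupancy splits of a hypothetical two-state enzyme (E and ES), the Shannon entropy is maximized by the uniform split. This is a minimal sketch, not the authors' model; the two-state scheme and the grid search are illustrative assumptions.

```python
import math

def shannon(p):
    # Shannon entropy in nats; zero-probability terms contribute nothing
    return -sum(x * math.log(x) for x in p if x > 0)

# exhaustive grid search over the occupancy split of a hypothetical
# two-state enzyme (states E and ES)
best_h, best_p = max(
    (shannon([p, 1.0 - p]), p) for p in (i / 1000 for i in range(1, 1000))
)
```

    As expected for a two-state system, the entropy peaks at ln 2 ≈ 0.693 for equal occupancies.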

  2. Energy conservation and maximal entropy production in enzyme reactions.

    PubMed

    Dobovišek, Andrej; Vitas, Marko; Brumen, Milan; Fajmut, Aleš

    2017-08-01

    A procedure for maximizing the density of entropy production in a single stationary two-step enzyme reaction is developed. Under the constraints of mass conservation, a fixed equilibrium constant of the reaction, and fixed products of forward and backward enzyme rate constants, the existence of a maximum in the density of entropy production is demonstrated. In the state with maximal density of entropy production, the optimal enzyme rate constants, the stationary concentrations of the substrate and the product, the stationary product yield, and the stationary reaction flux are calculated. Whether these calculated values of the reaction parameters are consistent with their corresponding measured values is tested for the enzyme Glucose Isomerase. The calculated and measured rate constants agree within an order of magnitude, whereas the calculated reaction flux and product yield differ from their corresponding measured values by less than 20% and 5%, respectively. This indicates that the enzyme Glucose Isomerase, considered in a non-equilibrium stationary state as found in experiments using continuous stirred-tank reactors, possibly operates close to the state with maximal density of entropy production. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-08

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.

  4. From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2014-12-01

    The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that-on a logarithmic scale-the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle-class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset-prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
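
    The entropy-maximizing role of the Laplace law under an absolute-deviation constraint can be checked numerically: at a matched mean absolute deviation, the Laplace density has strictly higher differential entropy than the Gaussian. A minimal sketch using the closed-form entropies of both families; the choice of deviation value is arbitrary.

```python
import math

mad = 1.0  # shared mean absolute deviation E|X - mu| (arbitrary choice)

# Laplace(b) has E|X - mu| = b and differential entropy 1 + ln(2b)
h_laplace = 1.0 + math.log(2.0 * mad)

# Gaussian(sigma) has E|X - mu| = sigma * sqrt(2/pi)
sigma = mad / math.sqrt(2.0 / math.pi)
h_gauss = 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

# the Laplace density maximizes entropy among densities with fixed
# mean absolute deviation, so h_laplace exceeds h_gauss
```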

  5. Nonadditive entropies yield probability distributions with biases not warranted by the data.

    PubMed

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A

    2013-11-01

    Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that the inferred probability distributions satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Entropies that violate the Shore-Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vignat, C.; Bercher, J.-F.

    The family of Tsallis entropies was introduced by Tsallis in 1988. The Shannon entropy belongs to this family as the limit case q → 1. The canonical distributions in R^n that maximize this entropy under a covariance constraint are easily derived as Student-t (q < 1) and Student-r (q > 1) multivariate distributions. A nice geometrical result about these Student-r distributions is that they are marginals of uniform distributions on a sphere of larger dimension d, with the relationship d = n + 2 + 2/(q - 1). As q → 1, we recover Poincaré's famous observation that a Gaussian vector can be viewed as the projection of a vector uniformly distributed on an infinite-dimensional sphere. A related property is also available in the case q < 1. Often associated with Rényi-Tsallis entropies is the notion of escort distributions; we provide here a geometric interpretation of these distributions. Another result concerns a universal system in physics, the harmonic oscillator: in the usual quantum context, the waveform of the n-th state of the harmonic oscillator is a Gaussian waveform multiplied by the degree-n Hermite polynomial. We show, starting from recent results by Cariñena et al., that the quantum harmonic oscillator on spaces with constant curvature is described by maximal Tsallis entropy waveforms multiplied by the extended Hermite polynomials derived from this measure. This gives a neat interpretation of the non-extensive parameter q in terms of the curvature of the space the oscillator evolves on; as q → 1, the curvature of the space goes to 0 and we recover the classical harmonic oscillator in R^3.
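
    The Poincaré observation mentioned in this record is easy to reproduce by simulation: the first coordinate of a point drawn uniformly from the radius-√d sphere in d dimensions is approximately standard normal for large d. A minimal sketch; the dimension and sample count are arbitrary choices.

```python
import math, random

random.seed(0)
d, n = 200, 5000
samples = []
for _ in range(n):
    # a normalized Gaussian vector is uniformly distributed on the unit sphere
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in g))
    # scale to the radius-sqrt(d) sphere and keep the first coordinate
    samples.append(math.sqrt(d) * g[0] / norm)

var = sum(x * x for x in samples) / n  # should be close to 1, as for N(0, 1)
```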

  7. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns.

    PubMed

    Lezon, Timothy R; Banavar, Jayanth R; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V

    2006-12-12

    We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems.

  8. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns

    PubMed Central

    Lezon, Timothy R.; Banavar, Jayanth R.; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V.

    2006-01-01

    We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems. PMID:17138668
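
    For Gaussian-distributed expression data, the pairwise maximum entropy model is the multivariate Gaussian, and the inferred interaction matrix is the inverse of the sample covariance matrix. The sketch below works this out for two hypothetical genes; the data values are made up for illustration and are not from the paper.

```python
# toy expression levels for two hypothetical genes (illustrative numbers)
data = [(1.0, 2.0), (2.0, 1.0), (1.5, 1.5), (0.5, 2.5), (2.5, 2.0)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
cxx = sum((x - mx) ** 2 for x, _ in data) / n
cyy = sum((y - my) ** 2 for _, y in data) / n
cxy = sum((x - mx) * (y - my) for x, y in data) / n

# interaction (precision) matrix = inverse of the 2x2 covariance matrix
det = cxx * cyy - cxy ** 2
J = [[cyy / det, -cxy / det],
     [-cxy / det, cxx / det]]
# a nonzero off-diagonal entry of J signals a direct pairwise interaction
```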

  9. A Maximal Entropy Distribution Derivation of the Sharma-Taneja-Mittal Entropic Form

    NASA Astrophysics Data System (ADS)

    Scarfone, Antonio M.

    In this letter we derive the distribution maximizing the Sharma-Taneja-Mittal entropy under certain constraints by using an information inequality satisfied by the Bregman divergence associated with this entropic form. The resulting maximal entropy distribution coincides with the one obtained from the variational calculus according to the maximal entropy principle à la Jaynes.

  10. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
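
    For a Bernoulli likelihood, Jeffreys's rule gives a prior proportional to the square root of the Fisher information, which normalizes to the Beta(1/2, 1/2) distribution with normalizing constant π. A minimal numerical check using midpoint integration; the grid size is an arbitrary choice.

```python
import math

def fisher_info(theta):
    # Fisher information of a single Bernoulli(theta) observation
    return 1.0 / (theta * (1.0 - theta))

# Jeffreys's rule: prior proportional to sqrt(I(theta)); for the Bernoulli
# model this is the Beta(1/2, 1/2) density, whose normalizer is pi
N = 200000
Z = sum(math.sqrt(fisher_info((i + 0.5) / N)) / N for i in range(N))
```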

  11. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints

    NASA Astrophysics Data System (ADS)

    Preda, Vasile; Dedu, Silvia; Gheorghe, Carmen

    2015-10-01

    In this paper, by using the entropy maximization principle with Tsallis entropy, new distribution families for modeling the income distribution are derived. Also, new classes of Lorenz curves are obtained by applying the entropy maximization principle with Tsallis entropy, under mean and Gini index equality and inequality constraints.
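
    The Lorenz curve and Gini index that serve as constraints in these maximization problems can be computed directly from income samples. A minimal sketch using the trapezoidal area under the empirical Lorenz curve; the data are illustrative.

```python
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # empirical Lorenz curve: cumulative income share at each population share
    cum, lorenz = 0.0, []
    for x in xs:
        cum += x
        lorenz.append(cum / total)
    # Gini = 1 - 2 * (area under the Lorenz curve), trapezoidal rule
    area, prev = 0.0, 0.0
    for y in lorenz:
        area += (prev + y) / (2 * n)
        prev = y
    return 1.0 - 2.0 * area

g_equal = gini([1.0] * 100)                # perfect equality -> Gini 0
g_concentrated = gini([0.0] * 99 + [1.0])  # one person holds everything
```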

  12. Nonadditive entropy maximization is inconsistent with Bayesian updating

    NASA Astrophysics Data System (ADS)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  13. Nonadditive entropy maximization is inconsistent with Bayesian updating.

    PubMed

    Pressé, Steve

    2014-11-01

    The maximum entropy method, used to infer probabilistic models from data, is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  14. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  15. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities

    NASA Astrophysics Data System (ADS)

    Melchert, O.; Hartmann, A. K.

    2015-02-01

    In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L^2 = 128^2 for different system temperatures T. The latter were chosen from an interval enclosing the critical point T_c of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how good the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.

  16. Gradient Dynamics and Entropy Production Maximization

    NASA Astrophysics Data System (ADS)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies the Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of the free energy (or another thermodynamic potential) and the entropy production. It also leads to the linear Onsager reciprocal relations and has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium. We compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. In addition, a commonly used but rarely mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.

  17. Diffusive mixing and Tsallis entropy

    DOE PAGES

    O'Malley, Daniel; Vesselinov, Velimir V.; Cushman, John H.

    2015-04-29

    Brownian motion, the classical diffusive process, maximizes the Boltzmann-Gibbs entropy. The Tsallis q-entropy, which is non-additive, was developed as an alternative to the classical entropy for systems which are non-ergodic. A generalization of Brownian motion is provided that maximizes the Tsallis entropy rather than the Boltzmann-Gibbs entropy. This process is driven by a Brownian measure with a random diffusion coefficient. In addition, the distribution of this coefficient is derived as a function of q for 1 < q < 3. Applications to transport in porous media are considered.
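
    The Tsallis q-entropy S_q = (1 - Σ_i p_i^q)/(q - 1) reduces to the Boltzmann-Gibbs (Shannon) entropy as q → 1, which a quick numerical check confirms; the distribution below is an arbitrary example.

```python
import math

def tsallis(p, q):
    # Tsallis q-entropy: S_q = (1 - sum_i p_i**q) / (q - 1)
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.5, 0.3, 0.2]
s_q = tsallis(p, 1.0001)  # q close to 1
s_1 = shannon(p)          # Boltzmann-Gibbs limit
```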

  18. Ehrenfest's Lottery--Time and Entropy Maximization

    ERIC Educational Resources Information Center

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…

  19. Maximum and minimum entropy states yielding local continuity bounds

    NASA Astrophysics Data System (ADS)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ_ε^*(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ_ε^*(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the min- and max-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
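
    The Audenaert-Fannes continuity bound that this work strengthens is simple to verify numerically for commuting (diagonal) states, where the trace distance and von Neumann entropy reduce to their classical counterparts. The two spectra below are made-up examples.

```python
import math

def shannon(p):
    # classical (von Neumann, for diagonal states) entropy in nats
    return -sum(x * math.log(x) for x in p if x > 0)

def binary_entropy(e):
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * math.log(e) - (1 - e) * math.log(1 - e)

# two commuting states of a d-level system, given by their spectra
sigma = [0.7, 0.2, 0.1]
rho = [0.6, 0.25, 0.15]
d = len(sigma)

eps = 0.5 * sum(abs(a - b) for a, b in zip(rho, sigma))  # trace distance
lhs = abs(shannon(rho) - shannon(sigma))                 # entropy difference
rhs = eps * math.log(d - 1) + binary_entropy(eps)        # Audenaert-Fannes bound
```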

  20. Coherence and entanglement measures based on Rényi relative entropies

    NASA Astrophysics Data System (ADS)

    Zhu, Huangjun; Hayashi, Masahito; Chen, Lin

    2017-11-01

    We study systematically resource measures of coherence and entanglement based on Rényi relative entropies, which include the logarithmic robustness of coherence, geometric coherence, and conventional relative entropy of coherence together with their entanglement analogues. First, we show that each Rényi relative entropy of coherence is equal to the corresponding Rényi relative entropy of entanglement for any maximally correlated state. By virtue of this observation, we establish a simple operational connection between entanglement measures and coherence measures based on Rényi relative entropies. We then prove that all these coherence measures, including the logarithmic robustness of coherence, are additive. Accordingly, all these entanglement measures are additive for maximally correlated states. In addition, we derive analytical formulas for Rényi relative entropies of entanglement of maximally correlated states and bipartite pure states, which reproduce a number of classic results on the relative entropy of entanglement and logarithmic robustness of entanglement in a unified framework. Several nontrivial bounds for Rényi relative entropies of coherence (entanglement) are further derived, which improve over results known previously. Moreover, we determine all states whose relative entropy of coherence is equal to the logarithmic robustness of coherence. As an application, we provide an upper bound for the exact coherence distillation rate, which is saturated for pure states.

  21. Quantifying and minimizing entropy generation in AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, T.J.; Huang, C.

    1997-12-31

    Entropy generation in an AMTEC cell represents inherent power loss to the cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is ongoing at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum-power cell designs. In many cases, various sources of entropy generation are interrelated, such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained, and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.

  22. Is the catalytic activity of triosephosphate isomerase fully optimized? An investigation based on maximization of entropy production.

    PubMed

    Bonačić Lošić, Željana; Donđivić, Tomislav; Juretić, Davor

    2017-03-01

    Triosephosphate isomerase (TIM) is often described as a fully evolved housekeeping enzyme with near-maximal possible reaction rate. The assumption that an enzyme is perfectly evolved has not been easy to confirm or refute. In this paper, we use maximization of entropy production within known constraints to examine this assumption by calculating steady-state cyclic flux, corresponding entropy production, and catalytic activity in a reversible four-state scheme of TIM functional states. The maximal entropy production (MaxEP) requirement for any of the first three transitions between TIM functional states leads to decreased total entropy production. Only the MaxEP requirement for the product (R-glyceraldehyde-3-phosphate) release step led to a 30% increase in enzyme activity, specificity constant k_cat/K_M, and overall entropy production. The product release step, due to the TIM molecular machine working in the physiological direction of glycolysis, has not been identified before as the rate-limiting step by using irreversible thermodynamics. Together with structural studies, our results open the possibility for finding amino acid substitutions leading to an increased frequency of loop six opening and product release.

  23. Transmitting Information by Propagation in an Ocean Waveguide: Computation of Acoustic Field Capacity

    DTIC Science & Technology

    2015-06-17

    …progress, Eq. (4) is evaluated in terms of the differential entropy h. The integrals can be identified as differential entropy terms by expanding the log … all random vectors p with a given covariance matrix, the entropy of p is maximized when p is ZMCSCG, since a normal distribution maximizes the entropy over all distributions with the same covariance [9, 18], implying that this is the optimal distribution on s as well. In addition, of all the …

  24. Entropy of spatial network ensembles

    NASA Astrophysics Data System (ADS)

    Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis

    2018-04-01

    We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
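
    Because edges are formed independently given the node positions, the conditional entropy of a soft random geometric graph decomposes into a sum of binary entropies of the pair connection probabilities. A minimal sketch with an exponential-decay connection function; the decay parameter and node count are arbitrary assumptions.

```python
import math, random

def h2(p):
    # binary entropy (bits) of an edge present with probability p
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(2)
nodes = [(random.random(), random.random()) for _ in range(20)]
beta = 5.0  # hypothetical decay rate of the pair connection function

# conditional graph entropy given positions: sum of per-pair binary entropies
H = 0.0
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        dx = nodes[i][0] - nodes[j][0]
        dy = nodes[i][1] - nodes[j][1]
        p = math.exp(-beta * (dx * dx + dy * dy))  # soft connection probability
        H += h2(p)
```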

  25. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    Power-law distributions play an increasingly important role in the study of complex systems. Building on the idea of incomplete statistics, which reflects the intractability of complex systems, three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy. Probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are then derived from the Shannon entropy and the maximal entropy principle. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Since power-law distributions, and distributions of the product form between a power function and an exponential function, cannot be derived from the equal-probability hypothesis but can be derived with the aid of the maximal entropy principle, we conclude that the maximal entropy principle is the more basic principle: it embodies concepts more extensively and reveals the fundamental laws governing the motion of objects more deeply. The principle also reveals intrinsic links between Nature and the various objects of human society, and principles with which they all comply.
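
    The power-law case can be made concrete: maximizing the Shannon entropy subject to a constraint on the mean of ln x yields p(x) ∝ x^(-λ), with the Lagrange multiplier λ fixed by the constraint. A sketch on a discrete support, solving for λ by bisection; the support and target value are arbitrary choices.

```python
import math

xs = [1.0 + 0.1 * i for i in range(1, 100)]  # discrete support (illustrative)
target = 0.5  # required mean of ln(x)

def mean_log(lam):
    # mean of ln(x) under the maxent distribution p(x) proportional to x**(-lam)
    w = [x ** (-lam) for x in xs]
    z = sum(w)
    return sum(wi * math.log(x) for wi, x in zip(w, xs)) / z

# mean_log decreases as lam grows, so bisect for the Lagrange multiplier
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_log(mid) > target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
```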

  26. A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter

    DTIC Science & Technology

    2015-06-27

    …function is the information-theoretic part of the problem and aims for entropy maximization, while the second one arises from the constraint in the … objective functions in conflict. The first objective function is the information-theoretic part of the problem and aims for entropy maximization … theory. For the sake of completeness and clarity, we also summarize how each concept is utilized later. Entropy: A random variable is statistically …

  27. Maximum entropy models as a tool for building precise neural controls.

    PubMed

    Savin, Cristina; Tkačik, Gašper

    2017-10-01

    Neural responses are highly structured, with population activity restricted to a small subset of the astronomical range of possible activity patterns. Characterizing these statistical regularities is important for understanding circuit computation, but challenging in practice. Here we review recent approaches based on the maximum entropy principle used for quantifying collective behavior in neural activity. We highlight recent models that capture population-level statistics of neural data, yielding insights into the organization of the neural code and its biological substrate. Furthermore, the MaxEnt framework provides a general recipe for constructing surrogate ensembles that preserve aspects of the data, but are otherwise maximally unstructured. This idea can be used to generate a hierarchy of controls against which rigorous statistical tests are possible. Copyright © 2017 Elsevier Ltd. All rights reserved.

  28. Maps on positive operators preserving Rényi type relative entropies and maximal f-divergences

    NASA Astrophysics Data System (ADS)

    Gaál, Marcell; Nagy, Gergő

    2018-02-01

    In this paper, we deal with two quantum relative entropy preserver problems on the cones of positive (either positive definite or positive semidefinite) operators. The first one is related to a quantum Rényi relative entropy like quantity which plays an important role in classical-quantum channel decoding. The second one is connected to the so-called maximal f-divergences introduced by D. Petz and M. B. Ruskai who considered this quantity as a generalization of the usual Belavkin-Staszewski relative entropy. We emphasize in advance that all the results are obtained for finite-dimensional Hilbert spaces.

  29. Entanglement Entropy of Eigenstates of Quantum Chaotic Hamiltonians.

    PubMed

    Vidmar, Lev; Rigol, Marcos

    2017-12-01

    In quantum statistical mechanics, it is of fundamental interest to understand how close the bipartite entanglement entropy of eigenstates of quantum chaotic Hamiltonians is to maximal. For random pure states in the Hilbert space, the average entanglement entropy is known to be nearly maximal, with a deviation that is, at most, a constant. Here we prove that, in a system that is away from half filling and divided in two equal halves, an upper bound for the average entanglement entropy of random pure states with a fixed particle number and normally distributed real coefficients exhibits a deviation from the maximal value that grows with the square root of the volume of the system. Exact numerical results for highly excited eigenstates of a particle number conserving quantum chaotic model indicate that the bound is saturated with increasing system size.

  10. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  11. Most energetic passive states.

    PubMed

    Perarnau-Llobet, Martí; Hovhannisyan, Karen V; Huber, Marcus; Skrzypczyk, Paul; Tura, Jordi; Acín, Antonio

    2015-10-01

    Passive states are defined as those states that do not allow for work extraction in a cyclic (unitary) process. Within the set of passive states, thermal states are the most stable ones: they maximize the entropy for a given energy, and similarly they minimize the energy for a given entropy. Here we find the passive states lying in the other extreme, i.e., those that maximize the energy for a given entropy, which we show also minimize the entropy when the energy is fixed. These extremal properties make these states useful to obtain fundamental bounds for the thermodynamics of finite-dimensional quantum systems, which we show in several scenarios.

  12. Tailoring nanoscopic confines to maximize catalytic activity of hydronium ions

    NASA Astrophysics Data System (ADS)

    Shi, Hui; Eckstein, Sebastian; Vjunov, Aleksei; Camaioni, Donald M.; Lercher, Johannes A.

    2017-05-01

    Acid catalysis by hydronium ions is ubiquitous in aqueous-phase organic reactions. Here we show that hydronium ion catalysis, exemplified by intramolecular dehydration of cyclohexanol, is markedly influenced by steric constraints, yielding turnover rates that increase by up to two orders of magnitude in tight confines relative to an aqueous solution of a Brønsted acid. The higher activities in zeolites BEA and FAU than in water are caused by more positive activation entropies that more than offset higher activation enthalpies. The higher activity in zeolite MFI with pores smaller than BEA and FAU is caused by a lower activation enthalpy in the tighter confines that more than offsets a less positive activation entropy. Molecularly sized pores significantly enhance the association between hydronium ions and alcohols in a steric environment resembling the constraints in pockets of enzymes stabilizing active sites.

  13. Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2017-04-01

    Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under constraints on the generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to the generalized Pareto family.
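The Tsallis entropy on which the abstract builds has a simple closed form, S_q(p) = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A minimal sketch (the example distributions are illustrative, not from the paper):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1); Shannon at q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

uniform = [0.25] * 4
peaked = [0.97, 0.01, 0.01, 0.01]

# The uniform distribution maximizes S_q for any q > 0.
assert tsallis_entropy(uniform, 2.0) > tsallis_entropy(peaked, 2.0)

# As q -> 1, S_q approaches the Shannon entropy.
shannon = tsallis_entropy(uniform, 1.0)
assert abs(tsallis_entropy(uniform, 1.000001) - shannon) < 1e-4
```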

  14. Holographic equipartition and the maximization of entropy

    NASA Astrophysics Data System (ADS)

    Krishna, P. B.; Mathew, Titus K.

    2017-09-01

    The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf/bulk is the number of degrees of freedom on the horizon/in the bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.

  15. The Role of Shape Complementarity in the Protein-Protein Interactions

    PubMed Central

    Li, Ye; Zhang, Xianren; Cao, Dapeng

    2013-01-01

    We use a dissipative particle dynamics simulation to investigate the effects of shape complementarity on protein-protein interactions. By monitoring different kinds of protein shape-complementarity modes, we give a clear mechanism for the role of shape complementarity in protein-protein interactions: when two proteins with shape complementarity approach each other, the conformation of the lipid chains between them is restricted significantly. The lipid molecules tend to leave the gap formed by the two proteins to maximize their configurational entropy, and therefore yield an effective entropy-induced protein-protein attraction, which enhances protein aggregation. In short, this work provides insight into the importance of shape complementarity in protein-protein interactions, especially for protein aggregation and antibody-antigen complexes. Shape complementarity is thus a third key factor affecting protein aggregation and complexation, besides electrostatic complementarity and hydrophobic complementarity. PMID:24253561

  16. The Role of Shape Complementarity in the Protein-Protein Interactions

    NASA Astrophysics Data System (ADS)

    Li, Ye; Zhang, Xianren; Cao, Dapeng

    2013-11-01

    We use a dissipative particle dynamics simulation to investigate the effects of shape complementarity on protein-protein interactions. By monitoring different kinds of protein shape-complementarity modes, we give a clear mechanism for the role of shape complementarity in protein-protein interactions: when two proteins with shape complementarity approach each other, the conformation of the lipid chains between them is restricted significantly. The lipid molecules tend to leave the gap formed by the two proteins to maximize their configurational entropy, and therefore yield an effective entropy-induced protein-protein attraction, which enhances protein aggregation. In short, this work provides insight into the importance of shape complementarity in protein-protein interactions, especially for protein aggregation and antibody-antigen complexes. Shape complementarity is thus a third key factor affecting protein aggregation and complexation, besides electrostatic complementarity and hydrophobic complementarity.

  17. An entropy maximization problem related to optical communication

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Rodemich, E. R.; Swanson, L.

    1986-01-01

    In relation to a problem in optical communication, the paper considers the general problem of maximizing the entropy of a stationary random process subject to an average transition cost constraint. Using a recent result of Justesen and Hoholdt, an exact solution to the problem is presented, and a class of finite-state encoders that give a good approximation to the exact solution is suggested.

  18. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

    The maximum entropy production (MEP) principle holds that non-equilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the rate of potential energy destruction. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and to orchestrate metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  19. Beating the Clauser-Horne-Shimony-Holt and the Svetlichny games with optimal states

    NASA Astrophysics Data System (ADS)

    Su, Hong-Yi; Ren, Changliang; Chen, Jing-Ling; Zhang, Fu-Lin; Wu, Chunfeng; Xu, Zhen-Peng; Gu, Mile; Vinjanampathy, Sai; Kwek, L. C.

    2016-02-01

    We study the relation between the maximal violation of Svetlichny's inequality and the mixedness of quantum states, and obtain the optimal states (i.e., maximally nonlocal mixed states, or MNMS, for each value of linear entropy) to beat the Clauser-Horne-Shimony-Holt and the Svetlichny games. For the two-qubit and three-qubit MNMS, we show that these states are also the most tolerant against white noise, and thus serve as valuable quantum resources for such games. In particular, the quantum prediction of the MNMS decreases as the linear entropy increases, and then ceases to be nonlocal when the linear entropy reaches the critical points 2/3 and 9/14 for the two- and three-qubit cases, respectively. The MNMS are related to classical errors in the experimental preparation of maximally entangled states.

  20. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine uses Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing the experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for the experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute-force search will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This reduces the number of computations necessary to find the optimal experiment. We also extend the method of maximizing entropy to maximizing joint entropy, so that it can be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward a common goal in an automated fashion.
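The selection rule described above, picking the experiment whose distribution of expected results has maximum entropy, can be sketched on a toy threshold-finding problem. All names and numbers below are hypothetical, not taken from the thesis:

```python
import math

def bernoulli_entropy(p):
    """Shannon entropy of a binary outcome with probability p."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

# Hypothetical setup: an unknown threshold t with a uniform grid prior;
# measuring at location x yields 1 if x >= t, else 0. The predictive
# probability of observing a "1" at x is the prior mass with t <= x.
grid = [i / 100 for i in range(101)]      # candidate values of t
prior = [1 / len(grid)] * len(grid)       # uniform prior over t

def predictive_p(x):
    return sum(w for t, w in zip(grid, prior) if t <= x)

candidates = [0.1, 0.3, 0.5, 0.7, 0.9]    # possible measurement locations
best_x = max(candidates, key=lambda x: bernoulli_entropy(predictive_p(x)))

# With a symmetric prior, the maximum-entropy probe sits near the median,
# where the outcome is least predictable and most informative.
assert best_x == 0.5
```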

  1. Spontaneous breaking of Lorentz invariance, black holes and perpetuum mobile of the 2nd kind

    NASA Astrophysics Data System (ADS)

    Dubovsky, S. L.; Sibiryakov, S. M.

    2006-07-01

    We study the effect of spontaneous breaking of Lorentz invariance on black hole thermodynamics. We consider a scenario where Lorentz symmetry breaking manifests itself by the difference of maximal velocities attainable by particles of different species in a preferred reference frame. The Lorentz breaking sector is represented by the ghost condensate. We find that the notions of black hole entropy and temperature lose their universal meaning. In particular, the standard derivation of the Hawking radiation yields that a black hole does emit thermal radiation in any given particle species, but with a temperature depending on the maximal attainable velocity of this species. We demonstrate that this property implies violation of the second law of thermodynamics, and hence allows construction of a perpetuum mobile of the 2nd kind. We discuss a possible interpretation of these results.

  2. Entanglement Equilibrium and the Einstein Equation.

    PubMed

    Jacobson, Ted

    2016-05-20

    A link between the semiclassical Einstein equation and a maximal vacuum entanglement hypothesis is established. The hypothesis asserts that entanglement entropy in small geodesic balls is maximized at fixed volume in a locally maximally symmetric vacuum state of geometry and quantum fields. A qualitative argument suggests that the Einstein equation implies the validity of the hypothesis. A more precise argument shows that, for first-order variations of the local vacuum state of conformal quantum fields, the vacuum entanglement is stationary if and only if the Einstein equation holds. For nonconformal fields, the same conclusion follows modulo a conjecture about the variation of entanglement entropy.

  3. Entropy bound of local quantum field theory with generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Wan; Lee, Hyung Won; Myung, Yun Soo

    2009-03-01

    We study the entropy bound for local quantum field theory (LQFT) with the generalized uncertainty principle. The generalized uncertainty principle naturally provides a UV cutoff to the LQFT through gravity effects. Imposing the non-gravitational collapse condition as the UV-IR relation, we find that the maximal entropy of a bosonic field is limited by the entropy bound A^(3/4) rather than A, where A is the boundary area.

  4. Partial knowledge, entropy, and estimation

    PubMed Central

    MacQueen, James; Marschak, Jacob

    1975-01-01

    In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p ≡ (p_1, ..., p_n) by maximizing the entropy H(p) ≡ -Σ p_i log p_i, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) It is "conservative," and therefore good, to maximize "uncertainty," as (uniquely) represented by the entropy parameter. (β) One should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when "complete ignorance" is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds. PMID:16578733
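The constrained maximization described in the abstract has the classical exponential-family solution p_i ∝ exp(λ x_i), with the Lagrange multiplier λ chosen so the constraint holds. A minimal sketch for a mean constraint on a die (a Brandeis-dice style illustration, not taken from the paper):

```python
import math

def maxent_with_mean(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution on `values` with a fixed mean:
    p_i proportional to exp(lam * x_i); lam found by bisection,
    since the mean is monotone increasing in lam."""
    values = list(values)
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Die faces 1..6 with the constrained mean raised from 3.5 to 4.5.
p = maxent_with_mean(range(1, 7), 4.5)
mean = sum(x * pi for x, pi in zip(range(1, 7), p))
assert abs(mean - 4.5) < 1e-6
# With a mean above 3.5, the maxent weights tilt toward larger faces.
assert p[5] > p[0]
```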

  5. Nonequilibrium thermodynamics and information theory: basic concepts and relaxing dynamics

    NASA Astrophysics Data System (ADS)

    Altaner, Bernhard

    2017-11-01

    Thermodynamics is based on the notions of energy and entropy. While energy is the elementary quantity governing physical dynamics, entropy is the fundamental concept in information theory. In this work, starting from first principles, we give a detailed didactic account of the relations between energy and entropy, and thus between physics and information theory. We show that thermodynamic process inequalities, like the second law, are equivalent to the requirement that an effective description for physical dynamics is strongly relaxing. From the perspective of information theory, strongly relaxing dynamics govern the irreversible convergence of a statistical ensemble towards the maximally non-committal probability distribution that is compatible with thermodynamic equilibrium parameters. In particular, Markov processes that converge to a thermodynamic equilibrium state are strongly relaxing. Our framework generalizes previous results to arbitrary open and driven systems, yielding novel thermodynamic bounds for idealized and real processes. This article is part of a collection featuring invited work from the best early-career researchers working within the scope of J. Phys. A, part of the Journal of Physics series' 50th anniversary celebrations in 2017; Bernhard Altaner was selected by the Editorial Board of J. Phys. A as an Emerging Talent.

  6. Use and validity of principles of extremum of entropy production in the study of complex systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitor Reis, A., E-mail: ahr@uevora.pt

    2014-07-15

    It is shown how both principles of extremum of entropy production, which are often used in the study of complex systems, follow from the maximization of overall system conductivities under appropriate constraints. In this way, the maximum rate of entropy production (MEP) occurs when all the forces in the system are kept constant. On the other hand, the minimum rate of entropy production (mEP) occurs when all the currents that cross the system are kept constant. A brief discussion of the validity of the application of the mEP and MEP principles in several cases, and in particular to the Earth's climate, is also presented. Highlights: • The principles of extremum of entropy production are not first principles. • They result from the maximization of conductivities under appropriate constraints. • The conditions of their validity are set explicitly. • Some long-standing controversies are discussed and clarified.

  7. Generalized Entanglement Entropies of Quantum Designs.

    PubMed

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-30

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.

  8. Generalized Entanglement Entropies of Quantum Designs

    NASA Astrophysics Data System (ADS)

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-01

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.

  9. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, combining its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first- and second-order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
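The Newton iteration in the Lagrange multipliers of the dual problem can be illustrated on a drastically reduced toy: a 1D velocity grid with only two moments (density and momentum), where the gradient of the dual is the moment mismatch and its Hessian is the matrix of higher moments. The grid and target moments here are hypothetical; the paper's 35-moment system is far larger:

```python
import math

# Hypothetical 1D velocity grid and target moments (density, momentum).
vs = [-5 + 0.1 * i for i in range(101)]
dv = 0.1
target = (1.0, 0.2)

def moments(alpha):
    """Zeroth, first, and second moments of f(v) = exp(a0 + a1*v)."""
    f = [math.exp(alpha[0] + alpha[1] * v) * dv for v in vs]
    m0 = sum(f)
    m1 = sum(fi * v for fi, v in zip(f, vs))
    m2 = sum(fi * v * v for fi, v in zip(f, vs))
    return m0, m1, m2

alpha = [0.0, 0.0]
for _ in range(50):                         # Newton iteration on the dual
    m0, m1, m2 = moments(alpha)
    g0, g1 = m0 - target[0], m1 - target[1] # gradient: moment mismatch
    # Hessian [[m0, m1], [m1, m2]]; the 2x2 system is solved directly.
    det = m0 * m2 - m1 * m1
    alpha[0] += (-g0 * m2 + g1 * m1) / det
    alpha[1] += (-g1 * m0 + g0 * m1) / det

m0, m1, _ = moments(alpha)
assert abs(m0 - 1.0) < 1e-6 and abs(m1 - 0.2) < 1e-6
```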

  10. Identifying Student Resources in Reasoning about Entropy and the Approach to Thermal Equilibrium

    ERIC Educational Resources Information Center

    Loverude, Michael

    2015-01-01

    As part of an ongoing project to examine student learning in upper-division courses in thermal and statistical physics, we have examined student reasoning about entropy and the second law of thermodynamics. We have examined reasoning in terms of heat transfer, entropy maximization, and statistical treatments of multiplicity and probability. In…

  11. Entangled Dynamics in Macroscopic Quantum Tunneling of Bose-Einstein Condensates

    NASA Astrophysics Data System (ADS)

    Alcala, Diego A.; Glick, Joseph A.; Carr, Lincoln D.

    2017-05-01

    Tunneling of a quasibound state is a nonsmooth process in the entangled many-body case. Using time-evolving block decimation, we show that repulsive (attractive) interactions speed up (slow down) tunneling. While the escape time scales exponentially with small interactions, the maximization time of the von Neumann entanglement entropy between the remaining quasibound and escaped atoms scales quadratically. Stronger interactions require higher-order corrections. Entanglement entropy is maximized when about half the atoms have escaped.
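The von Neumann entanglement entropy that the abstract tracks can be computed directly for small systems. A sketch for a two-qubit pure state (illustrative only; the paper works with many-body states via time-evolving block decimation):

```python
import math

def entanglement_entropy(psi):
    """Von Neumann entropy of qubit A for a two-qubit pure state
    psi = [c00, c01, c10, c11] (real amplitudes for simplicity)."""
    c00, c01, c10, c11 = psi
    # Reduced density matrix rho_A = Tr_B |psi><psi| = [[a, b], [b, d]].
    a = c00**2 + c01**2
    d = c10**2 + c11**2
    b = c00 * c10 + c01 * c11
    # Eigenvalues of the 2x2 symmetric matrix, computed in closed form.
    s = math.sqrt((a - d) ** 2 + 4 * b * b)
    lams = [(a + d + s) / 2, (a + d - s) / 2]
    return -sum(l * math.log(l) for l in lams if l > 1e-15)

r = 1 / math.sqrt(2)
bell = [r, 0.0, 0.0, r]         # maximally entangled Bell state
product = [1.0, 0.0, 0.0, 0.0]  # product state, no entanglement

# The Bell state saturates the one-qubit maximum ln(2); a product state gives 0.
assert abs(entanglement_entropy(bell) - math.log(2)) < 1e-12
assert entanglement_entropy(product) < 1e-12
```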

  12. Taxi trips distribution modeling based on Entropy-Maximizing theory: A case study in Harbin city-China

    NASA Astrophysics Data System (ADS)

    Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie

    2018-03-01

    Understanding the Origin-Destination (OD) distribution of taxi trips is very important for improving the effects of transportation planning and enhancing the quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model the OD distribution in Harbin city using large-scale taxi GPS trajectories. First, a K-means clustering method is utilized to partition raw pick-up and drop-off locations into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from the first twenty days are treated as the training dataset and the last ten days as the testing dataset. The training dataset is used to calibrate the model, while the testing dataset is used to validate it. Furthermore, three indicators, mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate the training and testing performance of the Entropy-Maximizing model against the Gravity model. The results demonstrate that the Entropy-Maximizing model is superior to the Gravity model. The findings validate the feasibility of deriving OD distributions from taxi GPS data in urban systems.
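Under fixed origin and destination totals, the entropy-maximizing OD model reduces to the doubly constrained gravity form T_ij = A_i B_j O_i D_j exp(−β c_ij), with the balancing factors A_i, B_j found by iterative proportional fitting. A toy sketch with made-up zone totals and costs (not the Harbin data):

```python
import math

# Hypothetical 3-zone example: origin totals O, destination totals D,
# generalized costs c_ij, and a cost-sensitivity parameter beta.
O = [100.0, 200.0, 150.0]
D = [180.0, 120.0, 150.0]
cost = [[1.0, 3.0, 2.0],
        [3.0, 1.0, 2.5],
        [2.0, 2.5, 1.0]]
beta = 0.5

f = [[math.exp(-beta * cost[i][j]) for j in range(3)] for i in range(3)]
A = [1.0] * 3
B = [1.0] * 3
for _ in range(200):   # Furness / iterative proportional fitting
    A = [1.0 / sum(B[j] * D[j] * f[i][j] for j in range(3)) for i in range(3)]
    B = [1.0 / sum(A[i] * O[i] * f[i][j] for i in range(3)) for j in range(3)]

T = [[A[i] * B[j] * O[i] * D[j] * f[i][j] for j in range(3)] for i in range(3)]

# Row and column sums reproduce the origin and destination constraints.
assert all(abs(sum(T[i]) - O[i]) < 1e-6 for i in range(3))
assert all(abs(sum(T[i][j] for i in range(3)) - D[j]) < 1e-6 for j in range(3))
```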

  13. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Treesearch

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  14. Free Energy in Introductory Physics

    NASA Astrophysics Data System (ADS)

    Prentis, Jeffrey J.; Obsniuk, Michael J.

    2016-02-01

    Energy and entropy are two of the most important concepts in science. For all natural processes where a system exchanges energy with its environment, the energy of the system tends to decrease and the entropy of the system tends to increase. Free energy is the special concept that specifies how to balance the opposing tendencies to minimize energy and maximize entropy. There are many pedagogical articles on energy and entropy. Here we present a simple model to illustrate the concept of free energy and the principle of minimum free energy.

  15. Competition between Homophily and Information Entropy Maximization in Social Networks

    PubMed Central

    Zhao, Jichang; Liang, Xiao; Xu, Ke

    2015-01-01

    In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which could be stated as homophily breeding new connections. Meanwhile, the recent hypothesis of maximum information entropy has been presented as a possible origin of effective navigation in small-world networks. We find that there exists a competition between information entropy maximization and homophily in local structure, through both theoretical and experimental analysis. This competition suggests that a newly built relationship between two individuals with more common friends would lead to less information entropy gain for them. We demonstrate that both assumptions coexist in the evolution of the social network. The rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally and gives individuals strong and trusted ties. A toy model is also presented to demonstrate the competition and evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994

  16. Settlement Dynamics and Hierarchy from Agent Decision-Making: a Method Derived from Entropy Maximization.

    PubMed

    Altaweel, Mark

    2015-01-01

    This paper presents an agent-based complex system simulation of settlement structure change using methods derived from entropy maximization modeling. The approach is applied to model the movement of people and goods in urban settings to study how settlement size hierarchy develops. While entropy maximization is well known for assessing settlement structure change over different spatiotemporal settings, approaches have rarely attempted to develop and apply this methodology to understand how individual and household decisions may affect settlement size distributions. A new method developed in this paper allows individual decision-makers to choose where to settle based on social-environmental factors and to evaluate settlements based on geography and relative benefits, while retaining concepts derived from entropy maximization, with settlement size affected by movement-ability and site-attractiveness feedbacks. To demonstrate the applicability of the theoretical and methodological approach, case study settlement patterns from the Middle Bronze Age (MBA) and Iron Age (IA) in the Iraqi North Jazirah Survey (NJS) are used. Results indicate clear differences in settlement factors and household choices in simulations that lead to settlement size hierarchies comparable to the two evaluated periods. Conflict and socio-political cohesion, both in their presence and absence, are suggested to have major roles in affecting the observed settlement hierarchy. More broadly, the model is made applicable to different empirically based settings, while being generalized to incorporate data uncertainty, making it useful for understanding urbanism from top-down and bottom-up perspectives.

  17. Time-series analysis of sleep wake stage of rat EEG using time-dependent pattern entropy

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryuji; Shinba, Toshikazu; Mugishima, Go; Haraguchi, Hikaru; Inoue, Masayoshi

    2008-05-01

    We performed electroencephalography (EEG) for six male Wistar rats to clarify temporal behaviors at different levels of consciousness. Levels were identified both by conventional sleep analysis methods and by our novel entropy method. In our method, time-dependent pattern entropy is introduced, by which EEG is reduced to binary symbolic dynamics and the pattern of symbols in a sliding temporal window is considered. A high correlation was obtained between level of consciousness as measured by the conventional method and mean entropy in our entropy method. Mean entropy was maximal while awake (stage W) and decreased as sleep deepened. These results suggest that time-dependent pattern entropy may offer a promising method for future sleep research.
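A minimal version of the sliding-window pattern entropy described above: binarize the signal around its median, then take the Shannon entropy of short binary words within each window. The word and window lengths below are arbitrary choices for illustration, not the paper's settings:

```python
import math
import random

def pattern_entropy(signal, word_len=3, window=50):
    """Reduce a signal to binary symbols (above/below its median), then
    return the Shannon entropy of `word_len`-bit patterns in each
    sliding window (one value per window position)."""
    med = sorted(signal)[len(signal) // 2]
    bits = [1 if x > med else 0 for x in signal]
    out = []
    for start in range(0, len(bits) - window + 1):
        seg = bits[start:start + window]
        counts = {}
        for i in range(window - word_len + 1):
            w = tuple(seg[i:i + word_len])
            counts[w] = counts.get(w, 0) + 1
        n = sum(counts.values())
        out.append(-sum(c / n * math.log(c / n) for c in counts.values()))
    return out

random.seed(0)
noisy = [random.random() for _ in range(200)]   # irregular, "awake"-like
regular = [i % 2 for i in range(200)]           # a fixed rhythm

# Irregular activity yields higher mean pattern entropy than a fixed rhythm,
# mirroring the paper's finding that entropy is maximal in stage W.
mean = lambda xs: sum(xs) / len(xs)
assert mean(pattern_entropy(noisy)) > mean(pattern_entropy(regular))
```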

  18. Optimization and experimental validation of a thermal cycle that maximizes entropy coefficient fisher identifiability for lithium iron phosphate cells

    NASA Astrophysics Data System (ADS)

    Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam

    2016-03-01

    This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24 h. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
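    The identifiability idea can be illustrated with a deliberately simplified linear model (an assumption for illustration, not the authors' electro-thermal cell model): if each voltage sample depends on the entropy coefficient k only through the applied temperature excursion, the Fisher information and the Cramér-Rao bound follow directly.

```python
import math

def fisher_info(delta_T, sigma_v=1e-4):
    """Fisher information of k in V_i = OCV + k*dT_i + e_i, e_i ~ N(0, sigma_v^2).

    The sensitivity dV_i/dk equals dT_i, so I(k) = sum(dT_i^2) / sigma_v^2:
    larger temperature excursions make k more identifiable.
    """
    return sum(dt * dt for dt in delta_T) / sigma_v ** 2

def crlb_std(delta_T, sigma_v=1e-4):
    """Cramer-Rao lower bound on the standard deviation of the estimate of k."""
    return 1.0 / math.sqrt(fisher_info(delta_T, sigma_v))
```

    In this toy model, doubling the thermal excursion halves the attainable estimation error, which is the sense in which optimizing the thermal cycle shortens the experiment.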

  19. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.

  20. Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.

    PubMed

    Ford, Ian J

    2015-11-01

    The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.
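    The constrained maximization itself can be illustrated numerically. This generic sketch (not the paper's stochastic-thermodynamics machinery) maximizes Shannon entropy over a discrete set of energy levels at fixed mean energy and recovers a Boltzmann-like distribution:

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0, 3.0])  # illustrative energy levels
E_mean = 1.0                        # fixed mean-energy constraint

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return float(np.sum(p * np.log(p)))

constraints = [
    {'type': 'eq', 'fun': lambda p: p.sum() - 1.0},   # normalization
    {'type': 'eq', 'fun': lambda p: p @ E - E_mean},  # mean energy
]
res = minimize(neg_entropy, np.full(len(E), 0.25), method='SLSQP',
               bounds=[(0.0, 1.0)] * len(E), constraints=constraints)
p = res.x
# For the maximizer p_i ∝ exp(-beta*E_i), log p is affine in E,
# so the successive ratios p[i+1]/p[i] are all equal.
ratios = p[1:] / p[:-1]
```

    The equal-ratio check is the numerical signature that the entropy-maximizing distribution is exponential in the constrained quantity.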

  1. Evidence for surprise minimization over value maximization in choice behavior

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents’ to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686

  2. On q-non-extensive statistics with non-Tsallisian entropy

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Korbel, Jan

    2016-02-01

    We combine an axiomatics of Rényi with the q-deformed version of Khinchin axioms to obtain a measure of information (i.e., entropy) which accounts both for systems with embedded self-similarity and non-extensivity. We show that the entropy thus obtained is uniquely solved in terms of a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding "high" and "low-temperature" asymptotics and reveal a non-trivial structure of the parameter space. Salient issues such as concavity and Schur concavity of the new entropy are also discussed.
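    For orientation, the plain Rényi entropy, one ingredient of the hybrid measure above (the paper's one-parameter family itself is not reproduced here), interpolates to the Shannon entropy as q → 1:

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy H_q(p) = log(sum_i p_i^q) / (1 - q), in nats.

    The q -> 1 limit recovers the Shannon entropy -sum_i p_i log p_i.
    """
    if abs(q - 1.0) < 1e-9:
        return -sum(x * math.log(x) for x in p if x > 0)
    return math.log(sum(x ** q for x in p)) / (1.0 - q)
```

    For the uniform distribution, H_q is independent of q, which is a quick sanity check on any implementation.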

  3. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  4. Extremal entanglement and mixedness in continuous variable systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-08-01

    We investigate the relationship between mixedness and entanglement for Gaussian states of continuous variable systems. We introduce generalized entropies based on Schatten p norms to quantify the mixedness of a state and derive their explicit expressions in terms of symplectic spectra. We compare the hierarchies of mixedness provided by such measures with the one provided by the purity (defined as Tr ρ² for the state ρ) for generic n-mode states. We then review the analysis proving the existence of both maximally and minimally entangled states at given global and marginal purities, with the entanglement quantified by the logarithmic negativity. Based on these results, we extend such an analysis to generalized entropies, introducing and fully characterizing maximally and minimally entangled states for given global and local generalized entropies. We compare the different roles played by the purity and by the generalized p entropies in quantifying the entanglement and the mixedness of continuous variable systems. We introduce the concept of average logarithmic negativity, showing that it allows a reliable quantitative estimate of continuous variable entanglement by direct measurements of global and marginal generalized p entropies.

  5. Faithful nonclassicality indicators and extremal quantum correlations in two-qubit states

    NASA Astrophysics Data System (ADS)

    Girolami, Davide; Paternostro, Mauro; Adesso, Gerardo

    2011-09-01

    The state disturbance induced by locally measuring a quantum system yields a signature of nonclassical correlations beyond entanglement. Here, we present a detailed study of such correlations for two-qubit mixed states. To overcome the asymmetry of quantum discord and the unfaithfulness of measurement-induced disturbance (severely overestimating quantum correlations), we propose an ameliorated measurement-induced disturbance as nonclassicality indicator, optimized over joint local measurements, and we derive its closed expression for relevant two-qubit states. We study its analytical relation with discord, and characterize the maximally quantum-correlated mixed states, that simultaneously extremize both quantifiers at given von Neumann entropy: among all two-qubit states, these states possess the most robust quantum correlations against noise.

  6. Evidence of the big fix

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2014-06-01

    We give evidence of the Big Fix. The theory of wormholes and the multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how it can be confirmed by experimental data, and we show that it is indeed true for the Higgs vacuum expectation value v_h. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self-coupling are fixed when we vary v_h. It turns out that the existence of atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; however, in our case it is required by the fundamental law.

  7. Deep inelastic scattering as a probe of entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharzeev, Dmitri E.; Levin, Eugene M.

    Using nonlinear evolution equations of QCD, we compute the von Neumann entropy of the system of partons resolved by deep inelastic scattering at a given Bjorken x and momentum transfer q^2 = -Q^2. We interpret the result as the entropy of entanglement between the spatial region probed by deep inelastic scattering and the rest of the proton. At small x the relation between the entanglement entropy S(x) and the parton distribution xG(x) becomes very simple: S(x) = ln[xG(x)]. In this small-x, large-rapidity-Y regime, all partonic microstates have equal probabilities: the proton is composed of an exponentially large number exp(ΔY) of microstates that occur with equal and exponentially small probabilities exp(-ΔY), where Δ is defined by xG(x) ~ 1/x^Δ. For this equipartitioned state, the entanglement entropy is maximal, so at small x, deep inelastic scattering probes a maximally entangled state. Here, we propose the entanglement entropy as an observable that can be studied in deep inelastic scattering. This will require event-by-event measurements of hadronic final states, and would allow study of the transformation of entanglement entropy into the Boltzmann one. We estimate that the proton is represented by the maximally entangled state at x ≤ 10^-3; this kinematic region will be amenable to studies at the Electron Ion Collider.

  8. Deep inelastic scattering as a probe of entanglement

    DOE PAGES

    Kharzeev, Dmitri E.; Levin, Eugene M.

    2017-06-03

    Using nonlinear evolution equations of QCD, we compute the von Neumann entropy of the system of partons resolved by deep inelastic scattering at a given Bjorken x and momentum transfer q^2 = -Q^2. We interpret the result as the entropy of entanglement between the spatial region probed by deep inelastic scattering and the rest of the proton. At small x the relation between the entanglement entropy S(x) and the parton distribution xG(x) becomes very simple: S(x) = ln[xG(x)]. In this small-x, large-rapidity-Y regime, all partonic microstates have equal probabilities: the proton is composed of an exponentially large number exp(ΔY) of microstates that occur with equal and exponentially small probabilities exp(-ΔY), where Δ is defined by xG(x) ~ 1/x^Δ. For this equipartitioned state, the entanglement entropy is maximal, so at small x, deep inelastic scattering probes a maximally entangled state. Here, we propose the entanglement entropy as an observable that can be studied in deep inelastic scattering. This will require event-by-event measurements of hadronic final states, and would allow study of the transformation of entanglement entropy into the Boltzmann one. We estimate that the proton is represented by the maximally entangled state at x ≤ 10^-3; this kinematic region will be amenable to studies at the Electron Ion Collider.
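    The small-x relation quoted above, S(x) = ln[xG(x)] with xG(x) ~ 1/x^Δ, implies S grows linearly in ln(1/x). A one-line numerical check (Δ and the normalization c are illustrative values, not fitted ones):

```python
import math

def entanglement_entropy(x, delta=0.3, c=1.0):
    """S(x) = ln[xG(x)] for the small-x parametrization xG(x) = c / x**delta,
    so S(x) = delta*ln(1/x) + ln(c). delta and c are illustrative."""
    return math.log(c / x ** delta)
```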

  9. Some New Properties of Quantum Correlations

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Fei; Wei, Yunxia

    2017-02-01

    Quantum coherence measures the correlation between different measurement results in a single system, while entanglement and quantum discord measure the correlation among different subsystems in a multipartite system. In this paper, we focus on the relative entropy form of these quantities and obtain three new properties: 1) general forms of maximally coherent states for the relative entropy coherence, 2) linear monogamy of the relative entropy entanglement, and 3) subadditivity of quantum discord. Here, linear monogamy means that the sum of the relative entropy entanglement over subsystems is bounded above by a small constant.

  10. Chemical equilibrium. [maximizing entropy of gas system to derive relations between thermodynamic variables

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The entropy of a gas system with the number of particles subject to external control is maximized to derive relations between the thermodynamic variables that obtain at equilibrium. These relations are described in terms of the chemical potential, defined as equivalent partial derivatives of entropy, energy, enthalpy, free energy, or free enthalpy. At equilibrium, the change in total chemical potential must vanish. This fact is used to derive the equilibrium constants for chemical reactions in terms of the partition functions of the species involved in the reaction. Thus the equilibrium constants can be determined accurately, just as other thermodynamic properties, from a knowledge of the energy levels and degeneracies for the gas species involved. These equilibrium constants permit one to calculate the equilibrium concentrations or partial pressures of chemically reacting species that occur in gas mixtures at any given condition of pressure and temperature or volume and temperature.
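    For the simplest case, an isomerization A ⇌ B, the recipe above reduces to a one-line equilibrium constant built from the partition functions and the ground-state energy difference. The numbers in the sketch below are placeholders, not data for a specific reaction:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def equilibrium_constant(qA, qB, dE, T):
    """K = (qB/qA) * exp(-dE / (kT)) for A <-> B, with qA, qB the
    species partition functions and dE the ground-state energy difference."""
    return (qB / qA) * math.exp(-dE / (K_B * T))

def equilibrium_fractions(K):
    """Mole fractions at equilibrium: xB/xA = K with xA + xB = 1."""
    xA = 1.0 / (1.0 + K)
    return xA, 1.0 - xA
```

    With equal partition functions and no energy offset, K = 1 and the mixture is half A, half B; a positive dE shifts the equilibrium toward A.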

  11. Entropy, non-linearity and hierarchy in ecosystems

    NASA Astrophysics Data System (ADS)

    Addiscott, T.

    2009-04-01

    Soil-plant systems are open systems thermodynamically because they exchange both energy and matter with their surroundings. Thus they are properly described by the second and third of the three stages of thermodynamics defined by Prigogine and Stengers (1984). The second stage describes a system in which the flow is linearly related to the force. Such a system tends towards a steady state in which entropy production is minimized, but it depends on the capacity of the system for self-organization. In a third stage system, flow is non-linearly related to force, and the system can move far from equilibrium. This system maximizes entropy production but in so doing facilitates self-organization. The second stage system was suggested earlier to provide a useful analogue of the behaviour of natural and agricultural ecosystems subjected to perturbations, but it needs the capacity for self-organization. Considering an ecosystem as a hierarchy suggests this capacity is provided by the soil population, which releases from dead plant matter nutrients such as nitrate, phosphate and cations needed for the growth of new plants and the renewal of the whole ecosystem. This release of small molecules from macromolecules increases entropy, and the soil population maximizes entropy production by releasing nutrients and carbon dioxide as vigorously as conditions allow. In so doing it behaves as a third stage thermodynamic system. Other authors (Schneider and Kay, 1994, 1995) consider that it is the plants in an ecosystem that maximize entropy, mainly through transpiration, but studies of transpiration efficiency suggest that this is questionable. Prigogine, I. & Stengers, I. 1984. Order out of chaos. Bantam Books, Toronto. Schneider, E.D. & Kay, J.J. 1994. Life as a manifestation of the Second Law of Thermodynamics. Mathematical & Computer Modelling, 19, 25-48. Schneider, E.D. & Kay, J.J. 1995. Order from disorder: The Thermodynamics of Complexity in Biology. 
In: What is Life: the Next Fifty Years (eds. M.P. Murphy & L.A.J. O'Neill), pp. 161-172, Cambridge University Press, Cambridge.

  12. Maximally Informative Hierarchical Representations of High-Dimensional Data

    DTIC Science & Technology

    2015-05-11

    will be considered discrete but the domain of the X_i's is not restricted. Entropy is defined in the usual way as H(X) ≡ E_X[log 1/p(x)]. We use...natural logarithms so that the unit of information is nats. Higher-order entropies can be constructed in various ways from this standard definition. For...sense, not truly high-dimensional and can be characterized separately. On the other hand, the entropy of X, H(X), can naively be considered the

  13. Entropy Inequality Violations from Ultraspinning Black Holes.

    PubMed

    Hennigar, Robie A; Mann, Robert B; Kubizňák, David

    2015-07-17

    We construct a new class of rotating anti-de Sitter (AdS) black hole solutions with noncompact event horizons of finite area in any dimension and study their thermodynamics. In four dimensions these black holes are solutions to gauged supergravity. We find that their entropy exceeds the maximum implied from the conjectured reverse isoperimetric inequality, which states that for a given thermodynamic volume, the black hole entropy is maximized for Schwarzschild-AdS space. We use this result to suggest more stringent conditions under which this conjecture may hold.

  14. Uncountably many maximizing measures for a dense subset of continuous functions

    NASA Astrophysics Data System (ADS)

    Shinoda, Mao

    2018-05-01

    Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given ‘performance’ function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions there exist uncountably many ergodic maximizing measures with full support and positive entropy.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goncharov, V.N.; Knauer, J.P.; McKenty, P.W.

    Hydrodynamic instabilities seeded by laser imprint and surface roughness limit the compression ratio and neutron yield in direct-drive inertial confinement fusion target designs. New improved-performance designs use adiabat shaping to increase the entropy of only the outer portion of the shell, reducing the instability growth. The inner portion of the shell is kept at lower entropy to maximize shell compressibility. The adiabat shaping is implemented using a high-intensity picket in front of the main-drive pulse. The picket launches a strong shock that decays as it propagates through the shell. This increases the ablation velocity and reduces the Rayleigh-Taylor growth rates. In addition, as shown earlier [T.J.B. Collins and S. Skupsky, Phys. Plasmas 9, 275 (2002)], the picket reduces the instability seed due to the laser imprint. To test the results of calculations, a series of picket-pulse implosions of CH capsules was performed on the OMEGA laser system [T.R. Boehly, D.L. Brown, R.S. Craxton, et al., Opt. Commun. 133, 495 (1997)]. The experiments demonstrated a significant improvement in target yields for pulses with the picket compared to pulses without it. Results of the theory and experiments with adiabat shaping are being extended to future OMEGA and National Ignition Facility [J.A. Paisner, J.D. Boyes, S.A. Kumpan, W.H. Lowdermilk, and M.S. Sorem, Laser Focus World 30, 75 (1994)] cryogenic target designs.

  16. Atomic and electronic basis for the serrations of refractory high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Wang, William Yi; Shang, Shun Li; Wang, Yi; Han, Fengbo; Darling, Kristopher A.; Wu, Yidong; Xie, Xie; Senkov, Oleg N.; Li, Jinshan; Hui, Xi Dong; Dahmen, Karin A.; Liaw, Peter K.; Kecskes, Laszlo J.; Liu, Zi-Kui

    2017-06-01

    Refractory high-entropy alloys present attractive mechanical properties, i.e., high yield strength and fracture toughness, making them potential candidates for structural applications. Understanding atomic and electronic interactions is important to reveal the origins of the formation of high-entropy alloys and their structure-dominated mechanical properties, thus enabling the development of a predictive approach for rapidly designing advanced materials. Here, we report the atomic and electronic basis for the valence-electron-concentration-categorized principles and the observed serration behavior in high-entropy alloys and a high-entropy metallic glass, including MoNbTaW, MoNbVW, MoTaVW, HfNbTiZr, and Vitreloy-1 MG (Zr41Ti14Cu12.5Ni10Be22.5). We find that the yield strengths of high-entropy alloys and the high-entropy metallic glass are a power-law function of the electron work function, which is dominated by local atomic arrangements. Further, reliance on the bonding-charge density provides a groundbreaking insight into the nature of loosely bonded spots in materials. The presence of strongly bonded clusters and weakly bonded glue atoms implies serrated deformation of high-entropy alloys, resulting in intermittent avalanches of defect movement.

  17. Relating quantum coherence and correlations with entropy-based measures.

    PubMed

    Wang, Xiao-Li; Yue, Qiu-Ling; Yu, Chao-Hua; Gao, Fei; Qin, Su-Juan

    2017-09-21

    Quantum coherence and quantum correlations are important quantum resources for quantum computation and quantum information. In this paper, using entropy-based measures, we investigate the relationships between quantum correlated coherence, which is the coherence between subsystems, and two main kinds of quantum correlations as defined by quantum discord as well as quantum entanglement. In particular, we show that quantum discord and quantum entanglement can be well characterized by quantum correlated coherence. Moreover, we prove that the entanglement measure formulated by quantum correlated coherence is lower and upper bounded by the relative entropy of entanglement and the entanglement of formation, respectively, and equal to the relative entropy of entanglement for all the maximally correlated states.

  18. Neuronal Entropy-Rate Feature of Entopeduncular Nucleus in Rat Model of Parkinson's Disease.

    PubMed

    Darbin, Olivier; Jin, Xingxing; Von Wrangel, Christof; Schwabe, Kerstin; Nambu, Atsushi; Naritoku, Dean K; Krauss, Joachim K; Alam, Mesbah

    2016-03-01

    The function of the nigro-striatal pathway on neuronal entropy in the basal ganglia (BG) output nucleus, i.e. the entopeduncular nucleus (EPN), was investigated in the unilaterally 6-hydroxydopamine (6-OHDA)-lesioned rat model of Parkinson's disease (PD). In both control subjects and subjects with a 6-OHDA lesion of the dopaminergic (DA) nigro-striatal pathway, a histological hallmark of parkinsonism, neuronal entropy in the EPN was maximal in neurons with firing rates between 15 and 25 Hz. In 6-OHDA-lesioned rats, neuronal entropy in the EPN was specifically higher in neurons with firing rates above 25 Hz. Our data establish that the nigro-striatal pathway controls neuronal entropy in the motor circuitry and that the parkinsonian condition is associated with an abnormal relationship between firing rate and neuronal entropy in BG output nuclei. The firing-rate and entropy relationship provides electrophysiological information relevant to investigating sensory-motor processing in the normal condition and in conditions such as movement disorders.

  19. RELATIONSHIP BETWEEN ENTROPY OF SPIKE TIMING AND FIRING RATE IN ENTOPEDUNCULAR NUCLEUS NEURONS IN ANESTHETIZED RATS: FUNCTION OF THE NIGRO-STRIATAL PATHWAY

    PubMed Central

    Darbin, Olivier; Jin, Xingxing; von Wrangel, Christof; Schwabe, Kerstin; Nambu, Atsushi; Naritoku, Dean K; Krauss, Joachim K.; Alam, Mesbah

    2016-01-01

    The function of the nigro-striatal pathway on neuronal entropy in the basal ganglia (BG) output nucleus (entopeduncular nucleus, EPN) was investigated in the unilaterally 6-hydroxydopamine (6-OHDA)-lesioned rat model of Parkinson's disease (PD). In both control subjects and subjects with a 6-OHDA lesion of the nigro-striatal pathway, a histological hallmark of parkinsonism, neuronal entropy in the EPN was maximal in neurons with firing rates between 15 Hz and 25 Hz. In 6-OHDA-lesioned rats, neuronal entropy in the EPN was specifically higher in neurons with firing rates above 25 Hz. Our data establish that the nigro-striatal pathway controls neuronal entropy in the motor circuitry and that the parkinsonian condition is associated with an abnormal relationship between firing rate and neuronal entropy in BG output nuclei. The firing-rate and entropy relationship provides electrophysiological information relevant to investigating sensory-motor processing in the normal condition and in conditions with movement disorders. PMID:26711712

  20. Maximum Relative Entropy of Coherence: An Operational Coherence Measure.

    PubMed

    Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde

    2017-10-13

    The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
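    The asymptotic benchmark mentioned above, the relative entropy of coherence C(ρ) = S(Δ(ρ)) - S(ρ) with Δ(ρ) the state dephased in the reference basis, is easy to sketch numerically (a generic illustration, not the smoothed maximum-relative-entropy construction of the paper):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop zero eigenvalues (0*log 0 = 0)
    return float(-(evals * np.log2(evals)).sum())

def rel_entropy_coherence(rho):
    """Relative entropy of coherence: S(diag(rho)) - S(rho)."""
    dephased = np.diag(np.diag(rho))  # kill off-diagonal terms
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)
```

    For the maximally coherent qubit state |+><+| this gives 1 bit, while any diagonal (incoherent) state gives 0.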

  1. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques selected based on image contextual information.
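    The remap-then-code idea can be illustrated with first-order byte entropy and a previous-sample differential remap (a toy sketch, not the paper's full DPCM/segmentation pipeline):

```python
from collections import Counter
import math

def byte_entropy(data):
    """First-order Shannon entropy of a byte sequence, in bits/symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def delta_remap(data):
    """Previous-sample differential remapping (mod 256): smooth data maps
    to small residuals, lowering the first-order entropy before coding."""
    return [data[0]] + [(data[i] - data[i - 1]) % 256 for i in range(1, len(data))]
```

    On a smooth ramp the remapped entropy drops from 8 bits/symbol to well under 1 bit/symbol; that gap is the headroom an arithmetic coder then exploits.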

  2. MoNbTaV Medium-Entropy Alloy

    DOE PAGES

    Yao, Hongwei; Qiao, Jun -Wei; Gao, Michael; ...

    2016-05-19

    Guided by CALPHAD (Calculation of Phase Diagrams) modeling, the refractory medium-entropy alloy MoNbTaV was synthesized by vacuum arc melting under a high-purity argon atmosphere. A body-centered cubic solid solution phase was experimentally confirmed in the as-cast ingot using X-ray diffraction and scanning electron microscopy. The measured lattice parameter of the alloy (3.208 Å) obeys the rule of mixtures (ROM), but the Vickers microhardness (4.95 GPa) and the yield strength (1.5 GPa) are about 4.5 and 4.6 times those estimated from the ROM, respectively. A simple solid-solution-strengthening model predicts a yield strength of approximately 1.5 GPa. In conclusion, thermodynamic analysis shows that the total entropy of the alloy is more than three times the configurational entropy at room temperature, and the entropy of mixing exhibits a small negative departure from ideal mixing.

  3. Entropy for quantum pure states and quantum H theorem

    NASA Astrophysics Data System (ADS)

    Han, Xizhi; Wu, Biao

    2015-06-01

    We construct a complete set of Wannier functions that are localized at both given positions and momenta. This allows us to introduce the quantum phase space, onto which a quantum pure state can be mapped unitarily. Using its probability distribution in quantum phase space, we define an entropy for a quantum pure state. We prove an inequality regarding the long-time behavior of our entropy's fluctuation. For a typical initial state, this inequality indicates that our entropy can relax dynamically to a maximized value and stay there most of the time with small fluctuations. This result echoes the quantum H theorem proved by von Neumann [Zeitschrift für Physik 57, 30 (1929), 10.1007/BF01339852]. Our entropy is different from the standard von Neumann entropy, which is always zero for quantum pure states. According to our definition, a system always has bigger entropy than its subsystem, even when the system is described by a pure state. As the construction of the Wannier basis can be implemented numerically, the dynamical evolution of our entropy is illustrated with an example.

  4. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, exploiting its inherent fine-grained parallelism on multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
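The dual optimization underlying such closures can be illustrated in one velocity dimension: Newton iteration on the Lagrange multipliers of the ansatz f(v) = exp(λ₀ + λ₁v + λ₂v²) until its quadrature moments match prescribed (mass, momentum, energy) targets. The sketch below is a hypothetical toy with a plain midpoint quadrature and only three moments; the paper's 35-moment system, adaptive quadrature, and GPU parallelization are far beyond it.

```python
import math

def solve3(a, b):
    """Gaussian elimination with partial pivoting for a 3x3 system a x = b."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            fac = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= fac * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def maxent_closure_1d(targets, vgrid, iters=60):
    """Newton iteration in the dual (Lagrange-multiplier) variables of the
    maximum-entropy ansatz f(v) = exp(lam0 + lam1*v + lam2*v^2).
    The gradient of the dual objective is (quadrature moments - targets),
    and its Hessian is the matrix of higher quadrature moments."""
    dv = vgrid[1] - vgrid[0]
    lam = [0.0, 0.0, -0.5]  # crude initial guess; adequate for this toy
    for _ in range(iters):
        f = [math.exp(lam[0] + lam[1] * v + lam[2] * v * v) for v in vgrid]
        mom = [dv * sum(fi * v ** k for fi, v in zip(f, vgrid)) for k in range(5)]
        grad = [mom[k] - targets[k] for k in range(3)]
        hess = [[mom[i + j] for j in range(3)] for i in range(3)]
        lam = [l - s for l, s in zip(lam, solve3(hess, grad))]
    return lam

# Matching Gaussian targets (mass 1, momentum 0, energy 1) must recover
# the standard normal: lam2 -> -1/2, lam0 -> -ln(2*pi)/2.
vgrid = [-8.0 + 0.02 * i for i in range(801)]
lam = maxent_closure_1d([1.0, 0.0, 1.0], vgrid)
```

The inner moment evaluations over `vgrid` are the hot loop that the paper's GPU implementation parallelizes.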

  5. Quantum thermalization through entanglement in an isolated many-body system.

    PubMed

    Kaufman, Adam M; Tai, M Eric; Lukin, Alexander; Rispoli, Matthew; Schittko, Robert; Preiss, Philipp M; Greiner, Markus

    2016-08-19

    Statistical mechanics relies on the maximization of entropy in a system at thermal equilibrium. However, an isolated quantum many-body system initialized in a pure state remains pure during Schrödinger evolution, and in this sense it has static, zero entropy. We experimentally studied the emergence of statistical mechanics in a quantum state and observed the fundamental role of quantum entanglement in facilitating this emergence. Microscopy of an evolving quantum system indicates that the full quantum state remains pure, whereas thermalization occurs on a local scale. We directly measured entanglement entropy, which assumes the role of the thermal entropy in thermalization. The entanglement creates local entropy that validates the use of statistical physics for local observables. Our measurements are consistent with the eigenstate thermalization hypothesis. Copyright © 2016, American Association for the Advancement of Science.

  6. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage that the common definition of flow entropy does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the conventional flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
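For reference, the basic flow entropy being extended here is a Shannon entropy over flow fractions. A minimal single-level sketch with hypothetical flows (the full network definition also sums entropies over demand nodes, and the diameter-sensitive variant reweights by pipe diameter and velocity, which is omitted here):

```python
import math

def flow_entropy(flows):
    """Shannon entropy of the flow fractions q_i / Q across pipes.
    A uniform flow split over n pipes attains the maximum value ln(n),
    which is why maximizing it favors evenly loaded, redundant layouts."""
    total = sum(flows)
    fractions = [q / total for q in flows if q > 0]
    return -sum(p * math.log(p) for p in fractions)

# Hypothetical pipe flows: an even split versus one dominant pipe.
uniform = flow_entropy([5.0, 5.0, 5.0, 5.0])
skewed = flow_entropy([17.0, 1.0, 1.0, 1.0])
```

The skewed layout scores lower because a failure of the dominant pipe would remove most of the network's carrying capacity.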

  7. High resolution schemes and the entropy condition

    NASA Technical Reports Server (NTRS)

    Osher, S.; Chakravarthy, S.

    1983-01-01

    A systematic procedure for constructing semidiscrete, second-order accurate, variation diminishing, five-point bandwidth approximations to scalar conservation laws is presented. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, convergence to the unique physically correct solution is proven. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first-order accurate scheme, and it is shown (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.

  8. 1+1 dimensional compactifications of string theory.

    PubMed

    Goheer, Naureen; Kleban, Matthew; Susskind, Leonard

    2004-05-14

    We argue that stable, maximally symmetric compactifications of string theory to 1+1 dimensions are in conflict with holography. In particular, the finite horizon entropies of the Rindler wedge in 1+1 dimensional Minkowski and anti-de Sitter space, and of the de Sitter horizon in any dimension, are inconsistent with the symmetries of these spaces. The argument parallels one made recently by the same authors, in which we demonstrated the incompatibility of the finiteness of the entropy and the symmetries of de Sitter space in any dimension. If the horizon entropy is either infinite or zero, the conflict is resolved.

  9. Are all maximally entangled states pure?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavalcanti, D.; Brandao, F.G.S.L.; Terra Cunha, M.O.

    We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.

  10. Are all maximally entangled states pure?

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.

    2005-10-01

    We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Huanqiang (School of Physical Sciences, University of Queensland, Brisbane, Queensland 4072); Barthel, Thomas

    We investigate boundary critical phenomena from a quantum-information perspective. Bipartite entanglement in the ground state of one-dimensional quantum systems is quantified using the Rényi entropy S_α, which includes the von Neumann entropy (α → 1) and the single-copy entanglement (α → ∞) as special cases. We identify the contribution of the boundaries to the Rényi entropy, and show that there is an entanglement loss along boundary renormalization group (RG) flows. This property, which is intimately related to the Affleck-Ludwig g theorem, is a consequence of majorization relations between the spectra of the reduced density matrix along the boundary RG flows. We also point out that the bulk contribution to the single-copy entanglement is half of that to the von Neumann entropy, whereas the boundary contribution is the same.

  12. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
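In the classical ergodic setting the MEP reduces to constrained maximization of Boltzmann-Gibbs-Shannon entropy. A minimal sketch, assuming a finite state space and a single mean-value constraint: the maximizer is the exponential family p_i ∝ exp(-βx_i), and the Lagrange multiplier β can be found by bisection. (The paper's generalized (c,d)-entropies for non-ergodic processes are not treated by this toy.)

```python
import math

def maxent_with_mean(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over `values` subject to a fixed mean.
    The Lagrangian solution is p_i proportional to exp(-beta * x_i); since
    the mean is strictly decreasing in beta, bisection pins down beta."""
    def mean_for(beta):
        w = [math.exp(-beta * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:  # mean too high -> increase beta
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# An unconstraining target (the plain average) yields the uniform
# distribution; a lower target tilts probability toward small values.
uniform = maxent_with_mean([0, 1, 2, 3], 1.5)
tilted = maxent_with_mean([0, 1, 2, 3], 0.5)
```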

  13. Entanglement Entropy of Eigenstates of Quadratic Fermionic Hamiltonians.

    PubMed

    Vidmar, Lev; Hackl, Lucas; Bianchi, Eugenio; Rigol, Marcos

    2017-07-14

    In a seminal paper [D. N. Page, Phys. Rev. Lett. 71, 1291 (1993), 10.1103/PhysRevLett.71.1291], Page proved that the average entanglement entropy of subsystems of random pure states is S_ave ≈ ln D_A − (1/2) D_A²/D for 1 ≪ D_A ≤ √D, where D_A and D are the Hilbert space dimensions of the subsystem and the system, respectively. Hence, typical pure states are (nearly) maximally entangled. We develop tools to compute the average entanglement entropy ⟨S⟩ of all eigenstates of quadratic fermionic Hamiltonians. In particular, we derive exact bounds for the most general translationally invariant models: ln D_A − (ln D_A)²/ln D ≤ ⟨S⟩ ≤ ln D_A − [1/(2 ln 2)](ln D_A)²/ln D. Consequently, we prove that (i) if the subsystem size is a finite fraction of the system size, then ⟨S⟩

  14. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
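The two information-theoretic design objectives named above are directly computable from discretized station records. A minimal sketch with hypothetical gauge series (how the series are binned is precisely the quantization choice the authors flag as influential):

```python
import math
from collections import Counter

def entropy(*series):
    """Shannon entropy (in bits) of one or more jointly observed,
    already-discretized series of equal length."""
    counts = Counter(zip(*series))
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def total_correlation(*series):
    """Sum of marginal entropies minus the joint entropy; zero iff the
    series are independent. This is the redundancy term the design
    method minimizes."""
    return sum(entropy(s) for s in series) - entropy(*series)

# Hypothetical discretized gauge records: g2 duplicates g1, g3 is distinct.
g1 = [0, 0, 1, 1, 2, 2, 0, 1]
g2 = [0, 0, 1, 1, 2, 2, 0, 1]
g3 = [1, 0, 0, 1, 1, 0, 0, 1]
# Adding a duplicate gauge leaves the joint entropy unchanged but
# inflates total correlation -- the redundant station a good design avoids.
```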

  15. Is the hypothesis about a low entropy initial state of the Universe necessary for explaining the arrow of time?

    NASA Astrophysics Data System (ADS)

    Goldstein, Sheldon; Tumulka, Roderich; Zanghì, Nino

    2016-07-01

    According to statistical mechanics, microstates of an isolated physical system (say, a gas in a box) at time t0 in a given macrostate of less-than-maximal entropy typically evolve in such a way that the entropy at time t increases with |t -t0| in both time directions. In order to account for the observed entropy increase in only one time direction, the thermodynamic arrow of time, one usually appeals to the hypothesis that the initial state of the Universe was one of very low entropy. In certain recent models of cosmology, however, no hypothesis about the initial state of the Universe is invoked. We discuss how the emergence of a thermodynamic arrow of time in such models can nevertheless be compatible with the above-mentioned consequence of statistical mechanics, appearances to the contrary notwithstanding.

  16. Noise and complexity in human postural control: interpreting the different estimations of entropy.

    PubMed

    Rhea, Christopher K; Silver, Tobin A; Hong, S Lee; Ryu, Joong Hyun; Studenka, Breanna E; Hughes, Charmayne M L; Haddad, Jeffrey M

    2011-03-17

    Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses.
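Of the measures compared, Sample Entropy has a particularly compact definition: SampEn = -ln(A/B), where B and A count template pairs of length m and m+1 whose Chebyshev distance is within a tolerance r. A quadratic-time sketch, using one common textbook counting convention rather than necessarily the study's exact implementation (r is usually set to a fraction, e.g. 0.2, of the series' standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -ln(A/B), with B and A the numbers of template pairs
    (i < j, so self-matches are excluded) of lengths m and m + 1 whose
    Chebyshev distance is within the tolerance r."""
    def count_matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular alternating signal is highly predictable (low SampEn);
# a chaotic logistic-map series is far less so.
regular = sample_entropy([0.0, 1.0] * 50)
chaotic = [0.3]
for _ in range(99):
    chaotic.append(4.0 * chaotic[-1] * (1.0 - chaotic[-1]))
irregular = sample_entropy(chaotic)
```

The sensitivity of `r` (tolerance) and series length to the result is exactly the robustness issue the study quantifies.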

  17. On determining absolute entropy without quantum theory or the third law of thermodynamics

    NASA Astrophysics Data System (ADS)

    Steane, Andrew M.

    2016-04-01

    We employ classical thermodynamics to gain information about absolute entropy, without recourse to statistical methods, quantum mechanics or the third law of thermodynamics. The Gibbs-Duhem equation yields various simple methods to determine the absolute entropy of a fluid. We also study the entropy of an ideal gas and the ionization of a plasma in thermal equilibrium. A single measurement of the degree of ionization can be used to determine an unknown constant in the entropy equation, and thus determine the absolute entropy of a gas. It follows from all these examples that the value of entropy at absolute zero temperature does not need to be assigned by postulate, but can be deduced empirically.

  18. Comparing Postural Stability Entropy Analyses to Differentiate Fallers and Non-Fallers

    PubMed Central

    Fino, Peter C.; Mojdehi, Ahmad R.; Adjerid, Khaled; Habibi, Mohammad; Lockhart, Thurmon E.; Ross, Shane D.

    2015-01-01

    The health and financial cost of falls has spurred research to differentiate the characteristics of fallers and non-fallers. Postural stability has received much of the attention with recent studies exploring various measures of entropy. This study compared the discriminatory ability of several entropy methods at differentiating two paradigms in the center-of-pressure (COP) of elderly individuals: 1.) eyes open (EO) versus eyes closed (EC) and 2.) fallers (F) versus non-fallers (NF). Methods were compared using the area under the curve (AUC) of the receiver-operating characteristic (ROC) curves developed from logistic regression models. Overall, multiscale entropy (MSE) and composite multiscale entropy (CompMSE) performed the best with AUCs of 0.71 for EO/EC and 0.77 for F/NF. When methods were combined together to maximize the AUC, the entropy classifier had an AUC of 0.91 for the F/NF comparison. These results suggest researchers and clinicians attempting to create clinical tests to identify fallers should consider a combination of every entropy method when creating a classifying test. Additionally, MSE and CompMSE classifiers using polar coordinate data outperformed rectangular coordinate data, encouraging more research into the most appropriate time series for postural stability entropy analysis. PMID:26464267
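The AUC used for these comparisons has a simple nonparametric reading: it equals the Mann-Whitney probability that a randomly chosen faller receives a higher classifier score than a randomly chosen non-faller. A minimal sketch with hypothetical scores (the study's AUCs came from logistic-regression models, not from these numbers):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    fraction of (positive, negative) pairs in which the positive scores
    higher, with ties counted as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical entropy-classifier scores for fallers vs. non-fallers.
fallers = [0.82, 0.74, 0.91, 0.60, 0.77]
non_fallers = [0.55, 0.48, 0.66, 0.71, 0.39]
example_auc = auc(fallers, non_fallers)  # 23 of 25 pairs ordered correctly
```

An AUC of 0.5 means the classifier is no better than chance; 1.0 means perfect separation.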

  19. Rényi entropy of the totally asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Wood, Anthony J.; Blythe, Richard A.; Evans, Martin R.

    2017-11-01

    The Rényi entropy is a generalisation of the Shannon entropy that is sensitive to the fine details of a probability distribution. We present results for the Rényi entropy of the totally asymmetric exclusion process (TASEP). We calculate explicitly an entropy whereby the squares of configuration probabilities are summed, using the matrix product formalism to map the problem to one involving a six direction lattice walk in the upper quarter plane. We derive the generating function across the whole phase diagram, using an obstinate kernel method. This gives the leading behaviour of the Rényi entropy and corrections in all phases of the TASEP. The leading behaviour is given by the result for a Bernoulli measure and we conjecture that this holds for all Rényi entropies. Within the maximal current phase the correction to the leading behaviour is logarithmic in the system size. Finally, we remark upon a special property of equilibrium systems whereby discontinuities in the Rényi entropy arise away from phase transitions, which we refer to as secondary transitions. We find no such secondary transition for this nonequilibrium system, supporting the notion that these are specific to equilibrium cases.
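For a discrete distribution the Rényi entropy is H_α(p) = (1 − α)⁻¹ ln Σ_i p_iᵅ, recovering the Shannon entropy in the limit α → 1; the α = 2 case, in which squared configuration probabilities are summed, is the one computed explicitly in the paper. A minimal sketch with a single-site Bernoulli measure (the density value is illustrative only):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) = ln(sum p_i^alpha) / (1 - alpha); the
    alpha -> 1 limit is handled separately as the Shannon entropy."""
    if alpha == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

# Per-site Bernoulli(rho) measure, as in the TASEP leading behaviour.
rho = 0.3
p = [rho, 1 - rho]
h2 = renyi_entropy(p, 2.0)        # the alpha = 2 entropy of the paper
shannon = renyi_entropy(p, 1.0)
near = renyi_entropy(p, 1.0001)   # approaches Shannon as alpha -> 1
```

Because H_α is non-increasing in α, the α = 2 value lies below the Shannon entropy, and it weights the most probable configurations more heavily.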

  20. On variational expressions for quantum relative entropies

    NASA Astrophysics Data System (ADS)

    Berta, Mario; Fawzi, Omar; Tomamichel, Marco

    2017-12-01

    Distance measures between quantum states like the trace distance and the fidelity can naturally be defined by optimizing a classical distance measure over all measurement statistics that can be obtained from the respective quantum states. In contrast, Petz showed that the measured relative entropy, defined as a maximization of the Kullback-Leibler divergence over projective measurement statistics, is strictly smaller than Umegaki's quantum relative entropy whenever the states do not commute. We extend this result in two ways. First, we show that Petz' conclusion remains true if we allow general positive operator-valued measures. Second, we extend the result to Rényi relative entropies and show that for non-commuting states the sandwiched Rényi relative entropy is strictly larger than the measured Rényi relative entropy for α ∈ (1/2, ∞) and strictly smaller for α ∈ [0, 1/2). The latter statement provides counterexamples for the data processing inequality of the sandwiched Rényi relative entropy for α < 1/2. Our main tool is a new variational expression for the measured Rényi relative entropy, which we further exploit to show that certain lower bounds on quantum conditional mutual information are superadditive.

  1. Comparing Postural Stability Entropy Analyses to Differentiate Fallers and Non-fallers.

    PubMed

    Fino, Peter C; Mojdehi, Ahmad R; Adjerid, Khaled; Habibi, Mohammad; Lockhart, Thurmon E; Ross, Shane D

    2016-05-01

    The health and financial cost of falls has spurred research to differentiate the characteristics of fallers and non-fallers. Postural stability has received much of the attention with recent studies exploring various measures of entropy. This study compared the discriminatory ability of several entropy methods at differentiating two paradigms in the center-of-pressure of elderly individuals: (1) eyes open (EO) vs. eyes closed (EC) and (2) fallers (F) vs. non-fallers (NF). Methods were compared using the area under the curve (AUC) of the receiver-operating characteristic curves developed from logistic regression models. Overall, multiscale entropy (MSE) and composite multiscale entropy (CompMSE) performed the best with AUCs of 0.71 for EO/EC and 0.77 for F/NF. When methods were combined together to maximize the AUC, the entropy classifier had an AUC of 0.91 for the F/NF comparison. These results suggest researchers and clinicians attempting to create clinical tests to identify fallers should consider a combination of every entropy method when creating a classifying test. Additionally, MSE and CompMSE classifiers using polar coordinate data outperformed rectangular coordinate data, encouraging more research into the most appropriate time series for postural stability entropy analysis.

  2. Information Measures for Multisensor Systems

    DTIC Science & Technology

    2013-12-11

    permuted to generate spectra that were non-physical but preserved the entropy of the source spectra. Another 1000 spectra were constructed to mimic co… Research Laboratory (NRL) has yielded probabilistic models for spectral data that enable the computation of information measures such as entropy and… Keywords: chemical sensing; information theory; spectral data; information entropy; information divergence; mass spectrometry; infrared spectroscopy; multisensor

  3. Human vision is determined based on information theory.

    PubMed

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-03

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  4. Human vision is determined based on information theory

    NASA Astrophysics Data System (ADS)

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  5. Natural convection of a two-dimensional Boussinesq fluid does not maximize entropy production.

    PubMed

    Bartlett, Stuart; Bullock, Seth

    2014-08-01

    Rayleigh-Bénard convection is a canonical example of spontaneous pattern formation in a nonequilibrium system. It has been the subject of considerable theoretical and experimental study, primarily for systems with constant (temperature or heat flux) boundary conditions. In this investigation, we have explored the behavior of a convecting fluid system with negative feedback boundary conditions. At the upper and lower system boundaries, the inward heat flux is defined such that it is a decreasing function of the boundary temperature. Thus the system's heat transport is not constrained in the same manner that it is in the constant temperature or constant flux cases. It has been suggested that the entropy production rate (which has a characteristic peak at intermediate heat flux values) might apply as a selection rule for such a system. In this work, we demonstrate with Lattice Boltzmann simulations that entropy production maximization does not dictate the steady state of this system, despite its success in other, somewhat similar scenarios. Instead, we will show that the same scaling law of dimensionless variables found for constant boundary conditions also applies to this system.

  6. Human vision is determined based on information theory

    PubMed Central

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-01-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236

  7. The third law of thermodynamics and the fractional entropies

    NASA Astrophysics Data System (ADS)

    Baris Bagci, G.

    2016-08-01

    We consider the fractal calculus based Ubriaco and Machado entropies and investigate whether they conform to the third law of thermodynamics. The Ubriaco entropy satisfies the third law of thermodynamics in the interval 0 < q ≤ 1 exactly where it is also thermodynamically stable. The Machado entropy, on the other hand, yields diverging inverse temperature in the region 0 < q ≤ 1, albeit with non-vanishing negative entropy values. Therefore, despite the divergent inverse temperature behavior, the Machado entropy fails the third law of thermodynamics. We also show that the aforementioned results are also supported by the one-dimensional Ising model with no external field.

  8. Shape-designed frustration by local polymorphism in a near-equilibrium colloidal glass.

    PubMed

    Zhao, Kun; Mason, Thomas G

    2015-09-29

    We show that hard, convex, lithographic, prismatic kite platelets, each having three 72° vertices and one 144° vertex, preferentially form a disordered and arrested 2D glass when concentrated quasi-statically in a monolayer while experiencing thermal Brownian fluctuations. By contrast with 2D systems of other hard convex shapes, such as squares, rhombs, and pentagons, which readily form crystals at high densities, 72° kites retain a liquid-like disordered structure that becomes frozen-in as their long-time translational and rotational diffusion become highly bounded, yielding a 2D colloidal glass. This robust glass-forming propensity arises from competition between highly diverse few-particle local polymorphic configurations (LPCs) that have incommensurate features and symmetries. Thus, entropy maximization is consistent with the preservation of highly diverse LPCs en route to the arrested glass.

  9. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiency in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, precipitation and streamflow networks in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method is able to determine more efficient optimal networks by avoiding redundant stations at which hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully, because the rankings and optimal networks are subject to change accordingly.

  10. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not intended for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for station rankings and optimal networks. The results showed that the quantization method should be selected carefully, because the rankings and optimal networks are subject to change accordingly.

  11. Ecosystem growth and development.

    PubMed

    Fath, Brian D; Jørgensen, Sven E; Patten, Bernard C; Straskraba, Milan

    2004-11-01

    One of the most important features of biosystems is how they are able to maintain local order (low entropy) within their system boundaries. At the ecosystem scale, this organization can be observed in the thermodynamic parameters that describe it, such that these parameters can be used to track ecosystem growth and development during succession. Thermodynamically, ecosystem growth is the increase of energy throughflow and stored biomass, and ecosystem development is the internal reorganization of these energy-mass stores, which affects transfers, transformations, and time lags within the system. Several proposed hypotheses describe thermodynamically the orientation or natural tendency that ecosystems follow during succession; here, we consider five: minimize specific entropy production, maximize dissipation, maximize exergy storage (including biomass and information), maximize energy throughflow, and maximize retention time. These thermodynamic orientors were previously all shown to occur to some degree during succession, and here we present a refinement by observing them during different stages of succession. We view ecosystem succession as a series of four growth and development stages: boundary, structural, network, and informational. We demonstrate how each of these ecological thermodynamic orientors behaves during the different growth and development stages, and show that, while all apply during some stages, only maximizing energy throughflow and maximizing exergy storage are applicable during all four stages. Therefore, we conclude that the movement away from thermodynamic equilibrium, and the subsequent increase in organization during ecosystem growth and development, is a result of system components and configurations that maximize the flux of useful energy and the amount of stored exergy. Empirical data and theoretical models support these conclusions.

  12. Credit market Jitters in the course of the financial crisis: A permutation entropy approach in measuring informational efficiency in financial assets

    NASA Astrophysics Data System (ADS)

    Siokis, Fotios M.

    2018-06-01

    We explore the evolution of informational efficiency for specific instruments of the U.S. money, bond and stock exchange markets before and after the outbreak of the Great Recession. We utilize the permutation entropy and the complexity-entropy causality plane to rank the time series and measure the degree of informational efficiency. We find that after the credit crunch and the collapse of Lehman Brothers, the efficiency level of specific money market instruments' yields fell considerably. This is evidence of reduced uncertainty in predicting the related yields throughout the financial disarray. A similar trend is seen in the stock exchange indices, although their efficiency remains at much higher levels. Bond market instruments, on the other hand, maintained their efficiency levels even after the outbreak of the crisis, which can be interpreted as greater randomness and less predictability of their yields.
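Permutation entropy, the efficiency measure used in this record, assigns each embedded window its ordinal pattern and takes the Shannon entropy of the pattern frequencies. A minimal sketch of the Bandt-Pompe construction (parameters and data are illustrative, not the paper's):

```python
import math
from collections import Counter

def permutation_entropy(series, m=3, delay=1):
    """Normalized permutation entropy in [0, 1]; 0 = fully predictable."""
    patterns = []
    for i in range(len(series) - (m - 1) * delay):
        window = series[i : i + m * delay : delay]
        # Ordinal pattern: the argsort of the window values.
        patterns.append(tuple(sorted(range(m), key=window.__getitem__)))
    n = len(patterns)
    h = -sum((c / n) * math.log(c / n) for c in Counter(patterns).values())
    return h / math.log(math.factorial(m))  # normalize by log(m!)

pe_trend = permutation_entropy([1, 2, 3, 4, 5, 6, 7, 8], m=3)  # 0: one pattern only
```

A maximally efficient (random-walk-like) series would use all m! patterns equally and score near 1, which is the sense in which higher permutation entropy means higher informational efficiency.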

  13. Quantum Discord for d⊗2 Systems

    PubMed Central

    Ma, Zhihao; Chen, Zhihua; Fanchini, Felipe Fernandes; Fei, Shao-Ming

    2015-01-01

    We present an analytical solution for classical correlation, defined in terms of linear entropy, in an arbitrary d⊗2 system when the second subsystem is measured. We show that the optimal measurements used in the maximization of the classical correlation in terms of linear entropy, when used to calculate the quantum discord in terms of von Neumann entropy, result in a tight upper bound for arbitrary d⊗2 systems. This bound agrees with all known analytical results about quantum discord in terms of von Neumann entropy and, when comparing it with the numerical results for 10⁶ random two-qubit density matrices, we obtain an average deviation of order 10⁻⁴. Furthermore, our results give a way to calculate the quantum discord for arbitrary n-qubit GHZ and W states evolving under the action of the amplitude damping noisy channel. PMID:26036771

  14. Entropy Production in Collisionless Systems. II. Arbitrary Phase-space Occupation Numbers

    NASA Astrophysics Data System (ADS)

    Barnes, Eric I.; Williams, Liliya L. R.

    2012-04-01

    We present an analysis of two thermodynamic techniques for determining equilibria of self-gravitating systems. One is the Lynden-Bell (LB) entropy maximization analysis that introduced violent relaxation. Since we do not use the Stirling approximation, which is invalid at small occupation numbers, our systems have finite mass, unlike LB's isothermal spheres. (Instead of Stirling, we utilize a very accurate smooth approximation for ln x!.) The second analysis extends entropy production extremization to self-gravitating systems, also without the use of the Stirling approximation. In addition to the LB statistical family characterized by the exclusion principle in phase space, and designed to treat collisionless systems, we also apply the two approaches to the Maxwell-Boltzmann (MB) families, which have no exclusion principle and hence represent collisional systems. We implicitly assume that all of the phase space is equally accessible. We derive entropy production expressions for both families and give the extremum conditions for entropy production. Surprisingly, our analysis indicates that extremizing entropy production rate results in systems that have maximum entropy, in both LB and MB statistics. In other words, both thermodynamic approaches lead to the same equilibrium structures.

  15. A practical comparison of algorithms for the measurement of multiscale entropy in neural time series data.

    PubMed

    Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir

    2018-06-01

    There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision using a combination of electroencephalogram (EEG) and synthetic 1/ƒ noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals. Copyright © 2018 Elsevier Inc. All rights reserved.
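Multiscale sample entropy, one of the algorithm families compared in this record, coarse-grains the signal at each scale τ and applies SampEn to the coarse-grained series. A minimal, unoptimized sketch (the paper's implementations and parameter choices may differ):

```python
import math

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau."""
    return [sum(x[i:i + tau]) / tau for i in range(0, len(x) - tau + 1, tau)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -ln(A/B) with Chebyshev distance; self-matches excluded."""
    n = len(x)
    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - m)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b = count_matches(m)      # template matches of length m
    a = count_matches(m + 1)  # ... that still match when extended by one point
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly periodic signal is maximally predictable: SampEn = 0.
mse_scale1 = sample_entropy([1.0, 2.0] * 10, m=2, r=0.2)
```

The dispersion and permutation variants replace this tolerance-based template matching with discrete symbol counts, which is consistent with the timing advantages reported above.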

  16. Entropy Growth in the Early Universe and Confirmation of Initial Big Bang Conditions

    NASA Astrophysics Data System (ADS)

    Beckwith, Andrew

    2009-09-01

    This paper shows how increased entropy values from an initially low big bang level can be measured experimentally by counting relic gravitons. Furthermore, the physical mechanism of this entropy increase is explained via analogies with early-universe phase transitions. The role of Jack Ng's (2007, 2008a, 2008b) revised infinite quantum statistics in the physics of gravitational wave detection is acknowledged. Ng's infinite quantum statistics can be used to show that ΔS ~ ΔN_gravitons is a starting point for the increasing net cosmological entropy of the universe. Finally, in a nod to similarities with ZPE analysis, it is important to note that the resulting ΔS ~ ΔN_gravitons ≠ 10⁸⁸; in fact it is much lower, allowing initial graviton production to be evaluated as an emergent field phenomenon, which may be similar to how ZPE states can be used to extract energy from a vacuum if entropy is not maximized. The rapid increase in entropy so alluded to, without a near-sudden jump to 10⁸⁸, may be enough to allow successful modeling of relic graviton production for entropy in a manner similar to ZPE energy extraction from a vacuum state.

  17. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    PubMed

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discriminating signals of different complexity. The first addresses the problem of setting entropy descriptors by varying the pattern size instead of the tolerance, leading to a search for the optimal pattern size that maximizes the similarity entropy. The second paradigm is based on the n-order similarity entropy, which encompasses the 1-order similarity entropy. To improve statistical stability, n-order fuzzy similarity entropy is proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found possible to discriminate time series of different complexity, such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement are related to intrinsic features of the time series. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Noise and Complexity in Human Postural Control: Interpreting the Different Estimations of Entropy

    PubMed Central

    Rhea, Christopher K.; Silver, Tobin A.; Hong, S. Lee; Ryu, Joong Hyun; Studenka, Breanna E.; Hughes, Charmayne M. L.; Haddad, Jeffrey M.

    2011-01-01

    Background Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. Methods and Findings The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. Conclusions The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses. PMID:21437281

  19. Stabilization of a protein conferred by an increase in folded state entropy.

    PubMed

    Dagan, Shlomi; Hagai, Tzachi; Gavrilov, Yulian; Kapon, Ruti; Levy, Yaakov; Reich, Ziv

    2013-06-25

    Entropic stabilization of native protein structures typically relies on strategies that serve to decrease the entropy of the unfolded state. Here we report, using a combination of experimental and computational approaches, on enhanced thermodynamic stability conferred by an increase in the configurational entropy of the folded state. The enhanced stability is observed upon modifications of a loop region in the enzyme acylphosphatase and is achieved despite significant enthalpy losses. The modifications that lead to increased stability, as well as those that result in destabilization, however, strongly compromise enzymatic activity, rationalizing the preservation of the native loop structure even though it does not provide the protein with maximal stability or kinetic foldability.

  20. Soft hairy warped black hole entropy

    NASA Astrophysics Data System (ADS)

    Grumiller, Daniel; Hacker, Philip; Merbis, Wout

    2018-02-01

    We reconsider warped black hole solutions in topologically massive gravity and find novel boundary conditions that allow for soft hairy excitations on the horizon. To compute the associated symmetry algebra we develop a general framework for computing asymptotic symmetries in any Chern-Simons-like theory of gravity. We use this to show that the near-horizon symmetry algebra consists of two u(1) current algebras, and we recover the surprisingly simple entropy formula S = 2π(J₀⁺ + J₀⁻), where J₀± are the zero-mode charges of the current algebras. This provides the first example of a locally non-maximally-symmetric configuration exhibiting this entropy law, and thus non-trivial evidence for its universality.

  1. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, notably including the existence of an inflection point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
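The log-normal spread-rate distribution obtained by the model has a closed-form Shannon (differential) entropy, H = μ + ½ ln(2πeσ²). A numerical cross-check of that formula with illustrative parameters (not fitted to the SARS data):

```python
import math

def lognormal_entropy_closed(mu, sigma):
    """Closed-form differential entropy of LogNormal(mu, sigma), in nats."""
    return mu + 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def lognormal_entropy_numeric(mu, sigma, steps=20000):
    """Trapezoidal estimate of E[-ln p(X)], integrated in log-space y = ln x."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    dy = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        y = lo + k * dy
        # Gaussian density of y = ln x; p(x) dx = phi(y) dy under the change of variables.
        phi = math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        # -ln p(x) for the log-normal density, written in terms of y.
        neg_log_p = y + math.log(sigma) + 0.5 * math.log(2 * math.pi) + (y - mu) ** 2 / (2 * sigma ** 2)
        w = 0.5 if k in (0, steps) else 1.0
        total += w * phi * neg_log_p * dy
    return total

h_exact = lognormal_entropy_closed(0.0, 0.5)
h_numeric = lognormal_entropy_numeric(0.0, 0.5)
```

In the EBT model the width parameter σ, and hence this entropy, is fixed by maximizing the entropy production rate rather than chosen freely.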

  2. Time dependence of Hawking radiation entropy

    NASA Astrophysics Data System (ADS)

    Page, Don N.

    2013-09-01

    If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM₀², or about 7.509M₀² ≈ 6.268 × 10⁷⁶(M₀/Msolar)², using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018M₀² ≈ 1.254 × 10⁷⁷(M₀/Msolar)², and then decreases back down to 4πM₀² = 1.049 × 10⁷⁷(M₀/Msolar)².
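The quoted percentages follow from two facts stated in the abstract: the radiation carries about 1.4847 units of entropy per unit of BH entropy lost, and, since S_BH ∝ M² while the evaporation lifetime scales as M³, the elapsed-time fraction is 1 − (S_BH/S_BH,0)^(3/2). A quick check of the arithmetic:

```python
RATIO = 1.4847  # radiation entropy per unit of BH entropy lost (Page 1976)

# Radiation entropy equals the remaining BH entropy when RATIO * x = 1 - x,
# where x is the fraction of the original BH entropy already lost.
x_lost = 1.0 / (1.0 + RATIO)           # ~0.4025 -> "about 40.25%"
s_remaining = 1.0 - x_lost             # ~0.5975 -> "about 59.75%"

# With S_BH proportional to M^2 and lifetime proportional to M^3, the
# elapsed-time fraction at that moment is 1 - s_remaining**(3/2).
t_fraction = 1.0 - s_remaining ** 1.5  # ~0.5381 -> "about 53.81%"
```

All three figures reproduce the abstract's values to the stated precision.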

  3. Formulating the shear stress distribution in circular open channels based on the Renyi entropy

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein

    2018-01-01

    The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance improved with increasing flow depth (mean relative error (RE) of 0.0414) and only deteriorated slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depths. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise, predicting the shear stress distribution in a circular channel with an RE of 0.0480 and in a circular channel with a flat bed with an RE of 0.0488.
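For reference, the Renyi entropy maximized in this derivation is H_α = (1/(1−α)) ln Σᵢ pᵢ^α, which recovers Shannon entropy in the limit α → 1. A small numerical sketch of the discrete definition (the paper works with a continuous maximum-entropy formulation; this only illustrates the entropy itself):

```python
import math

def renyi_entropy(probs, alpha):
    """Renyi entropy (nats) of a discrete distribution; alpha=1 gives the Shannon limit."""
    if alpha == 1:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs)) / (1 - alpha)

p = [0.7, 0.2, 0.1]
h_shannon = renyi_entropy(p, 1)
h_near_1 = renyi_entropy(p, 1.001)        # approaches the Shannon value
h_uniform = renyi_entropy([1/3] * 3, 0.5)  # ln 3: alpha-independent on uniform p
```

On a uniform distribution every α gives the same value ln n, which is why the choice of α only matters for non-uniform (here, non-uniform shear stress) distributions.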

  4. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since maximizing the KSE is analytical and in general easier than computing the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs which can be represented as Markov chains.
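For a finite Markov chain the KSE is the entropy rate h_KS = −Σᵢ πᵢ Σⱼ P_ij ln P_ij, where π is the stationary distribution, while the mixing time is governed by the spectral gap. A toy two-state sketch of the entropy-rate side (illustrative chain, not from the paper):

```python
import math

def stationary(P, iters=10000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate: -sum_i pi_i sum_j P_ij ln P_ij (nats/step)."""
    pi = stationary(P)
    return -sum(
        pi[i] * P[i][j] * math.log(P[i][j])
        for i in range(len(P)) for j in range(len(P)) if P[i][j] > 0
    )

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = stationary(P)    # (2/3, 1/3) from detailed balance: pi_0 * 0.1 = pi_1 * 0.2
h_ks = ks_entropy(P)  # ~0.3835 nats per step
```

Maximizing h_KS over the transition probabilities (here it is merely evaluated) is the analytical surrogate the paper links to minimizing the mixing time.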

  5. Statistics of Infima and Stopping Times of Entropy Production and Applications to Active Molecular Processes

    NASA Astrophysics Data System (ADS)

    Neri, Izaak; Roldán, Édgar; Jülicher, Frank

    2017-01-01

    We study the statistics of infima, stopping times, and passage probabilities of entropy production in nonequilibrium steady states, and we show that they are universal. We consider two examples of stopping times: first-passage times of entropy production and waiting times of stochastic processes, which are the times when a system reaches a given state for the first time. Our main results are as follows: (i) The distribution of the global infimum of entropy production is exponential with mean equal to minus Boltzmann's constant; (ii) we find exact expressions for the passage probabilities of entropy production; (iii) we derive a fluctuation theorem for stopping-time distributions of entropy production. These results have interesting implications for stochastic processes that can be discussed in simple colloidal systems and in active molecular processes. In particular, we show that the timing and statistics of discrete chemical transitions of molecular processes, such as the steps of molecular motors, are governed by the statistics of entropy production. We also show that the extreme-value statistics of active molecular processes are governed by entropy production; for example, we derive a relation between the maximal excursion of a molecular motor against the direction of an external force and the infimum of the corresponding entropy-production fluctuations. Using this relation, we make predictions for the distribution of the maximum backtrack depth of RNA polymerases, which follow from our universal results for entropy-production infima.

  6. Statistical thermodynamics of amphiphile chains in micelles

    PubMed Central

    Ben-Shaul, A.; Szleifer, I.; Gelbart, W. M.

    1984-01-01

    The probability distribution of amphiphile chain conformations in micelles of different geometries is derived through maximization of their packing entropy. A lattice model, first suggested by Dill and Flory, is used to represent the possible chain conformations in the micellar core. The polar heads of the chains are assumed to be anchored to the micellar surface, with the other chain segments occupying all lattice sites in the interior of the micelle. This “volume-filling” requirement, the connectivity of the chains, and the geometry of the micelle define constraints on the possible probability distributions of chain conformations. The actual distribution is derived by maximizing the chain's entropy subject to these constraints; “reversals” of the chains back towards the micellar surface are explicitly included. Results are presented for amphiphiles organized in planar bilayers and in cylindrical and spherical micelles of different sizes. It is found that, for all three geometries, the bond order parameters decrease as a function of the bond distance from the polar head, in accordance with recent experimental data. The entropy differences associated with geometrical changes are shown to be significant, suggesting thereby the need to include curvature (environmental)-dependent “tail” contributions in statistical thermodynamic treatments of micellization. PMID:16593492

  7. Contraction coefficients for noisy quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiai, Fumio, E-mail: hiai.fumio@gmail.com; Ruskai, Mary Beth, E-mail: ruskai@member.ams.org

    Generalized relative entropy, monotone Riemannian metrics, geodesic distance, and trace distance are all known to decrease under the action of quantum channels. We give some new bounds on, and relationships between, the maximal contraction for these quantities.

  8. Isobaric yield ratio difference and Shannon information entropy

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Wei, Hui-Ling; Wang, Shan-Shan; Ma, Yu-Gang; Wada, Ryoichi; Zhang, Yan-Li

    2015-03-01

    The Shannon information entropy theory is used to explain the recently proposed isobaric yield ratio difference (IBD) probe, which aims to determine the nuclear symmetry energy. Theoretically, the difference between the Shannon uncertainties carried by isobars in two different reactions, ΔI_{n,21}, is found to be equivalent to the difference between the chemical potentials of protons and neutrons of the reactions [the IBD probe, Δ(βμ)_{21}, with β the inverse temperature]. From the viewpoint of Shannon information entropy, the physical meaning of this chemical potential difference is interpreted by ΔI_{n,21} as denoting the nuclear symmetry energy, or the density difference between neutrons and protons in the reactions, more concisely than in the statistical abrasion-ablation model.

  9. Entropy production of doubly stochastic quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de

    2016-02-15

    We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.

  10. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

    Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms-error information estimates in comparison to a "brute force" approach.
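The temporal entropy in question starts from plug-in estimates of the entropy of binned spike "words". A minimal sketch of that first step only (binary toy train; the paper's sampling-bias corrections are not shown):

```python
import math
from collections import Counter

def word_entropy_bits(spikes, word_len):
    """Plug-in entropy (bits) of overlapping binary words of length word_len."""
    words = [tuple(spikes[i:i + word_len]) for i in range(len(spikes) - word_len + 1)]
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in Counter(words).values())

# Toy binned spike train: the four 2-bit words 00, 01, 11, 10 each occur once,
# so the plug-in word entropy is exactly 2 bits.
train = [0, 0, 1, 1, 0]
h2 = word_entropy_bits(train, 2)
```

Plug-in estimates like this are biased downward for limited samples, which is precisely the problem the record's procedure is designed to mitigate.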

  11. Decoherence estimation in quantum theory and beyond

    NASA Astrophysics Data System (ADS)

    Pfister, Corsin

    The quantum physics literature provides many different characterizations of decoherence. Most of them have in common that they describe decoherence as a kind of influence on a quantum system upon interacting with another system. In the spirit of quantum information theory, we adopt a particular viewpoint on decoherence which describes it as the loss of information into a system that is possibly controlled by an adversary. We use a quantitative framework for decoherence that builds on operational characterizations of the min-entropy that have been developed in the quantum information literature. It characterizes decoherence as an influence on quantum channels that reduces their suitability for a variety of quantifiable tasks, such as the distribution of secret cryptographic keys of a certain length or the distribution of a certain number of maximally entangled qubit pairs. This allows for a quantitative and operational characterization of decoherence via operational characterizations of the min-entropy. In this thesis, we present a series of results about the estimation of the min-entropy, subdivided into three parts. The first part concerns the estimation of a quantum adversary's uncertainty about classical information (expressed by the smooth min-entropy) as it is done in protocols for quantum key distribution (QKD). We analyze this form of min-entropy estimation in detail and find that some of the more recently suggested QKD protocols have previously unnoticed security loopholes. We show that the specifics of the sifting subroutine of a QKD protocol are crucial for security by pointing out mistakes in the security analyses in the literature and by presenting eavesdropping attacks on those problematic protocols. We provide solutions to the identified problems and present a formalized analysis of the min-entropy estimate that incorporates the sifting stage of QKD protocols.
In the second part, we extend ideas from QKD to a protocol that allows one to estimate an adversary's uncertainty about quantum information, expressed by the fully quantum smooth min-entropy. Roughly speaking, we show that a protocol resembling the parallel execution of two QKD protocols can be used to lower-bound the min-entropy of some unmeasured qubits. We explain how this result may influence the ongoing search for protocols for entanglement distribution. The third part is dedicated to the development of a framework that allows the estimation of decoherence even in experiments that cannot be correctly described by quantum theory. Inspired by an equivalent formulation of the min-entropy that relates it to the fidelity with a maximally entangled state, we define a decoherence quantity for a very general class of probabilistic theories that reduces to the min-entropy in the special case of quantum theory. This entails a definition of maximal entanglement for generalized probabilistic theories. Using techniques from semidefinite and linear programming, we show how bounds on this quantity can be estimated through Bell-type experiments, which makes it possible to test models of decoherence that cannot be described by quantum theory. As an example application, we devise an experimental test of a model for gravitational decoherence that has been suggested in the literature.
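The operational quantity running through the thesis is the min-entropy; in the classical special case it is simply H_min = −log₂ max_x p(x), minus the log of the adversary's best single-guess probability. A toy illustration of that classical case (the smooth and fully quantum versions generalize it):

```python
import math

def min_entropy_bits(probs):
    """Classical min-entropy: -log2 of the best single-guess probability."""
    return -math.log2(max(probs))

def shannon_entropy_bits(probs):
    """Shannon entropy in bits, for comparison; min-entropy never exceeds it."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]
h_min = min_entropy_bits(p)     # 1.0 bit: an adversary guesses right half the time
h_sh = shannon_entropy_bits(p)  # 1.5 bits
```

Because key extraction must defeat the adversary's single best guess, it is the min-entropy, not the Shannon entropy, that sets the length of extractable secret key.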

  12. Time dependence of Hawking radiation entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Page, Don N., E-mail: profdonpage@gmail.com

    2013-09-01

    If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM₀², or about 7.509M₀² ≈ 6.268 × 10⁷⁶(M₀/Msun)², using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018M₀² ≈ 1.254 × 10⁷⁷(M₀/Msun)², and then decreases back down to 4πM₀² = 1.049 × 10⁷⁷(M₀/Msun)².

  13. Single-copy entanglement in critical quantum spin chains

    NASA Astrophysics Data System (ADS)

    Eisert, J.; Cramer, M.

    2005-10-01

We consider the single-copy entanglement as a quantity to assess quantum correlations in the ground state of quantum many-body systems. We show for a large class of models that already on the level of single specimens of spin chains, criticality is accompanied by the possibility of distilling a maximally entangled state of arbitrary dimension from a sufficiently large block deterministically, with local operations and classical communication. These analytical results—which refine previous results on the divergence of block entropy as the rate at which maximally entangled pairs can be distilled from many identically prepared chains—are made quantitative for general isotropic translationally invariant spin chains that can be mapped onto a quasifree fermionic system, and for the anisotropic XY model. For the XX model, we provide the asymptotic scaling of ~(1/6)log_2(L), and contrast it with the block entropy.

  14. A generalized complexity measure based on Rényi entropy

    NASA Astrophysics Data System (ADS)

    Sánchez-Moreno, Pablo; Angulo, Juan Carlos; Dehesa, Jesus S.

    2014-08-01

    The intrinsic statistical complexities of finite many-particle systems (i.e., those defined in terms of the single-particle density) quantify the degree of structure or patterns, far beyond the entropy measures. They are intuitively constructed to be minima at the opposite extremes of perfect order and maximal randomness. Starting from the pioneering LMC measure, which satisfies these requirements, some extensions of LMC-Rényi type have been published in the literature. The latter measures were shown to describe a variety of physical aspects of the internal disorder in atomic and molecular systems (e.g., quantum phase transitions, atomic shell filling) which are not grasped by their mother LMC quantity. However, they are not minimal for maximal randomness in general. In this communication, we propose a generalized LMC-Rényi complexity which overcomes this problem. Some applications which illustrate this fact are given.

  15. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such an understanding is useful in evaluating the performance of data compression schemes.
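
The straight-line relationship described in this record can be checked numerically. The sketch below (an illustration, not the report's code) integrates the entropy of the generalized Gaussian density f(x) = p/(2aΓ(1/p))·exp(-(|x|/a)^p), the known entropy maximizer for a fixed pth absolute moment: doubling the scale a doubles the L_p norm and should raise the maximum differential entropy by exactly log 2.

```python
import math
import numpy as np

def gen_gaussian_pdf(x, p, a):
    """Generalized Gaussian f(x) = p/(2*a*Gamma(1/p)) * exp(-(|x|/a)**p),
    the maximum-entropy density among all densities with a fixed Lp norm."""
    return p / (2 * a * math.gamma(1 / p)) * np.exp(-(np.abs(x) / a) ** p)

def differential_entropy(p, a, grid=np.linspace(-60.0, 60.0, 2_000_001)):
    """Numerically integrate -f log f on a fine grid (Riemann sum)."""
    f = gen_gaussian_pdf(grid, p, a)
    dx = grid[1] - grid[0]
    mask = f > 0  # skip underflowed tail values
    return float(-(f[mask] * np.log(f[mask])).sum() * dx)

# Doubling the scale doubles the Lp norm, so the maximum differential
# entropy should increase by exactly log(2): a straight line in log(Lp norm).
h1 = differential_entropy(3.0, 1.0)
h2 = differential_entropy(3.0, 2.0)
print(h2 - h1)  # ≈ log(2)
```

The same linear shift holds for any p > 0, since rescaling x by c adds log c to the differential entropy while multiplying the L_p norm by c.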

  16. The High Temperature Tensile and Creep Behaviors of High Entropy Superalloy.

    PubMed

    Tsao, Te-Kang; Yeh, An-Chou; Kuo, Chen-Ming; Kakehi, Koji; Murakami, Hideyuki; Yeh, Jien-Wei; Jian, Sheng-Rui

    2017-10-04

This article presents the high temperature tensile and creep behaviors of a novel high entropy alloy (HEA). The microstructure of this HEA resembles that of advanced superalloys, with a high entropy FCC matrix and L1₂-ordered precipitates, so it is also named "high entropy superalloy" (HESA). The tensile yield strengths of HESA surpass those of the reported HEAs from room temperature to elevated temperatures; furthermore, its creep resistance at 982 °C can be compared to those of some Ni-based superalloys. Analysis of the experimental results indicates that HESA could be strengthened by the low stacking-fault energy of the matrix, the high anti-phase boundary energy of the strengthening precipitate, and a thermally stable microstructure. The positive misfit between the FCC matrix and the precipitate yielded a parallel raft microstructure during creep at 982 °C, and the creep curves of HESA were dominated by tertiary creep behavior. To the best of the authors' knowledge, this article is the first to present an elevated temperature tensile creep study on full scale specimens of a high entropy alloy, and the potential of HESA for high temperature structural application is discussed.

  17. Secure self-calibrating quantum random-bit generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorentino, M.; Santori, C.; Spillane, S. M.

    2007-03-15

Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
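
The min-entropy bound mentioned in this record sets how much uniform randomness can be distilled from the raw bits. A minimal sketch of the quantity itself (with hypothetical probabilities, not the authors' tomographic estimate):

```python
import numpy as np

def min_entropy(probs):
    """Min-entropy H_min = -log2(max_i p_i): determined by the worst-case
    guessing probability of the raw source, in bits per symbol."""
    probs = np.asarray(probs, dtype=float)
    assert np.isclose(probs.sum(), 1.0)
    return float(-np.log2(probs.max()))

# A perfectly uniform 2-bit source has H_min = 2 bits ...
print(min_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# ... while a biased source certifies less extractable randomness.
print(min_entropy([0.7, 0.1, 0.1, 0.1]))  # ≈ 0.515
```

A randomness extractor keyed to this bound then compresses the raw sequence down to roughly H_min bits of near-uniform output per symbol.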

  18. Separability of a family of one-parameter W and Greenberger-Horne-Zeilinger multiqubit states using the Abe-Rajagopal q-conditional-entropy approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhu, R.; Usha Devi, A. R.; Inspire Institute Inc., McLean, Virginia 22101

    2007-10-15

We employ conditional Tsallis q entropies to study the separability of symmetric one-parameter W and GHZ multiqubit mixed states. The strongest limitation on separability is realized in the limit q → ∞, and is found to be much superior to the condition obtained using the von Neumann conditional entropy (the q = 1 case). Except for the examples of two-qubit and three-qubit symmetric states of the GHZ family, the q-conditional entropy method leads to sufficient, but not necessary, conditions on separability.

  19. Modern Empirical Statistical Spectral Analysis.

    DTIC Science & Technology

    1980-05-01

716-723. Akaike, H. (1977). On entropy maximization principle, Applications of Statistics, P. R. Krishnaiah, ed., North-Holland, Amsterdam, 27-41. ... by P. Krishnaiah, North-Holland: Amsterdam, 283-295. Parzen, E. (1979). Forecasting and whitening filter estimation, TIMS Studies in the Management

  20. Optimality and inference in hydrology from entropy production considerations: synthetic hillslope numerical experiments

    NASA Astrophysics Data System (ADS)

    Kollet, S. J.

    2015-05-01

In this study, entropy production optimization and inference principles are applied to a synthetic semi-arid hillslope in high-resolution, physics-based simulations. The results suggest that entropy or power is indeed maximized, because of the strong nonlinearity of variably saturated flow and competing processes related to soil moisture fluxes, the depletion of gradients, and the movement of a free water table. Thus, it appears that the maximum entropy production (MEP) principle may indeed be applicable to hydrologic systems. In the application to hydrologic systems, the free water table constitutes an important degree of freedom in the optimization of entropy production and may also relate the theory to actual observations. In an ensuing analysis, an attempt is made to transfer the complex, "microscopic" hillslope model into a macroscopic model of reduced complexity, using the MEP principle as an inference tool to obtain effective conductance coefficients and forces/gradients. The results demonstrate a new approach for the application of MEP to hydrologic systems and may form the basis for fruitful discussions and research in the future.

  1. Entropy of Mixing of Distinguishable Particles

    ERIC Educational Resources Information Center

    Kozliak, Evguenii I.

    2014-01-01

    The molar entropy of mixing yields values that depend only on the number of mixing components rather than on their chemical nature. To explain this phenomenon using the logic of chemistry, this article considers mixing of distinguishable particles, thus complementing the well-known approach developed for nondistinguishable particles, for example,…

  2. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy.

    PubMed

    Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F

    2014-01-01

Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that either is incapable of discriminating CAN from controls or provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
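
The Rényi entropy H_α = log(Σ p_i^α)/(1 − α) underlying this record, together with the two probability-estimation routes it contrasts (histograms of single RR intervals versus sequences of RR intervals), can be sketched as follows. The bin counts, sequence length, and synthetic RR series are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def renyi_entropy(probs, alpha):
    """Renyi entropy H_alpha = log(sum p_i^alpha) / (1 - alpha);
    alpha -> 1 recovers the Shannon entropy."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def probs_single(rr, bins=16):
    """Probabilities from a histogram of single RR intervals
    (the route the paper found largely uninformative)."""
    counts, _ = np.histogram(rr, bins=bins)
    return counts / counts.sum()

def probs_sequences(rr, m=2, bins=8):
    """Probabilities from a histogram of length-m RR sequences
    (closer in spirit to the density-of-sequences route)."""
    seqs = np.lib.stride_tricks.sliding_window_view(rr, m)
    counts, _ = np.histogramdd(seqs, bins=bins)
    return (counts / counts.sum()).ravel()

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(5000)  # synthetic RR intervals, seconds
print(renyi_entropy(probs_single(rr), alpha=2.0))
print(renyi_entropy(probs_sequences(rr), alpha=2.0))
```

For a uniform distribution over n bins, H_α = log n regardless of α, which is a convenient sanity check on the implementation.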

  3. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy

    PubMed Central

    Cornforth, David J.;  Tarvainen, Mika P.; Jelinek, Herbert F.

    2014-01-01

Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that either is incapable of discriminating CAN from controls or provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311

  4. Explanation to the difference in the ketyl radical formation yields of benzophenone and benzil

    NASA Astrophysics Data System (ADS)

    Okutsu, Tetsuo; Muramatsu, Hidenori; Horiuchi, Hiroaki; Hiratsuka, Hiroshi

    2005-03-01

pKa values of the benzophenone ketyl and benzil ketyl radicals were determined to be 9.4 and 12.4, respectively. These values successfully explain the difference in the quantum yield of proton transfer between the benzophenone ketyl and benzil ketyl radicals. The reaction enthalpies of the proton transfer are the same (-80 kJ/mol) for these radicals, so the difference in pKa value is attributed to the reaction entropies. The reaction entropies of the two radicals are discussed in terms of their possible structures.

  5. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

In economics and the social sciences, inequality measures such as the Gini index, the Pietra index, etc., are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income. The most widely known of these models are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their relative performance with respect to the generalized beta of the second kind family is compared.
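
A stripped-down version of the maximum entropy construction used here, keeping only the mean constraint (the generalized Gini constraint is omitted for brevity), reduces to an exponential (Gibbs) family whose single Lagrange multiplier can be found by bisection. This is a sketch of the principle, not the paper's income model:

```python
import numpy as np

def maxent_given_mean(support, target_mean, tol=1e-10):
    """Maximum-entropy pmf on `support` with a fixed mean: the Gibbs form
    p_i proportional to exp(-lam * x_i), with lam found by bisection
    (the mean is monotonically decreasing in lam)."""
    x = np.asarray(support, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * (x - x.mean()))  # shift exponent for stability
        p = w / w.sum()
        return (p * x).sum(), p

    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        m, _ = mean_for(mid)
        if m > target_mean:   # mean too large -> need a larger multiplier
            lo = mid
        else:
            hi = mid
    return mean_for(0.5 * (lo + hi))[1]

p = maxent_given_mean(np.arange(10), target_mean=2.5)
print(p.sum(), (p * np.arange(10)).sum())  # sums to 1, mean ≈ 2.5
```

Adding the generalized Gini constraint introduces a second multiplier and a non-exponential term, which is why the paper's distributions take a richer form than this Gibbs family.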

  6. Optimal quantum networks and one-shot entropies

    NASA Astrophysics Data System (ADS)

    Chiribella, Giulio; Ebler, Daniel

    2016-09-01

We develop a semidefinite programming method for the optimization of quantum networks, including both causal networks and networks with indefinite causal structure. Our method applies to a broad class of performance measures, defined operationally in terms of interactive tests set up by a verifier. We show that the optimal performance is equal to a max relative entropy, which quantifies the informativeness of the test. Building on this result, we extend the notion of conditional min-entropy from quantum states to quantum causal networks. The optimization method is illustrated in a number of applications, including the inversion, charge conjugation, and controlization of an unknown unitary dynamics. In the non-causal setting, we show a proof-of-principle application to the maximization of the winning probability in a non-causal quantum game.
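
For quantum states, the max relative entropy that appears here has the closed form D_max(ρ‖σ) = log₂ λ_max(σ^(-1/2) ρ σ^(-1/2)) when σ is full rank, which is easy to evaluate numerically. This is a state-level sketch only; the paper's semidefinite programs over networks are not reproduced:

```python
import numpy as np

def dmax(rho, sigma):
    """Max relative entropy D_max(rho||sigma) = log2 of the smallest
    lambda with rho <= lambda * sigma (sigma assumed full rank)."""
    evals, evecs = np.linalg.eigh(sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.conj().T
    m = inv_sqrt @ rho @ inv_sqrt
    return float(np.log2(np.linalg.eigvalsh(m).max()))

I2 = np.eye(2) / 2                        # maximally mixed qubit
psi = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state |0><0|
print(dmax(I2, I2))   # 0.0: a state relative to itself
print(dmax(psi, I2))  # 1.0: |0><0| <= 2 * (I/2)
```

D_max is the largest of the Rényi relative entropies, so it upper-bounds the ordinary (Umegaki) relative entropy for the same pair of states.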

  7. Entropy and generalized least square methods in assessment of the regional value of streamgages

    USGS Publications Warehouse

Markus, M.; Knapp, H. Vernon; Tasker, Gary D.

    2003-01-01

The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least square regression (GLS) method, developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.

  8. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and the foreground and background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantized evaluations.
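
The double-plateau idea can be illustrated without the PSO search: clip the histogram into [t_low, t_high] before building the equalization CDF. In the sketch below the thresholds are fixed, hypothetical values standing in for the PSO-optimized ones, and a plain intensity histogram stands in for the paper's local entropy weighted histogram:

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_high, levels=256):
    """Histogram equalization with the nonzero histogram bins clipped into
    [t_low, t_high] before the CDF is built: the upper plateau limits
    over-enhancement of flat background, the lower plateau protects
    sparse detail bins from vanishing."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    hist[hist > 0] = np.clip(hist[hist > 0], t_low, t_high)
    cdf = np.cumsum(hist)
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(1)
img = rng.integers(90, 110, size=(64, 64), dtype=np.uint8)  # low-contrast frame
out = double_plateau_equalize(img, t_low=5, t_high=200)
print(img.min(), img.max(), "->", out.min(), out.max())  # contrast is stretched
```

Because the mapping is a cumulative sum of non-negative bins, it is monotone, so the relative ordering of gray levels is preserved while the dynamic range expands.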

  9. The entropy and Gibbs free energy of formation of the aluminum ion

    USGS Publications Warehouse

    Hemingway, B.S.; Robie, R.A.

    1977-01-01

A reevaluation of the entropy and Gibbs free energy of formation of Al3+(aq) yields -308 ± 15 J/(K·mol) and 489.4 ± 1.4 kJ/mol for S°298 and ΔG°f,298, respectively. The standard electrode potential for aluminum is 1.691 ± 0.005 volts. © 1977.

  10. Entropy generation of nanofluid flow in a microchannel heat sink

    NASA Astrophysics Data System (ADS)

    Manay, Eyuphan; Akyürek, Eda Feyza; Sahin, Bayram

    2018-06-01

The present study aims to investigate the effects of the presence of nano-sized TiO2 particles in the base fluid on the entropy generation rate in a microchannel heat sink. Pure water was chosen as the base fluid, and TiO2 particles were suspended in it at five different particle volume fractions of 0.25%, 0.5%, 1.0%, 1.5% and 2.0%. Under laminar, steady state flow and constant heat flux boundary conditions, the thermal, frictional and total entropy generation rates and the entropy generation number ratios of the nanofluids were experimentally analyzed in microchannel flow for channel heights of 200 μm, 300 μm, 400 μm and 500 μm. It was observed that the frictional and total entropy generation rates increased while the thermal entropy generation rate decreased with an increase in particle volume fraction. In microchannel flows, thermal entropy generation could be neglected because its rate, smaller than 1.10e-07, is a negligible share of the total entropy generation. Larger channel heights caused higher thermal entropy generation rates, and increasing the channel height yielded an increase of 30% to 52% in thermal entropy generation. When the channel height decreased, an increase of 66%-98% in frictional entropy generation was obtained. Adding TiO2 nanoparticles into the base fluid caused thermal entropy generation to decrease by about 1.8%-32.4% and frictional entropy generation to increase by about 3.3%-21.6%.

  11. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  12. Prediction of Protein Configurational Entropy (Popcoen).

    PubMed

    Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel

    2018-03-13

A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast compared to previous approaches, because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect prediction. Integrating the predicted entropy can yield a significant accuracy increase, as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.

  13. Characterization of time series via Rényi complexity-entropy curves

    NASA Astrophysics Data System (ADS)

    Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.

    2018-05-01

One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have negative curvature; finally, periodic time series are represented by vertical straight lines.
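
The entropy half of such complexity-entropy constructions is commonly computed from Bandt-Pompe ordinal patterns of the series. A minimal sketch of the normalized permutation entropy (the Shannon case, i.e. the α → 1 point of a Rényi curve; the statistical-complexity half is omitted):

```python
import math
from collections import Counter
import numpy as np

def ordinal_probs(x, d=3):
    """Bandt-Pompe probabilities of ordinal patterns with embedding dimension d:
    each window of d consecutive values is mapped to its rank pattern."""
    patterns = Counter(
        tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)
    )
    total = sum(patterns.values())
    return np.array([c / total for c in patterns.values()])

def permutation_entropy(x, d=3):
    """Shannon entropy of the ordinal-pattern distribution,
    normalized to [0, 1] by its maximum log(d!)."""
    p = ordinal_probs(x, d)
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(d)))

rng = np.random.default_rng(0)
noise = rng.random(10000)
print(permutation_entropy(np.arange(100.0)))  # 0: monotone series, one pattern
print(permutation_entropy(noise))             # close to 1 for white noise
```

Replacing the Shannon entropy of `p` by its Rényi counterpart and sweeping the order parameter is what traces out the parametric curve studied in this record.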

  14. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
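
The quantum quantities this record builds on can be sketched directly from eigendecompositions: the von Neumann entropy S(ρ) = -Tr ρ log ρ and the relative quantum entropy S(ρ‖σ) = Tr ρ(log ρ - log σ), whose negative serves as the penalty function. A numerical illustration under the assumption that σ is full rank:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # 0 log 0 := 0
    return float(-(w * np.log(w)).sum())

def quantum_relative_entropy(rho, sigma):
    """S(rho||sigma) = Tr[rho (log rho - log sigma)]; a small jitter keeps
    the matrix logarithm finite if rho is rank deficient."""
    def logm(a):
        w, v = np.linalg.eigh(a)
        return v @ np.diag(np.log(w)) @ v.conj().T
    jitter = 1e-15 * np.eye(len(rho))
    return float(np.trace(rho @ (logm(rho + jitter) - logm(sigma))).real)

mixed = np.eye(2) / 2  # maximally mixed qubit
print(von_neumann_entropy(mixed))             # log 2 ≈ 0.693 nats
print(quantum_relative_entropy(mixed, mixed)) # ≈ 0: divergence from itself
```

Replacing the classical Kullback-Leibler divergence by this quantum relative entropy is precisely the substitution that widens the class of penalty functions discussed in the record.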

  15. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters.

    PubMed

    Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert

    2010-10-01

Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of each feature was calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, and therefore they could not be considered good candidates for tumor segmentation.

  16. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    PubMed Central

    GALAVIS, PAULINA E.; HOLLENSEN, CHRISTIAN; JALLOW, NGONEH; PALIWAL, BHUDATT; JERAJ, ROBERT

    2014-01-01

Background Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45–60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of each feature was calculated with respect to the average value. Results The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, and therefore they could not be considered good candidates for tumor segmentation. PMID:25250311

  17. Optimal behavior of viscoelastic flow at resonant frequencies.

    PubMed

    Lambert, A A; Ibáñez, G; Cuevas, S; del Río, J A

    2004-11-01

    The global entropy generation rate in the zero-mean oscillatory flow of a Maxwell fluid in a pipe is analyzed with the aim of determining its behavior at resonant flow conditions. This quantity is calculated explicitly using the analytic expression for the velocity field and assuming isothermal conditions. The global entropy generation rate shows well-defined peaks at the resonant frequencies where the flow displays maximum velocities. It was found that resonant frequencies can be considered optimal in the sense that they maximize the power transmitted to the pulsating flow at the expense of maximum dissipation.

  18. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity.
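
Editor's note: the relative wavelet energies and wavelet entropy used here reduce to a Shannon entropy over the per-level energy fractions of a wavelet decomposition. A minimal sketch, assuming the detail coefficients of each resolution level have already been computed (e.g. with PyWavelets); normalizing by the log of the number of levels is one common convention:

```python
import math

def wavelet_entropy(coeffs_by_level):
    """Relative wavelet energies and normalized wavelet entropy.

    `coeffs_by_level` holds the detail coefficients of each resolution
    level; the wavelet transform itself is assumed to be done elsewhere.
    Needs at least two levels for the normalization to make sense.
    """
    energies = [sum(c * c for c in level) for level in coeffs_by_level]
    total = sum(energies)
    p = [e / total for e in energies]                  # relative wavelet energies
    h = -sum(pj * math.log(pj) for pj in p if pj > 0)  # Shannon form
    return p, h / math.log(len(p))                     # normalized to [0, 1]
```

Energy spread evenly over the levels gives entropy 1 (maximal disorder); energy concentrated in one band, as during the recruitment rhythm, drives it toward 0.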

  19. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
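
Editor's note: cyclic maximization on the dual is closely related to classical multiplicative schemes. As a toy stand-in (not the paper's multigrid/Fenchel algorithm): MART, the multiplicative algebraic reconstruction technique, started from a uniform image is known to converge, for consistent 0/1 projection systems, to the maximum-entropy image satisfying the projection constraints:

```python
def mart(A, b, n, iters=500):
    """Multiplicative ART for the system A x = b, x >= 0.

    A is a list of 0/1 rows (which pixels each ray crosses) and b the
    measured projection sums. Starting from a uniform image, for
    consistent binary systems the iterates converge to the image that
    maximizes the Boltzmann-Shannon entropy subject to A x = b.
    """
    x = [1.0] * n
    for _ in range(iters):
        for row, bi in zip(A, b):
            dot = sum(a * xj for a, xj in zip(row, x))
            scale = bi / dot
            # rescale only the pixels this ray touches
            x = [xj * (scale if a else 1.0) for a, xj in zip(row, x)]
    return x
```

For a 2×2 image with row sums (3, 1) and column sums (2, 2), the iterates reach the product-form image (1.5, 1.5, 0.5, 0.5), the entropy maximizer among all non-negative solutions; with row/column projections MART reduces to iterative proportional fitting.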

  20. Textural features of dynamic contrast-enhanced MRI derived model-free and model-based parameter maps in glioma grading.

    PubMed

    Xie, Tian; Chen, Xiao; Fang, Jingqin; Kang, Houyi; Xue, Wei; Tong, Haipeng; Cao, Peng; Wang, Sumei; Yang, Yizeng; Zhang, Weiguo

    2018-04-01

    Presurgical glioma grading by dynamic contrast-enhanced MRI (DCE-MRI) has unresolved issues. The aim of this study was to investigate the ability of textural features derived from pharmacokinetic model-based or model-free parameter maps of DCE-MRI in discriminating between different grades of gliomas, and their correlation with pathological index. Retrospective. Forty-two adults with brain gliomas. 3.0T, including conventional anatomic sequences and DCE-MRI sequences (variable flip angle T1-weighted imaging and three-dimensional gradient echo volumetric imaging). Regions of interest on the cross-sectional images with maximal tumor lesion. Five commonly used textural features, including Energy, Entropy, Inertia, Correlation, and Inverse Difference Moment (IDM), were generated. All textural features of model-free parameters (initial area under curve [IAUC], maximal signal intensity [Max SI], maximal up-slope [Max Slope]) could effectively differentiate between grade II (n = 15), grade III (n = 13), and grade IV (n = 14) gliomas (P < 0.05). Two textural features, Entropy and IDM, of four DCE-MRI parameters, including Max SI, Max Slope (model-free parameters), vp (Extended Tofts), and vp (Patlak) could differentiate grade III and IV gliomas (P < 0.01) in four measurements. Both Entropy and IDM of Patlak-based K trans and vp could differentiate grade II (n = 15) from III (n = 13) gliomas (P < 0.01) in four measurements. No textural features of any DCE-MRI parameter maps could discriminate between subtypes of grade II and III gliomas (P < 0.05). Both Entropy and IDM of Extended Tofts- and Patlak-based vp showed highest area under curve in discriminating between grade III and IV gliomas. However, intraclass correlation coefficient (ICC) of these features revealed relatively lower inter-observer agreement. 
No significant correlation was found between microvascular density and textural features, compared with a moderate correlation found between cellular proliferation index and those features. Textural features of DCE-MRI parameter maps displayed a good ability in glioma grading. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1099-1111. © 2017 International Society for Magnetic Resonance in Medicine.
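
Editor's note: the second-order features named above (Energy, Entropy, IDM) are all functionals of a grey-level co-occurrence matrix (GLCM). A minimal sketch; the single offset, symmetrization, and number of grey levels are common conventions assumed here, not the study's exact settings:

```python
import math

def glcm_features(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix and three of the cited features:
    Energy, Entropy and Inverse Difference Moment (IDM).

    `img` is a 2-D list of integer grey levels in [0, levels); the offset
    (dx, dy) selects the pixel-pair direction.
    """
    counts = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y][x], img[y + dy][x + dx]
            counts[i][j] += 1
            counts[j][i] += 1  # symmetric GLCM
    total = sum(map(sum, counts))
    p = [[c / total for c in row] for row in counts]
    energy = sum(pij ** 2 for row in p for pij in row)
    entropy = -sum(pij * math.log2(pij) for row in p for pij in row if pij > 0)
    idm = sum(pij / (1 + (i - j) ** 2)
              for i, row in enumerate(p) for j, pij in enumerate(row))
    return energy, entropy, idm
```

A constant image gives Energy 1, Entropy 0 and IDM 1; texture spreads the co-occurrence mass off the diagonal, raising Entropy and lowering Energy and IDM.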

  1. Metastable high-entropy dual-phase alloys overcome the strength-ductility trade-off.

    PubMed

    Li, Zhiming; Pradeep, Konda Gokuldoss; Deng, Yun; Raabe, Dierk; Tasan, Cemal Cem

    2016-06-09

    Metals have been mankind's most essential materials for thousands of years; however, their use is affected by ecological and economical concerns. Alloys with higher strength and ductility could alleviate some of these concerns by reducing weight and improving energy efficiency. However, most metallurgical mechanisms for increasing strength lead to ductility loss, an effect referred to as the strength-ductility trade-off. Here we present a metastability-engineering strategy in which we design nanostructured, bulk high-entropy alloys with multiple compositionally equivalent high-entropy phases. High-entropy alloys were originally proposed to benefit from phase stabilization through entropy maximization. Yet here, motivated by recent work that relaxes the strict restrictions on high-entropy alloy compositions by demonstrating the weakness of this connection, the concept is overturned. We decrease phase stability to achieve two key benefits: interface hardening due to a dual-phase microstructure (resulting from reduced thermal stability of the high-temperature phase); and transformation-induced hardening (resulting from the reduced mechanical stability of the room-temperature phase). This combines the best of two worlds: extensive hardening due to the decreased phase stability known from advanced steels and massive solid-solution strengthening of high-entropy alloys. In our transformation-induced plasticity-assisted, dual-phase high-entropy alloy (TRIP-DP-HEA), these two contributions lead respectively to enhanced trans-grain and inter-grain slip resistance, and hence, increased strength. Moreover, the increased strain hardening capacity that is enabled by dislocation hardening of the stable phase and transformation-induced hardening of the metastable phase produces increased ductility. This combined increase in strength and ductility distinguishes the TRIP-DP-HEA alloy from other recently developed structural materials. 
This metastability-engineering strategy should thus usefully guide design in the near-infinite compositional space of high-entropy alloys.

  2. Metastable high-entropy dual-phase alloys overcome the strength-ductility trade-off

    NASA Astrophysics Data System (ADS)

    Li, Zhiming; Pradeep, Konda Gokuldoss; Deng, Yun; Raabe, Dierk; Tasan, Cemal Cem

    2016-06-01

    Metals have been mankind’s most essential materials for thousands of years; however, their use is affected by ecological and economical concerns. Alloys with higher strength and ductility could alleviate some of these concerns by reducing weight and improving energy efficiency. However, most metallurgical mechanisms for increasing strength lead to ductility loss, an effect referred to as the strength-ductility trade-off. Here we present a metastability-engineering strategy in which we design nanostructured, bulk high-entropy alloys with multiple compositionally equivalent high-entropy phases. High-entropy alloys were originally proposed to benefit from phase stabilization through entropy maximization. Yet here, motivated by recent work that relaxes the strict restrictions on high-entropy alloy compositions by demonstrating the weakness of this connection, the concept is overturned. We decrease phase stability to achieve two key benefits: interface hardening due to a dual-phase microstructure (resulting from reduced thermal stability of the high-temperature phase); and transformation-induced hardening (resulting from the reduced mechanical stability of the room-temperature phase). This combines the best of two worlds: extensive hardening due to the decreased phase stability known from advanced steels and massive solid-solution strengthening of high-entropy alloys. In our transformation-induced plasticity-assisted, dual-phase high-entropy alloy (TRIP-DP-HEA), these two contributions lead respectively to enhanced trans-grain and inter-grain slip resistance, and hence, increased strength. Moreover, the increased strain hardening capacity that is enabled by dislocation hardening of the stable phase and transformation-induced hardening of the metastable phase produces increased ductility. This combined increase in strength and ductility distinguishes the TRIP-DP-HEA alloy from other recently developed structural materials. 
This metastability-engineering strategy should thus usefully guide design in the near-infinite compositional space of high-entropy alloys.

  3. It is not the entropy you produce, rather, how you produce it

    PubMed Central

    Volk, Tyler; Pauluis, Olivier

    2010-01-01

    The principle of maximum entropy production (MEP) seeks to better understand a large variety of the Earth's environmental and ecological systems by postulating that processes far from thermodynamic equilibrium will ‘adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate’. Our aim in this ‘outside view’, invited by Axel Kleidon, is to focus on what we think is an outstanding challenge for MEP and for irreversible thermodynamics in general: making specific predictions about the relative contribution of individual processes to entropy production. Using studies that compared entropy production in the atmosphere of a dry versus humid Earth, we show that two systems might have the same entropy production rate but very different internal dynamics of dissipation. Using the results of several of the papers in this special issue and a thought experiment, we show that components of life-containing systems can evolve to either lower or raise the entropy production rate. Our analysis makes explicit fundamental questions for MEP that should be brought into focus: can MEP predict not just the overall state of entropy production of a system but also the details of the sub-systems of dissipaters within the system? Which fluxes of the system are those that are most likely to be maximized? How is it possible for MEP theory to be so domain-neutral that it can claim to apply equally to both purely physical–chemical systems and also systems governed by the ‘laws’ of biological evolution? We conclude that the principle of MEP needs to take on the issue of exactly how entropy is produced. PMID:20368249

  4. Repetitive transient extraction for machinery fault diagnosis using multiscale fractional order entropy infogram

    NASA Astrophysics Data System (ADS)

    Xu, Xuefang; Qiao, Zijian; Lei, Yaguo

    2018-03-01

    The presence of repetitive transients in vibration signals is a typical symptom of local faults of rotating machinery. The infogram was developed to extract the repetitive transients from vibration signals based on Shannon entropy. Unfortunately, the Shannon entropy is maximized for random processes and unable to quantify the repetitive transients buried in heavy random noise. In addition, the vibration signals always contain multiple intrinsic oscillatory modes due to interaction and coupling effects between machine components. Under this circumstance, high values of Shannon entropy appear in several frequency bands, or no high value appears in the optimal frequency band, and the infogram becomes difficult to interpret. It thus also becomes difficult to select the optimal frequency band for extracting the repetitive transients. To solve these problems, the multiscale fractional order entropy (MSFE) infogram is proposed in this paper. With the help of the MSFE infogram, the complexity and nonlinear signatures of the vibration signals can be evaluated by quantifying spectral entropy over a range of scales in the fractional domain. Moreover, the similarity tolerance of the MSFE infogram is helpful for assessing the regularity of signals. A simulation and two experiments concerning a locomotive bearing and a wind turbine gear are used to validate the MSFE infogram. The results demonstrate that the MSFE infogram is more robust to heavy noise than the infogram, and that high values appear only in the optimal frequency band for repetitive transient extraction.

  5. Entropy measures detect increased movement variability in resistance training when elite rugby players use the ball.

    PubMed

    Moras, Gerard; Fernández-Valdés, Bruno; Vázquez-Guerrero, Jairo; Tous-Fajardo, Julio; Exel, Juliana; Sampaio, Jaime

    2018-05-24

    This study described the variability in acceleration during a resistance training task performed on horizontal inertial flywheels without (NOBALL) or with (BALL) the constraint of catching and throwing a rugby ball. Twelve elite rugby players (mean±SD: age 25.6±3.0 years, height 1.82±0.07 m, weight 94.0±9.9 kg) performed a resistance training task in both conditions (NOBALL and BALL). Players had five minutes of a standardized warm-up, followed by two series of six repetitions of both conditions: in the first three repetitions the intensity was progressively increased, while the last three were performed at maximal voluntary effort. Thereafter, the participants performed two series of eight repetitions of each condition on two days and in a random order, with a minimum of 10 min between series. The structure of variability was analysed using non-linear measures of entropy. Mean changes (%; ±90% CL) of 4.64; ±3.1 g for mean acceleration and 39.48; ±36.63 a.u. for sample entropy indicated likely and very likely increases in the BALL condition. Multiscale entropy also showed higher unpredictability of acceleration under the BALL condition, especially at higher time scales. The application of match-specific constraints in resistance training for rugby players elicits different amounts of variability of body acceleration across multiple physiological time scales. Understanding the non-linear processes inherent to the manipulation of resistance training variables with constraints, and their motor adaptations, may help coaches and trainers to enhance the effectiveness of physical training and, ultimately, better understand and maximize sports performance. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
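
Editor's note: sample entropy, the measure reported above, is the negative log of the conditional probability that template vectors matching for m points (within tolerance r) also match for m + 1. A minimal brute-force sketch; the m = 2, r = 0.2 defaults are the usual convention, assumed here (production code scales r by the signal's SD and uses a faster neighbour search), and the template counting is slightly simplified relative to the textbook definition:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln of the conditional probability that sequences
    matching for m points (Chebyshev distance <= r) also match for m + 1.
    Self-matches are excluded."""
    def matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Regular signals score near 0; irregular signals (as produced by the BALL constraint) score higher.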

  6. Finding the quantum thermoelectric with maximal efficiency and minimal entropy production at given power output

    NASA Astrophysics Data System (ADS)

    Whitney, Robert S.

    2015-03-01

    We investigate the nonlinear scattering theory for quantum systems with strong Seebeck and Peltier effects, and consider their use as heat engines and refrigerators with finite power outputs. This paper gives detailed derivations of the results summarized in a previous paper [R. S. Whitney, Phys. Rev. Lett. 112, 130601 (2014), 10.1103/PhysRevLett.112.130601]. It shows how to use the scattering theory to find (i) the quantum thermoelectric with maximum possible power output, and (ii) the quantum thermoelectric with maximum efficiency at given power output. The latter corresponds to a minimal entropy production at that power output. These quantities are of quantum origin since they depend on system size over electronic wavelength, and so have no analog in classical thermodynamics. The maximal efficiency coincides with Carnot efficiency at zero power output, but decreases with increasing power output. This gives a fundamental lower bound on entropy production, which means that reversibility (in the thermodynamic sense) is impossible for finite power output. The suppression of efficiency by (nonlinear) phonon and photon effects is addressed in detail; when these effects are strong, maximum efficiency coincides with maximum power. Finally, we show in particular limits (typically without magnetic fields) that relaxation within the quantum system does not allow the system to exceed the bounds derived for relaxation-free systems, however, a general proof of this remains elusive.

  7. Cleavage Entropy as Quantitative Measure of Protease Specificity

    PubMed Central

    Fuchs, Julian E.; von Grafenstein, Susanne; Huber, Roland G.; Margreiter, Michael A.; Spitzer, Gudrun M.; Wallnoefer, Hannes G.; Liedl, Klaus R.

    2013-01-01

    A purely information theory-guided approach to quantitatively characterize protease specificity is established. We calculate an entropy value for each protease subpocket based on sequences of cleaved substrates extracted from the MEROPS database. We compare our results with known subpocket specificity profiles for individual proteases and protease groups (e.g. serine proteases, metallo proteases) and reflect them quantitatively. Summation of subpocket-wise cleavage entropy contributions yields a measure for overall protease substrate specificity. This total cleavage entropy allows ranking of different proteases with respect to their specificity, separating unspecific digestive enzymes showing high total cleavage entropy from specific proteases involved in signaling cascades. The development of a quantitative cleavage entropy score allows an unbiased comparison of subpocket-wise and overall protease specificity. Thus, it enables assessment of relative importance of physicochemical and structural descriptors in protease recognition. We present an exemplary application of cleavage entropy in tracing substrate specificity in protease evolution. This highlights the wide range of substrate promiscuity within homologue proteases and hence the heavy impact of a limited number of mutations on individual substrate specificity. PMID:23637583
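
Editor's note: the subpocket-wise entropy described here is a per-position Shannon entropy over residue frequencies in aligned cleaved-substrate sequences, summed into a total cleavage entropy. A minimal sketch; normalizing each position by log2(20) mirrors the spirit of the paper, but the exact convention is an assumption:

```python
import math

def cleavage_entropy(substrates):
    """Subpocket-wise Shannon entropies and their sum (total cleavage
    entropy). `substrates` are aligned sequences, one residue per
    subpocket; each position is normalized by log2(20) so a fully
    promiscuous pocket scores 1 and a fully specific one scores 0.
    """
    length = len(substrates[0])
    per_pocket = []
    for pos in range(length):
        residues = [s[pos] for s in substrates]
        n = len(residues)
        freqs = {r: residues.count(r) / n for r in set(residues)}
        h = -sum(p * math.log2(p) for p in freqs.values()) / math.log2(20)
        per_pocket.append(h)
    return per_pocket, sum(per_pocket)
```

A specific protease yields low per-pocket values and a low total; an unspecific digestive enzyme accumulates values near 1 across pockets.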

  8. Calculation of Cyclodextrin Binding Affinities: Energy, Entropy, and Implications for Drug Design

    PubMed Central

    Chen, Wei; Chang, Chia-En; Gilson, Michael K.

    2004-01-01

    The second generation Mining Minima method yields binding affinities accurate to within 0.8 kcal/mol for the associations of α-, β-, and γ-cyclodextrin with benzene, resorcinol, flurbiprofen, naproxen, and nabumetone. These calculations require hours to a day on a commodity computer. The calculations also indicate that the changes in configurational entropy upon binding oppose association by as much as 24 kcal/mol and result primarily from a narrowing of energy wells in the bound versus the free state, rather than from a drop in the number of distinct low-energy conformations on binding. Also, the configurational entropy is found to vary substantially among the bound conformations of a given cyclodextrin-guest complex. This result suggests that the configurational entropy must be accounted for to reliably rank docked conformations in both host-guest and ligand-protein complexes. In close analogy with the common experimental observation of entropy-enthalpy compensation, the computed entropy changes show a near-linear relationship with the changes in mean potential plus solvation energy. PMID:15339804

  9. Microstructural Design for Improving Ductility of An Initially Brittle Refractory High Entropy Alloy.

    PubMed

    Soni, V; Senkov, O N; Gwalani, B; Miracle, D B; Banerjee, R

    2018-06-11

    Typically, refractory high-entropy alloys (RHEAs), comprising a two-phase ordered B2 + BCC microstructure, exhibit extraordinarily high yield strengths, but poor ductility at room temperature, limiting their engineering application. The poor ductility is attributed to the continuous matrix being the ordered B2 phase in these alloys. This paper presents a novel approach to microstructural engineering of RHEAs to form an "inverted" BCC + B2 microstructure with discrete B2 precipitates dispersed within a continuous BCC matrix, resulting in improved room temperature compressive ductility, while maintaining high yield strength at both room and elevated temperature.

  10. Maximum Entropy for the International Division of Labor.

    PubMed

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
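
Editor's note: the maximization described here, entropy of export shares subject to a fixed expected product complexity, is the textbook constrained maxent problem: the solution is exponential in complexity, p_i ∝ exp(-β c_i), with β playing the role of the model's single tuned parameter (our reading, an assumption). A generic sketch using bisection on β:

```python
import math

def maxent_shares(complexity, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy shares subject to a fixed expected complexity.

    Maximizing -sum p_i ln p_i with sum p_i = 1 and sum p_i c_i = target
    gives p_i proportional to exp(-beta * c_i); beta is found by
    bisection (target_mean must lie strictly between min and max of
    `complexity`).
    """
    def shares(beta):
        w = [math.exp(-beta * c) for c in complexity]
        z = sum(w)
        return [wi / z for wi in w]
    def mean(beta):
        return sum(p * c for p, c in zip(shares(beta), complexity))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:   # mean complexity decreases as beta grows
            lo = mid
        else:
            hi = mid
    return shares(0.5 * (lo + hi))
```

When the target equals the unconstrained mean, the solution is uniform (maximal diversification); tightening the constraint toward low complexity concentrates shares on the simpler products.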

  11. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country’s strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product’s complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country’s strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052

  12. Gravitational vacuum condensate stars.

    PubMed

    Mazur, Pawel O; Mottola, Emil

    2004-06-29

    A new final state of gravitational collapse is proposed. By extending the concept of Bose-Einstein condensation to gravitational systems, a cold, dark, compact object with an interior de Sitter condensate p_v = -ρ_v and an exterior Schwarzschild geometry of arbitrary total mass M is constructed. These regions are separated by a shell with a small but finite proper thickness l of fluid with equation of state p = +ρ, replacing both the Schwarzschild and de Sitter classical horizons. The new solution has no singularities, no event horizons, and a global time. Its entropy is maximized under small fluctuations and is given by the standard hydrodynamic entropy of the thin shell, which is of the order k_B lMc/ℏ, instead of the Bekenstein-Hawking entropy formula, S_BH = 4π k_B GM²/ℏc. Hence, unlike black holes, the new solution is thermodynamically stable and has no information paradox.

  13. On the entropy function in sociotechnical systems

    PubMed Central

    Montroll, Elliott W.

    1981-01-01

    The entropy function H = -Σ p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion. PMID:16593136
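
Editor's note: the maximization inferred here can be written out explicitly. Among densities of y = log(price) with fixed normalization and dispersion, the entropy functional is maximized by a Gaussian in y, i.e. a lognormal price distribution. A standard calculus-of-variations sketch:

```latex
\max_{p}\; H[p] = -\int p(y)\,\ln p(y)\,\mathrm{d}y
\quad\text{s.t.}\quad
\int p\,\mathrm{d}y = 1,
\qquad
\int (y-\mu)^2\,p\,\mathrm{d}y = \sigma^2 .
```

Setting the functional derivative of the Lagrangian to zero,

```latex
-\ln p(y) - 1 - \lambda_0 - \lambda_2\,(y-\mu)^2 = 0
\;\;\Longrightarrow\;\;
p(y) \propto e^{-\lambda_2 (y-\mu)^2},
```

so the constrained maximizer is the normal density in log price, with λ0 and λ2 fixed by the two constraints.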

  14. On the entropy function in sociotechnical systems.

    PubMed

    Montroll, E W

    1981-12-01

    The entropy function H = -Σ p_j log p_j (p_j being the probability of a system being in state j) and its continuum analogue H = -∫ p(x) log p(x) dx are fundamental in Shannon's theory of information transfer in communication systems. It is here shown that the discrete form of H also appears naturally in single-lane traffic flow theory. In merchandising, goods flow from a wholesaler through a retailer to a customer. Certain features of the process may be deduced from price distribution functions derived from Sears Roebuck and Company catalogues. It is found that the dispersion in the logarithm of catalogue prices of a given year has remained about constant, independently of the year, for over 75 years. From this it may be inferred that the continuum entropy function for the variable logarithm of price had inadvertently, through Sears Roebuck policies, been maximized for that firm subject to the observed dispersion.

  15. Maximum Renyi entropy principle for systems with power-law Hamiltonians.

    PubMed

    Bashkirov, A G

    2004-09-24

    The Renyi distribution ensuring the maximum of Renyi entropy is investigated for a particular case of a power-law Hamiltonian. Both Lagrange parameters α and β can be eliminated. It is found that β does not depend on the Renyi parameter q and can be expressed in terms of the exponent κ of the power-law Hamiltonian and an average energy U. The Renyi entropy for the resulting Renyi distribution reaches its maximal value at q = 1/(1+κ) that can be considered as the most probable value of q when we have no additional information on the behavior of the stochastic process. The Renyi distribution for such q becomes a power-law distribution with the exponent -(κ+1). When q = 1/(1+κ)+ε (0
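
Editor's note: for reference, the quantity being maximized is the Renyi entropy H_q = ln(Σ_i p_i^q)/(1 − q), which reduces to the Shannon entropy as q → 1. A minimal sketch; the q = 1 limit is handled as a special case rather than numerically:

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy H_q = ln(sum_i p_i^q) / (1 - q) of a discrete
    distribution p; the q -> 1 limit (Shannon entropy) is special-cased."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1.0 - q)
```

For a uniform distribution H_q = ln n for every q; for non-uniform distributions H_q is non-increasing in q.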

  16. Mixture models with entropy regularization for community detection in networks

    NASA Astrophysics Data System (ADS)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM within the EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic and real networks has shown that the proposed model EMM is superior to the state-of-the-art methods.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehesa, J.S.; Martinez-Finkelshtein, A.; Sorokin, V.N.

    The asymptotics of the Boltzmann-Shannon information entropy as well as the Renyi entropy for the quantum probability density of a single-particle system with a confining (i.e., bounded below) power-type potential V(x) = x^{2k}, with k ∈ ℕ and x ∈ ℝ, is investigated in the position and momentum spaces within the semiclassical (WKB) approximation. It is found that for highly excited states both physical entropies, as well as their sum, have a logarithmic dependence on the quantum number not only when k = 1 (harmonic oscillator), but also for any fixed k. As a by-product, the extremal case k → ∞ (the infinite well potential) is also rigorously analyzed. It is shown that not only the position-space entropy has the same constant value for all quantum states, which is a known result, but also that the momentum-space entropy is constant for highly excited states.

  18. Entropy generation in Gaussian quantum transformations: applying the replica method to continuous-variable quantum information theory

    NASA Astrophysics Data System (ADS)

    Gagatsos, Christos N.; Karanikas, Alexandros I.; Kordas, Georgios; Cerf, Nicolas J.

    2016-02-01

    In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modelling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the quantum entropy generated by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution, while it is vital to the characterisation of quantum communication via bosonic channels. Here we overcome this difficulty by adapting the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to continuous-variable quantum information theory, where it enables accessing entropies in an optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and a Fock state, which yields a surprisingly simple, yet unknown analytical expression.

  19. Maximal yields from multispecies fisheries systems: rules for systems with multiple trophic levels.

    PubMed

    Matsuda, Hiroyuki; Abrams, Peter A

    2006-02-01

    Increasing centralization of the control of fisheries combined with increased knowledge of food-web relationships is likely to lead to attempts to maximize economic yield from entire food webs. With the exception of predator-prey systems, we lack any analysis of the nature of such yield-maximizing strategies. We use simple food-web models to investigate the nature of yield- or profit-maximizing exploitation of communities including two types of three-species food webs and a variety of six-species systems with as many as five trophic levels. These models show that, for most webs, relatively few species are harvested at equilibrium and that a significant fraction of the species is lost from the web. These extinctions occur for two reasons: (1) indirect effects due to harvesting of species that had positive effects on the extinct species, and (2) intentional eradication of species that are not themselves valuable, but have negative effects on more valuable species. In most cases, the yield-maximizing harvest involves taking only species from one trophic level. In no case was an unharvested top predator part of the yield-maximizing strategy. Analyses reveal that the existence of direct density dependence in consumers has a large effect on the nature of the optimal harvest policy, typically resulting in harvest of a larger number of species. A constraint that all species must be retained in the system (a "constraint of biodiversity conservation") usually increases the number of species and trophic levels harvested at the yield-maximizing policy. The reduction in total yield caused by such a constraint is modest for most food webs but can be over 90% in some cases. Independent harvesting of species within the web can also cause extinctions but is less likely to do so.

  20. Effect of site disorder on the ground state of a frustrated spin dimer quantum magnet

    NASA Astrophysics Data System (ADS)

    Hristov, Alexander; Shapiro, Maxwell; Lee, Minseong; Rodenbach, Linsey; Choi, Eun Sang; Park, Ju-Hyun; Munsie, Tim; Luke, Graeme; Fisher, Ian

    Ba3Mn2O8 is a geometrically frustrated spin dimer quantum magnet. Pairs of Mn5+ (S = 1) ions are strongly coupled via antiferromagnetic exchange to yield a singlet ground state, with excited triplet and quintuplet states. Isovalent substitution of V5+ (S = 0) for Mn breaks dimers, resulting in unpaired S = 1 spins, the ground state of which is investigated here for compositions spanning the range 0 <= x <= 1 of Ba3(Mn1-xVx)2O8. From a theoretical perspective, for dimers occupying an unfrustrated bipartite lattice, such site disorder is anticipated to yield long-range magnetism for unpaired Mn spins both in the dilute limit where x is small, a phenomenon known as order-by-disorder, and in the proximity of x = 1/2, where the system is maximally disordered and close to a percolation threshold. In this frustrated system, however, our experiments find evidence of spin freezing for six compositions 0.05 <= x <= 0.85. In this regime, we find entropy removed at an energy scale independent of the freezing temperature. We discuss the possibility of a spin-glass to random singlet transition for critical compositions in the two dilute limits x -> 0 and x -> 1. NSF DMR-Award 1205165.

  1. Cardiorespiratory Coordination in Repeated Maximal Exercise

    PubMed Central

    Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L.; Balagué, Natàlia

    2017-01-01

    Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. 
These results highlight the value of CRC evaluation in the assessment and interpretation of cardiorespiratory exercise testing. PMID:28638349

  2. Fatigue reduces the complexity of knee extensor torque fluctuations during maximal and submaximal intermittent isometric contractions in man

    PubMed Central

    Pethick, Jamie; Winter, Samantha L; Burnley, Mark

    2015-01-01

    Neuromuscular fatigue increases the amplitude of fluctuations in torque output during isometric contractions, but the effect of fatigue on the temporal structure, or complexity, of these fluctuations is not known. We hypothesised that fatigue would result in a loss of temporal complexity and a change in fractal scaling of the torque signal during isometric knee extensor exercise. Eleven healthy participants performed a maximal test (5 min of intermittent maximal voluntary contractions, MVCs), and a submaximal test (contractions at a target of 40% MVC performed until task failure), each with a 60% duty factor (6 s contraction, 4 s rest). Torque and surface EMG signals were sampled continuously. Complexity and fractal scaling of torque were quantified by calculating approximate entropy (ApEn), sample entropy (SampEn) and the detrended fluctuation analysis (DFA) scaling exponent α. Fresh submaximal contractions were more complex than maximal contractions (mean ± SEM, submaximal vs. maximal: ApEn 0.65 ± 0.09 vs. 0.15 ± 0.02; SampEn 0.62 ± 0.09 vs. 0.14 ± 0.02; DFA α 1.35 ± 0.04 vs. 1.55 ± 0.03; all P < 0.005). Fatigue reduced the complexity of submaximal contractions (ApEn to 0.24 ± 0.05; SampEn to 0.22 ± 0.04; DFA α to 1.55 ± 0.03; all P < 0.005) and maximal contractions (ApEn to 0.10 ± 0.02; SampEn to 0.10 ± 0.02; DFA α to 1.63 ± 0.02; all P < 0.01). This loss of complexity and shift towards Brownian-like noise suggests that as well as reducing the capacity to produce torque, fatigue reduces the neuromuscular system's adaptability to external perturbations. PMID:25664928
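    The complexity statistics used above are straightforward to reproduce. Below is a minimal pure-Python sketch of sample entropy (SampEn); it is a simplified variant of the standard definition (template counts are not trimmed identically for both lengths), and the sine-plus-noise signals are invented for illustration, not data from the study:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): -ln(A/B), where B counts pairs of
    matching templates of length m and A those of length m + 1
    (Chebyshev distance below r times the signal's standard deviation),
    self-matches excluded."""
    n = len(x)
    mu = sum(x) / n
    tol = r * math.sqrt(sum((v - mu) ** 2 for v in x) / n)

    def count_matches(mm):
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < tol:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")   # no matches: entropy is undefined/maximal
    return -math.log(a / b)

# A regular (periodic) signal scores lower than the same signal with
# added noise, i.e. lower SampEn means a more predictable output.
random.seed(0)
regular = [math.sin(0.5 * i) for i in range(200)]
noisy = [v + random.gauss(0, 0.5) for v in regular]
print(sample_entropy(regular) < sample_entropy(noisy))
```

In the study's terms, the clean sinusoid plays the role of a low-complexity (fatigued) torque signal and the noisy one a high-complexity (fresh) signal.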

  3. An Information Transmission Measure for the Analysis of Effective Connectivity among Cortical Neurons

    PubMed Central

    Law, Andrew J.; Sharma, Gaurav; Schieber, Marc H.

    2014-01-01

    We present a methodology for detecting effective connections between simultaneously recorded neurons using an information transmission measure to identify the presence and direction of information flow from one neuron to another. Using simulated and experimentally-measured data, we evaluate the performance of our proposed method and compare it to the traditional transfer entropy approach. In simulations, our measure of information transmission outperforms transfer entropy in identifying the effective connectivity structure of a neuron ensemble. For experimentally recorded data, where ground truth is unavailable, the proposed method also yields a more plausible connectivity structure than transfer entropy. PMID:21096617
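    The baseline the authors compare against, transfer entropy, can be written compactly for binary spike-like sequences with a history length of 1; the coupled series below are synthetic, built so that x drives y with a one-step delay:

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary sequences, history length 1:
    sum over p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ]."""
    triples, pairs_yx, pairs_yy, singles_y = Counter(), Counter(), Counter(), Counter()
    for t in range(len(y) - 1):
        triples[(y[t + 1], y[t], x[t])] += 1
        pairs_yx[(y[t], x[t])] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        singles_y[y[t]] += 1
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# y copies x with a one-step delay, so information flows x -> y only.
random.seed(1)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y) > transfer_entropy(y, x))
```

With this construction TE(X→Y) is close to 1 bit while TE(Y→X) stays near zero, which is the kind of directional asymmetry both transfer entropy and the proposed measure are meant to detect.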

  4. Solid-solution CrCoCuFeNi high-entropy alloy thin films synthesized by sputter deposition

    DOE PAGES

    An, Zhinan; Jia, Haoling; Wu, Yueying; ...

    2015-05-04

    The concept of high configurational entropy requires that high-entropy alloys (HEAs) yield single-phase solid solutions; however, phase separation is quite common in bulk HEAs. A five-element alloy, CrCoCuFeNi, was deposited via radio frequency magnetron sputtering and confirmed to be a single-phase solid solution through high-energy synchrotron X-ray diffraction, energy-dispersive spectroscopy, wavelength-dispersive spectroscopy, and transmission electron microscopy. The formation of the solid-solution phase is presumed to be due to the high cooling rate of the sputter-deposition process.

  5. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
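    The quantities involved, source entropy H, expected codeword length L, and redundancy L − H, are easy to reproduce for a small source; the five-symbol distribution below is an arbitrary example, not one from the report:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary Huffman code for the given pmf."""
    # Each heap entry: (probability, tiebreak id, symbol indices in this subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:        # every merge adds one bit to these symbols
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]
lengths = huffman_lengths(probs)
H = -sum(p * log2(p) for p in probs)          # source entropy (bits)
L = sum(p, * (0,))[0] if False else sum(p * l for p, l in zip(probs, lengths))
redundancy = L - H                            # classical bound: 0 <= L - H < 1
print(lengths, round(redundancy, 3))
```

The redundancy here lands much closer to 0 than to 1, consistent with the report's observation about optimal prefix codes in practice.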

  6. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
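    The relevance criterion described above, the Shannon entropy of the model-averaged outcome distribution, can be shown with a brute-force toy search (the kind of baseline nested entropy sampling is meant to beat); the two threshold "models" and candidate stimuli are invented for illustration:

```python
from math import log2

def shannon_entropy(p):
    return -sum(q * log2(q) for q in p if q > 0)

def outcome_entropy(experiment, models, weights):
    """Entropy of the outcome distribution averaged over probable models."""
    n_outcomes = len(models[0](experiment))
    mixed = [0.0] * n_outcomes
    for w, model in zip(weights, models):
        for k, q in enumerate(model(experiment)):
            mixed[k] += w * q
    return shannon_entropy(mixed)

# Toy setting: experiments are stimulus values; two equally probable
# models predict a binary outcome with different thresholds.
def model_a(x):
    p = 1.0 if x > 0.3 else 0.0
    return [1 - p, p]

def model_b(x):
    p = 1.0 if x > 0.7 else 0.0
    return [1 - p, p]

candidates = [0.1, 0.5, 0.9]
models, weights = [model_a, model_b], [0.5, 0.5]
best = max(candidates, key=lambda x: outcome_entropy(x, models, weights))
print(best)  # -> 0.5, the stimulus where the models disagree
```

The experiment where the models disagree maximizes the predicted outcome entropy, so it is the one that promises to be most informative.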

  7. Grammars Leak: Modeling How Phonotactic Generalizations Interact within the Grammar

    ERIC Educational Resources Information Center

    Martin, Andrew

    2011-01-01

    I present evidence from Navajo and English that weaker, gradient versions of morpheme-internal phonotactic constraints, such as the ban on geminate consonants in English, hold even across prosodic word boundaries. I argue that these lexical biases are the result of a MAXIMUM ENTROPY phonotactic learning algorithm that maximizes the probability of…

  8. Differences in grip force control between young and late middle-aged adults.

    PubMed

    Zheng, Lianrong; Li, Kunyang; Wang, Qian; Chen, Wenhui; Song, Rong; Liu, Guanzheng

    2017-09-01

    Grip force control is a crucial function for humans, underpinning quality of life. To examine the effects of age on grip force control, 10 young adults and 11 late middle-aged adults participated in visually guided tracking tasks using different target force levels (25, 50, and 75% of the subject's maximal grip force). Multiple measures were used to evaluate the tracking performance during the force rising phase and the force maintenance phase: the rise time, fuzzy entropy, mean force percentage, coefficient of variation, and target deviation ratio. The results show that the maximal grip force was significantly lower in the late middle-aged adults than in the young adults, and that the rising phase was systematically longer among late middle-aged adults. Fuzzy entropy is a useful indicator for quantifying the variability of the grip force signal at higher force levels. These results suggest that the late middle-aged adults applied a compensatory strategy that allows sufficient time to reach the required grip force and reduces the impact of early and subtle degenerative changes in hand motor function.

  9. Refined generalized multiscale entropy analysis for physiological signals

    NASA Astrophysics Data System (ADS)

    Liu, Yunxiao; Lin, Youfang; Wang, Jing; Shang, Pengjian

    2018-01-01

    Multiscale entropy analysis has become a prevalent complexity measure and has been successfully applied in various fields. However, it only takes into account the information of mean values (first moment) in the coarse-graining procedure. Generalized multiscale entropy (MSEn), which uses higher moments to coarse-grain a time series, was therefore proposed, and the variance-based variant MSEσ2 has been implemented. However, MSEσ2 sometimes yields an imprecise or undefined entropy estimate, and the statistical reliability of the sample entropy estimate decreases as the scale factor increases. For this purpose, we developed a refined model, RMSEσ2, to improve MSEσ2. Simulations on both white noise and 1/f noise show that RMSEσ2 provides higher entropy reliability and reduces the occurrence of undefined entropy, making it especially suitable for short time series. Besides, we discuss the effect on RMSEσ2 analysis of outliers, data loss and other concepts in signal processing. We apply the proposed model to evaluate the complexity of heartbeat interval time series derived from healthy young and elderly subjects, patients with congestive heart failure and patients with atrial fibrillation, respectively, compared to several popular complexity metrics. The results demonstrate that RMSEσ2-measured complexity (a) decreases with aging and disease, and (b) gives significant discrimination between different physiological/pathological states, which may facilitate clinical application.
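    The second-moment coarse-graining behind MSEσ2, and the data-retention advantage of a refined (overlapping-window) variant, can be sketched as follows; the window conventions are a plausible simplification of the authors' construction, and the Gaussian signal is synthetic:

```python
import random

def coarse_grain_variance(x, scale):
    """Second-moment coarse-graining: each coarse point is the variance
    of one non-overlapping window of length `scale`."""
    out = []
    for start in range(0, len(x) - scale + 1, scale):
        w = x[start:start + scale]
        mu = sum(w) / scale
        out.append(sum((v - mu) ** 2 for v in w) / scale)
    return out

def refined_coarse_grain_variance(x, scale):
    """Refined variant: overlapping windows (step 1) keep many more
    coarse-grained points at large scales, which stabilises the
    subsequent sample entropy estimate on short series."""
    out = []
    for start in range(len(x) - scale + 1):
        w = x[start:start + scale]
        mu = sum(w) / scale
        out.append(sum((v - mu) ** 2 for v in w) / scale)
    return out

random.seed(3)
signal = [random.gauss(0, 1) for _ in range(1000)]
print(len(coarse_grain_variance(signal, 20)),
      len(refined_coarse_grain_variance(signal, 20)))  # -> 50 981
```

At scale 20 the non-overlapping scheme leaves only 50 points to estimate entropy from, while the overlapping scheme retains 981, which is the kind of reliability gain the refinement targets.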

  10. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
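    The gradient of the log-likelihood of a maximal entropy model is simply the difference between empirical and model averages, so learning is moment matching. A minimal one-parameter sketch of that steepest-descent dynamics (a three-state toy observable, not the pairwise Ising case treated in the abstract):

```python
from math import exp

# States of a single spin-like variable with one observable f(s) = s.
states = [-1, 0, 1]

def model_mean(lam):
    """Average of f under the maxent model p(s) ∝ exp(lam * s)."""
    weights = [exp(lam * s) for s in states]
    z = sum(weights)
    return sum(s * w for s, w in zip(states, weights)) / z

def fit_maxent(target_mean, lr=0.5, steps=2000):
    """Steepest ascent on the log-likelihood: the gradient with respect
    to the parameter is (empirical mean) - (model mean)."""
    lam = 0.0
    for _ in range(steps):
        lam += lr * (target_mean - model_mean(lam))
    return lam

lam = fit_maxent(0.4)
print(abs(model_mean(lam) - 0.4) < 1e-6)  # -> True: moments are matched
```

In many dimensions the curvature of this landscape varies wildly across parameter directions, which is exactly the inhomogeneity the rectification in the abstract is designed to remove.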

  11. Using the Maximum Entropy Principle as a Unifying Theory: Characterization and Sampling of Multi-Scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2015-08-20

    evapotranspiration (ET) over oceans may be significantly lower than previously thought. The MEP model parameterized turbulent transfer coefficients... fluxes, ocean freshwater fluxes, and regional crop yield, among others. An ongoing study suggests that the global annual evapotranspiration (ET) over... Bras, Jingfeng Wang. A model of evapotranspiration based on the theory of maximum entropy production, Water Resources Research, (03 2011): 0. doi

  12. Optimizing the coupled effects of Hall-Petch and precipitation strengthening in an Al0.3CoCrFeNi high entropy alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gwalani, B.; Soni, Vishal; Lee, Michael

    2017-05-01

    A successful demonstration of applying integrated strengthening using Hall-Petch strengthening (grain size effect) and precipitation strengthening is shown in the fcc-based high entropy alloy (HEA) Al0.3CoCrFeNi, leading to quantitative determinations of the Hall-Petch coefficients for both hardness and tensile yield strength, as well as the enhancements in the yield strength from two distinct types of ordered precipitates, L12 and B2. An excellent combination of yield strength (~490 MPa), ultimate tensile strength (~850 MPa), and ductility (~45% elongation) was achieved by optimizing and coupling both strengthening mechanisms, resulting from a refined grain size as well as both L12 and B2 ordered precipitates. This opens up new avenues for the future development of HEAs, with the appropriate balance of properties required for engineering applications.

  13. Controllable gaussian-qubit interface for extremal quantum state engineering.

    PubMed

    Adesso, Gerardo; Campbell, Steve; Illuminati, Fabrizio; Paternostro, Mauro

    2010-06-18

    We study state engineering through bilinear interactions between two remote qubits and two-mode gaussian light fields. The attainable two-qubit states span the entire physically allowed region in the entanglement-versus-global-purity plane. Two-mode gaussian states with maximal entanglement at fixed global and marginal entropies produce maximally entangled two-qubit states in the corresponding entropic diagram. We show that a small set of parameters characterizing extremally entangled two-mode gaussian states is sufficient to control the engineering of extremally entangled two-qubit states, which can be realized in realistic matter-light scenarios.

  14. Characterizing entanglement with global and marginal entropic measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Illuminati, Fabrizio; De Siena, Silvio

    2003-12-01

    We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.

  15. The Population Inversion and the Entropy of a Moving Two-Level Atom in Interaction with a Quantized Field

    NASA Astrophysics Data System (ADS)

    Abo-Kahla, D. A. M.; Abdel-Aty, M.; Farouk, A.

    2018-05-01

    An atom with only two energy eigenvalues, described by a two-dimensional state space spanned by the two energy eigenstates, is called a two-level atom. We consider a two-level atomic system moving with constant velocity and provide an analytic solution for the system interacting with a quantized field. Furthermore, the significant effect of temperature on the atomic inversion, the purity and the information entropy is discussed for an initial state that is either an excited state or a maximally mixed state. Additionally, the effect of the number of half wavelengths of the field-mode is investigated.

  16. On the design of script languages for neural simulation.

    PubMed

    Brette, Romain

    2012-01-01

    In neural network simulators, models are specified according to a language, either specific or based on a general programming language (e.g. Python). There are also ongoing efforts to develop standardized languages, for example NeuroML. When designing these languages, efforts are often focused on expressivity, that is, on maximizing the number of model types that can be described and simulated. I argue that a complementary goal should be to minimize the cognitive effort required on the part of the user to use the language. I try to formalize this notion with the concept of "language entropy", and I propose a few practical guidelines to minimize the entropy of languages for neural simulation.

  17. Applications of the principle of maximum entropy: from physics to ecology.

    PubMed

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
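    The mechanics of the principle itself can be made concrete with Jaynes's classic Brandeis dice problem: maximize entropy over six faces subject to a fixed mean, which yields p_k ∝ exp(λk) with λ fixed by the constraint. A minimal sketch solving for λ by bisection (the target mean 4.5 is the traditional textbook value, not a number from this paper):

```python
from math import exp

faces = list(range(1, 7))

def mean_for(lam):
    """Mean face value of the maxent pmf p_k ∝ exp(lam * k)."""
    w = [exp(lam * k) for k in faces]
    return sum(k * wk for k, wk in zip(faces, w)) / sum(w)

def maxent_dice(target_mean):
    """Bisection on the Lagrange multiplier: mean_for is monotone in lam."""
    lo, hi = -5.0, 5.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [exp(lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

# A die whose long-run mean is 4.5 instead of the fair value 3.5.
p = maxent_dice(4.5)
print([round(q, 3) for q in p])
```

Because the constrained mean exceeds 3.5, the multiplier is positive and the probabilities increase exponentially with face value, the least biased assignment consistent with the constraint.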

  18. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.

  19. On the composition dependence of faceting behaviour of primary phases during solidification

    NASA Astrophysics Data System (ADS)

    Saroch, Mamta; Dubey, K. S.; Ramachandrarao, P.

    1993-02-01

    The entropy of solution of the primary aluminium-rich phase in aluminium-tin melts has been evaluated as a function of temperature, using available thermodynamic and phase equilibria data, with a view to understanding the faceting behaviour of this phase. It was noticed that the range of compositions in which alloys of aluminium and tin yield a faceted primary phase is correlated with the domain of compositions over which the entropy of solution shows a strong temperature dependence. It is demonstrated that both a high value of the entropy of solution and a strong temperature dependence of it are essential for faceting. A strong temperature dependence of the entropy of solution is in turn a consequence of a negligible liquidus slope and the existence of retrograde solubility. The Ag-Bi and Ag-Pb systems have similar features.

  20. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics fall into other groups. The purpose of this clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number k is often not known beforehand, and a randomly chosen starting point may place two nearby points as two centroids. Therefore, the entropy method is used here to determine the starting point in K Means; this method can assign weights and take a decision over a set of alternatives. Entropy is able to investigate the harmony in discrimination among a multitude of data sets: criteria with the highest variation in values receive the highest weight. The entropy method can thus assist the K Means process by determining the starting point, which is usually chosen at random, so that clustering converges in fewer iterations than standard K Means. On the postoperative patient dataset from the UCI Machine Learning Repository, using only 12 records as a worked example, the entropy-based method reaches the desired end result in only 2 iterations.
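    One plausible reading of the entropy-based seeding is the entropy-weight method: weight each feature by how unevenly its values spread, score every sample with those weights, and pick spread-out ranks as starting centroids instead of a random draw. A sketch under that assumption, on an invented 2-D dataset (the scheme as written requires positive feature values):

```python
from math import log

def entropy_weights(data):
    """Entropy-weight method: features whose values vary more across
    samples carry more discriminating information and get larger
    weights (assumes strictly positive feature values)."""
    n, d = len(data), len(data[0])
    weights = []
    for j in range(d):
        col = [row[j] for row in data]
        total = sum(col)
        probs = [v / total for v in col]
        e = -sum(p * log(p) for p in probs if p > 0) / log(n)  # in [0, 1]
        weights.append(1 - e)
    s = sum(weights)
    return [w / s for w in weights]

def initial_centroids(data, k):
    """Rank samples by entropy-weighted score and take evenly spaced
    ranks as the k starting centroids for K Means."""
    w = entropy_weights(data)
    scored = sorted(data, key=lambda row: sum(wi * xi for wi, xi in zip(w, row)))
    step = max(1, len(scored) // k)
    return [scored[i * step] for i in range(k)]

data = [[1.0, 2.0], [1.2, 1.9], [8.0, 9.0], [7.8, 9.2], [1.1, 2.1], [8.1, 8.8]]
print(initial_centroids(data, 2))
```

On this two-cluster toy set the seeding lands one starting centroid in each cluster, which is exactly the failure mode of random initialization (two seeds in the same cluster) that the entropy step is meant to avoid.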

  1. Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.

    PubMed

    Sil Kar, Sudeshna; Maity, Santi P

    2016-09-01

    Extraction of blood vessels from retinal images plays a significant role in screening for different ophthalmologic diseases. However, accurate extraction of the entire vessel silhouette, and of each individual vessel type, from noisy images with a poorly illuminated background is a complicated task. To this end, an integrated system design platform is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on the matched filter response. First, noise is eliminated from the image under consideration through curvelet-based denoising. To include the fine details and the relatively less thick vessel structures, the image is passed through a bank of sequential bandpass filters optimized for contrast enhancement. Fuzzy conditional entropy on the matched filter response is then maximized to find a set of multiple optimal thresholds that extract the different types of vessel silhouettes from the background. The Differential Evolution algorithm is used to determine the optimal gain in the bandpass filter and the combination of the fuzzy parameters. Using the multiple thresholds, the retinal image is classified into the thick, the medium and the thin vessels, including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels, and also in extracting the abnormal and the thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate and accuracy offered by the method are 76.32%, 1.99% and 96.28%, respectively, for the DRIVE database and 72.82%, 2.6% and 96.16%, respectively, for the STARE database. Simulation results demonstrate that the proposed method outperforms the existing methods in detecting the various types of vessels and the neovascularization structures. The combination of curvelet transform and tunable bandpass filter is found to be very effective in edge enhancement, whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
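    The paper maximizes fuzzy conditional entropy; as a simpler stand-in that illustrates entropy-maximizing threshold selection on a histogram, here is Kapur's classic entropy thresholding for a single threshold (the 8-level histogram values are invented, not retinal data):

```python
from math import log

def kapur_threshold(hist):
    """Kapur's entropy thresholding: pick the t that maximizes the sum
    of the entropies of the background (< t) and foreground (>= t)
    classes, each renormalized to a probability distribution."""
    total = sum(hist)
    p = [h / total for h in hist]

    def class_entropy(lo, hi):
        mass = sum(p[lo:hi])
        if mass == 0:
            return 0.0
        return -sum(q / mass * log(q / mass) for q in p[lo:hi] if q > 0)

    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        h = class_entropy(0, t) + class_entropy(t, len(hist))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy histogram over 8 grey levels: dark background, bright vessels.
hist = [40, 50, 45, 5, 2, 30, 35, 25]
print(kapur_threshold(hist))
```

On a bimodal histogram the entropy-maximizing threshold falls near the valley between the two modes; the fuzzy conditional entropy of the paper generalizes this idea to multiple thresholds with fuzzy memberships.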

  2. Variability in cotton fiber yield, fiber quality, and soil properties in a southeastern coastal plain

    USDA-ARS?s Scientific Manuscript database

    To maximize profitability, cotton (Gossypium hirsutum L.) producers must attempt to control the quality of the crop while maximizing yield. The objective of this research was to measure the intrinsic variability present in cotton fiber yield and quality. The 0.5-ha experimental site was located in a...

  3. Hidden disorder in the α '→δ transformation of Pu-1.9 at.% Ga

    DOE PAGES

    Jeffries, J. R.; Manley, M. E.; Wall, M. A.; ...

    2012-06-06

    Enthalpy and entropy are thermodynamic quantities critical to determining how and at what temperature a phase transition occurs. At a phase transition, the enthalpy and temperature-weighted entropy differences between two phases are equal (ΔH=TΔS), but there are materials where this balance has not been experimentally or theoretically realized, leading to the idea of hidden order and disorder. In a Pu-1.9 at. % Ga alloy, the δ phase is retained as a metastable state at room temperature, but at low temperatures, the δ phase yields to a mixed-phase microstructure of δ- and α'-Pu. The previously measured sources of entropy associated with the α'→δ transformation fail to sum to the entropy predicted theoretically. We report an experimental measurement of the entropy of the α'→δ transformation that corroborates the theoretical prediction, and implies that only about 65% of the entropy stabilizing the δ phase is accounted for, leaving a missing entropy of about 0.5 kB/atom. Some previously proposed mechanisms for generating entropy are discussed, but none seem capable of providing the necessary disorder to stabilize the δ phase. This hidden disorder represents multiple accessible states per atom within the δ phase of Pu that may not be included in our current understanding of the properties and phase stability of δ-Pu.

  4. Fast estimate of Hartley entropy in image sharpening

    NASA Astrophysics Data System (ADS)

    Krbcová, Zuzana; Kukal, Jaromír; Svihlik, Jan; Fliegel, Karel

    2016-09-01

    Two classes of linear IIR filters, Laplacian of Gaussian (LoG) and Difference of Gaussians (DoG), are frequently used as high-pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on a radial approximation of the unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as the measure to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT included in the filter, and the estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used for optimal sharpening filter design via parameter tuning. Our approach is compared with the Wiener filter on MRI images.

  5. An entropy-based method for determining the flow depth distribution in natural channels

    NASA Astrophysics Data System (ADS)

    Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.

    2013-08-01

    A methodology is developed for determining the bathymetry of river cross-sections during floods from sampled surface flow velocities and existing low-flow hydraulic data. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to a velocity data set from five river gauge sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, i.e., surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard aircraft or satellites.
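    The entropic mean/max velocity relation referred to above is, in Chiu's framework, φ(M) = e^M/(e^M − 1) − 1/M, where M is the entropy parameter of the site. A minimal sketch of using it to turn a sampled surface (≈ maximum) velocity into a discharge estimate; the M value, surface velocity and flow area below are chosen purely for illustration:

```python
from math import exp

def phi(m):
    """Ratio of mean to maximum velocity implied by the entropy-based
    velocity distribution: phi(M) = e^M / (e^M - 1) - 1 / M."""
    return exp(m) / (exp(m) - 1.0) - 1.0 / m

def discharge(u_max, area, m):
    """Discharge Q = phi(M) * u_max * A from the sampled maximum
    (surface) velocity, the flow area, and the site's entropy parameter."""
    return phi(m) * u_max * area

# Hypothetical gauged site: entropy parameter M = 2.1, sampled surface
# velocity 1.8 m/s, flow area 25 m^2 (all illustrative values).
m = 2.1
print(round(phi(m), 3), "->", round(discharge(1.8, 25.0, m), 2), "m^3/s")
```

In the methodology of the abstract, the flow area itself would come from the entropy-derived depth distribution rather than being given, so this sketch covers only the velocity half of the coupling.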

  6. Perspective: Maximum caliber is a general variational principle for dynamical systems

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
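For a finite set of candidate trajectories, the core Max Cal computation, maximizing a path entropy subject to a dynamical constraint, reduces to an exponential-family reweighting with a Lagrange multiplier. A minimal sketch (assuming numpy; the path observable f and its target average are illustrative, not from the review):

```python
import numpy as np

def maxcal_weights(f, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-caliber path weights p_i ∝ exp(lam * f_i) whose average of
    the path observable f equals `target`. The multiplier lam is found by
    bisection, since <f>(lam) is monotone increasing in lam."""
    f = np.asarray(f, dtype=float)

    def mean_f(lam):
        w = np.exp(lam * (f - f.max()))   # shift exponent for stability
        p = w / w.sum()
        return (p * f).sum()

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_f(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (f - f.max()))
    return w / w.sum()
```

When the constrained average coincides with the unconstrained one, the multiplier vanishes and the weights reduce to the uniform (maximum-entropy) distribution over paths.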

  7. A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map

    NASA Astrophysics Data System (ADS)

    Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad

    2016-06-01

    In recent years, there has been increasing interest in the security of digital images. This study focuses on grayscale image encryption using dynamic harmony search (DHS). First, a chaotic map is used to create cipher images, and then the maximum entropy and minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, diffusion of the plain image is performed using DHS with entropy maximization as the fitness function. In the second step, horizontal and vertical permutations are applied to the best cipher image obtained in the previous step, with DHS used to minimize the correlation coefficient as the fitness function. The simulation results show that the proposed method attains a maximum entropy of approximately 7.9998 and a minimum correlation coefficient of approximately 0.0001.
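The two fitness functions driving the search can be written down directly (a numpy sketch; the 8-bit grayscale assumption and the horizontal-pair correlation are illustrative conventions, not the paper's exact code):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image; an ideal
    cipher image approaches the 8-bit maximum."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adjacent_correlation(img):
    """Correlation coefficient of horizontally adjacent pixel pairs;
    a good cipher image drives this toward zero."""
    x = img[:, :-1].ravel().astype(float)
    y = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(x, y)[0, 1])
```

The DHS loop would score each candidate cipher image with `image_entropy` in step one and with `adjacent_correlation` (to be minimized) in step two.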

  8. A maximum entropy thermodynamics of small systems.

    PubMed

    Dixit, Purushottam D

    2013-05-14

    We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.

  9. A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression

    PubMed Central

    Nguyen, Nha; Vo, An; Choi, Inchan

    2015-01-01

    Studying epigenetic landscapes is important to understand the conditions for gene regulation. Clustering is a useful approach to studying epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches, which often use a representative value of the signals in a fixed-size window, do not fully use the information written in the epigenetic landscapes. Clustering approaches that maximize the information of the epigenetic signals are necessary for a better understanding of gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of the stationary wavelet transform of epigenetic signals inside enriched regions for gene clustering. Interestingly, gene expression levels were highly correlated with the entropy levels of the epigenetic signals. Dewer separates genes better than a window-based approach in an assessment using gene expression, achieving a correlation coefficient above 0.9 without any training procedure. Our results show that changes in epigenetic signals are useful for studying gene regulation. PMID:25383910

  10. Radar detection with the Neyman-Pearson criterion using supervised-learning-machines trained with the cross-entropy error

    NASA Astrophysics Data System (ADS)

    Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel

    2013-12-01

    The application of supervised learning machines trained to minimize the cross-entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates during training, and on the application of a sufficient condition for a discriminant function to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates after being trained to minimize the cross-entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm at or below a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
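A toy numerical version of the idea can be sketched as follows (a 1-D logistic model trained on cross-entropy, with the threshold picked from noise-only scores to fix the false-alarm rate; the Gaussian signal model and every parameter are hypothetical choices for illustration, not from the article):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical detection problem: H0 ~ N(0, 1), H1 ~ N(1.5, 1).
x0 = rng.normal(0.0, 1.0, 4000)
x1 = rng.normal(1.5, 1.0, 4000)
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(4000), np.ones(4000)])

# Logistic model trained by gradient descent on the cross-entropy error;
# its output estimates P(H1 | x), a monotone function of the likelihood
# ratio, so it qualifies as an NP discriminant function.
w, b = 0.0, 0.0
for _ in range(2000):
    z = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * ((z - y) * x).mean()
    b -= 0.1 * (z - y).mean()

# Neyman-Pearson thresholding: choose the threshold from noise-only
# scores so that the empirical false-alarm probability is about 0.05.
scores_h0 = 1.0 / (1.0 + np.exp(-(w * x0 + b)))
scores_h1 = 1.0 / (1.0 + np.exp(-(w * x1 + b)))
thr = np.quantile(scores_h0, 0.95)
pfa = (scores_h0 > thr).mean()
pd = (scores_h1 > thr).mean()
```

Because the threshold is set on the score distribution under H0, any monotone transformation of the likelihood ratio (which is what cross-entropy training approximates) yields the same detection performance.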

  11. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    PubMed

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics-such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production-are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  12. Entropy of uremia and dialysis technology.

    PubMed

    Ronco, Claudio

    2013-01-01

    The second law of thermodynamics applies with local exceptions to patient history and therapy interventions. Living things preserve their low level of entropy throughout time because they receive energy from their surroundings in the form of food. They gain their order at the expense of disordering the nutrients they consume. Death is the thermodynamically favored state: it represents a large increase in entropy as molecular structure yields to chaos. The kidney is an organ dissipating large amounts of energy to maintain the level of entropy of the organism as low as possible. Diseases, and in particular uremia, represent conditions of rapid increase in entropy. Therapeutic strategies are oriented towards a reduction in entropy or at least a decrease in the speed of entropy increase. Uremia is a process accelerating the trend towards randomness and disorder (increase in entropy). Dialysis is a factor external to the patient that tends to reduce the level of entropy caused by kidney disease. Since entropy can only increase in closed systems, energy and work must be spent to limit the entropy of uremia. This energy should be adapted to the system (patient) and be specifically oriented and personalized. This includes a multidimensional effort to achieve an adequate dialysis that goes beyond small molecular weight solute clearance. It includes a biological plan for recovery of homeostasis and a strategy towards long-term rehabilitation of the patient. Such objectives can be achieved with a combination of technology and innovation to answer specific questions that are still present after 60 years of dialysis history. This change in the individual bioentropy may represent a local exception to natural trends as the patient could be considered an isolated universe responding to the classic laws of thermodynamics. Copyright © 2013 S. Karger AG, Basel.

  13. A Theoretical Basis for Entropy-Scaling Effects in Human Mobility Patterns.

    PubMed

    Osgood, Nathaniel D; Paul, Tuhin; Stanley, Kevin G; Qian, Weicheng

    2016-01-01

    Characterizing how people move through space has been an important component of many disciplines. With the advent of automated data collection through GPS and other location sensing systems, researchers have the opportunity to examine human mobility at spatio-temporal resolution heretofore impossible. However, the copious and complex data collected through these logging systems can be difficult for humans to fully exploit, leading many researchers to propose novel metrics for encapsulating movement patterns in succinct and useful ways. A particularly salient proposed metric is the mobility entropy rate of the string representing the sequence of locations visited by an individual. However, mobility entropy rate is not scale invariant: entropy rate calculations based on measurements of the same trajectory at varying spatial or temporal granularity do not yield the same value, limiting the utility of mobility entropy rate as a metric by confounding inter-experimental comparisons. In this paper, we derive a scaling relationship for mobility entropy rate of non-repeating straight line paths from the definition of Lempel-Ziv compression. We show that the resulting formulation predicts the scaling behavior of simulated mobility traces, and provides an upper bound on mobility entropy rate under certain assumptions. We further show that this formulation has a maximum value for a particular sampling rate, implying that optimal sampling rates for particular movement patterns exist.
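The Lempel-Ziv construction underlying mobility entropy rate can be sketched in a few lines (a simple quadratic-time version operating on a symbol string; the overlapping-match convention follows the common Kontoyiannis-style estimator and is an assumption about, not a reproduction of, the paper's formulation):

```python
import math

def lz_entropy_rate(seq):
    """Lempel-Ziv estimate of the entropy rate (bits/symbol) of a symbol
    string: S ≈ n*log2(n) / sum(L_i), where L_i is the length of the
    shortest substring starting at position i that has not appeared
    starting at any earlier position (matches may overlap)."""
    n = len(seq)
    total = 0
    for i in range(n):
        l = 1
        # grow the match while seq[i:i+l] still occurs starting before i
        while i + l <= n and seq.find(seq[i:i + l], 0, i - 1 + l) != -1:
            l += 1
        total += l
    return n * math.log2(n) / total
```

A perfectly periodic location string (a commuter's daily loop) yields a value near zero, while an unpredictable trace approaches the per-symbol entropy of its alphabet, which is exactly the scale sensitivity the paper analyzes.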

  14. Predictive uncertainty in auditory sequence processing

    PubMed Central

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
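The entropy-as-predictive-uncertainty computation can be sketched with a first-order Markov model (plain Python; the study uses an unsupervised variable-order model, so this bigram version is a deliberate simplification):

```python
import math
from collections import defaultdict

def train_bigram(sequence):
    """First-order Markov model of a symbol sequence (e.g. melody pitches)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

def predictive_entropy(counts, context):
    """Shannon entropy (bits) of the model's next-symbol distribution:
    the prospective uncertainty before the next event arrives."""
    nxt = counts[context]
    total = sum(nxt.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in nxt.values())
```

High-entropy melodic contexts in the study correspond to contexts whose next-event distribution, computed this way from the learned model, is close to uniform.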

  15. Dynamic Wireless Network Based on Open Physical Layer

    DTIC Science & Technology

    2011-02-18

    would yield the error-exponent optimal solutions. We solved this problem, and the detailed work is reported in [?]. A natural set of metrics of interest during the communication session is the family of Rényi divergences. With a tunable parameter α, the Rényi entropy of a given distribution corresponds to the Shannon entropy at α = 1 and to the probability of detection error at α = ∞. This gives a
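The α-family mentioned in the snippet can be made concrete (a minimal numpy sketch, treating α = 1 as the Shannon limit and noting that large α approaches the min-entropy):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (bits) of a discrete distribution p.
    alpha = 1 is taken as the Shannon limit; as alpha grows the value
    approaches the min-entropy -log2(max p)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-(p * np.log2(p)).sum())
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))
```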

  16. Data Decomposition Techniques with Multi-Scale Permutation Entropy Calculations for Bearing Fault Diagnosis

    PubMed Central

    Yasir, Muhammad Naveed; Koh, Bong-Hwan

    2018-01-01

    This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate the rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. The comparative experimental results of the conventional LMD-based multi-scale entropy and MPE were presented to verify the authenticity of the proposed technique. The study found that LMD-MPE’s integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results of REB experimental datasets show that the proposed approach yields more vigorous outcomes than existing methods. PMID:29690526
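The MPE feature extraction step can be sketched on its own (numpy; coarse-graining by non-overlapping averaging is the usual multi-scale convention and an assumption about the paper's exact procedure, which applies it to LMD product functions rather than raw signals):

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a 1-D signal: Shannon entropy (bits) of the
    distribution of ordinal patterns of length `order`."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pat = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pat] = counts.get(pat, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    h = float(-(p * np.log2(p)).sum())
    return h / math.log2(math.factorial(order)) if normalize else h

def multiscale_pe(x, scales=(1, 2, 3), order=3):
    """MPE: coarse-grain by non-overlapping averaging at each scale, then
    compute the permutation entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        m = len(x) // s
        coarse = x[:m * s].reshape(m, s).mean(axis=1)
        out.append(permutation_entropy(coarse, order=order))
    return out
```

A healthy-bearing signal and a faulted one produce different MPE curves across scales, and those curves are the damage-sensitive features fed to the classifier.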

  17. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  18. Data Decomposition Techniques with Multi-Scale Permutation Entropy Calculations for Bearing Fault Diagnosis.

    PubMed

    Yasir, Muhammad Naveed; Koh, Bong-Hwan

    2018-04-21

    This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate the rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. The comparative experimental results of the conventional LMD-based multi-scale entropy and MPE were presented to verify the authenticity of the proposed technique. The study found that LMD-MPE’s integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results of REB experimental datasets show that the proposed approach yields more vigorous outcomes than existing methods.

  19. Diversity of Poissonian populations.

    PubMed

    Eliazar, Iddo I; Sokolov, Igor M

    2010-01-01

    Populations represented by collections of points scattered randomly on the real line are ubiquitous in science and engineering. The statistical modeling of such populations leads naturally to Poissonian populations: Poisson processes on the real line with a distinguished maximal point. Poissonian populations are infinite objects underlying key issues in statistical physics, probability theory, and random fractals. Due to their infiniteness, measuring the diversity of Poissonian populations depends on the lower-bound cut-off applied. This research characterizes the classes of Poissonian populations whose diversities are invariant with respect to the cut-off level applied and establishes an elemental connection between these classes and extreme-value theory. The measures of diversity considered are variance and dispersion, Simpson's index and inverse participation ratio, Shannon's entropy and Rényi's entropy, and Gini's index.
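For a finite population the diversity measures listed in the abstract can be computed directly (a numpy sketch of the finite-sample analogues; the paper itself treats infinite Poissonian populations with a lower cut-off, which this sketch does not attempt):

```python
import numpy as np

def diversity_indices(p):
    """Common diversity measures of a finite, normalized population p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    shannon = float(-(nz * np.log(nz)).sum())   # Shannon entropy (nats)
    simpson = float((p ** 2).sum())             # Simpson's index
    ipr = 1.0 / simpson                         # inverse participation ratio
    q = np.sort(p)
    n = len(p)
    # Gini index via the sorted-share formula (sum of shares is 1)
    gini = float(((2 * np.arange(1, n + 1) - n - 1) * q).sum() / n)
    return {"shannon": shannon, "simpson": simpson,
            "inverse_participation": ipr, "gini": gini}
```

A uniform population maximizes Shannon entropy and the participation ratio while driving Simpson's and Gini's indices to their minima, which is the intuition behind comparing these measures under different cut-offs.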

  20. The maximal cooling power of magnetic and thermoelectric refrigerators with La(FeCoSi)13 alloys

    NASA Astrophysics Data System (ADS)

    Skokov, K. P.; Karpenkov, A. Yu.; Karpenkov, D. Yu.; Gutfleisch, O.

    2013-05-01

    Using our data on the magnetic entropy change ΔSm, adiabatic temperature change ΔTad, and heat capacity CH for La(FeCoSi)13 alloys, the upper limit of the heat Qc transferred per cycle and the lower limit of the consumed work Wc were established for magnetic refrigerators operating in Δμ0H = 1.9 T. In order to estimate the cooling power attributable to thermoelectric refrigerators with La(FeCoSi)13, the thermal conductivity λ, resistivity ρ, and Seebeck coefficient α were measured, and the maximal cooling power QL, the input power Pi, and the coefficient of performance were calculated.

  1. An analogy of the charge distribution on Julia sets with the Brownian motion

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.

    1989-09-01

    A way to compute the entropy of an invariant measure of a hyperbolic rational map from the information given by a Ruelle-Perron-Frobenius operator of a generic Holder-continuous function will be shown. This result was motivated by an analogy of the Brownian motion with the dynamical system given by a rational map and the maximal measure. In the case the rational map is a polynomial, then the maximal measure is the charge distribution in the Julia set. The main theorem of this paper can be seen as a large deviation result. It is a kind of Donsker-Varadhan formula for dynamical systems.

  2. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

    A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.

  3. Optimal control of orientation and entanglement for two dipole-dipole coupled quantum planar rotors.

    PubMed

    Yu, Hongling; Ho, Tak-San; Rabitz, Herschel

    2018-05-09

    Optimal control simulations are performed for orientation and entanglement of two dipole-dipole coupled identical quantum rotors. The rotors at various fixed separations lie on a model non-interacting plane with an applied control field. It is shown that optimal control of orientation or entanglement represents two contrasting control scenarios. In particular, the maximally oriented state (MOS) of the two rotors has zero entanglement entropy and is readily attainable at all rotor separations. In contrast, the maximally entangled state (MES) has a zero orientation expectation value and is most conveniently attainable at small separations, where the dipole-dipole coupling is strong. It is demonstrated that the peak orientation expectation value attained by the MOS at large separations exhibits a long-time revival pattern due to the small energy splittings arising from the extremely weak dipole-dipole coupling between the degenerate product states of the two free rotors. Moreover, it is found that the peak entanglement entropy attained by the MES remains largely unchanged as the two rotors are transported to large separations after turning off the control field. Finally, optimal control simulations of the transition dynamics between the MOS and the MES reveal the intricate interplay between orientation and entanglement.

  4. ℓ1-norm and entanglement in screening out braiding from Yang-Baxter equation associated with Z3 parafermion

    NASA Astrophysics Data System (ADS)

    Yu, Li-Wei; Ge, Mo-Lin

    2017-03-01

    The relationships between quantum entangled states and braid matrices have been well studied in recent years. However, most of the results are based on qubits. In this paper, we investigate the applications of 2-qutrit entanglement in the braiding associated with the Z3 parafermion. The 2-qutrit entangled state |Ψ(θ)⟩, generated by the action of the localized unitary solution R̆(θ) of the YBE on the 2-qutrit natural basis, achieves its maximal ℓ1-norm and maximal von Neumann entropy simultaneously at θ = π/3. Meanwhile, at θ = π/3, the solutions of the YBE reduce to braid matrices, which indicates the role the ℓ1-norm and the entropy play in determining real physical quantities. On the other hand, we give a new realization of the 4-anyon topological basis by qutrit entangled states; the 9 × 9 localized braid representation in the 4-qutrit tensor product space (C^3)^{⊗4} is then reduced to the Jones representation of braiding in the 4-anyon topological basis. Hence, we conclude that entangled states are powerful tools for analysing the characteristics of braiding and the R̆-matrix.

  5. Reply to "Comment on `Third law of thermodynamics as a key test of generalized entropies' "

    NASA Astrophysics Data System (ADS)

    Bento, E. P.; Viswanathan, G. M.; da Luz, M. G. E.; Silva, R.

    2015-07-01

    In Bento et al. [Phys. Rev. E 91, 039901 (2015), 10.1103/PhysRevE.91.039901] we developed a method to verify whether an arbitrary generalized statistics does or does not obey the third law of thermodynamics. As examples, we addressed two important formulations, Kaniadakis and Tsallis. In their Comment on the paper, Bagci and Oikonomou suggest that our examination of the Tsallis statistics is valid only for q ≥ 1, arguing, for instance, that there is no distribution maximizing the Tsallis entropy for the interval q < 0 (in which the third law is not verified) compatible with the problem's energy expression. In this Reply, we first (and most importantly) show that the Comment misses the point. In our original work we considered the now already standard construction of the Tsallis statistics. So, if indeed such statistics lacks a maximization principle (a fact irrelevant to our protocol), this is an inherent feature of the statistics itself and not a problem with our analysis. Second, some arguments used by Bagci and Oikonomou (for 0

  6. High pressure synthesis of a hexagonal close-packed phase of the high-entropy alloy CrMnFeCoNi

    NASA Astrophysics Data System (ADS)

    Tracy, Cameron L.; Park, Sulgiye; Rittman, Dylan R.; Zinkle, Steven J.; Bei, Hongbin; Lang, Maik; Ewing, Rodney C.; Mao, Wendy L.

    2017-05-01

    High-entropy alloys, near-equiatomic solid solutions of five or more elements, represent a new strategy for the design of materials with properties superior to those of conventional alloys. However, their phase space remains constrained, with transition metal high-entropy alloys exhibiting only face- or body-centered cubic structures. Here, we report the high-pressure synthesis of a hexagonal close-packed phase of the prototypical high-entropy alloy CrMnFeCoNi. This martensitic transformation begins at 14 GPa and is attributed to suppression of the local magnetic moments, destabilizing the initial fcc structure. Similar to fcc-to-hcp transformations in Al and the noble gases, the transformation is sluggish, occurring over a range of >40 GPa. However, the behaviour of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures. This demonstrates a means of tuning the structures and properties of high-entropy alloys in a manner not achievable by conventional processing techniques.

  7. Shallow water equations: viscous solutions and inviscid limit

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Perepelitsa, Mikhail

    2012-12-01

    We establish the inviscid limit of the viscous shallow water equations to the Saint-Venant system. For the viscous equations, the viscosity terms are more degenerate when the shallow water is close to the bottom than in the classical Navier-Stokes equations for barotropic gases; thus, the analysis in our earlier work for the classical Navier-Stokes equations does not apply directly, and new estimates are required to deal with the additional degeneracy. We first introduce a notion of entropy solutions to the viscous shallow water equations and develop an approach to establish the global existence of such solutions and their uniform energy-type estimates with respect to the viscosity coefficient. These uniform estimates yield the existence of measure-valued solutions to the Saint-Venant system generated by the viscous solutions. Based on the uniform energy-type estimates and the features of the Saint-Venant system, we further establish that the entropy dissipation measures of the viscous solutions for weak entropy-entropy flux pairs, generated by compactly supported C^2 test functions, are confined in a compact set in H^{-1}, which yields that the measure-valued solutions are confined by the Tartar-Murat commutator relation. The reduction theorem established in Chen and Perepelitsa [5] for measure-valued solutions with unbounded support then leads to the convergence of the viscous solutions to a finite-energy entropy solution of the Saint-Venant system with finite-energy initial data, relative to the different end-states of the bottom topography of the shallow water at infinity. The analysis also applies to the inviscid limit problem for the Saint-Venant system in the presence of friction.

  8. Potential advantages of curve sawing non-straight hardwood logs

    Treesearch

    Philip A. Araman

    2007-01-01

    Curve sawing is not new to the softwood industry. Softwood sawmill managers think about how fast they can push logs through their sawmill to maximize the yield of 1x and 2x lumber. Curve sawing helps mills maximize yield when sawing non-straight logs. Hardwood sawmill managers don’t want to push logs through their sawmills, because they want to maximize lumber value...

  9. The Anterior Insula Tracks Behavioral Entropy during an Interpersonal Competitive Game

    PubMed Central

    Matsumoto, Madoka; Matsumoto, Kenji; Omori, Takashi

    2015-01-01

    In competitive situations, individuals need to adjust their behavioral strategy dynamically in response to their opponent’s behavior. In the present study, we investigated the neural basis of how individuals adjust their strategy during a simple, competitive game of matching pennies. We used entropy as a behavioral index of randomness in decision-making, because maximizing randomness is thought to be an optimal strategy in the game, according to game theory. While undergoing functional magnetic resonance imaging (fMRI), subjects played matching pennies with either a human or computer opponent in each block, although in reality they played the game with the same computer algorithm under both conditions. The winning rate of each block was also manipulated. Both the opponent (human or computer), and the winning rate, independently affected subjects’ block-wise entropy during the game. The fMRI results revealed that activity in the bilateral anterior insula was positively correlated with subjects’ (not opponent’s) behavioral entropy during the game, which indicates that during an interpersonal competitive game, the anterior insula tracked how uncertain subjects’ behavior was, rather than how uncertain subjects felt their opponent's behavior was. Our results suggest that intuitive or automatic processes based on somatic markers may be a key to optimally adjusting behavioral strategies in competitive situations. PMID:26039634

  10. On the pH Dependence of the Potential of Maximum Entropy of Ir(111) Electrodes.

    PubMed

    Ganassin, Alberto; Sebastián, Paula; Climent, Víctor; Schuhmann, Wolfgang; Bandarenka, Aliaksandr S; Feliu, Juan

    2017-04-28

    Studies of the entropy of the components forming the electrode/electrolyte interface can give fundamental insights into the properties of electrified interphases. In particular, the potential at which the entropy of formation of the double layer is maximal (the potential of maximum entropy, PME) is an important parameter for the characterization of electrochemical systems, since it influences the majority of electrode processes. In this work, we determine PMEs for Ir(111) electrodes. The latter currently play an important role in understanding electrocatalysis for energy provision, and at the same time iridium is one of the most stable metals against corrosion. For the experiments, we used a combination of the laser-induced potential transient to determine the PME and CO charge displacement to determine the potential of zero total charge (EPZTC). Both the PME and EPZTC were assessed for perchlorate solutions in the pH range from 1 to 4. Surprisingly, we found that they are located in the potential region where the adsorption of hydrogen and hydroxyl species takes place, respectively. The PMEs shift by ~30 mV per pH unit (on the RHE scale). Connections between the PME and the electrocatalytic properties of the electrode surface are discussed.

  11. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
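
    For reference, the basic discrete cross entropy underlying the utility function described above can be sketched in a few lines (a simplified illustration only; the paper works with the expected cross entropy between prediction and observation distributions, which is not reproduced here):

```python
import math

def cross_entropy(p, q):
    """Discrete cross entropy H(p, q) = -sum_i p_i * log(q_i), in nats."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

# H(p, q) is smallest when q matches p (where it equals the entropy of p),
# so it acts as an information-theoretic distance between a predicted
# distribution q and an observed distribution p.
p = [0.5, 0.5]
close = cross_entropy(p, [0.5, 0.5])  # ln 2, the entropy of p itself
far = cross_entropy(p, [0.9, 0.1])    # strictly larger
```

    A design loop like the one proposed would then search (e.g. by simulated annealing) for experiment inputs that extremize this quantity.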

  12. A Theoretical Basis for Entropy-Scaling Effects in Human Mobility Patterns

    PubMed Central

    2016-01-01

    Characterizing how people move through space has been an important component of many disciplines. With the advent of automated data collection through GPS and other location sensing systems, researchers have the opportunity to examine human mobility at spatio-temporal resolution heretofore impossible. However, the copious and complex data collected through these logging systems can be difficult for humans to fully exploit, leading many researchers to propose novel metrics for encapsulating movement patterns in succinct and useful ways. A particularly salient proposed metric is the mobility entropy rate of the string representing the sequence of locations visited by an individual. However, mobility entropy rate is not scale invariant: entropy rate calculations based on measurements of the same trajectory at varying spatial or temporal granularity do not yield the same value, limiting the utility of mobility entropy rate as a metric by confounding inter-experimental comparisons. In this paper, we derive a scaling relationship for mobility entropy rate of non-repeating straight line paths from the definition of Lempel-Ziv compression. We show that the resulting formulation predicts the scaling behavior of simulated mobility traces, and provides an upper bound on mobility entropy rate under certain assumptions. We further show that this formulation has a maximum value for a particular sampling rate, implying that optimal sampling rates for particular movement patterns exist. PMID:27571423
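
    The Lempel-Ziv-based mobility entropy rate discussed above is commonly estimated from shortest-unseen-substring match lengths. A rough sketch of such an estimator (illustrative only; not the paper's exact formulation or scaling derivation):

```python
from math import log2

def lz_entropy_rate(seq):
    """Lempel-Ziv match-length estimate of the entropy rate (bits/symbol)
    of a symbol sequence, e.g. a string of visited locations."""
    n = len(seq)
    total = 0
    for i in range(n):
        # length of the shortest substring starting at i that does not
        # appear anywhere in the prefix seq[:i]
        k = 1
        while i + k <= n and seq[i:i + k] in seq[:i]:
            k += 1
        total += k
    return n * log2(n) / total

# A periodic trajectory compresses well and gets a low estimate;
# an irregular trajectory of similar length scores higher.
periodic = lz_entropy_rate("AB" * 50)
irregular = lz_entropy_rate(
    "ABBABAABBAABABBABAABBABAABABBAABBABABBAABABBABAABB"
    "BAABBBABABBAABABBBAABABBAABBBABAABABBABBAABABABBBA")
```

    Because the estimate depends on how the trajectory is discretized, resampling the same path at a different spatial or temporal granularity changes the value, which is exactly the scale-dependence the paper analyzes.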

  13. Maximizing plant density affects broccoli yield and quality

    USDA-ARS?s Scientific Manuscript database

    Increased demand for fresh market bunch broccoli (Brassica oleracea L. var. italica) has led to increased production along the United States east coast. Maximizing broccoli yields is a primary concern for quickly expanding southeastern commercial markets. This broccoli plant density study was carr...

  14. An instructive model of entropy

    NASA Astrophysics Data System (ADS)

    Zimmerman, Seth

    2010-09-01

    This article first notes the misinterpretation of a common thought experiment, and the misleading comment that 'systems tend to flow from less probable to more probable macrostates'. It analyses the experiment, generalizes it and introduces a new tool of investigation, the simplectic structure. A time-symmetric model is built upon this structure, yielding several non-intuitive results. The approach is combinatorial rather than statistical, and assumes that entropy is equivalent to 'missing information'. The intention of this article is not only to present interesting results, but also, by deliberately starting with a simple example and developing it through proof and computer simulation, to clarify the often confusing subject of entropy. The article should be particularly stimulating to students and instructors of discrete mathematics or undergraduate physics.

  15. Polarimetric Decomposition Analysis of the Deepwater Horizon Oil Slick Using L-Band UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen; Minchew, Brent; Holt, Benjamin

    2011-01-01

    We report here an analysis of the polarization dependence of L-band radar backscatter from the main slick of the Deepwater Horizon oil spill, with specific attention to the utility of polarimetric decomposition analysis for discrimination of oil from clean water and identification of variations in the oil characteristics. For this study we used data collected with the UAVSAR instrument from opposing look directions directly over the main oil slick. We find that both the Cloude-Pottier and Shannon entropy polarimetric decomposition methods offer promise for oil discrimination, with the Shannon entropy method yielding the same information as contained in the Cloude-Pottier entropy and averaged intensity parameters, but with significantly less computational complexity.

  16. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data

    PubMed Central

    2014-01-01

    Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. 
Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
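
    A compact sketch of sample entropy with the r = 0.2σ convention examined in this record (illustrative defaults, not the authors' implementation):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(mm):
        # ordered template pairs (i != j) within Chebyshev distance r
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(t)):
            c += np.sum(np.abs(t - t[i]).max(axis=1) <= r) - 1  # drop self-match
        return c

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b)

# A regular signal is more predictable (lower SampEn) than noise.
regular = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 400)))
noisy = sample_entropy(np.random.default_rng(1).standard_normal(400))
```

    The tests above probe exactly these knobs: the tolerance r, the template length m, and the data length N, all of which shift the resulting values.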

  17. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    PubMed

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. 
Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.

  18. Molecular Probe Dynamics Reveals Suppression of Ice-Like Regions in Strongly Confined Supercooled Water

    PubMed Central

    Banerjee, Debamalya; Bhat, Shrivalli N.; Bhat, Subray V.; Leporini, Dino

    2012-01-01

    The structure of the hydrogen bond network is a key element for understanding water's thermodynamic and kinetic anomalies. While ambient water is strongly believed to be a uniform, continuous hydrogen-bonded liquid, there is growing consensus that supercooled water is better described in terms of distinct domains with either a low-density ice-like structure or a high-density disordered one. We evidenced two distinct rotational mobilities of probe molecules in interstitial supercooled water of polycrystalline ice [Banerjee D, et al. (2009) ESR evidence for 2 coexisting liquid phases in deeply supercooled bulk water. Proc Natl Acad Sci USA 106: 11448–11453]. Here we show that, by increasing the confinement of interstitial water, the mobility of probe molecules, surprisingly, increases. We argue that loose confinement allows the presence of ice-like regions in supercooled water, whereas a tighter confinement yields the suppression of this ordered fraction and leads to higher fluidity. Compelling evidence of the presence of ice-like regions is provided by the probe orientational entropy barrier which is set, through hydrogen bonding, by the configuration of the surrounding water molecules and yields a direct measure of the configurational entropy of the same. We find that, under loose confinement of supercooled water, the entropy barrier surmounted by the slower probe fraction exceeds that of equilibrium water by the melting entropy of ice, whereas no increase of the barrier is observed under stronger confinement. The lower limit of metastability of supercooled water is discussed. PMID:23049747

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furukawa, Shunsuke; Kim, Yong Baek; School of Physics, Korea Institute for Advanced Study, Seoul 130-722

    We consider a system of two coupled Tomonaga-Luttinger liquids (TLLs) on parallel chains and study the Renyi entanglement entropy S_n between the two chains. Here the entanglement cut is introduced between the chains, not along the perpendicular direction as has been done in previous studies of one-dimensional systems. The limit n → 1 corresponds to the von Neumann entanglement entropy. The system is effectively described by a two-component bosonic field theory with different TLL parameters in the symmetric and antisymmetric channels, as far as the coupled system remains in a gapless phase. We argue that in this system, S_n is a linear function of the length of the chains (boundary law) followed by a universal subleading constant γ_n determined by the ratio of the two TLL parameters. The formulas of γ_n for integer n ≥ 2 are derived using (a) ground-state wave functionals of TLLs and (b) boundary conformal field theory, which lead to the same result. These predictions are checked in a numerical diagonalization analysis of a hard-core bosonic model on a ladder. Although the analytic continuation of γ_n to n → 1 turns out to be a difficult problem, our numerical result suggests that the subleading constant in the von Neumann entropy is also universal. Our results may provide a useful characterization of inherently anisotropic quantum phases, such as the sliding Luttinger liquid phase, via qualitatively different behaviors of the entanglement entropy with the entanglement partitions along different directions.

  20. A noninvasive measure of negative-feedback strength, approximate entropy, unmasks strong diurnal variations in the regularity of LH secretion.

    PubMed

    Liu, Peter Y; Iranmanesh, Ali; Keenan, Daniel M; Pincus, Steven M; Veldhuis, Johannes D

    2007-11-01

    The secretion of anterior-pituitary hormones is subject to negative feedback. Whether negative feedback evolves dynamically over 24 h is not known. Conventional experimental paradigms to test this concept may induce artifacts due to nonphysiological feedback. These limitations might be overcome by a noninvasive methodology to quantify negative feedback continuously over 24 h without disrupting the axis. The present study exploits a recently validated model-free regularity statistic, approximate entropy (ApEn), which monitors feedback changes with high sensitivity and specificity (both >90%; Pincus SM, Hartman ML, Roelfsema F, Thorner MO, Veldhuis JD. Am J Physiol Endocrinol Metab 273: E948-E957, 1999). A time-incremented moving window of ApEn was applied to LH time series obtained by intensive (10-min) blood sampling for four consecutive days (577 successive measurements) in each of eight healthy men. Analyses unveiled marked 24-h variations in ApEn with daily maxima (lowest feedback) at 1100 +/- 1.7 h (mean +/- SE) and minima (highest feedback) at 0430 +/- 1.9 h. The mean difference between maximal and minimal 24-h LH ApEn was 0.348 +/- 0.018, which differed (P < 0.001) from all three controls: randomly shuffled versions of the same LH time series, simulated pulsatile data, and assay noise. Analyses artificially limited to 24-h rather than 96-h data yielded reproducibility coefficients of 3.7-9.0% for ApEn maxima and minima. In conclusion, a feedback-sensitive regularity statistic unmasks strong and consistent 24-h rhythmicity in the orderliness of unperturbed pituitary-hormone secretion. These outcomes suggest that ApEn may have general utility in probing dynamic mechanisms mediating feedback in other endocrine systems.
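
    The approximate entropy statistic used here can be sketched as follows (the standard ApEn(m, r) definition with illustrative defaults; the moving-window application to LH series is omitted):

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r). Unlike sample entropy,
    self-matches are included, which is part of ApEn's definition."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def phi(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # fraction of templates within Chebyshev distance r of each template
        fracs = [np.mean(np.abs(t - ti).max(axis=1) <= r) for ti in t]
        return np.mean(np.log(fracs))

    return phi(m) - phi(m + 1)

# Higher ApEn = less regular: noise scores above a smooth rhythm.
rhythmic = apen(np.sin(np.linspace(0, 20 * np.pi, 300)))
noisy = apen(np.random.default_rng(7).standard_normal(300))
```

    A moving-window analysis like the one described would evaluate this statistic over successive overlapping segments of the hormone series.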

  1. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

    This paper presents a Bayesian image segmentation model based on Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the average vectors and covariant matrices of Gauss distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is given based on loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first order phase transitions in loopy belief propagations for Potts models influence our hyperparameter estimation procedures.

  2. Bounds on the entanglement entropy of droplet states in the XXZ spin chain

    NASA Astrophysics Data System (ADS)

    Beaud, V.; Warzel, S.

    2018-01-01

    We consider a class of one-dimensional quantum spin systems on the finite lattice Λ ⊂ Z, related to the XXZ spin chain in its Ising phase. It includes in particular the so-called droplet Hamiltonian. The entanglement entropy of energetically low-lying states over a bipartition Λ = B ∪ B^c is investigated and proven to satisfy a logarithmic bound in terms of min{n, |B|, |B^c|}, where n denotes the maximal number of down spins in the considered state. Upon addition of any (positive) random potential, the bound becomes uniformly constant on average, thereby establishing an area law. The proof is based on spectral methods: a deterministic bound on the local (many-body integrated) density of states is derived from an energetically motivated Combes-Thomas estimate.

  3. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis

    PubMed Central

    Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. Recently, the multiscale entropy (MSE) algorithm was introduced for a more precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantified the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; for long signals, however, MSE works well. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features. PMID:29771977

  4. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis.

    PubMed

    Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. Recently, the multiscale entropy (MSE) algorithm was introduced for a more precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantified the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; for long signals, however, MSE works well. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features.
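
    The coarse-graining step that causes the length problem described above takes only a few lines (a generic illustration of the MSE procedure, not the authors' MNCSE code):

```python
import numpy as np

def coarse_grain(x, tau):
    """Scale-tau coarse-graining used by multiscale entropy: replace
    non-overlapping windows of length tau by their means, shrinking
    the series to len(x) // tau points."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // tau) * tau
    return x[:n].reshape(-1, tau).mean(axis=1)

# A 1000-point series keeps only 50 points at scale 20, which is why
# the original MSE becomes unreliable for short signals at large tau.
print(len(coarse_grain(np.zeros(1000), 20)))  # 50
```

    An entropy estimator (sample entropy in the original MSE, NCSE in the proposed variant) is then applied to the coarse-grained series at each scale.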

  5. Evaluation of the entropy consistent euler flux on 1D and 2D test problems

    NASA Astrophysics Data System (ADS)

    Roslan, Nur Khairunnisa Hanisah; Ismail, Farzad

    2012-06-01

    Most CFD simulations may yield good predictions of pressure and velocity when compared to experimental data. Unfortunately, these results will most likely not adhere to the second law of thermodynamics, thereby compromising the authenticity of the predicted data. Currently, the test of a good CFD code is to check how much entropy is generated in a smooth flow and to hope that the numerical entropy produced is of the correct sign when a shock is encountered. Herein, a shock-capturing code written in C++ based on a recent entropy consistent Euler flux is developed to simulate 1D and 2D flows. Unlike other finite volume schemes in commercial CFD codes, this entropy consistent (EC) flux function precisely satisfies the discrete second law of thermodynamics. The EC flux has an entropy-conserving part, preserving entropy for smooth flows, and a numerical diffusion part that accurately produces the proper amount of entropy, consistent with the second law. Several numerical simulations of the entropy consistent flux have been tested on two-dimensional test cases. The first case is a Mach 3 flow over a forward-facing step. The second case is a flow over a NACA 0012 airfoil, while the third case is a hypersonic flow passing over a 2D cylinder. Local flow quantities such as velocity and pressure are analyzed and then compared, mainly with the Roe flux. The results herein show that, unlike the Roe flux, the EC flux does not capture the unphysical rarefaction shock and does not easily succumb to the carbuncle phenomenon. In addition, the EC flux maintains good performance in cases where the Roe flux is known to be superior.

  6. Investigating dynamical complexity in the magnetosphere using various entropy measures

    NASA Astrophysics Data System (ADS)

    Balasis, Georgios; Daglis, Ioannis A.; Papadimitriou, Constantinos; Kalimeri, Maria; Anastasiadis, Anastasios; Eftaxias, Konstantinos

    2009-09-01

    The complex system of the Earth's magnetosphere corresponds to an open spatially extended nonequilibrium (input-output) dynamical system. The nonextensive Tsallis entropy has recently been introduced as an appropriate information measure to investigate dynamical complexity in the magnetosphere. The method has been employed for analyzing Dst time series and gave promising results, detecting the complexity dissimilarity among different physiological and pathological magnetospheric states (i.e., prestorm activity and intense magnetic storms, respectively). This paper explores the applicability and effectiveness of a variety of computable entropy measures (e.g., block entropy, Kolmogorov entropy, T complexity, and approximate entropy) for investigating dynamical complexity in the magnetosphere. We show that as a magnetic storm approaches there is clear evidence of significantly lower complexity in the magnetosphere. The observed higher degree of organization of the system agrees with that inferred previously from an independent linear fractal spectral analysis based on wavelet transforms. This convergence between nonlinear and linear analyses provides a more reliable detection of the transition from the quiet-time to the storm-time magnetosphere, thus showing evidence that the occurrence of an intense magnetic storm is imminent. More precisely, our results suggest an important principle: a significant decrease in complexity and an increase in persistence in the Dst time series can be confirmed as a magnetic storm approaches, and these can be used as diagnostic tools for magnetospheric injury (global instability). Overall, approximate entropy and Tsallis entropy yield superior results for detecting dynamical complexity changes in the magnetosphere in comparison to the other entropy measures presented herein. Ultimately, the analysis tools developed in the course of this study for the treatment of the Dst index can be useful for space weather applications.
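
    The nonextensive Tsallis entropy used in these analyses has a one-line definition; a minimal sketch of the generic formula (not the authors' Dst pipeline):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1) of a discrete
    distribution p; as q -> 1 it reduces to the Shannon entropy (nats)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def shannon_nats(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# For q near 1 the Tsallis entropy approaches the Shannon entropy,
# while q != 1 weights frequent and rare states differently.
p = [0.5, 0.3, 0.2]
near_shannon = tsallis_entropy(p, 1.000001)
```

    In practice the entropic index q is chosen to emphasize the long-range correlations characteristic of the system under study.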

  7. Entropy-based consensus clustering for patient stratification.

    PubMed

    Liu, Hongfu; Zhao, Rui; Fang, Hongsheng; Cheng, Feixiong; Fu, Yun; Liu, Yang-Yu

    2017-09-01

    Patient stratification or disease subtyping is crucial for precision medicine and personalized treatment of complex diseases. The increasing availability of high-throughput molecular data provides a great opportunity for patient stratification. Many clustering methods have been employed to tackle this problem in a purely data-driven manner. Yet, existing methods leveraging high-throughput molecular data often suffer from various limitations, e.g. noise, data heterogeneity, high dimensionality or poor interpretability. Here we introduce an Entropy-based Consensus Clustering (ECC) method that overcomes those limitations altogether. Our ECC method employs an entropy-based utility function to fuse many basic partitions into a consensus one that agrees with the basic ones as much as possible. Maximizing the utility function in ECC has a much more meaningful interpretation than in any other consensus clustering method. Moreover, we exactly map the complex utility maximization problem to the classic K-means clustering problem, which can then be efficiently solved with linear time and space complexity. Our ECC method can also naturally integrate multiple molecular data types measured from the same set of subjects, and easily handle missing values without any imputation. We applied ECC to 110 synthetic and 48 real datasets, including 35 cancer gene expression benchmark datasets and 13 cancer types with four molecular data types from The Cancer Genome Atlas. We found that ECC shows superior performance against existing clustering methods. Our results clearly demonstrate the power of ECC in clinically relevant patient stratification. The Matlab package is available at http://scholar.harvard.edu/yyl/ecc. Contact: yunfu@ece.neu.edu or yyl@channing.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. Predictions of the causal entropic principle for environmental conditions of the universe

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Frey, Andrew R.; Holder, Gilbert

    2008-03-01

    The causal entropic principle has been proposed as an alternative to the anthropic principle for understanding the magnitude of the cosmological constant. In this approach, the probability to create observers is assumed to be proportional to the entropy production ΔS in a maximal causally connected region—the causal diamond. We improve on the original treatment by better quantifying the entropy production due to stars, using an analytic model for the star formation history which accurately accounts for changes in cosmological parameters. We calculate the dependence of ΔS on the density contrast Q=δρ/ρ, and find that our universe is much closer to the most probable value of Q than in the usual anthropic approach and that probabilities are relatively weakly dependent on this amplitude. In addition, we make first estimates of the dependence of ΔS on the baryon fraction and overall matter abundance. Finally, we also explore the possibility that decays of dark matter, suggested by various observed gamma ray excesses, might produce a comparable amount of entropy to stars.

  9. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    PubMed

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
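
    The anato-functional prior above is built on the joint entropy of the two images' intensities; a minimal sketch of that quantity (illustrative binning and data, not the authors' reconstruction code):

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    """Joint entropy H(A, B), in nats, of two equally sized images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Structurally similar images have a compact joint histogram and hence
# low joint entropy; unrelated images score higher, which is what makes
# joint entropy usable as an anatomical prior.
rng = np.random.default_rng(0)
mr = rng.random((32, 32))
unrelated = rng.random((32, 32))
low = joint_entropy(mr, mr)
high = joint_entropy(mr, unrelated)
```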

  10. UniEnt: uniform entropy model for the dynamics of a neuronal population

    NASA Astrophysics Data System (ADS)

    Hernandez Lahme, Damian; Nemenman, Ilya

    Sensory information and motor responses are encoded in the brain in the collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high-order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for the inference of strengths of multivariate neural interaction patterns. The method is based on a Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recording data from salamander retina, exposing the relevance of higher-order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.

  11. Anisotropic magnetocaloric response in AlFe2B2

    DOE PAGES

    Barua, R.; Lejeune, B. T.; Ke, L.; ...

    2018-02-19

    Experimental investigations of the magnetocaloric response of the intermetallic layered AlFe2B2 compound along the principal axes of the orthorhombic cell were carried out using aligned plate-like crystallites with an anisotropic [101] growth habit. Results were confirmed to be consistent with density functional theory calculations. Field-dependent magnetization data confirm that the a-axis is the easy direction of magnetization within the (ac) plane. The magnetocrystalline anisotropy energy required to rotate the spin quantization vector from the c- to the a-axis direction is determined as K ≈ 0.9 MJ/m³ at 50 K. Magnetic entropy change curves measured near the Curie transition temperature of 285 K reveal a large rotating magnetic entropy change of 1.3 J kg⁻¹ K⁻¹ at μ₀H_app = 2 T, consistent with large differences in magnetic entropy change ΔS_mag measured along the a- and c-axes. Overall, this study provides insights of both fundamental and applied relevance concerning pathways for maximizing the magnetocaloric potential of AlFe2B2 for thermal management applications.

  12. Anisotropic magnetocaloric response in AlFe2B2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barua, R.; Lejeune, B. T.; Ke, L.

    Experimental investigations of the magnetocaloric response of the intermetallic layered AlFe2B2 compound along the principal axes of the orthorhombic cell were carried out using aligned plate-like crystallites with an anisotropic [101] growth habit. Results were confirmed to be consistent with density functional theory calculations. Field-dependent magnetization data confirm that the a-axis is the easy direction of magnetization within the (ac) plane. The magnetocrystalline anisotropy energy required to rotate the spin quantization vector from the c- to the a-axis direction is determined as K ≈ 0.9 MJ/m³ at 50 K. Magnetic entropy change curves measured near the Curie transition temperature of 285 K reveal a large rotating magnetic entropy change of 1.3 J kg⁻¹ K⁻¹ at μ₀H_app = 2 T, consistent with large differences in magnetic entropy change ΔS_mag measured along the a- and c-axes. Overall, this study provides insights of both fundamental and applied relevance concerning pathways for maximizing the magnetocaloric potential of AlFe2B2 for thermal management applications.

  13. The limits of crop productivity: validating theoretical estimates and determining the factors that limit crop yields in optimal environments

    NASA Technical Reports Server (NTRS)

    Bugbee, B.; Monje, O.

    1992-01-01

    Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.

  14. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.; Paris, Mark W.

    2017-03-01

    We calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. We analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.

  15. Information loss in effective field theory: Entanglement and thermal entropies

    NASA Astrophysics Data System (ADS)

    Boyanovsky, Daniel

    2018-03-01

    Integrating out high energy degrees of freedom to yield a low energy effective field theory leads to a loss of information with a concomitant increase in entropy. We obtain the effective field theory of a light scalar field interacting with heavy fields after tracing out the heavy degrees of freedom from the time evolved density matrix. The initial density matrix describes the light field in its ground state and the heavy fields in equilibrium at a common temperature T. For T = 0, we obtain the reduced density matrix in a perturbative expansion; it reveals an emergent mixed state as a consequence of the entanglement between light and heavy fields. We obtain the effective action that determines the time evolution of the reduced density matrix for the light field in a nonperturbative Dyson resummation of one-loop correlations of the heavy fields. The von Neumann entanglement entropy associated with the reduced density matrix is obtained for the nonresonant and resonant cases in the asymptotic long time limit. In the nonresonant case the reduced density matrix displays an incipient thermalization, albeit with a wave-vector-, time- and coupling-dependent effective temperature, as a consequence of memory of initial conditions. The entanglement entropy is time independent and is the thermal entropy for this effective, nonequilibrium temperature. In the resonant case the light field fully thermalizes with the heavy fields, the reduced density matrix loses memory of the initial conditions and the entanglement entropy becomes the thermal entropy of the light field. We discuss the relation between the entanglement entropy ultraviolet divergences and renormalization.

  16. Effects of Controlled-Release Fertilizer on Leaf Area Index and Fruit Yield in High-Density Soilless Tomato Culture Using Low Node-Order Pinching

    PubMed Central

    Kinoshita, Takafumi; Yano, Takayoshi; Sugiura, Makoto; Nagasaki, Yuji

    2014-01-01

    To further the development of a simplified fertigation system using controlled-release fertilizers (CRF), we investigated the effects of differing levels of fertilizers and plant density on leaf area index (LAI), fruit yields, and nutrient use in soilless tomato cultures with low node-order pinching and high plant density during spring-summer (SS), summer-fall (SF), and fall-winter (FW) seasons. Plants were treated with 1 of 3 levels of CRF in a closed system, or with liquid fertilizer (LF) with constant electrical conductivity (EC) in a drip-draining system. Two plant densities were examined for each fertilizer treatment. In CRF treatments, LAI at pinching increased linearly with increasing nutrient supply for all cropping seasons. In SS, both light interception by plant canopy at pinching and total marketable fruit yield increased linearly with increasing LAI up to 6 m2·m−2; the maximization point was not reached for any of the treatments. In FW, both light interception and yield were maximized at an LAI of approximately 4. These results suggest that maximizing the LAI in SS and FW to the saturation point for light interception is important for increasing yield. In SF, however, the yield was maximized at an LAI of approximately 3, although the light interception linearly increased with increasing LAI, up to 4.5. According to our results, the optimal LAI at pinching may be 6 in SS, 3 in SF, and 4 in FW. In comparing LAI values with similar fruit yield, we found that nutrient supply was 32−46% lower with the CRF method than with LF. In conclusion, CRF application in a closed system enables growers to achieve a desirable LAI to maximize fruit yield with a regulated amount of nutrient supply per unit area. Further, the CRF method greatly reduced nutrient use without decreasing fruit yield at similar LAIs, as compared to the LF method. PMID:25402478

  17. 1/ f noise from the laws of thermodynamics for finite-size fluctuations.

    PubMed

    Chamberlin, Ralph V; Nasir, Derek M

    2014-07-01

    Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone, whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.

  18. Formation of soft magnetic high entropy amorphous alloys composites containing in situ solid solution phase

    NASA Astrophysics Data System (ADS)

    Wei, Ran; Sun, Huan; Chen, Chen; Tao, Juan; Li, Fushan

    2018-03-01

    Fe-Co-Ni-Si-B high entropy amorphous alloy composites (HEAACs), which contain a high entropy solid solution phase in an amorphous matrix and show good soft magnetic properties and bending ductility even in the optimally annealed state, were successfully developed by the melt spinning method. The crystallization phase of the HEAACs is a solid solution phase with a body centered cubic (BCC) structure instead of a brittle intermetallic phase. In addition, the BCC phase can transform into a face centered cubic (FCC) phase as the temperature rises. Accordingly, Fe-Co-Ni-Si-B high entropy alloys (HEAs) with an FCC structure and a small amount of BCC phase were prepared by the copper mold casting method. The HEAs exhibit high yield strength (about 1200 MPa) and good plastic strain (about 18%). Meanwhile, the soft magnetic characteristics of the HEAACs are largely retained in the HEAs. This work provides a new strategy to overcome the annealing-induced brittleness of amorphous alloys and to design new advanced materials with excellent comprehensive properties.

  19. Entropy and chemical change. 1: Characterization of product (and reactant) energy distributions in reactive molecular collisions: Information and entropy deficiency

    NASA Technical Reports Server (NTRS)

    Bernstein, R. B.; Levine, R. D.

    1972-01-01

    Optimal means of characterizing the distribution of product energy states resulting from reactive collisions of molecules with restricted distributions of initial states are considered, along with those for characterizing the particular reactant state distribution which yields a given set of product states at a specified total energy. It is suggested to represent the energy-dependence of global-type results in the form of square-faced bar plots, and of data for specific-type experiments as triangular-faced prismatic plots. The essential parameters defining the internal state distribution are isolated, and the information content of such a distribution is put on a quantitative basis. The relationship between the information content, the surprisal, and the entropy of the continuous distribution is established. The concept of an entropy deficiency, which characterizes the specificity of product state formation, is suggested as a useful measure of the deviance from statistical behavior. The degradation of information by experimental averaging is considered, leading to bounds on the entropy deficiency.
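The entropy deficiency introduced above is, in modern terms, the Kullback-Leibler divergence of the observed product-state distribution from the prior (statistical) one, and the surprisal is its per-state integrand. A minimal sketch with made-up populations, not values from the paper:

```python
import math

# Entropy deficiency as KL divergence between the observed distribution P
# and a prior/statistical distribution P0; the surprisal of state i is
# -ln(P[i]/P0[i]). All numbers below are illustrative.
P = [0.70, 0.20, 0.10]    # hypothetical observed product-state populations
P0 = [0.50, 0.30, 0.20]   # hypothetical statistical (prior) expectation

surprisal = [-math.log(p / p0) for p, p0 in zip(P, P0)]
entropy_deficiency = sum(p * math.log(p / p0) for p, p0 in zip(P, P0))

print(entropy_deficiency)  # >= 0; zero only for purely statistical behavior
```

A vanishing entropy deficiency means the products are distributed exactly as the statistical prior predicts; larger values quantify the specificity of product-state formation.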

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tracy, Cameron L.; Park, Sulgiye; Rittman, Dylan R.

    High-entropy alloys, near-equiatomic solid solutions of five or more elements, represent a new strategy for the design of materials with properties superior to those of conventional alloys. However, their phase space remains constrained, with transition metal high-entropy alloys exhibiting only face- or body-centered cubic structures. Here, we report the high-pressure synthesis of a hexagonal close-packed phase of the prototypical high-entropy alloy CrMnFeCoNi. This martensitic transformation begins at 14 GPa and is attributed to suppression of the local magnetic moments, destabilizing the initial fcc structure. Similar to fcc-to-hcp transformations in Al and the noble gases, the transformation is sluggish, occurring over a range of >40 GPa. However, the behaviour of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures. This demonstrates a means of tuning the structures and properties of high-entropy alloys in a manner not achievable by conventional processing techniques.

  1. Increasing plant density in eastern United States broccoli production systems to maximize marketable head yields

    USDA-ARS?s Scientific Manuscript database

    Increased demand for fresh market broccoli (Brassica oleracea L. var. italica) has led to increased production along the eastern seaboard of the United States. Maximizing broccoli yields is a primary concern for quickly expanding eastern commercial markets. Thus, a plant density study was carried ...

  2. Comment on "Troublesome aspects of the Renyi-MaxEnt treatment"

    NASA Astrophysics Data System (ADS)

    Oikonomou, Thomas; Bagci, G. Baris

    2017-11-01

    Plastino et al. [Plastino et al., Phys. Rev. E 94, 012145 (2016), 10.1103/PhysRevE.94.012145] recently stated that the Rényi entropy is not suitable for thermodynamics by using functional calculus, since it leads to anomalous results unlike the Tsallis entropy. We first show that the Tsallis entropy also leads to such anomalous behaviors if one adopts the same functional calculus approach. Second, we note that one of the Lagrange multipliers is set in an ad hoc manner in the functional calculus approach of Plastino et al. Finally, the explanation for these anomalous behaviors is provided by observing that the generalized distributions obtained by Plastino et al. do not yield the ordinary canonical partition function in the appropriate limit and therefore cannot be considered as genuine generalized distributions.

  3. Vergence variability: a key to understanding oculomotor adaptability?

    PubMed

    Petrock, Annie Marie; Reisman, S; Alvarez, T

    2006-01-01

    Vergence eye movements were recorded from three different populations: healthy young (ages 18-35 years), adaptive presbyopic and non-adaptive presbyopic (the presbyopic groups aged above 45 years) to determine how the variability of the eye movements made by the populations differs. The variability was determined using Shannon entropy calculations of wavelet transform coefficients, to yield a non-linear analysis of the vergence movement variability. The data were then fed through a k-means clustering algorithm to classify each subject, with no a priori knowledge of true subject classification. The results indicate a highly significant difference in the total entropy values between the three groups, indicating a difference in the level of information content, and thus hypothetically the oculomotor adaptability, between the three groups. Further, the frequency distribution of the entropy varied across groups.
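A minimal sketch of the analysis pipeline described above, under simplifying assumptions: a single-level Haar wavelet transform (the study's wavelet family is not specified here) and Shannon entropy of the normalized coefficient-energy distribution. The two traces are hypothetical stand-ins for vergence recordings.

```python
import math

def haar_level(signal):
    """One level of the Haar wavelet transform: (approximation, detail) coefficients."""
    approx = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(len(signal) // 2)]
    return approx, detail

def wavelet_entropy(coeffs):
    """Shannon entropy (bits) of the normalized energy distribution of coefficients."""
    energies = [c * c for c in coeffs]
    total = sum(energies) or 1.0
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical eye-position traces: a smooth ramp and a more variable movement.
smooth = [i / 63 for i in range(64)]
variable = [i / 63 + 0.2 * math.sin(7 * i) for i in range(64)]

h_smooth = wavelet_entropy(haar_level(smooth)[1])
h_variable = wavelet_entropy(haar_level(variable)[1])
print(h_smooth, h_variable)
```

The entropy quantifies how detail-coefficient energy is spread across time; in the study, a k-means clustering step is then run on such entropy features to classify subjects.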

  4. Thermodynamics for the interaction of epsilon-dinitrophenyl-L-lysine and bovine colostral anti-dinitrophenyl immunoglobulin G2.

    PubMed Central

    Mukkur, T K

    1978-01-01

    The effect of varying the temperature over a wide range (4–60 °C) on the binding of epsilon-dinitrophenyl-L-lysine to bovine colostral anti-dinitrophenyl immunoglobulin G2 yielded a non-linear van't Hoff plot. The extent of curvature was indicative of a large positive heat-capacity change, and the thermodynamic parameters, calculated by using a non-linear least squares computer procedure, revealed an enthalpy–entropy compensation mechanism for hapten-antibody binding. The enthalpy factor was found to be the primary contributor to complex formation at low temperatures, but at increasing temperatures the entropy factor assumed greater importance. At physiological temperature (39 °C), the entropy factor was the major contributor to the free energy of reaction. PMID:687378

  5. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669
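Transfer entropy from non-uniform embedding is built from exactly the kind of conditional mutual information (CMI) terms discussed above. Below is a minimal plug-in CMI estimator for discrete series; the study's method additionally searches over candidate lags, and the series here are synthetic.

```python
import random
from collections import Counter
from math import log2

def cmi(xs, ys, zs):
    """Plug-in estimate of conditional mutual information I(X;Y|Z) in bits."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    total = 0.0
    for (x, y, z), c in pxyz.items():
        total += (c / n) * log2((c / n) * (pz[z] / n)
                                / ((pxz[(x, z)] / n) * (pyz[(y, z)] / n)))
    return total

# Synthetic binary series: y copies x with lag 1, z is independent noise.
random.seed(1)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                      # y_t = x_{t-1}
z = [random.randint(0, 1) for _ in range(5000)]

coupled = cmi(x[:-1], y[1:], z[:-1])  # ~1 bit: y at lag 1 is a copy of x
independent = cmi(x, z, y)            # ~0 bits: z carries no information about x
print(coupled, independent)
```

Estimating such quantities in high dimensions is exactly where the plug-in approach breaks down, which motivates the paper's low-dimensional decomposition.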

  6. Maximum entropy and equations of state for random cellular structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivier, N.

    Random, space-filling cellular structures (biological tissues, metallurgical grain aggregates, foams, etc.) are investigated. Maximum entropy inference under a few constraints yields structural equations of state, relating the size of cells to their topological shape. These relations are known empirically as Lewis's law in botany, or Desch's relation in metallurgy. Here, the functional form of the constraints is not known a priori, and one takes advantage of this arbitrariness to increase the entropy further. The resulting structural equations of state are independent of priors; they are measurable experimentally and constitute therefore a direct test for the applicability of MaxEnt inference (given that the structure is in statistical equilibrium, a fact which can be tested by another simple relation (Aboav's law)). 23 refs., 2 figs., 1 tab.
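MaxEnt inference of this kind can be sketched concretely: maximizing Shannon entropy over the distribution of cell side-numbers subject to normalization and a fixed mean yields an exponential form p_n ∝ exp(−λn), with λ fixed by the constraint. The side range and target mean below are illustrative, not taken from the paper.

```python
import math

# Maximum-entropy distribution over cell side-numbers n = 3..12 with a fixed
# mean of 6 (hypothetical values). The maximizer is p_n ∝ exp(-lam*n); we
# solve for the Lagrange multiplier lam by bisection.
ns = list(range(3, 13))
target_mean = 6.0

def mean_for(lam):
    w = [math.exp(-lam * n) for n in ns]
    Z = sum(w)
    return sum(n * wi for n, wi in zip(ns, w)) / Z

lo, hi = -5.0, 5.0                 # mean_for is decreasing in lam; bracket the root
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) > target_mean:
        lo = mid                   # mean too large -> lam must grow
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(-lam * n) for n in ns]
Z = sum(w)
p = [wi / Z for wi in w]
fitted_mean = sum(n * pi for n, pi in zip(ns, p))
print(lam, fitted_mean)            # fitted mean matches the constraint
```

Since the unconstrained (uniform) mean over 3..12 is 7.5, pulling the mean down to 6 requires a positive multiplier, i.e. an exponentially decaying side-number distribution.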

  7. The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Verkley, Wim; Severijns, Camiel

    2014-05-01

    Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. 
Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).

  8. Entropic characterization of separability in Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudha; Devi, A. R. Usha; Inspire Institute Inc., McLean, Virginia 22101

    2010-02-15

    We explore separability of bipartite divisions of mixed Gaussian states based on the positivity of the Abe-Rajagopal (AR) q-conditional entropy. The AR q-conditional entropic characterization provides more stringent restrictions on separability (in the limit q → ∞) than those obtained from the corresponding von Neumann conditional entropy (q = 1 case), similar to the situation in finite-dimensional states. The effectiveness of this approach, in relation to the results obtained by the partial transpose criterion, is explicitly analyzed in three illustrative examples of two-mode Gaussian states of physical significance.

  9. Canonical ensemble ground state and correlation entropy of Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Svidzinsky, Anatoly; Kim, Moochan; Agarwal, Girish; Scully, Marlan O.

    2018-01-01

    Constraint of a fixed total number of particles yields a correlation between the fluctuation of particles in different states in the canonical ensemble. Here we show that, below the temperature of Bose-Einstein condensation (BEC), the correlation part of the entropy of an ideal Bose gas is cancelled by the ground-state contribution. Thus, in the BEC region, the thermodynamic properties of the gas in the canonical ensemble can be described accurately in a simplified model which excludes the ground state and assumes no correlation between excited levels.

  10. Isotopic Ratio, Isotonic Ratio, Isobaric Ratio and Shannon Information Uncertainty

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Wei, Hui-Ling

    2014-11-01

    The isoscaling and the isobaric yield ratio difference (IBD) probes, both of which are constructed from fragment yield ratios, provide cancellation of parameters. Information entropy theory is introduced to explain the physical meaning of the isoscaling and IBD probes. A similarity between the isoscaling and IBD results is found, i.e., the information uncertainty determined by the IBD method equals β − α determined by the isoscaling (α (β) is the parameter fitted from the isotopic (isotonic) yield ratio).

  11. Universal Entropy of Word Ordering Across Linguistic Families

    PubMed Central

    Montemurro, Marcelo A.; Zanette, Damián H.

    2011-01-01

    Background The language faculty is probably the most distinctive feature of our species, and endows us with a unique ability to exchange highly structured information. In written language, information is encoded by the concatenation of basic symbols under grammatical and semantic constraints. As is also the case in other natural information carriers, the resulting symbolic sequences show a delicate balance between order and disorder. That balance is determined by the interplay between the diversity of symbols and by their specific ordering in the sequences. Here we used entropy to quantify the contribution of different organizational levels to the overall statistical structure of language. Methodology/Principal Findings We computed a relative entropy measure to quantify the degree of ordering in word sequences from languages belonging to several linguistic families. While a direct estimation of the overall entropy of language yielded values that varied for the different families considered, the relative entropy quantifying word ordering presented an almost constant value for all those families. Conclusions/Significance Our results indicate that despite the differences in the structure and vocabulary of the languages analyzed, the impact of word ordering in the structure of language is a statistical linguistic universal. PMID:21603637
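A toy version of the idea, under strong simplifications (bigram statistics on a synthetic corpus rather than the paper's block entropies on real text): the information carried by word ordering can be estimated as the drop in conditional entropy relative to the unigram entropy, a quantity that shuffling destroys.

```python
import math
import random
from collections import Counter

def unigram_entropy(words):
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in Counter(words).values())

def bigram_conditional_entropy(words):
    """H(W_t | W_{t-1}) from bigram counts: H(pair) - H(context)."""
    pairs = list(zip(words, words[1:]))
    n = len(pairs)
    h_pair = -sum((c / n) * math.log2(c / n) for c in Counter(pairs).values())
    ctx = [w for w, _ in pairs]
    h_ctx = -sum((c / n) * math.log2(c / n) for c in Counter(ctx).values())
    return h_pair - h_ctx

# Hypothetical toy corpus with strong ordering; shuffling destroys it.
text = ("the cat sat on the mat and the dog sat on the rug " * 50).split()
shuffled = text[:]
random.seed(0)
random.shuffle(shuffled)

# Ordering information = unigram entropy minus conditional entropy.
d_orig = unigram_entropy(text) - bigram_conditional_entropy(text)
d_shuf = unigram_entropy(shuffled) - bigram_conditional_entropy(shuffled)
print(d_orig, d_shuf)   # ordering information collapses after shuffling
```

The paper's relative-entropy measure is conceptually the same contrast between a real sequence and its order-destroyed counterpart, computed with longer-range statistics.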

  12. Entropy as a Gene-Like Performance Indicator Promoting Thermoelectric Materials.

    PubMed

    Liu, Ruiheng; Chen, Hongyi; Zhao, Kunpeng; Qin, Yuting; Jiang, Binbin; Zhang, Tiansong; Sha, Gang; Shi, Xun; Uher, Ctirad; Zhang, Wenqing; Chen, Lidong

    2017-10-01

    High-throughput explorations of novel thermoelectric materials based on the Materials Genome Initiative paradigm have so far focused on digging into the structure-property space using nonglobal indicators to design materials with tunable electrical and thermal transport properties. As the genomic units, following the biogene tradition, such indicators include localized crystal structural blocks in real space or band degeneracy at certain points in reciprocal space. However, this nonglobal approach does not consider how real materials differentiate from others. Here, this study successfully develops a strategy of using entropy as the global gene-like performance indicator and shows how multicomponent thermoelectric materials with high entropy can be designed via a high-throughput screening method. Optimizing entropy works as an effective guide to greatly improve the thermoelectric performance, through a significantly depressed lattice thermal conductivity, down to its theoretical minimum value, and/or via enhanced crystal structure symmetry that yields large Seebeck coefficients. Entropy engineering using multicomponent crystal structures or other possible techniques provides a new avenue for improvement of the thermoelectric performance beyond current methods and approaches. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Computing algebraic transfer entropy and coupling directions via transcripts

    NASA Astrophysics Data System (ADS)

    Amigó, José M.; Monetti, Roberto; Graff, Beata; Graff, Grzegorz

    2016-11-01

    Most random processes studied in nonlinear time series analysis take values on sets endowed with a group structure, e.g., the real and rational numbers, and the integers. This fact allows one to associate with each pair of group elements a third element, called their transcript, which is defined as the product of the second element in the pair times the inverse of the first one. The transfer entropy of two such processes is called algebraic transfer entropy. It measures the information transferred between two coupled processes whose values belong to a group. In this paper, we show that, subject to one constraint, the algebraic transfer entropy matches the (in general, conditional) mutual information of certain transcripts with one variable less. This property has interesting practical applications, especially to the analysis of short time series. We also derive weak conditions for the 3-dimensional algebraic transfer entropy to yield the same coupling direction as the corresponding mutual information of transcripts. A related issue concerns the use of mutual information of transcripts to determine coupling directions in cases where the conditions just mentioned are not fulfilled. We checked the latter possibility in the lowest-dimensional case with numerical simulations and cardiovascular data, and obtained positive results.
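For concreteness, a minimal sketch with ordinal patterns (permutations) under composition: the transcript τ of a pair (π1, π2) is the group element mapping the first onto the second, τ = π2 ∘ π1⁻¹. The patterns below are arbitrary examples.

```python
# Transcript of a pair of permutations, with composition as the group product:
# tau satisfies tau ∘ pi1 = pi2, i.e. tau = pi2 ∘ pi1^{-1}.
def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

pi1 = (2, 0, 1)
pi2 = (1, 2, 0)
tau = compose(pi2, inverse(pi1))
assert compose(tau, pi1) == pi2      # the transcript maps pi1 onto pi2
print(tau)
```

Because transcripts live in the same group as the patterns themselves, information measures of transcripts (as in the paper) need one variable less than the corresponding conditional quantities.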

  14. Analysis of entanglement measures and LOCC maximized quantum Fisher information of general two qubit systems.

    PubMed

    Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali

    2014-06-24

    Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement, such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability to detect multipartite entanglement, quantum Fisher information (QFI) has recently attracted intense attention, generally with entanglement in focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity, and vice versa.
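Of the measures compared above, concurrence has a particularly simple closed form for pure two-qubit states |ψ⟩ = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩, namely C = 2|ad − bc| (the general mixed-state definition requires the spin-flipped density matrix). The states below are illustrative.

```python
import math

# Concurrence of a pure two-qubit state a|00> + b|01> + c|10> + d|11>:
# C = 2|ad - bc|, after normalizing the amplitudes.
def concurrence_pure(a, b, c, d):
    norm = math.sqrt(abs(a)**2 + abs(b)**2 + abs(c)**2 + abs(d)**2)
    a, b, c, d = (x / norm for x in (a, b, c, d))
    return 2 * abs(a * d - b * c)

bell = concurrence_pure(1, 0, 0, 1)       # Bell state: maximally entangled, C = 1
product = concurrence_pure(1, 1, 1, 1)    # |+>|+> product state: C = 0
partial = concurrence_pure(2, 0, 0, 1)    # non-maximally entangled state
print(bell, product, partial)
```

State-ordering questions like those in the paper arise precisely because different measures (concurrence, REE, negativity, maximized QFI) can rank such intermediate states differently.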

  15. Analysis of Entanglement Measures and LOCC Maximized Quantum Fisher Information of General Two Qubit Systems

    PubMed Central

    Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali

    2014-01-01

    Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement, such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability to detect multipartite entanglement, quantum Fisher information (QFI) has recently attracted intense attention, generally with entanglement in focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity, and vice versa. PMID:24957694

  16. Moisture sorption isotherms and thermodynamic properties of mexican mennonite-style cheese.

    PubMed

    Martinez-Monteagudo, Sergio I; Salais-Fierro, Fabiola

    2014-10-01

    Moisture adsorption isotherms of fresh and ripened Mexican Mennonite-style cheese were investigated using the static gravimetric method at 4, 8, and 12 °C in a water activity (aw) range of 0.08-0.96. These isotherms were modeled using the GAB, BET, Oswin and Halsey equations through weighted non-linear regression. All isotherms were sigmoid in shape, showing a type II BET isotherm, and the data were best described by the GAB model. The GAB model coefficients revealed that water adsorption by the cheese matrix is a multilayer process characterized by molecules that are strongly bound in the monolayer and molecules that are slightly structured in a multilayer. Using the GAB model, it was possible to estimate thermodynamic functions (net isosteric heat, differential entropy, integral enthalpy and entropy, and enthalpy-entropy compensation) as a function of moisture content. For both samples, the isosteric heat and differential entropy decreased exponentially with moisture content. The integral enthalpy gradually decreased with increasing moisture content after reaching a maximum value, while the integral entropy decreased with increasing moisture content after reaching a minimum value. A linear compensation was found between integral enthalpy and entropy, suggesting enthalpy-controlled adsorption. Determination of the moisture content-aw relationship yields important information for controlling the ripening, drying and storage operations, as well as for understanding the state of water within a cheese matrix.
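
    The GAB model mentioned above has the standard three-parameter form M(aw) = M0·C·K·aw / [(1 - K·aw)(1 - K·aw + C·K·aw)], with M0 the monolayer moisture content. A minimal sketch with hypothetical parameter values (the paper's fitted coefficients are not reproduced here):

```python
def gab_moisture(aw, m0, c, k):
    """GAB equation: equilibrium moisture content as a function of water activity aw.
    m0: monolayer moisture content, c: Guggenheim constant, k: multilayer factor."""
    denom = (1 - k * aw) * (1 - k * aw + c * k * aw)
    return m0 * c * k * aw / denom

# Illustrative (hypothetical) parameters; real values come from weighted
# non-linear regression on measured sorption data.
m0, c, k = 0.05, 10.0, 0.9
for aw in (0.1, 0.5, 0.9):
    # moisture content rises monotonically with aw (sigmoid, type II shape)
    print(round(gab_moisture(aw, m0, c, k), 4))
```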

  17. EEG entropy measures indicate decrease of cortical information processing in Disorders of Consciousness.

    PubMed

    Thul, Alexander; Lechinger, Julia; Donis, Johann; Michitsch, Gabriele; Pichler, Gerald; Kochs, Eberhard F; Jordan, Denis; Ilg, Rüdiger; Schabus, Manuel

    2016-02-01

    Clinical assessments that rely on behavioral responses to differentiate Disorders of Consciousness are at times inapt because of some patients' motor disabilities. To objectify patients' conditions of reduced consciousness, the present study evaluated the use of electroencephalography to measure residual brain activity. We analyzed entropy values of 18 scalp EEG channels of 15 severely brain-damaged patients with clinically diagnosed Minimally-Conscious-State (MCS) or Unresponsive-Wakefulness-Syndrome (UWS) and compared the results to a sample of 24 control subjects. Permutation entropy (PeEn) and symbolic transfer entropy (STEn), reflecting information processes in the EEG, were calculated for all subjects. Participants were tested on a modified active own-name paradigm to identify correlates of active instruction following. PeEn showed reduced local information content in the EEG of patients, which was most pronounced in UWS. STEn analysis revealed altered directed information flow in the EEG of patients, indicating impaired feed-backward connectivity. Responses to auditory stimulation yielded differences in entropy measures, indicating reduced information processing in MCS and UWS. Local EEG information content and information flow are affected in Disorders of Consciousness. This suggests local cortical information capacity and feedback information transfer as neural correlates of consciousness. The utilized EEG entropy analyses were able to differentiate patient groups with different Disorders of Consciousness. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
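
    Permutation entropy (PeEn), one of the two measures used in this record, is the Shannon entropy of ordinal patterns in a signal (Bandt-Pompe). A minimal sketch, independent of the study's EEG pipeline; the example series are hypothetical:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D series, in [0, 1]."""
    counts = Counter()
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        # Ordinal pattern: indices of the window sorted by value.
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] += 1
    n = sum(counts.values())
    h = 0.0
    for c in counts.values():
        p = c / n
        h -= p * math.log2(p)
    return h / math.log2(math.factorial(order))  # normalize by max entropy

print(permutation_entropy([1, 2, 3, 4, 5, 6]))       # 0.0: fully ordered series
print(permutation_entropy([4, 7, 9, 10, 6, 11, 3]))  # between 0 and 1: mixed patterns
```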

  18. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    DOE PAGES

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.; ...

    2017-03-03

    In this paper, we calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. Finally, we analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.

  19. Lepton asymmetry, neutrino spectral distortions, and big bang nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grohs, E.; Fuller, George M.; Kishimoto, C. T.

    In this paper, we calculate Boltzmann neutrino energy transport with self-consistently coupled nuclear reactions through the weak-decoupling-nucleosynthesis epoch in an early universe with significant lepton numbers. We find that the presence of lepton asymmetry enhances processes which give rise to nonthermal neutrino spectral distortions. Our results reveal how asymmetries in energy and entropy density uniquely evolve for different transport processes and neutrino flavors. The enhanced distortions in the neutrino spectra alter the expected big bang nucleosynthesis light element abundance yields relative to those in the standard Fermi-Dirac neutrino distribution cases. These yields, sensitive to the shapes of the neutrino energy spectra, are also sensitive to the phasing of the growth of distortions and entropy flow with time/scale factor. Finally, we analyze these issues and speculate on new sensitivity limits of deuterium and helium to lepton number.

  20. Predicting the Cosmological Constant from the Causal Entropic Principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Harnik, Roni; Kribs, Graham D.

    2007-02-20

    We compute the expected value of the cosmological constant in our universe from the Causal Entropic Principle. Since observers must obey the laws of thermodynamics and causality, it asserts that physical parameters are most likely to be found in the range of values for which the total entropy production within a causally connected region is maximized. Despite the absence of more explicit anthropic criteria, the resulting probability distribution turns out to be in excellent agreement with observation. In particular, we find that dust heated by stars dominates the entropy production, demonstrating the remarkable power of this thermodynamic selection criterion. The alternative approach of weighting by the number of "observers per baryon" is less well-defined, requires problematic assumptions about the nature of observers, and yet prefers values larger than present experimental bounds.

  1. Maximum nonlocality and minimum uncertainty using magic states

    NASA Astrophysics Data System (ADS)

    Howard, Mark

    2015-04-01

    We prove that magic states from the Clifford hierarchy give optimal solutions for tasks involving nonlocality and entropic uncertainty with respect to Pauli measurements. For both the nonlocality and uncertainty tasks, stabilizer states are the worst possible pure states, so our solutions have an operational interpretation as being highly nonstabilizer. The optimal strategy for a qudit version of the Clauser-Horne-Shimony-Holt game in prime dimensions is achieved by measuring maximally entangled states that are isomorphic to single-qudit magic states. These magic states have an appealingly simple form, and our proof shows that they are "balanced" with respect to all but one of the mutually unbiased stabilizer bases. Of all equatorial qudit states, magic states minimize the average entropic uncertainties for collision entropy and also, for small prime dimensions, min-entropy, a fact that may have implications for cryptography.

  2. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analysis methodologies, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness has the adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery-operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
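
    A simplified sketch of the encoding idea behind (X)BinEn, not the published algorithm: binarize successive differences, split the bit stream into m-sized vectors, and compute the Shannon entropy of the resulting patterns (the abstract notes that a special case of (X)BinEn is equivalent to Shannon's entropy). The example series are hypothetical.

```python
import math
from collections import Counter

def binarize_diff(series):
    """Differential binary encoding: 1 if the signal increases, else 0."""
    return [1 if b > a else 0 for a, b in zip(series, series[1:])]

def bin_entropy(series, m=2):
    """Shannon entropy (bits) of m-bit patterns of the binarized series;
    a toy sketch of the (X)BinEn encoding step only."""
    bits = binarize_diff(series)
    patterns = Counter(tuple(bits[i:i + m]) for i in range(len(bits) - m + 1))
    n = sum(patterns.values())
    h = 0.0
    for c in patterns.values():
        p = c / n
        h -= p * math.log2(p)
    return h

print(bin_entropy([1, 2, 3, 4, 5, 6, 7, 8]))  # 0.0 bits: monotone ramp
print(bin_entropy([1, 3, 2, 5, 1, 6, 2, 7]))  # 1.0 bit: strictly alternating up/down
```

The cross-entropy variant would apply the same pattern matching between the binarized vectors of two series, using the Hamming distance as in the abstract.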

  3. Efficiency of a thermodynamic motor at maximum power

    NASA Astrophysics Data System (ADS)

    Moreau, M.; Gaveau, B.; Schulman, L. S.

    2012-02-01

    Several recent theories address the efficiency of a macroscopic thermodynamic motor at maximum power and question the so-called Curzon-Ahlborn (CA) efficiency. Considering the entropy exchanges and productions in an n-sources motor, we study the maximization of its power and show that the controversies are partly due to some imprecision in the maximization variables. When power is maximized with respect to the system temperatures, these temperatures are proportional to the square root of the corresponding source temperatures, which leads to the CA formula for a bithermal motor. On the other hand, when power is maximized with respect to the transition durations, the Carnot efficiency of a bithermal motor admits the CA efficiency as a lower bound, which is attained if the duration of the adiabatic transitions can be neglected. Additionally, we compute the energetic efficiency, or “sustainable efficiency,” which can be defined for n sources, and we show that it has no other universal upper bound than 1, but that in certain situations, which are favorable for power production, it does not exceed ½.
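
    The Curzon-Ahlborn formula discussed in this record has the closed form η_CA = 1 - √(Tc/Th), to be compared with the Carnot efficiency η_C = 1 - Tc/Th. A minimal numerical check with hypothetical reservoir temperatures:

```python
import math

def carnot(t_cold, t_hot):
    """Carnot efficiency of a bithermal motor."""
    return 1 - t_cold / t_hot

def curzon_ahlborn(t_cold, t_hot):
    """Curzon-Ahlborn efficiency at maximum power: 1 - sqrt(Tc/Th)."""
    return 1 - math.sqrt(t_cold / t_hot)

# Hypothetical reservoir temperatures (kelvin).
tc, th = 300.0, 600.0
print(round(carnot(tc, th), 3))          # 0.5
print(round(curzon_ahlborn(tc, th), 3))  # 0.293
```

Consistent with the abstract, the CA value sits below the Carnot bound; when power is maximized over transition durations, CA becomes a lower bound on the efficiency instead.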

  4. Efficiency of a thermodynamic motor at maximum power.

    PubMed

    Moreau, M; Gaveau, B; Schulman, L S

    2012-02-01

    Several recent theories address the efficiency of a macroscopic thermodynamic motor at maximum power and question the so-called Curzon-Ahlborn (CA) efficiency. Considering the entropy exchanges and productions in an n-sources motor, we study the maximization of its power and show that the controversies are partly due to some imprecision in the maximization variables. When power is maximized with respect to the system temperatures, these temperatures are proportional to the square root of the corresponding source temperatures, which leads to the CA formula for a bithermal motor. On the other hand, when power is maximized with respect to the transition durations, the Carnot efficiency of a bithermal motor admits the CA efficiency as a lower bound, which is attained if the duration of the adiabatic transitions can be neglected. Additionally, we compute the energetic efficiency, or "sustainable efficiency," which can be defined for n sources, and we show that it has no other universal upper bound than 1, but that in certain situations, which are favorable for power production, it does not exceed ½. © 2012 American Physical Society

  5. The Molecular Origin of Enthalpy/Entropy Compensation in Biomolecular Recognition.

    PubMed

    Fox, Jerome M; Zhao, Mengxia; Fink, Michael J; Kang, Kyungtae; Whitesides, George M

    2018-05-20

    Biomolecular recognition can be stubborn; changes in the structures of associating molecules, or the environments in which they associate, often yield compensating changes in enthalpies and entropies of binding and no net change in affinities. This phenomenon, termed enthalpy/entropy (H/S) compensation, hinders efforts in biomolecular design, and its incidence, often a surprise to experimentalists, makes interactions between biomolecules difficult to predict. Although characterizing H/S compensation requires experimental care, it is unquestionably a real phenomenon that has, from an engineering perspective, useful physical origins. Studying H/S compensation can help illuminate the still-murky roles of water and dynamics in biomolecular recognition and self-assembly. This review summarizes known sources of H/S compensation (real and perceived) and lays out a conceptual framework for understanding and dissecting, and perhaps avoiding or exploiting, this phenomenon in biophysical systems.

  6. Comment on "Troublesome aspects of the Renyi-MaxEnt treatment".

    PubMed

    Oikonomou, Thomas; Bagci, G Baris

    2017-11-01

    Plastino et al. [Plastino et al., Phys. Rev. E 94, 012145 (2016); 10.1103/PhysRevE.94.012145] recently stated that the Rényi entropy is not suitable for thermodynamics by using functional calculus, since it leads to anomalous results unlike the Tsallis entropy. We first show that the Tsallis entropy also leads to such anomalous behaviors if one adopts the same functional calculus approach. Second, we note that one of the Lagrange multipliers is set in an ad hoc manner in the functional calculus approach of Plastino et al. Finally, the explanation for these anomalous behaviors is provided by observing that the generalized distributions obtained by Plastino et al. do not yield the ordinary canonical partition function in the appropriate limit and therefore cannot be considered as genuine generalized distributions.

  7. Breast mass detection in tomosynthesis projection images using information-theoretic similarity measures

    NASA Astrophysics Data System (ADS)

    Singh, Swatee; Tourassi, Georgia D.; Lo, Joseph Y.

    2007-03-01

    The purpose of this project is to study Computer Aided Detection (CADe) of breast masses for digital tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study used the 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a Difference of Gaussians (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an algorithm based on information theory where similarity metrics were calculated using knowledge databases consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated 5 similarity metrics to test the false positive reduction performance of our algorithm, specifically joint entropy, mutual information, Jensen difference divergence, symmetric Kullback-Leibler divergence, and conditional entropy. The best performance was achieved using the joint entropy similarity metric, resulting in an ROC Az of 0.87 ± 0.01. As a whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone, and 91% sensitivity when using tomosynthesis alone.
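
    Joint entropy, the best-performing similarity metric in this record, can be estimated from the joint intensity histogram of two equal-size images; in knowledge-based schemes of this kind, a lower joint entropy indicates a more similar image pair. A toy sketch with hypothetical flattened pixel lists, not the authors' implementation:

```python
import math
from collections import Counter

def joint_entropy(img_a, img_b):
    """Joint Shannon entropy (bits) of two equal-size grayscale images,
    estimated from their joint intensity histogram."""
    pairs = Counter(zip(img_a, img_b))  # images as flat pixel lists
    n = sum(pairs.values())
    h = 0.0
    for c in pairs.values():
        p = c / n
        h -= p * math.log2(p)
    return h

a = [0, 0, 1, 1, 2, 2, 3, 3]
b = [0, 0, 1, 1, 2, 2, 3, 3]  # identical image
c = [3, 1, 0, 2, 3, 0, 1, 2]  # unrelated image
print(joint_entropy(a, b))  # 2.0 bits: identical images, H(A,B) = H(A)
print(joint_entropy(a, c))  # 3.0 bits: dissimilar pair has higher joint entropy
```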

  8. Navier-Stokes-like equations for traffic flow.

    PubMed

    Velasco, R M; Marques, W

    2005-10-01

    The macroscopic traffic flow equations derived from the reduced Paveri-Fontana equation are closed starting with the maximization of the informational entropy. The homogeneous steady state taken as a reference is obtained for a specific model of the desired velocity and a kind of Chapman-Enskog method is developed to calculate the traffic pressure at the Navier-Stokes level. Numerical solution of the macroscopic traffic equations is obtained and its characteristics are analyzed.

  9. Geometric Universality in Brain Allosteric Protein Dynamics: Complex Hydrophobic Transformation Predicts Mutual Recognition by Polypeptides and Proteins,

    DTIC Science & Technology

    1986-10-01

    organic acids using the Hammett equation, has been called the hydrophobic effect. Water adjusts its geometry to maximize the number of intact hydrogen...understanding both structural stability with respect to the underlying equations (not initial values) and phase transitions in these dynamical hierarchies...for quantitative characterization. Although the complicated behavior is generated by deterministic equations, its description in entropies leads to

  10. Silica-promoted Diels-Alder reactions in carbon dioxide from gaseous to supercritical conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weinstein, R.D.; Renslo, A.R.; Danheiser, R.L.

    1999-04-15

    Amorphous fumed silica (SiO2) was shown to increase yields and selectivities of several Diels-Alder reactions in gaseous and supercritical CO2. Pressure effects on the Diels-Alder reaction were explored using methyl vinyl ketone and penta-1,3-diene at 80 C. The selectivity of the reaction was not affected by pressure/density. As pressure was increased, the yield decreased. At the reaction temperature, adsorption isotherms at various pressures were obtained for the reactants and the Diels-Alder adduct. As expected, when pressure is increased, the ratio of the amount of reactants adsorbed to the amount of reactants in the fluid phase decreases, thus causing the yield to decrease. The Langmuir adsorption model fit the adsorption data. The Langmuir equilibrium partitioning constants all decreased with increasing pressure. The effect of temperature on adsorption was experimentally determined and traditional heats of adsorption were calculated. However, since supercritical CO2 is a highly compressible fluid, it is logical to examine the effect of temperature at constant density. In this case, entropies of adsorption were obtained. The thermodynamic properties that influence the real enthalpy and entropy of adsorption were derived. Methods of doping the silica and improving yields and selectivities were also explored.

  11. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    ERIC Educational Resources Information Center

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
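
    The maximal split-half coefficient described above can be sketched directly: enumerate all splits of the items into two halves, correlate the half scores across subjects, apply the Spearman-Brown correction, and keep the maximum. A toy example with hypothetical data, not Osburn's analysis:

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def maximal_split_half(data):
    """data: list of subjects, each a list of item scores (even item count).
    Returns the largest Spearman-Brown-corrected split-half reliability."""
    k = len(data[0])
    best = -1.0
    for half in combinations(range(k), k // 2):
        other = [i for i in range(k) if i not in half]
        s1 = [sum(row[i] for i in half) for row in data]
        s2 = [sum(row[i] for i in other) for row in data]
        r = pearson(s1, s2)
        best = max(best, 2 * r / (1 + r))  # Spearman-Brown correction
    return best

# Hypothetical 5-subject, 4-item data set.
scores = [[3, 4, 3, 4], [2, 2, 3, 2], [5, 4, 5, 5], [1, 2, 1, 2], [4, 5, 4, 4]]
print(round(maximal_split_half(scores), 3))
```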

  12. Entanglement from dissipation and holographic interpretation

    NASA Astrophysics Data System (ADS)

    Cantcheff, M. Botta; Gadelha, Alexandre L.; Marchioro, Dáfni F. Z.; Nedel, Daniel Luiz

    2018-02-01

    In this work we study a dissipative field theory where the dissipation process is manifestly related to dynamical entanglement, and we put it in the holographic context. Such an endeavour is realized by further development of a canonical approach to studying quantum dissipation, which consists of doubling the degrees of freedom of the original system by defining an auxiliary one. A time-dependent entanglement entropy for the vacuum state is calculated, and a geometrical interpretation of the auxiliary system and the entropy is given in the context of the AdS/CFT correspondence using the Ryu-Takayanagi formula. We show that the dissipative dynamics is controlled by the entanglement entropy and there are two distinct stages: at early times the holographic interpretation requires some deviation from classical General Relativity; at late times the quantum system is described as a wormhole, a solution of Einstein's equations close to a maximally extended black hole with two asymptotically AdS boundaries. We focus our holographic analysis on this regime, and suggest a mechanism similar to the teleportation protocol to exchange (quantum) information between the two CFTs on the boundaries (see Maldacena et al. in Fortschr Phys 65(5):1700034, arXiv:1704.05333 [hep-th], 2017).

  13. Thermalization dynamics in a quenched many-body state

    NASA Astrophysics Data System (ADS)

    Kaufman, Adam; Preiss, Philipp; Tai, Eric; Lukin, Alex; Rispoli, Matthew; Schittko, Robert; Greiner, Markus

    2016-05-01

    Quantum and classical many-body systems appear to have disparate behavior due to the different mechanisms that govern their evolution. The dynamics of a classical many-body system equilibrate to maximally entropic states and quickly re-thermalize when perturbed. The assumptions of ergodicity and unbiased configurations lead to a successful framework of describing classical systems by a sampling of thermal ensembles that are blind to the system's microscopic details. By contrast, an isolated quantum many-body system is governed by unitary evolution: the system retains memory of past dynamics and maintains constant global entropy. However, even with these differing characteristics, the long-term behavior of local observables in quenched, non-integrable quantum systems is often well described by the same thermal framework. We explore the onset of this convergence in a many-body system of bosonic atoms in an optical lattice. Our system's finite size allows us to verify full state purity and measure local observables. We observe rapid growth and saturation of the entanglement entropy with constant global purity. The combination of global purity and thermalized local observables agrees with the Eigenstate Thermalization Hypothesis in the presence of a near-volume law in the entanglement entropy.

  14. Multi-Material Closure Model for High-Order Finite Element Lagrangian Hydrodynamics

    DOE PAGES

    Dobrev, V. A.; Kolev, T. V.; Rieben, R. N.; ...

    2016-04-27

    We present a new closure model for single fluid, multi-material Lagrangian hydrodynamics and its application to high-order finite element discretizations of these equations [1]. The model is general with respect to the number of materials, dimension and space and time discretizations. Knowledge about exact material interfaces is not required. Material indicator functions are evolved by a closure computation at each quadrature point of mixed cells, which can be viewed as a high-order variational generalization of the method of Tipton [2]. This computation is defined by the notion of partial non-instantaneous pressure equilibration, while the full pressure equilibration is achieved by both the closure model and the hydrodynamic motion. Exchange of internal energy between materials is derived through entropy considerations, that is, every material produces positive entropy, and the total entropy production is maximized in compression and minimized in expansion. Results are presented for standard one-dimensional two-material problems, followed by two-dimensional and three-dimensional multi-material high-velocity impact arbitrary Lagrangian–Eulerian calculations. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  15. Zeroth Law, Entropy, Equilibrium, and All That

    NASA Astrophysics Data System (ADS)

    Canagaratna, Sebastian G.

    2008-05-01

    The place of the zeroth law in the teaching of thermodynamics is examined in the context of the recent discussion by Gislason and Craig of some problems involving the establishment of thermal equilibrium. The concept of thermal equilibrium is introduced through the zeroth law. The relation between the zeroth law and the second law in the traditional approach to thermodynamics is discussed. It is shown that the traditional approach does not need to appeal to the second law to solve with rigor the type of problems discussed by Gislason and Craig: in problems not involving chemical reaction, the zeroth law and the condition for mechanical equilibrium, complemented by the first law and any necessary equations of state, are sufficient to determine the final state. We have to invoke the second law only if we wish to calculate the change of entropy. Since most students are exposed to a traditional approach to thermodynamics, the examples of Gislason and Craig are re-examined in terms of the traditional formulation. The maximization of the entropy in the final state can be verified in the traditional approach quite directly by the use of the fundamental equations of thermodynamics. This approach uses relatively simple mathematics in as general a setting as possible.

  16. Multi-Material Closure Model for High-Order Finite Element Lagrangian Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrev, V. A.; Kolev, T. V.; Rieben, R. N.

    We present a new closure model for single fluid, multi-material Lagrangian hydrodynamics and its application to high-order finite element discretizations of these equations [1]. The model is general with respect to the number of materials, dimension and space and time discretizations. Knowledge about exact material interfaces is not required. Material indicator functions are evolved by a closure computation at each quadrature point of mixed cells, which can be viewed as a high-order variational generalization of the method of Tipton [2]. This computation is defined by the notion of partial non-instantaneous pressure equilibration, while the full pressure equilibration is achieved by both the closure model and the hydrodynamic motion. Exchange of internal energy between materials is derived through entropy considerations, that is, every material produces positive entropy, and the total entropy production is maximized in compression and minimized in expansion. Results are presented for standard one-dimensional two-material problems, followed by two-dimensional and three-dimensional multi-material high-velocity impact arbitrary Lagrangian–Eulerian calculations. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  17. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, M_N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P_N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M_N algorithm that do not appear for the P_N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M_N to P_N decreases.

  18. Bacterial protease uses distinct thermodynamic signatures for substrate recognition.

    PubMed

    Bezerra, Gustavo Arruda; Ohara-Nemoto, Yuko; Cornaciu, Irina; Fedosyuk, Sofiya; Hoffmann, Guillaume; Round, Adam; Márquez, José A; Nemoto, Takayuki K; Djinović-Carugo, Kristina

    2017-06-06

    Porphyromonas gingivalis and Porphyromonas endodontalis are important bacteria related to periodontitis, the most common chronic inflammatory disease in humans worldwide. Its comorbidity with systemic diseases, such as type 2 diabetes, oral cancers and cardiovascular diseases, continues to generate considerable interest. Surprisingly, these two microorganisms do not ferment carbohydrates; rather they use proteinaceous substrates as carbon and energy sources. However, the underlying biochemical mechanisms of their energy metabolism remain unknown. Here, we show that dipeptidyl peptidase 11 (DPP11), a central metabolic enzyme in these bacteria, undergoes a conformational change upon peptide binding to distinguish substrates from end products. It binds substrates through an entropy-driven process and end products in an enthalpy-driven fashion. We show that an increase in protein conformational entropy is the main driving force for substrate binding via the unfolding of specific regions of the enzyme ("entropy reservoirs"). The relationship between our structural and thermodynamic data yields a distinct model for protein-protein interactions where protein conformational entropy modulates the binding free-energy. Further, our findings provide a framework for the structure-based design of specific DPP11 inhibitors.

  19. Elastic moduli and thermal expansion coefficients of medium-entropy subsystems of the CrMnFeCoNi high-entropy alloy

    DOE PAGES

    Laplanche, Guillaume; Gadaud, P.; Barsch, C.; ...

    2018-02-23

    Elastic moduli of a set of equiatomic alloys (CrFeCoNi, CrCoNi, CrFeNi, FeCoNi, MnCoNi, MnFeNi, and CoNi), which are medium-entropy subsystems of the CrMnFeCoNi high-entropy alloy, were determined as a function of temperature over the range 293 K–1000 K. Thermal expansion coefficients were determined for these alloys over the temperature range 100 K–673 K. All alloys were single-phase and had the face-centered cubic (FCC) crystal structure, except CrFeNi, which is a two-phase alloy containing a small amount of body-centered cubic (BCC) precipitates in an FCC matrix. The temperature dependences of thermal expansion coefficients and elastic moduli obtained here are useful for quantifying fundamental aspects such as solid solution strengthening, and for structural analysis/design. Furthermore, using the above results, the yield strengths reported in the literature for these alloys were normalized by their shear moduli to reveal the influence of shear modulus on solid solution strengthening.

  20. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, M N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P N, but their computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication-bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load-balancing issues in scaling the M N algorithm that do not appear for the P N algorithm. We also observe that in weak scaling tests, the ratio of the time to solution of M N to that of P N decreases.

  1. Advances of two-stage riser catalytic cracking of heavy oil for maximizing propylene yield (TMP) process.

    PubMed

    Chaohe, Yang; Xiaobo, Chen; Jinhong, Zhang; Chunyi, Li; Honghong, Shan

    The two-stage riser catalytic cracking of heavy oil for maximizing propylene yield (TMP) process, proposed by the State Key Laboratory of Heavy Oil Processing, China University of Petroleum, can remarkably enhance the propylene yield, minimize the dry gas and coke yields, and produce high-quality light oils (gasoline and diesel). It has been commercialized since 2006. To date, three TMP commercial units have been put into production and four more are under design and construction. The commercial data showed that, with paraffinic-based Daqing (China) atmospheric residue as the feedstock, the propylene yield reached 20.31 wt%, the liquid product yield (the total yield of liquefied petroleum gas, gasoline, and diesel) was 82.66 wt%, and the total yield of dry gas and coke was 14.28 wt%. Moreover, the research octane number of the gasoline could be up to 96.

  2. Synthesis of a single phase of high-entropy Laves intermetallics in the Ti-Zr-V-Cr-Ni equiatomic alloy

    NASA Astrophysics Data System (ADS)

    Yadav, T. P.; Mukhopadhyay, Semanti; Mishra, S. S.; Mukhopadhyay, N. K.; Srivastava, O. N.

    2017-12-01

    The high-entropy Ti-Zr-V-Cr-Ni (20 at% each) alloy, consisting of five hydride-forming elements, was successfully synthesised by conventional melting and casting as well as by the melt-spinning technique. The as-cast alloy consists entirely of the micron-sized hexagonal Laves phase of C14 type, whereas the melt-spun ribbon exhibits the evolution of a nanocrystalline Laves phase. There was no evidence of amorphous or other metastable phases under the present processing conditions. This is the first report of the synthesis of a single-phase high-entropy complex intermetallic compound in an equiatomic quinary alloy system. Detailed characterisation by X-ray diffraction, scanning and transmission electron microscopy and energy-dispersive X-ray spectroscopy confirmed the existence of a single-phase multi-component hexagonal C14-type Laves phase in all the as-cast, melt-spun and annealed alloys. The lattice parameters a = 5.08 Å and c = 8.41 Å were determined from the annealed material (annealed at 1173 K). Thermodynamic calculations following Miedema's approach support the stability of the high-entropy multi-component Laves phase relative to the solid-solution or glassy phases. A high hardness (8.92 GPa at 25 g load) was observed in the nanocrystalline high-entropy alloy ribbon without any cracking, implying that a high yield strength (about 3.00 GPa) and reasonable fracture toughness can be achieved in this high-entropy material.

  3. Temperature lapse rates at restricted thermodynamic equilibrium. Part II: Saturated air and further discussions

    NASA Astrophysics Data System (ADS)

    Björnbom, Pehr

    2016-03-01

    In the first part of this work, equilibrium temperature profiles in fluid columns with an ideal gas or ideal liquid were obtained by numerically minimizing the column energy at constant entropy, which is equivalent to maximizing column entropy at constant energy. A minimum in internal plus potential energy for an isothermal temperature profile was obtained, in line with Gibbs' classical equilibrium criterion. However, a minimum in internal energy alone for adiabatic temperature profiles was also obtained. This led to the hypothesis that the adiabatic lapse rate corresponds to a restricted equilibrium state, a type of state in fact already discussed by Gibbs. In this paper, similar numerical results for a fluid column with saturated air suggest that the saturated adiabatic lapse rate also corresponds to a restricted equilibrium state. The proposed hypothesis is further discussed and amended based on the previous and present numerical results and a theoretical analysis grounded in Gibbs' equilibrium theory.
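For the dry ideal-gas case discussed above, the adiabatic lapse rate has the familiar closed form Γ = g/c_p; a quick numerical check with nominal textbook values (not taken from the paper):

```python
# Dry adiabatic lapse rate Gamma = g / c_p for an ideal-gas column
# (standard textbook result; nominal values for Earth's atmosphere).
g = 9.81        # gravitational acceleration, m/s^2
cp = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate = g / cp                           # K per metre
print(round(lapse_rate * 1000, 2), "K/km")    # ≈ 9.77 K/km
```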

  4. Entanglement tsunami: universal scaling in holographic thermalization.

    PubMed

    Liu, Hong; Suh, S Josephine

    2014-01-10

    We consider the time evolution of entanglement entropy after a global quench in a strongly coupled holographic system, whose subsequent equilibration is described in the gravity dual by the gravitational collapse of a thin shell of matter resulting in a black hole. In the limit of large regions of entanglement, the evolution of entanglement entropy is controlled by the geometry around and inside the event horizon of the black hole, resulting in regimes of pre-local-equilibration quadratic growth (in time), post-local-equilibration linear growth, a late-time regime in which the evolution does not carry memory of the size and shape of the entangled region, and a saturation regime with critical behavior resembling those in continuous phase transitions. Collectively, these regimes suggest a picture of entanglement growth in which an "entanglement tsunami" carries entanglement inward from the boundary. We also make a conjecture on the maximal rate of entanglement growth in relativistic systems.

  5. Nighttime image dehazing using local atmospheric selection rule and weighted entropy for visible-light systems

    NASA Astrophysics Data System (ADS)

    Park, Dubok; Han, David K.; Ko, Hanseok

    2017-05-01

    Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
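The final haze-removal step described above can be sketched with the standard atmospheric scattering model I = J·t + A·(1 − t), where J is the scene radiance, A the atmospheric light, and t the transmission. Estimating A and t is the paper's contribution and is not reproduced here; this is a minimal grayscale sketch of the inversion only:

```python
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """Recover scene radiance J from hazy image I via the scattering
    model I = J*t + A*(1 - t). Transmission t is clamped from below
    to avoid amplifying noise where it is near zero."""
    t = np.maximum(t, t_min)
    return (I - A) / t + A

# Toy example: a uniform scene J = 0.5 seen through haze with A = 0.9, t = 0.4.
t = np.full((4, 4), 0.4)
I = 0.5 * t + 0.9 * (1 - t)            # forward (hazing) model
J_rec = dehaze(I, A=0.9, t=t)
print(np.allclose(J_rec, 0.5))         # True: the model inverts exactly
```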

  6. A fault diagnosis scheme for planetary gearboxes using modified multi-scale symbolic dynamic entropy and mRMR feature selection

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu

    2017-07-01

    Health condition identification of planetary gearboxes is crucial to reducing downtime and maximizing productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearboxes. MMSDE is proposed to quantify the regularity of time series, assessing the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and in computational efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained features are fed into a least squares support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
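The entropy-of-symbols idea behind MMSDE can be illustrated with a plain single-scale symbolic entropy (a generic sketch, not the authors' modified multi-scale algorithm): discretize the series into quantile bins, form short symbol words, and take the Shannon entropy of the word distribution. Regular signals concentrate probability on few words and score low; irregular signals score high.

```python
import numpy as np
from collections import Counter

def symbolic_entropy(x, n_symbols=4, word_len=2):
    """Shannon entropy (nats) of symbol words: discretize x into
    n_symbols quantile bins, form overlapping words of length
    word_len, and compute the entropy of the word distribution."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(x, edges)
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)            # irregular signal
tone = np.sin(0.1 * np.arange(5000))         # regular, slowly varying signal
print(symbolic_entropy(noise) > symbolic_entropy(tone))   # True
```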

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf’s law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf’s law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf’s law is one out of four pillars.
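The harmonic relation mentioned above is the Zipf case s(r) ∝ 1/r, for which the product of rank and size is constant; a trivial numerical check:

```python
# Zipf's law: when sizes follow s(r) = C / r (inverse power law with
# exponent 1), the product rank * size is constant -- the harmonic relation.
C = 1000.0
ranks = range(1, 101)
sizes = [C / r for r in ranks]               # a decreasing rank distribution

products = [r * s for r, s in zip(ranks, sizes)]
print(all(abs(p - C) < 1e-9 for p in products))   # True
```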

  8. Entanglement and area law with a fractal boundary in a topologically ordered phase

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Lidar, Daniel A.; Severini, Simone

    2010-01-01

    Quantum systems with short-range interactions are known to respect an area law for the entanglement entropy: the von Neumann entropy S associated with a bipartition scales with the size p of the boundary between the two parts. Here we study the case in which the boundary is a fractal. We consider the topologically ordered phase of the toric code with a magnetic field. When the field vanishes, it is possible to analytically compute the entanglement entropy for both regular and fractal bipartitions (A,B) of the system, and this yields an upper bound for the entire topological phase. When the A-B boundary is regular, we have S/p=1 for large p. When the boundary is a fractal of Hausdorff dimension D, we show that the entanglement between the two parts scales as S/p=γ⩽1/D, where γ depends on the fractal considered.

  9. A core-halo pattern of entropy creation in gravitational collapse

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.

    2018-03-01

    This paper presents a kinetic theory model of gravitational collapse due to a small perturbation. Solving the relevant equations yields a pattern of entropy destruction in a spherical core around the perturbation, and entropy creation in a surrounding halo. This indicates collisional "de-relaxation" in the core, and collisional relaxation in the halo. Core-halo patterns are ubiquitous in the astrophysics of gravitational collapse, and are found here without any of the prior assumptions of such a pattern usually made in analytical models. Motivated by this analysis, the paper outlines a possible scheme for identifying structure formation in a set of observations or a simulation. This scheme involves a choice of coarse-graining scale appropriate to the structure under consideration, and might aid exploration of hierarchical structure formation, supplementing the usual density-based methods for highlighting astrophysical and cosmological structure at various scales.

  10. A core-halo pattern of entropy creation in gravitational collapse

    NASA Astrophysics Data System (ADS)

    Wren, Andrew J.

    2018-07-01

    This paper presents a kinetic theory model of gravitational collapse due to a small perturbation. Solving the relevant equations yields a pattern of entropy destruction in a spherical core around the perturbation, and entropy creation in a surrounding halo. This indicates collisional `de-relaxation' in the core, and collisional relaxation in the halo. Core-halo patterns are ubiquitous in the astrophysics of gravitational collapse and are found here without any of the prior assumptions of such a pattern usually made in analytical models. Motivated by this analysis, the paper outlines a possible scheme for identifying structure formation in a set of observations or a simulation. This scheme involves a choice of coarse-graining scale appropriate to the structure under consideration, and might aid exploration of hierarchical structure formation, supplementing the usual density-based methods for highlighting astrophysical and cosmological structure at various scales.

  11. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    PubMed

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. Experimental results confirm that, with little increase in computation, the proposed fast algorithm achieves about a 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.

  12. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261
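The Gaussianization baseline that MaxEnt generalizes can be sketched as rank-based inversion: map the empirical CDF of the distorted observation through the inverse Gaussian CDF. Because a monotonic distortion preserves ranks, this recovers the latent signal up to a monotone rescaling. It is a generic illustration, not the authors' MaxEnt algorithm:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
x = rng.standard_normal(20000)          # latent signal (≈ Gaussian, as a sum would be)
y = np.tanh(1.5 * x) + 2.0              # unknown monotonic distortion g(x)

# Gaussianize: empirical CDF of y mapped through the inverse normal CDF.
ranks = y.argsort().argsort() + 1       # ranks 1..n (identical to ranks of x)
u = ranks / (len(y) + 1)                # empirical CDF values in (0, 1)
x_hat = np.array([NormalDist().inv_cdf(p) for p in u])

print(np.corrcoef(x, x_hat)[0, 1] > 0.99)   # distortion undone, up to scale
```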

  13. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.

  14. An entropic framework for modeling economies

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel; Golan, Amos

    2014-08-01

    We develop an information-theoretic framework for economic modeling. This framework is based on principles of entropic inference that are designed for reasoning on the basis of incomplete information. We take the point of view of an external observer who has access to limited information about broad macroscopic economic features. We view this framework as complementary to more traditional methods. The economy is modeled as a collection of agents about whom we make no assumptions of rationality (in the sense of maximizing utility or profit). States of statistical equilibrium are introduced as those macrostates that maximize entropy subject to the relevant information codified into constraints. The basic assumption is that this information refers to supply and demand and is expressed in the form of the expected values of certain quantities (such as inputs, resources, goods, production functions, utility functions and budgets). The notion of economic entropy is introduced. It provides a measure of the uniformity of the distribution of goods and resources, capturing both the welfare state of the economy and the characteristics of the market (say, monopolistic, concentrated or competitive). Prices, which turn out to be the Lagrange multipliers, are endogenously generated by the economy. Further studies include the equilibrium between two economies and the conditions for stability. As an example, the case of the nonlinear economy that arises from linear production and utility functions is treated in some detail.
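The equilibrium construction described above (maximize entropy subject to an expected-value constraint) yields a Gibbs/exponential distribution p_i ∝ exp(−λq_i), with the Lagrange multiplier λ playing the role of a price. A minimal numerical sketch with toy quantities (not the paper's model), solving for λ by bisection:

```python
import numpy as np

q = np.array([1.0, 2.0, 3.0, 4.0])      # toy quantities of a good
target_mean = 1.8                        # constraint: E[q] = 1.8

def mean_q(lam):
    """Mean of q under the Gibbs distribution p_i ∝ exp(-lam * q_i)."""
    p = np.exp(-lam * q)
    p /= p.sum()
    return (p * q).sum()

# mean_q is decreasing in lam, so bisect for the multiplier.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_q(mid) > target_mean:
        lo = mid                         # mean too high: raise lam
    else:
        hi = mid
lam = 0.5 * (lo + hi)

p = np.exp(-lam * q)
p /= p.sum()                             # the maximum-entropy distribution
print(abs((p * q).sum() - target_mean) < 1e-6)   # True: constraint satisfied
```

Among all distributions over q with that mean, p has maximal Shannon entropy; λ is the shadow price of the constraint.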

  15. Understanding shape entropy through local dense packing

    DOE PAGES

    van Anders, Greg; Klotsa, Daphne; Ahmed, N. Khalid; ...

    2014-10-24

    Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. In this paper, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy (k_BT) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. Finally, we show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa.

  16. Crystallization in melts of short, semiflexible hard polymer chains: An interplay of entropies and dimensions

    NASA Astrophysics Data System (ADS)

    Shakirov, T.; Paul, W.

    2018-04-01

    What is the thermodynamic driving force for the crystallization of melts of semiflexible polymers? We try to answer this question by employing stochastic approximation Monte Carlo simulations to obtain the complete thermodynamic equilibrium information for a melt of short, semiflexible polymer chains with purely repulsive nonbonded interactions. The thermodynamics is obtained based on the density of states of our coarse-grained model, which varies by up to 5600 orders of magnitude. We show that our polymer melt undergoes a first-order crystallization transition upon increasing the chain stiffness at fixed density. This crystallization can be understood by the interplay of the maximization of different entropy contributions in different spatial dimensions. At sufficient stiffness and density, the three-dimensional orientational interactions drive the orientational ordering transition, which is accompanied by a two-dimensional translational ordering transition in the plane perpendicular to the chains resulting in a hexagonal crystal structure. While the three-dimensional ordering can be understood in terms of Onsager theory, the two-dimensional transition can be understood in terms of the liquid-hexatic transition of hard disks. Due to the domination of lateral two-dimensional translational entropy over the one-dimensional translational entropy connected with columnar displacements, the chains form a lamellar phase. Based on this physical understanding, orientational ordering and translational ordering should be separable for polymer melts. A phenomenological theory based on this understanding predicts a qualitative phase diagram as a function of volume fraction and stiffness in good agreement with results from the literature.

  17. Assimilation of Remotely Sensed Soil Moisture Profiles into a Crop Modeling Framework for Reliable Yield Estimations

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J.; Mecikalski, J. R.

    2017-12-01

    Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSMs). These efforts have normally focused on data derived from the microwave bands, and results have often shown that improvements to model simulations are limited by the fact that microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave-derived surface estimates. In this study, root zone soil moisture estimates from the TIR-based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, requires only a surface boundary condition and the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean, and the entropy integral was used to tie the two together and produce the vertical profile. Prior to the merging, however, the coarse-resolution (9 km) SMAP data were downscaled to the finer-resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach, and again all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (the Decision Support System for Agrotechnology Transfer, DSSAT) via the ensemble Kalman filter. The study was conducted over the Southeastern United States for the growing seasons from 2015-2017. Soil moisture profiles compared favorably to in situ data and simulated crop yields compared well with observed yields.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi

    ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.

  19. Teaching the principles of statistical dynamics

    PubMed Central

    Ghosh, Kingshuk; Dill, Ken A.; Inamdar, Mandar M.; Seitaridou, Effrosyni; Phillips, Rob

    2012-01-01

    We describe a simple framework for teaching the principles that underlie the dynamical laws of transport: Fick’s law of diffusion, Fourier’s law of heat flow, the Newtonian viscosity law, and the mass-action laws of chemical kinetics. In analogy with the way that the maximization of entropy over microstates leads to the Boltzmann distribution and predictions about equilibria, maximizing a quantity that E. T. Jaynes called “caliber” over all the possible microtrajectories leads to these dynamical laws. The principle of maximum caliber also leads to dynamical distribution functions that characterize the relative probabilities of different microtrajectories. A great source of recent interest in statistical dynamics has resulted from a new generation of single-particle and single-molecule experiments that make it possible to observe dynamics one trajectory at a time. PMID:23585693

  20. Teaching the principles of statistical dynamics.

    PubMed

    Ghosh, Kingshuk; Dill, Ken A; Inamdar, Mandar M; Seitaridou, Effrosyni; Phillips, Rob

    2006-02-01

    We describe a simple framework for teaching the principles that underlie the dynamical laws of transport: Fick's law of diffusion, Fourier's law of heat flow, the Newtonian viscosity law, and the mass-action laws of chemical kinetics. In analogy with the way that the maximization of entropy over microstates leads to the Boltzmann distribution and predictions about equilibria, maximizing a quantity that E. T. Jaynes called "caliber" over all the possible microtrajectories leads to these dynamical laws. The principle of maximum caliber also leads to dynamical distribution functions that characterize the relative probabilities of different microtrajectories. A great source of recent interest in statistical dynamics has resulted from a new generation of single-particle and single-molecule experiments that make it possible to observe dynamics one trajectory at a time.

  1. Dephasing-covariant operations enable asymptotic reversibility of quantum resources

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric

    2018-05-01

    We study the power of dephasing-covariant operations in the resource theories of coherence and entanglement. These are quantum operations whose actions commute with a projective measurement. In the resource theory of coherence, we find that any two states are asymptotically interconvertible under dephasing-covariant operations. This provides a rare example of a resource theory in which asymptotic reversibility can be attained without needing the maximal set of resource nongenerating operations. When extended to the resource theory of entanglement, the resultant operations share similarities with local operations and classical communication, such as prohibiting the increase of all Rényi α-entropies of entanglement under pure-state transformations. However, we show these operations are still strong enough to enable asymptotic reversibility between any two maximally correlated mixed states, even in the multipartite setting.

  2. Li-ion site disorder driven superionic conductivity in solid electrolytes: a first-principles investigation of β-Li 3PS 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phani Dathar, Gopi Krishna; Balachandran, Janakiraman; Kent, Paul R. C.

    The attractive safety and long-term stability of all-solid-state batteries have added new impetus to the discovery and development of solid electrolytes for lithium batteries. Recently, several superionic lithium-conducting solid electrolytes have been discovered. All the superionic lithium-containing compounds (β-Li 3PS 4 and Li 10GeP 2S 12, and oxides, predominantly in the garnet phase) have partially occupied sites. This naturally raises the question of the role of partial site occupancies (or site disorder) in optimizing ionic conductivity in this family of solids. In this paper, we find that for a given topology of the host lattice, maximizing the number of sites with similar Li-ion adsorption energies, which gives partial site occupancy, is a natural way to increase the configurational entropy of the system and optimize the conductivity. For a given topology and density of Li-ion adsorption sites, the ionic conductivity is maximal when the number of mobile Li-ions equals the number of mobile vacancies, which is also the condition for maximal configurational entropy. We demonstrate the applicability of this principle by elucidating the role of Li-ion site disorder and the local chemical environment in the high ionic conductivity of β-Li 3PS 4. In addition, for β-Li 3PS 4 we find that a significant density of vacancies in the Li-ion sub-lattice (~25%) leads to sub-lattice melting at ~600 K, producing a molten form of the Li-ions in an otherwise solid anionic host. This gives a lithium site occupancy similar to what is measured experimentally. We further show that quenching this disorder can improve conductivity at lower temperatures. As a consequence, we discover that (a) one can optimize ionic conductivity in a given topology by choosing a chemistry/composition that maximizes the number of mobile carriers, i.e. maximizing both mobile Li-ions and vacancies, and (b) when the concentration of vacancies in the Li-ion sub-lattice becomes significant, it becomes energetically as well as entropically favorable for it to remain molten well below the bulk decomposition temperature of the solid. Finally, this principle may already apply to several known superionic conducting solids.

  3. Li-ion site disorder driven superionic conductivity in solid electrolytes: a first-principles investigation of β-Li₃PS₄

    DOE PAGES

    Phani Dathar, Gopi Krishna; Balachandran, Janakiraman; Kent, Paul R. C.; ...

    2016-12-09

    The attractive safety and long-term stability of all-solid-state batteries have added new impetus to the discovery and development of solid electrolytes for lithium batteries. Several superionic lithium-conducting solid electrolytes have recently been discovered. All of the superionic lithium-containing compounds (β-Li₃PS₄, Li₁₀GeP₂S₁₂, and oxides, predominantly in the garnet phase) have partially occupied sites, which raises the question of the role of partial site occupancy (or site disorder) in optimizing ionic conductivity in this family of solids. In this paper, we find that for a given topology of the host lattice, maximizing the number of sites with similar Li-ion adsorption energies, which gives partial site occupancy, is a natural way to increase the configurational entropy of the system and optimize the conductivity. For a given topology and density of Li-ion adsorption sites, the ionic conductivity is maximal when the number of mobile Li-ions equals the number of mobile vacancies, which is also the condition for maximal configurational entropy. We demonstrate the applicability of this principle by elucidating the role of Li-ion site disorder and the local chemical environment in the high ionic conductivity of β-Li₃PS₄. In addition, for β-Li₃PS₄ we find that a significant density of vacancies in the Li-ion sub-lattice (~25%) leads to sub-lattice melting at ~600 K, producing a molten form of the Li-ions in an otherwise solid anionic host. This gives a lithium site occupancy similar to what is measured experimentally. We further show that quenching this disorder can improve conductivity at lower temperatures. As a consequence, we discover that (a) one can optimize ionic conductivity in a given topology by choosing a chemistry/composition that maximizes the number of mobile carriers, i.e., both mobile Li-ions and vacancies, and (b) when the concentration of vacancies in the Li-ion sub-lattice becomes significant, it becomes energetically as well as entropically favorable for it to remain molten well below the bulk decomposition temperature of the solid. Finally, this principle may already apply to several known superionic conducting solids.
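    The condition that conductivity peaks when mobile Li-ions and mobile vacancies are equal in number mirrors the maximum of the ideal mixing entropy per site, S(x) = −k_B[x ln x + (1−x) ln(1−x)], at occupancy x = 0.5. A minimal numerical sketch of that statement (illustrative only; the function name is not from the paper):

    ```python
    import numpy as np

    def mixing_entropy(x):
        """Ideal configurational (mixing) entropy per site, in units of k_B,
        for Li-site occupancy fraction x (vacancy fraction 1 - x)."""
        x = np.asarray(x, dtype=float)
        return -(x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

    # Scan occupancies and locate the entropy maximum.
    x = np.linspace(0.01, 0.99, 981)
    s = mixing_entropy(x)
    x_max = x[np.argmax(s)]
    print(f"entropy is maximal at occupancy x = {x_max:.2f}")
    ```

    The maximum sits at equal numbers of occupied and vacant mobile sites, consistent with the paper's statement that equal mobile-ion and vacancy counts maximize the configurational entropy.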

  4. Thermodynamics of k-essence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilic, Neven

    We discuss thermodynamic properties of dark energy using the formalism of field theory at finite temperature. In particular, we apply our formalism to a purely kinetic type of k-essence. We show quite generally that the entropy associated with dark energy is always greater than or equal to zero. Hence, contrary to often-stated claims, a violation of the null energy condition (phantom dark energy) does not necessarily yield a negative entropy. In addition, we find that the thermal fluctuations of a k-essence field may be represented by a free boson gas with an effective number of degrees of freedom equal to c_s⁻³.

  5. High pressure synthesis of a hexagonal close-packed phase of the high-entropy alloy CrMnFeCoNi

    DOE PAGES

    Tracy, Cameron L.; Park, Sulgiye; Rittman, Dylan R.; ...

    2017-05-25

    High pressure x-ray diffraction measurements reveal that the face-centered cubic (fcc) high-entropy alloy CrMnFeCoNi transforms martensitically to a hexagonal close-packed (hcp) phase at ~14 GPa. We attribute this to suppression of the local magnetic moments, which destabilizes the fcc phase. Similar to fcc-to-hcp transformations in Al and the noble gases, this transformation is sluggish, occurring over a range of >40 GPa. However, the behavior of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures.

  6. Maximally random discrete-spin systems with symmetric and asymmetric interactions and maximally degenerate ordering

    NASA Astrophysics Data System (ADS)

    Atalay, Bora; Berker, A. Nihat

    2018-05-01

    Discrete-spin systems with maximally random nearest-neighbor interactions that can be symmetric or asymmetric, ferromagnetic or antiferromagnetic, including off-diagonal disorder, are studied for the number of states q = 3, 4 in d dimensions. We use renormalization-group theory that is exact for hierarchical lattices and approximate (Migdal-Kadanoff) for hypercubic lattices. For all d > 1 and all noninfinite temperatures, the system eventually renormalizes to a random single state, thus signaling q × q degenerate ordering, which is the maximally degenerate ordering. For high-temperature initial conditions, the system crosses over to this highly degenerate ordering only after spending many renormalization-group iterations near the disordered (infinite-temperature) fixed point. Thus, a temperature range of short-range disorder in the presence of long-range order is identified, as previously seen in underfrustrated Ising spin-glass systems. The entropy is calculated for all temperatures, behaves similarly for ferromagnetic and antiferromagnetic interactions, and shows a derivative maximum at the short-range disordering temperature. In sharp contrast with the infinitesimally higher dimension 1 + ε, the system is, as expected, disordered at all temperatures for d = 1.

  7. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

    Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved classification of EMI events based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show improved classification accuracy compared to previously proposed methods, yielding a successful development of an expert-knowledge-based intelligent system. Since this method is demonstrated to be successful with real field data, it brings the benefit of possible real-world application for EMI condition monitoring. PMID:29385030
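    Entropy features of this kind are straightforward to compute. Below is a minimal sketch of normalized permutation entropy, the baseline feature the paper augments with Dispersion Entropy (which follows the same pattern-counting idea with a different symbolization); the function name is illustrative, not from the paper:

    ```python
    import math
    from collections import Counter

    import numpy as np

    def permutation_entropy(x, m=3, tau=1):
        """Normalized permutation entropy of a 1-D signal x: Shannon entropy
        of the ordinal-pattern frequencies, divided by log(m!)."""
        x = np.asarray(x, dtype=float)
        # Symbolize each window of length m by its argsort ordinal pattern.
        patterns = [tuple(np.argsort(x[i:i + tau * m:tau]))
                    for i in range(len(x) - tau * (m - 1))]
        counts = np.array(list(Counter(patterns).values()), dtype=float)
        p = counts / counts.sum()
        return float(np.sum(-p * np.log(p)) / math.log(math.factorial(m)))

    rng = np.random.default_rng(0)
    pe_trend = permutation_entropy(np.arange(200.0))   # one pattern: entropy 0
    pe_noise = permutation_entropy(rng.standard_normal(2000))  # near maximal
    print(pe_trend, pe_noise)
    ```

    A monotone trend produces a single ordinal pattern (zero entropy), while broadband noise spreads probability over all m! patterns, which is why such features separate stationary from irregular discharge signals.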

  8. Elastic moduli and thermal expansion coefficients of medium-entropy subsystems of the CrMnFeCoNi high-entropy alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laplanche, Guillaume; Gadaud, P.; Barsch, C.

    Elastic moduli of a set of equiatomic alloys (CrFeCoNi, CrCoNi, CrFeNi, FeCoNi, MnCoNi, MnFeNi, and CoNi), which are medium-entropy subsystems of the CrMnFeCoNi high-entropy alloy, were determined as a function of temperature over the range 293 K–1000 K. Thermal expansion coefficients were determined for these alloys over the temperature range 100 K–673 K. All alloys were single-phase and had the face-centered cubic (FCC) crystal structure, except CrFeNi, which is a two-phase alloy containing a small amount of body-centered cubic (BCC) precipitates in an FCC matrix. The temperature dependences of the thermal expansion coefficients and elastic moduli obtained here are useful for quantifying fundamental aspects such as solid solution strengthening, and for structural analysis/design. Furthermore, using the above results, the yield strengths reported in the literature for these alloys were normalized by their shear moduli to reveal the influence of shear modulus on solid solution strengthening.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with increasing dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.
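    For a state with distinct eigenvalues λ_k, the subentropy referred to above has the closed form Q(ρ) = −Σ_k (λ_k^n / Π_{j≠k}(λ_k − λ_j)) ln λ_k (Jozsa, Robb, and Wootters), and it never exceeds the von Neumann entropy. A numerical sketch under the distinct-eigenvalue assumption (not code from the paper):

    ```python
    import numpy as np

    def subentropy(eigs):
        """Jozsa-Robb-Wootters subentropy Q (in nats) for a density matrix
        with distinct, strictly positive eigenvalues `eigs`."""
        lam = np.asarray(eigs, dtype=float)
        n = lam.size
        q = 0.0
        for k in range(n):
            denom = np.prod([lam[k] - lam[j] for j in range(n) if j != k])
            q -= lam[k] ** n / denom * np.log(lam[k])
        return q

    def von_neumann(eigs):
        """Von Neumann entropy -sum(lam * ln(lam)) in nats."""
        lam = np.asarray(eigs, dtype=float)
        return float(-(lam * np.log(lam)).sum())

    lam = [0.7, 0.3]
    q, s = subentropy(lam), von_neumann(lam)
    print(q, s)  # Q is strictly below the von Neumann entropy
    ```

    For the maximally mixed state in dimension n the subentropy equals ln n − (H_n − 1), with H_n the n-th harmonic number (ln 2 − 1/2 ≈ 0.193 nats for a qubit), which is the maximum value the abstract refers to.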

  10. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  11. Translation Invariant Extensions of Finite Volume Measures

    NASA Astrophysics Data System (ADS)

    Goldstein, S.; Kuna, T.; Lebowitz, J. L.; Speer, E. R.

    2017-02-01

    We investigate the following questions: given a measure μ_Λ on configurations on a subset Λ of a lattice L, where a configuration is an element of Ω^Λ for some fixed set Ω, does there exist a measure μ on configurations on all of L, invariant under some specified symmetry group of L, such that μ_Λ is its marginal on configurations on Λ? When the answer is yes, what are the properties, e.g., the entropies, of such measures? Our primary focus is the case in which L = Z^d and the symmetries are the translations. For the case in which Λ is an interval in Z we give a simple necessary and sufficient condition, local translation invariance (LTI), for extendibility. For LTI measures we construct extensions having maximal entropy, which we show are Gibbs measures; this construction extends to the case in which L is the Bethe lattice. On Z we also consider extensions supported on periodic configurations, which are analyzed using de Bruijn graphs and which include the extensions with minimal entropy. When Λ ⊂ Z is not an interval, or when Λ ⊂ Z^d with d > 1, the LTI condition is necessary but not sufficient for extendibility. For Z^d with d > 1, extendibility is in some sense undecidable.

  12. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
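    The construction described here, maximizing entropy subject to a fixed expected error, yields the canonical form p_i ∝ exp(−βE_i), with the sensitivity factor β fixed by the constraint ⟨E⟩ = E*. A minimal sketch of that step (illustrative, not the authors' code; the error values are invented), solving for β by bisection:

    ```python
    import numpy as np

    def canonical_weights(errors, beta):
        """Canonical maximum-entropy distribution p_i ~ exp(-beta * E_i)."""
        w = np.exp(-beta * (errors - errors.min()))  # shift for stability
        return w / w.sum()

    def solve_beta(errors, target_mean, lo=0.0, hi=1e3, iters=200):
        """Bisect for beta such that <E> = target_mean; assumes
        errors.min() < target_mean < errors.mean()."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            mean_e = canonical_weights(errors, mid) @ errors
            # <E> decreases monotonically with beta.
            lo, hi = (lo, mid) if mean_e < target_mean else (mid, hi)
        return 0.5 * (lo + hi)

    errors = np.array([0.2, 0.5, 0.9, 1.4, 2.0])  # model error values (illustrative)
    beta = solve_beta(errors, target_mean=0.6)
    p = canonical_weights(errors, beta)
    print(beta, p)  # low-error models receive the most probability mass
    ```

    The resulting distribution is the conservative (maximum-entropy) estimate consistent with the specified expectation value, from which marginals over individual parameters can then be integrated.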

  13. Evaluation of cluster expansions and correlated one-body properties of nuclei

    NASA Astrophysics Data System (ADS)

    Moustakidis, Ch. C.; Massen, S. E.; Panos, C. P.; Grypeos, M. E.; Antonov, A. N.

    2001-07-01

    Three different cluster expansions for the evaluation of correlated one-body properties of s-p and s-d shell nuclei are compared. Harmonic oscillator wave functions and Jastrow-type correlations are used, while analytical expressions are obtained for the charge form factor, density distribution, and momentum distribution by truncating the expansions and using a standard Jastrow correlation function f. The harmonic oscillator parameter b and the correlation parameter β have been determined by a least-squares fit to the experimental charge form factors in each case. The information entropy of nuclei in position space (Sr) and momentum space (Sk) according to the three methods are also calculated. It is found that the larger the entropy sum, S=Sr+Sk (the net information content of the system), the smaller the values of χ2. This indicates that maximal S is a criterion of the quality of a given nuclear model, according to the maximum entropy principle. Only two exceptions to this rule, out of many cases examined, were found. Finally an analytic expression for the so-called "healing" or "wound" integrals is derived with the function f considered, for any state of the relative two-nucleon motion, and their values in certain cases are computed and compared.

  14. Estimating transition probabilities in unmarked populations --entropy revisited

    USGS Publications Warehouse

    Cooch, E.G.; Link, W.A.

    1999-01-01

    The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
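    For the stationary two-state case, the least-squares step mentioned above is easy to sketch. Given aggregate state fractions x_t at several occasions, the model x_{t+1} = Pᵀx_t is linear in the unknown transition probabilities, so they can be recovered by ordinary least squares (an illustrative sketch with synthetic data, not the authors' code):

    ```python
    import numpy as np

    # True (unknown) transition matrix: rows = current state, cols = next state.
    P_true = np.array([[0.8, 0.2],
                       [0.4, 0.6]])

    # Aggregate data: fraction of the population in each state at each occasion.
    x = [np.array([0.9, 0.1])]
    for _ in range(6):
        x.append(P_true.T @ x[-1])
    x = np.array(x)

    # x[t+1, 0] = p00 * x[t, 0] + p10 * x[t, 1]  -- linear in (p00, p10).
    A = x[:-1]        # predictors: state fractions at time t
    b = x[1:, 0]      # response: fraction in state 0 at time t+1
    (p00, p10), *_ = np.linalg.lstsq(A, b, rcond=None)
    P_hat = np.array([[p00, 1 - p00],
                      [p10, 1 - p10]])
    print(P_hat)  # recovers P_true when the stationarity assumption holds
    ```

    This recovery hinges on the stationarity assumption the abstract flags as restrictive; with only two occasions the system is underdetermined, which is what motivates the entropy-maximization alternative discussed in the paper.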

  15. An exploratory statistical approach to depression pattern identification

    NASA Astrophysics Data System (ADS)

    Feng, Qing Yi; Griffiths, Frances; Parsons, Nick; Gunn, Jane

    2013-02-01

    Depression is a complex phenomenon thought to be due to the interaction of biological, psychological and social factors. Current depression assessment uses self-reported depressive symptoms, but this is limited in the degree to which it can characterise the different expressions of depression emerging from the complex causal pathways thought to underlie depression. In this study, we aimed to represent the different patterns of depression with pattern values unique to each individual, where each value combines all the available information about an individual's depression. We considered the depressed individual as a subsystem of an open complex system, proposed Generalized Information Entropy (GIE) to represent the general characteristics of the information entropy of the system, and then implemented Maximum Entropy Estimates to derive equations for depression patterns. We also introduced a numerical simulation method to process the depression-related data obtained by the Diamond Cohort Study, which has been underway in Australia since 2005 and involves 789 people. Unlike traditional assessment, we obtained a unique value for each depressed individual which gives an overall assessment of the depression pattern. Our work provides a novel way to visualise and quantitatively measure the depression pattern of a depressed individual, which could be used for pattern categorisation. This may have potential for tailoring health interventions to depressed individuals to maximize health benefit.

  16. Blind source computer device identification from recorded VoIP calls for forensic investigation.

    PubMed

    Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul

    2017-03-01

    VoIP services provide fertile ground for criminal activity, so identifying the transmitting computer device from a recorded VoIP call may help the forensic investigator reveal useful information. It also proves the authenticity of a call recording submitted to the court as evidence. This paper extends a previous study on the use of recorded VoIP calls for blind source computer device identification, whose initial results were promising although a theoretical explanation for them has yet to be found. The study suggested computing the entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN), provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  17. Age-related motor unit remodeling in the Tibialis Anterior.

    PubMed

    Siddiqi, Ariba; Kumar, Dinesh; Arjunan, Sridhar

    2015-01-01

    Limited studies exist on the use of surface electromyogram (EMG) signal features to detect age-related motor unit remodeling in the Tibialis Anterior. Motor unit remodeling leads to declined muscle strength and force steadiness during submaximal contractions, which are risk factors for falls in the elderly. This study investigated the remodeling phenomenon in the Tibialis Anterior using sample entropy and higher order statistics. Eighteen young (26.1 ± 2.9 years) and twelve elderly (68.7 ± 9.0 years) participants performed isometric dorsiflexion of the ankle at 20% maximal voluntary contraction (MVC) and their Tibialis Anterior (TA) EMG was recorded. Sample entropy, Gaussianity and linearity test statistics were calculated from the recorded EMG for each MVC. The Shapiro-Wilk test was used to determine normality, and either a two-tailed Student's t-test or Wilcoxon rank sum test was performed to determine significant differences in the EMG features between the young and old cohorts. Results show age-related motor unit remodeling to be depicted by decreased sample entropy (p < 0.1), increased non-Gaussianity (p < 0.05) and a lesser degree of linearity in the elderly. This is due to the increased sparsity of the MUAPs as a result of the denervation-reinnervation process, and the decrease in the total number of motor units.
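    Sample entropy, used above as an EMG feature, is defined as −ln(A/B), where B counts pairs of length-m templates within a tolerance r and A the corresponding length-(m+1) pairs. A standard implementation sketch (illustrative; not the study's code):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r) of a 1-D signal x, with tolerance
        r = r_factor * std(x) and Chebyshev distance between templates."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()

        def count_matches(length):
            # Same number of templates (len(x) - m) for both lengths,
            # per the usual SampEn convention.
            templ = np.array([x[i:i + length] for i in range(len(x) - m)])
            count = 0
            for i in range(len(templ)):
                # Chebyshev distance to later templates only: no self-matches.
                d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
                count += int((d <= r).sum())
            return count

        b = count_matches(m)
        a = count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    t = np.linspace(0, 8 * np.pi, 400)
    rng = np.random.default_rng(0)
    regular, noisy = np.sin(t), rng.standard_normal(400)
    print(sample_entropy(regular), sample_entropy(noisy))
    # The regular signal yields lower sample entropy than the noise.
    ```

    Lower values indicate a more regular, self-similar signal, which is why the sparser, more synchronized MUAP trains of the elderly cohort reduce sample entropy.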

  18. Comment on 'Entropy lowering in ion-atom collisions'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostrovsky, V. N.

    2006-01-15

    The recent experimental result by Nguyen et al. [Phys. Rev. A 71, 062714 (2005)] on the ratio of cross sections for the charge exchange processes Rb⁺ + Rb(5s) → Rb(5p) + Rb⁺ and Rb⁺ + Rb(5p) → Rb(5s) + Rb⁺ is quantitatively derived from simple considerations within the general framework of quasimolecular theory. Contrary to expectations, the applicability of the Demkov model for charge exchange with small energy defect is not shattered.

  19. Anatomy of particle diffusion

    NASA Astrophysics Data System (ADS)

    Bringuier, E.

    2009-11-01

    The paper analyses particle diffusion from a thermodynamic standpoint. The main goal of the paper is to highlight the conceptual connection between particle diffusion, which belongs to non-equilibrium statistical physics, and mechanics, which deals with particle motion, at the level of third-year university courses. We start out from the fact that, near equilibrium, particle transport should occur down the gradient of the chemical potential. This yields Fick's law with two additional advantages. First, splitting the chemical potential into 'mechanical' and 'chemical' contributions shows how transport and mechanics are linked through the diffusivity-mobility relationship. Second, splitting the chemical potential into entropic and energetic contributions discloses the respective roles of entropy maximization and energy minimization in driving diffusion. The paper addresses first unary diffusion, where there is only one mobile species in an immobile medium, and next turns to binary diffusion, where two species are mobile with respect to each other in a fluid medium. The interrelationship between unary and binary diffusivities is brought out and it is shown how binary diffusion reduces to unary diffusion in the limit of high dilution of one species amidst the other one. Self- and mutual diffusion are considered and contrasted within the thermodynamic framework; self-diffusion is a time-dependent manifestation of the Gibbs paradox of mixing.
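    The second splitting described above can be made explicit with a worked equation. Writing the chemical potential of a dilute species as μ = k_B T ln c + U(r) (an ideal entropic term plus a potential-energy term, an assumption consistent with the paper's near-equilibrium setting), transport down the gradient of μ gives

    ```latex
    \vec{J} = -\frac{D c}{k_B T}\,\nabla\mu
            = -D\,\nabla c \;-\; \frac{D c}{k_B T}\,\nabla U .
    ```

    The first term is Fick's law (the entropy-maximizing part) and the second a drift down the energy gradient (the energy-minimizing part); identifying the drift with mobility times force, J_drift = −c b ∇U, recovers the Einstein diffusivity-mobility relation D = b k_B T.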

  20. Recognition of dual targets by a molecular beacon-based sensor: subtyping of influenza A virus.

    PubMed

    Lee, Chun-Ching; Liao, Yu-Chieh; Lai, Yu-Hsuan; Lee, Chang-Chun David; Chuang, Min-Chieh

    2015-01-01

    We design a molecular beacon (MB)-based sensor that delivers a decisive answer by combining information from dual-target inputs. The system harnesses an assistant strand and a thermodynamically favored designation of unpaired nucleotides (UNs) to process the binary targets in an "AND-gate" format and report fluorescence through an "off-on" mechanism via the formation of a DNA four-way junction (4WJ). By manipulating the composition of the UNs, the dynamic fluorescence difference between the circumstance in which the binary targets coexist and any other scenario was maximized. The characteristic equilibrium constant (K), change of entropy (ΔS), and association rate constant (k) between the association ("on") and dissociation ("off") states of the 4WJ were evaluated to understand the unfolding behavior of the MB in connection with its sensing capability. Favorable MB and UN designs were furthermore developed for the analysis of genuine genetic sequences of hemagglutinin (HA) and neuraminidase (NA) in an influenza A H5N2 isolate. The MB-based sensor was demonstrated to yield a linear calibration range from 1.2 to 240 nM and a detection limit of 120 pM. Furthermore, high-fidelity subtyping of influenza virus was implemented in a sample of unpurified amplicons. The strategy opens an alternative avenue for MB-based sensors for dual targets toward applications in clinical diagnosis.

  1. Consciousness as a global property of brain dynamic activity

    NASA Astrophysics Data System (ADS)

    Mateos, D. M.; Wennberg, R.; Guevara, R.; Perez Velazquez, J. L.

    2017-12-01

    We seek general principles of the structure of the cellular collective activity associated with conscious awareness. Can we obtain evidence for features of the optimal brain organization that allows for adequate processing of stimuli and that may guide the emergence of cognition and consciousness? Analyzing brain recordings in conscious and unconscious states, we initially followed the classic approach of physics for understanding the collective behaviour of systems composed of myriad units: assessing the number of possible configurations (microstates) that the system can adopt, for which we use a global entropic measure associated with the number of connected brain regions. Having found maximal entropy in conscious states, we then inspected the microscopic nature of the configurations of connections using an adequate complexity measure and found higher complexity in states characterized not only by conscious awareness but also by subconscious cognitive processing, such as sleep stages. Our observations indicate that conscious awareness is associated with maximal global (macroscopic) entropy and with the short-time-scale (microscopic) complexity of the configurations of connected brain networks in pathological unconscious states (seizures and coma), whereas the microscopic view captures the high complexity in physiological unconscious states (sleep) where there is information processing. As such, our results support the global nature of conscious awareness, as advocated by several theories of cognition. We thus hope that our studies represent preliminary steps to reveal aspects of the structure of cognition that leads to conscious awareness.

  2. Discontinuous Galerkin Methods for NonLinear Differential Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Mansour, Nagi (Technical Monitor)

    2001-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.

  3. Quantum Entanglement and the Topological Order of Fractional Hall States

    NASA Astrophysics Data System (ADS)

    Rezayi, Edward

    2015-03-01

    Fractional quantum Hall states or, more generally, topological phases of matter defy Landau classification based on order parameter and broken symmetry. Instead they have been characterized by their topological order. Quantum information concepts, such as quantum entanglement, appear to provide the most efficient method of detecting topological order solely from the knowledge of the ground state wave function. This talk will focus on real-space bi-partitioning of quantum Hall states and will present both exact diagonalization and quantum Monte Carlo studies of topological entanglement entropy in various geometries. Results on the torus for non-contractible cuts are quite rich and, through the use of minimum entropy states, yield the modular S-matrix and hence uniquely determine the topological order, as shown in recent literature. Concrete examples of minimum entropy states from known quantum Hall wave functions and their corresponding quantum numbers, used in exact diagonalizations, will be given. In collaboration with Clare Abreu and Raul Herrera. Supported by DOE Grant DE-SC0002140.

  4. The role of topology in microstructure-property relations: a 2D DEM based study

    NASA Astrophysics Data System (ADS)

    Saleme Ruiz, Katerine; Emelianenko, Maria

    2018-01-01

    We compare Rényi entropy-based mesoscale approaches for characterizing 2D polycrystalline network topology and geometry, based on the grain number of sides and grain areas, respectively. We study the effect of microstructure disorder on mechanical properties such as elastic and damage response by performing simulations of quasi-static uniaxial compression loading tests on an idealized material using a grain-level micro-mechanical discrete element model. While not comprehensive enough to support general conclusions, this study allows us to make observations about the sensitivity of mechanical parameters such as Young's modulus, proportional limit, first yield stress, toughness and amount of microstructure damage to different entropy measures.

  5. Maximization of the connectivity repertoire as a statistical principle governing the shapes of dendritic arbors

    PubMed Central

    Wen, Quan; Stepanyants, Armen; Elston, Guy N.; Grosberg, Alexander Y.; Chklovskii, Dmitri B.

    2009-01-01

    The shapes of dendritic arbors are fascinating and important, yet the principles underlying these complex and diverse structures remain unclear. Here, we analyzed basal dendritic arbors of 2,171 pyramidal neurons sampled from mammalian brains and discovered 3 statistical properties: the dendritic arbor size scales with the total dendritic length, the spatial correlation of dendritic branches within an arbor has a universal functional form, and small parts of an arbor are self-similar. We proposed that these properties result from maximizing the repertoire of possible connectivity patterns between dendrites and surrounding axons while keeping the cost of dendrites low. We solved this optimization problem by drawing an analogy with maximization of the entropy for a given energy in statistical physics. The solution is consistent with the above observations and predicts scaling relations that can be tested experimentally. In addition, our theory explains why dendritic branches of pyramidal cells are distributed more sparsely than those of Purkinje cells. Our results represent a step toward a unifying view of the relationship between neuronal morphology and function. PMID:19622738
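    The analogy invoked above, maximizing entropy at a given energy, has the textbook Boltzmann form; as a reminder of the reasoning (standard statistical mechanics, not the paper's specific connectivity functional):

```latex
\max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad
\sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle .
```
```latex
\mathcal{L} = -\sum_i p_i \ln p_i
  - \lambda_0 \Big( \sum_i p_i - 1 \Big)
  - \beta \Big( \sum_i p_i E_i - \langle E \rangle \Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i}
  = -\ln p_i - 1 - \lambda_0 - \beta E_i = 0
\;\;\Rightarrow\;\;
p_i = \frac{e^{-\beta E_i}}{Z}, \quad Z = \sum_i e^{-\beta E_i}.
```

In the paper's setting the "energy" plays the role of dendritic cost, and the entropy counts possible connectivity patterns.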

  6. Explaining Entropy responses after a noxious stimulus, with or without neuromuscular blocking agents, by means of the raw electroencephalographic and electromyographic characteristics.

    PubMed

    Aho, A J; Lyytikäinen, L-P; Yli-Hankala, A; Kamata, K; Jäntti, V

    2011-01-01

    Entropy™, an anaesthetic EEG monitoring method, yields two parameters: State Entropy (SE) and Response Entropy (RE). SE reflects the hypnotic level of the patient. RE covers also the EMG-dominant part of the frequency spectrum, reflecting the upper facial EMG response to noxious stimulation. We studied the EEG, EMG, and Entropy values before and after skin incision, and the effect of rocuronium on Entropy and EMG at skin incision during sevoflurane-nitrous oxide (N₂O) anaesthesia. Thirty-eight patients were anaesthetized with sevoflurane-N₂O or sevoflurane-N₂O-rocuronium. The biosignal was stored and analysed off-line to detect EEG patterns, EMG, and artifacts. The signal, its power spectrum, SE, RE, and RE-SE values were analysed before and after skin incision. The EEG arousal was classified as β (increase in over 8 Hz activity and decrease in under 4 Hz activity with a typical β pattern) or δ (increase in under 4 Hz activity with the characteristic rhythmic δ pattern and a decrease in over 8 Hz activity). The EEG arousal appeared in 17 of 19 and 15 of 19 patients (NS), and the EMG arousal in 0 of 19 and 13 of 19 patients (P<0.01) with and without rocuronium, respectively. Both β (n=30) and EMG arousals increased SE and RE. The δ arousal (n=2) decreased both SE and RE. A significant increase in RE-SE values was only seen in patients without rocuronium. During sevoflurane-N₂O anaesthesia, both EEG and EMG arousals were seen. β and δ arousals had opposite effects on the Entropy values. The EMG arousal was abolished by rocuronium at the train of four level 0/4.

  7. Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models

    NASA Astrophysics Data System (ADS)

    Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.

    2015-11-01

    Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.
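    A minimal sketch of one such null model, assuming the common convention for integer-weighted (multi-edge) networks in which T distinguishable events are thrown independently with endpoint probabilities proportional to node strengths; this is an illustration of the ensemble idea, not the open-source package accompanying the paper:

```python
import random
from collections import Counter

def sample_multiedge_network(strengths, rng=random.Random(42)):
    """Sample one integer-weighted (multi-edge) directed network:
    T = sum(strengths) distinguishable events are thrown, each choosing
    its source and target independently with probability proportional
    to node strength, so the expected weight is <t_ij> = s_i * s_j / T."""
    nodes = list(range(len(strengths)))
    counts = Counter()
    for _ in range(sum(strengths)):
        i = rng.choices(nodes, weights=strengths)[0]
        j = rng.choices(nodes, weights=strengths)[0]
        counts[(i, j)] += 1
    return counts

net = sample_multiedge_network([10, 20, 30])
print(sum(net.values()))  # total events = sum of strengths = 60
```

Treating the events as indistinguishable instead (e.g. binary edges with capped weights) changes the entropy counting and hence the resulting statistics, which is the paper's central point.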

  8. On the structure-bounded growth processes in plant populations.

    PubMed

    Kilian, H G; Kazda, M; Király, F; Kaufmann, D; Kemkemer, R; Bartkowiak, D

    2010-07-01

    If growing cells in plants are considered to be composed of increments (ICs), an extended version of the law of mass action can be formulated. It indicates that plant growth proceeds optimally if the reaction-entropy term (entropy times the absolute temperature) matches the contact energy of the ICs. Since these energies are small, thermal molecular movements facilitate, via relaxation, the removal of structural disturbances. Stem diameter distributions exhibit extra fluctuations, likely caused by permanent constraints. Since the signal-response system enables, in principle, perfect optimization only within finite-sized cell ensembles, plants comprising relatively large cell numbers form a network of size-limited subsystems. The maximal number of these constituents depends on both genetic and environmental factors. Accounting for logistical structure-dynamics interrelations, equations can be formulated that describe the bimodal growth curves of very different plants. The reproduction of the S-shaped growth curves verifies that relaxation modes with a broad structure-controlled distribution freeze successively until growth is finally fully blocked, thus bringing about "continuous solidification".

  9. Maximizing Lipid Yield in Neochloris oleoabundans Algae Extraction by Stressing and Using Multiple Extraction Stages with N-Ethylbutylamine as Switchable Solvent

    PubMed Central

    2017-01-01

    The extraction yield of lipids from nonbroken Neochloris oleoabundans was maximized by using multiple extraction stages and stressed algae. Experimental parameters that affect the extraction were investigated. The study showed that, with wet algae, an extraction time of (at least) 18 h was required for maximum yield at room temperature and a solvent/feed ratio of 1:1 (w/w). For fresh water (FW), nonstressed, nonbroken Neochloris oleoabundans, a lipid extraction yield of 13.1 wt % (based on dry algae mass) was achieved, which could be improved to 61.3 wt % for FW stressed algae after four extractions. This illustrates that combining stressing of the algae with applying the solvent N-ethylbutylamine in multiple extraction stages results in an almost 5 times higher yield and is very promising for further development of energy-efficient lipid extraction technology targeting nonbroken wet microalgae. PMID:28781427

  10. Entanglement of heavy quark impurities and generalized gravitational entropy

    NASA Astrophysics Data System (ADS)

    Kumar, S. Prem; Silvani, Dorian

    2018-01-01

    We calculate the contribution from non-conformal heavy quark sources to the entanglement entropy (EE) of a spherical region in N=4 SUSY Yang-Mills theory. We apply the generalized gravitational entropy method to non-conformal probe D-brane embeddings in AdS5×S5, dual to pointlike impurities exhibiting flows between quarks in large-rank tensor representations and the fundamental representation. For the D5-brane embedding which describes the screening of fundamental quarks in the UV to the antisymmetric tensor representation in the IR, the EE excess decreases non-monotonically towards its IR asymptotic value, tracking the qualitative behaviour of the one-point function of static fields sourced by the impurity. We also examine two classes of D3-brane embeddings, one which connects a symmetric representation source in the UV to fundamental quarks in the IR, and a second category which yields the symmetric representation source on the Coulomb branch. The EE excess for the former increases from the UV to the IR, whilst decreasing and becoming negative for the latter. In all cases, the probe free energy on hyperbolic space with β = 2 π increases monotonically towards the IR, supporting its interpretation as a relative entropy. We identify universal corrections, depending logarithmically on the VEV, for the symmetric representation on the Coulomb branch.

  11. Discrimination of coherent features in turbulent boundary layers by the entropy method

    NASA Technical Reports Server (NTRS)

    Corke, T. C.; Guezennec, Y. G.

    1984-01-01

    Entropy in information theory is defined as the expected or mean value of the self-information contained in the ith point of a distribution series x_i, based on its probability of occurrence p(x_i). If p(x_i) is the probability of the ith state of the system in probability space, then the entropy, E(X) = -Σ_i p(x_i) log p(x_i), is a measure of the disorder in the system. Based on this concept, a method was devised which sought to minimize the entropy in a time series in order to construct the signature of the most coherent motions. The constrained minimization was performed using a Lagrange multiplier approach, which resulted in the solution of a simultaneous set of non-linear coupled equations to obtain the coherent time series. The application of the method to space-time data taken by a rake of sensors in the near-wall region of a turbulent boundary layer was presented. The results yielded coherent velocity motions made up of locally decelerated or accelerated fluid having a streamwise scale of approximately 100 ν/u_τ, which is in qualitative agreement with the results from other, less objective discrimination methods.
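    The minimization itself needs the Lagrange-multiplier machinery described above, but the entropy functional E(X) = -Σ p(x_i) log p(x_i) is simple to evaluate. A sketch from binned signal amplitudes (the bin count and the toy signals are illustrative choices, not the paper's):

```python
import math
import random
from collections import Counter

def shannon_entropy(series, n_bins=8):
    """Estimate E(X) = -sum_i p(x_i) log p(x_i) for a time series by
    binning its amplitude range and using bin frequencies as p(x_i)."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0          # guard constant signals
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in series)
    n = len(series)
    return -sum((c / n) * math.log(c / n) for c in bins.values())

# A disordered (noisy) signal has higher entropy than a mostly-flat one:
rng = random.Random(0)
noisy = [rng.gauss(0, 1) for _ in range(1000)]
flat_with_spike = [0.0] * 990 + [5.0] * 10
print(shannon_entropy(noisy) > shannon_entropy(flat_with_spike))  # True
```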

  12. Thinning strategies for aspen: a prediction model.

    Treesearch

    Donald A. Perala

    1978-01-01

    Derives thinning strategies to maximize volume yields of aspen fiber, sawtimber, and veneer. Demonstrates how yields are affected by growing season climatic variation and periodic defoliation by forest tent caterpillar.

  13. Green ultrasound-assisted extraction of anthocyanin and phenolic compounds from purple sweet potato using response surface methodology

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenzhou; Guan, Qingyan; Guo, Ying; He, Jingren; Liu, Gang; Li, Shuyi; Barba, Francisco J.; Jaffrin, Michel Y.

    2016-01-01

    Response surface methodology was used to optimize experimental conditions for ultrasound-assisted extraction of valuable components (anthocyanins and phenolics) from purple sweet potatoes using water as a solvent. The Box-Behnken design was used for optimizing the extraction responses of anthocyanin extraction yield, phenolic extraction yield, and specific energy consumption. Conditions giving maximal anthocyanin extraction yield, maximal phenolic extraction yield, and minimal specific energy consumption were different; an overall desirability function was used to search for overall optimal conditions: an extraction temperature of 68 °C, an ultrasonic treatment time of 52 min, and a liquid/solid ratio of 20. The optimized anthocyanin extraction yield, phenolic extraction yield, and specific energy consumption were 4.91 mg/100 g fresh weight, 3.24 mg/g fresh weight, and 2.07 kWh/g, respectively, with a desirability of 0.99. This study indicates that ultrasound-assisted extraction should contribute to a green process for valorization of purple sweet potatoes.
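    The overall desirability function mentioned above is, in the common Derringer-Suich form, a geometric mean of per-response desirabilities; a sketch under that assumption, with hypothetical response values and bounds (not the paper's fitted models):

```python
def desirability_max(y, low, high):
    """Derringer-Suich desirability for a response to be maximized:
    0 below `low`, 1 above `high`, linear in between."""
    return min(max((y - low) / (high - low), 0.0), 1.0)

def desirability_min(y, low, high):
    """Desirability for a response to be minimized (e.g. energy use)."""
    return 1.0 - desirability_max(y, low, high)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical response values at one candidate extraction condition:
d = overall_desirability([
    desirability_max(4.8, 3.0, 5.0),   # anthocyanin yield
    desirability_max(3.1, 2.0, 3.3),   # phenolic yield
    desirability_min(2.2, 2.0, 4.0),   # energy consumption
])
print(round(d, 2))  # 0.88
```

An optimizer then searches the factor space (temperature, time, liquid/solid ratio) for the point maximizing this single composite score.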

  14. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    PubMed

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituting atoms and their immediate neighbourhood; the respective calculations of the contribution of the atomic groups by means of the Gauss-Seidel fitting method are based on experimental data collected from the literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure, proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors were as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion, and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results for the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing comparison with the predictions and yielding a correlation coefficient R² of 0.6066.

  15. Finite-time braiding exponents

    NASA Astrophysics Data System (ADS)

    Budišić, Marko; Thiffeault, Jean-Luc

    2015-08-01

    Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories is increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.
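    For three-strand braids, the (infinite-time) topological entropy that the FTBEs approach from below can be computed exactly as the log of the spectral radius of the reduced Burau matrix evaluated at t = -1. A sketch for the classic pseudo-Anosov braid σ1σ2⁻¹, a standard textbook example rather than the paper's blinking-vortex computation:

```python
import math

# Reduced Burau representation of the 3-strand braid group at t = -1.
SIGMA1 = [[1, 1], [0, 1]]
SIGMA2_INV = [[1, 0], [1, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius_2x2(m):
    """Largest |eigenvalue| of a 2x2 matrix via the quadratic formula."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)      # complex pair: |lambda| = sqrt(det)

m = matmul(SIGMA1, SIGMA2_INV)           # [[2, 1], [1, 1]]
entropy = math.log(spectral_radius_2x2(m))
print(round(entropy, 4))                 # log((3 + sqrt(5)) / 2) = 0.9624
```

A finite-time braiding exponent measured from three trajectories of a well-mixing flow would converge toward this value as the observation window grows.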

  17. From Maximization to Optimization: A Paradigm Shift in Rice Production in Thailand to Improve Overall Quality of Life of Stakeholders

    PubMed Central

    Doi, Ryoichi; Pitiwut, Supachai

    2014-01-01

    The concept of crop yield maximization has been widely supported. In practice, however, yield maximization does not necessarily lead to maximum socioeconomic welfare. Optimization is therefore necessary to ensure the quality of life of farmers and other stakeholders. In Thailand, a rice farmers' network has adopted a promising agricultural system aimed at the optimization of rice farming. Various feasible techniques were flexibly combined. The new system offers technical strengths and minimizes certain difficulties with which the rice farmers once struggled. It has resulted in fairly good yields of up to 8.75 t ha⁻¹, or yield increases of up to 57% (from 4.38 to 6.88 t ha⁻¹). Under the optimization paradigm, the farmers have established diversified sustainable relationships with the paddy fields in terms of ecosystem management through their own self-motivated scientific observations. The system has resulted in good health conditions for the farmers and villagers, financial security, availability of extra time, and additional opportunities and freedom, and hence in the improvement of their overall quality of life. The underlying technical and social mechanisms are discussed herein. PMID:25089294

  18. Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex.

    PubMed

    Steyn-Ross, Moira L; Steyn-Ross, D A; Sleigh, J W; Wilson, M T; Wilcocks, Lara C

    2005-12-01

    Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer's dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.
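    The claim that a broadening fluctuation distribution carries rising Shannon entropy can be illustrated with the closed form for a Gaussian, H = ½ ln(2πeσ²); a quick sketch (illustrative only, not the mean-field cortex model itself):

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy (in nats) of a Gaussian with std dev sigma:
    H = 0.5 * ln(2 * pi * e * sigma**2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# As the voltage-fluctuation distribution broadens toward the SWS->REM
# transition (sigma grows), the entropy rises monotonically:
sigmas = [0.5, 1.0, 2.0, 4.0]
ents = [gaussian_entropy(s) for s in sigmas]
print(all(a < b for a, b in zip(ents, ents[1:])))  # True
```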

  20. Influence of Lumber Volume Maximization on Value in Sawing Hardwood Sawlogs

    Treesearch

    Philip H. Steele; Francis G. Wagner; Lalit Kumar; Philip A. Araman

    1992-01-01

    Research based on applying volume-maximizing sawing solutions to idealized hardwood log forms has shown that average lumber yield can be increased by 6 percent. It is possible, however, that a lumber volume-maximizing solution may result in a decrease in lumber grade and a net reduction in total value of sawn lumber. The objective of this study was to determine the...

  1. Developing a Measure of General Academic Ability: An Application of Maximal Reliability and Optimal Linear Combination to High School Students' Scores

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali

    2015-01-01

    This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…

  2. Galaxy Clusters: A Novel Look at Diffuse Baryons Withstanding Dark Matter Gravity

    NASA Astrophysics Data System (ADS)

    Cavaliere, A.; Lapi, A.; Fusco-Femiano, R.

    2009-06-01

    In galaxy clusters, the equilibria of the intracluster plasma (ICP) and of the gravitationally dominant dark matter (DM) are governed by the hydrostatic equation and by the Jeans equation, respectively; in either case gravity is withstood by the corresponding, entropy-modulated pressure. Jeans, with the DM "entropy" set to K ∝ r^α and α ≈ 1.25-1.3 applying from groups to rich clusters, yields our radial α-profiles; these, compared to the empirical Navarro-Frenk-White distribution, are flatter at the center and steeper in the outskirts, as required by recent gravitational lensing data. In the ICP, on the other hand, the entropy run k(r) is mainly shaped by shocks, as steadily set by supersonic accretion of gas at the cluster boundary, and intermittently driven from the center by merging events or by active galactic nuclei (AGNs); the resulting equilibrium is described by the exact yet simple formalism constituting our ICP Supermodel. With two parameters, this accurately represents the runs of density n(r) and temperature T(r) as required by up-to-date X-ray data on surface brightness and spectroscopy for both cool core (CC) and non-cool core (NCC) clusters; the former are marked by a middle temperature peak, whose location is predicted from rich clusters to groups. The Supermodel inversely links the inner runs of n(r) and T(r), and highlights their central scaling with entropy, n_c ∝ k_c^-1 and T_c ∝ k_c^0.35, to yield radiative cooling times t_c ≈ 0.3 (k_c / 15 keV cm²)^1.2 Gyr. We discuss the stability of the central values so focused: against radiative erosion of k_c in the cool dense conditions of CC clusters, which triggers recurrent AGN activities resetting it back; or against energy inputs from AGNs and mergers, whose effects are saturated by the hot central conditions of NCC clusters. From the Supermodel we also derive, as limiting cases, the classic polytropic β-models and the "mirror" model with T(r) ∝ σ²(r), suitable for NCC and CC clusters, respectively; these limiting cases highlight how the ICP temperature T(r) strives to mirror the DM velocity dispersion σ²(r) away from energy and entropy injections. Finally, we discuss how the Supermodel connects information derived from X-ray and gravitational lensing observations.
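    The quoted central cooling-time scaling is easy to evaluate; a small sketch directly transcribing the abstract's formula, with k_c in keV cm²:

```python
def cooling_time_gyr(kc):
    """Central radiative cooling time from the Supermodel scaling
    t_c ~ 0.3 * (k_c / 15 keV cm^2)**1.2 Gyr (k_c in keV cm^2)."""
    return 0.3 * (kc / 15.0) ** 1.2

print(round(cooling_time_gyr(15.0), 2))   # 0.3 Gyr: low-entropy CC core
print(round(cooling_time_gyr(150.0), 1))  # 4.8 Gyr: hot NCC core
```

The steep dependence on k_c is what separates rapidly cooling CC cores (recurrent AGN resets) from long-lived hot NCC cores.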

  3. Life, hierarchy, and the thermodynamic machinery of planet Earth.

    PubMed

    Kleidon, Axel

    2010-12-01

    Throughout Earth's history, life has increased greatly in abundance, complexity, and diversity. At the same time, it has substantially altered the Earth's environment, evolving some of its variables to states further and further away from thermodynamic equilibrium. For instance, concentrations in atmospheric oxygen have increased throughout Earth's history, resulting in an increased chemical disequilibrium in the atmosphere as well as an increased redox gradient between the atmosphere and the Earth's reducing crust. These trends seem to contradict the second law of thermodynamics, which states for isolated systems that gradients and free energy are dissipated over time, resulting in a state of thermodynamic equilibrium. This seeming contradiction is resolved by considering planet Earth as a coupled, hierarchical and evolving non-equilibrium thermodynamic system that has been substantially altered by the input of free energy generated by photosynthetic life. Here, I present this hierarchical thermodynamic theory of the Earth system. I first present simple considerations to show that thermodynamic variables are driven away from a state of thermodynamic equilibrium by the transfer of power from some other process and that the resulting state of disequilibrium reflects the past net work done on the variable. This is applied to the processes of planet Earth to characterize the generation and transfer of free energy and its dissipation, from radiative gradients to temperature and chemical potential gradients that result in chemical, kinetic, and potential free energy and associated dynamics of the climate system and geochemical cycles. The maximization of power transfer among the processes within this hierarchy yields thermodynamic efficiencies much lower than the Carnot efficiency of equilibrium thermodynamics and is closely related to the proposed principle of Maximum Entropy Production (MEP). 
The role of life is then discussed as a photochemical process that generates substantial amounts of chemical free energy which essentially skips the limitations and inefficiencies associated with the transfer of power within the thermodynamic hierarchy of the planet. This perspective allows us to view life as being the means to transform many aspects of planet Earth to states even further away from thermodynamic equilibrium than is possible by purely abiotic means. In this perspective pockets of low-entropy life emerge from the overall trend of the Earth system to increase the entropy of the universe at the fastest possible rate. The implications of the theory are discussed regarding fundamental deficiencies in Earth system modeling, applications of the theory to reconstructions of Earth system history, and regarding the role of human activity for the future of the planet. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Using Entropy Maximization to Understand the Determinants of Structural Dynamics beyond Native Contact Topology

    PubMed Central

    Lezon, Timothy R.; Bahar, Ivet

    2010-01-01

    Comparison of elastic network model predictions with experimental data has provided important insights on the dominant role of the network of inter-residue contacts in defining the global dynamics of proteins. Most of these studies have focused on interpreting the mean-square fluctuations of residues, or deriving the most collective, or softest, modes of motions that are known to be insensitive to structural and energetic details. However, with increasing structural data, we are in a position to perform a more critical assessment of the structure-dynamics relations in proteins, and gain a deeper understanding of the major determinants of not only the mean-square fluctuations and lowest frequency modes, but the covariance or the cross-correlations between residue fluctuations and the shapes of higher modes. A systematic study of a large set of NMR-determined proteins is analyzed using a novel method based on entropy maximization to demonstrate that the next level of refinement in the elastic network model description of proteins ought to take into consideration properties such as contact order (or sequential separation between contacting residues) and the secondary structure types of the interacting residues, whereas the types of amino acids do not play a critical role. Most importantly, an optimal description of observed cross-correlations requires the inclusion of destabilizing, as opposed to exclusively stabilizing, interactions, stipulating the functional significance of local frustration in imparting native-like dynamics. This study provides us with a deeper understanding of the structural basis of experimentally observed behavior, and opens the way to the development of more accurate models for exploring protein dynamics. PMID:20585542

  6. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    PubMed

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    The electroencephalogram (EEG) is considered the fundamental signal for the assessment of neural activities in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolutions. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The proposed study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions were considered: normal, pre-ictal and epileptic EEG recordings. An adaptive Wiener filter was initially applied to remove the 50 Hz power line noise from the raw EEG recordings. Raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. A wavelet packet decomposition using the Haar wavelet with five levels was then introduced, and two entropies, log energy and norm, were estimated and applied to the REN classifier to perform binary classification. The non-parametric Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70 % for normal-pre-ictal, 99.70 % for normal-epileptic and 99.85 % for pre-ictal-epileptic.
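As an illustration of the two features used in this record, here is a minimal sketch (not the authors' code) of a Haar wavelet packet decomposition with log energy and norm entropy features, computed with NumPy on a synthetic segment; the segment length, level count and norm order p = 1.1 are illustrative assumptions:

```python
import numpy as np

def haar_packet(signal, levels):
    """Full Haar wavelet packet decomposition: returns the list of
    terminal-node coefficient arrays after `levels` splits."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nxt = []
        for x in nodes:
            a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (low-pass)
            d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (high-pass)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

def log_energy_entropy(c, eps=1e-12):
    """Log energy entropy of one coefficient vector."""
    return float(np.sum(np.log(c**2 + eps)))

def norm_entropy(c, p=1.1):
    """Norm entropy: sum of |c|^p for 1 <= p < 2 (p chosen here)."""
    return float(np.sum(np.abs(c)**p))

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # stand-in for a 1 s EEG segment
coeffs = haar_packet(x, levels=5)     # 2**5 = 32 terminal nodes
features = [log_energy_entropy(c) for c in coeffs] + \
           [norm_entropy(c) for c in coeffs]
print(len(features))                  # 64 features per segment
```

Because the Haar packet transform is orthonormal, the total energy of the terminal nodes equals that of the input segment, which is a quick sanity check on the decomposition.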

  7. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C^0 estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C^0 errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
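The final step described above, a lower bound on topological entropy from a subshift of finite type, reduces to the log of the spectral radius of the subshift's transition matrix. A minimal sketch (the matrices are textbook examples, not the Hénon tangle computed in the record):

```python
import numpy as np

def sft_entropy(A):
    """Topological entropy of a subshift of finite type: the log of the
    spectral radius of its 0/1 transition matrix."""
    return float(np.log(np.max(np.abs(np.linalg.eigvals(A)))))

# Full 2-shift: every symbol may follow every symbol -> h = log 2.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
print(sft_entropy(A))        # 0.6931... = log 2

# Golden-mean shift (no two consecutive 1s): h = log of the golden ratio.
G = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(sft_entropy(G))        # 0.4812... = log((1 + sqrt(5)) / 2)
```

Since a topological factor cannot have more entropy than the original system, the entropy of the constructed subshift is a rigorous lower bound for the planar map.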

  8. CURRENT SHEET THINNING AND ENTROPY CONSTRAINTS DURING THE SUBSTORM GROWTH PHASE

    NASA Astrophysics Data System (ADS)

    Otto, A.; Hall, F., IV

    2009-12-01

    A typical property during the growth phase of geomagnetic substorms is the thinning of the near-Earth current sheet, most pronounced in the region between 6 and 15 R_E. We propose that the cause for the current sheet thinning is convection from the midnight tail region to the dayside to replenish magnetospheric magnetic flux which is eroded at the dayside as a result of dayside reconnection. Adiabatic convection from the near-Earth tail region toward the dayside must conserve the entropy on magnetic field lines. This constraint prohibits a source of the magnetic flux from a region further out in the magnetotail. Thus the near-Earth tail region is increasingly depleted of magnetic flux (the Erickson and Wolf [1980] problem) with entropy matching that of flux tubes that are eroded on the dayside. It is proposed that the magnetic flux depletion in the near-Earth tail forces the formation of thin current layers. The process is documented by three-dimensional MHD simulations. It is shown that the simulations yield a time scale, location, and other general characteristics of the current sheet evolution during the substorm growth phase.

  9. An investigation of the information propagation and entropy transport aspects of Stirling machine numerical simulation

    NASA Technical Reports Server (NTRS)

    Goldberg, Louis F.

    1992-01-01

    Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.

  10. Effect of cold rolling on the microstructure and mechanical properties of Al0.25CoCrFe1.25Ni1.25 high-entropy alloy

    DOE PAGES

    Wang, Z.; Gao, M. C.; Ma, S. G.; ...

    2015-08-05

    Cold rolling can break down the as-cast dendrite microstructure and thus may have a pronounced impact on the mechanical behavior of the alloy. In the present study, the effect of cold rolling on the microstructure and mechanical properties of the Al0.25CoCrFe1.25Ni1.25 high-entropy alloy in the face-centered cubic structure was investigated. With increasing thickness reduction from cold rolling, the hardness, the yield strength, and the fracture strength increased at the cost of reduced ductility. At a thickness reduction of 80%, the tensile strength (hardness) was 702 MPa (406 MPa), 1.62 (2.43) times that in the as-cast condition. Compared to traditional alloys, Al0.25CoCrFe1.25Ni1.25 has the highest hardening rate with respect to cold-rolling thickness reduction. Lastly, the phase relation and the mixing properties of Gibbs free energy, enthalpy and entropy of AlxCoCrFe1.25Ni1.25 were predicted using the CALPHAD method.

  11. Melting of Simple Solids and the Elementary Excitations of the Communal Entropy

    NASA Astrophysics Data System (ADS)

    Bongiorno, Angelo

    2010-03-01

    The melting phase transition of simple solids is addressed through the use of atomistic computer simulations. Three transition metals (Ni, Au, and Pt) and a semiconductor (Si) are considered in this study. Iso-enthalpic molecular dynamics simulations are used to compute caloric curves across the solid-to-liquid phase transition of a periodic crystalline system, to construct the free energy function of the solid and liquid phases, and thus to derive the thermodynamic limit of the melting point, latent heat and entropy of fusion of the material. The computational strategy used in this study yields accurate estimates of melting parameters, allows determination of the superheating and supercooling temperature limits, and gives access to the atomistic mechanisms mediating the melting process. In particular, it is found that the melting phase transition in simple solids is driven by exchange steps involving a few atoms and preserving the crystalline structure. These self-diffusion phenomena correspond to the elementary excitations of the communal entropy and, as their rate depends on the local material cohesivity, they mediate both the homogeneous and non-homogeneous melting process in simple solids.

  12. Thermodynamic studies for adsorption of ionizable pharmaceuticals onto soil.

    PubMed

    Maszkowska, Joanna; Wagil, Marta; Mioduszewska, Katarzyna; Kumirska, Jolanta; Stepnowski, Piotr; Białk-Bielińska, Anna

    2014-09-01

    Although pharmaceutical compounds (PCs) are being used more and more widely, and studies have been carried out to assess their presence in the environment, knowledge of their fate and behavior, especially under different environmental conditions, is still limited. The principal objective of the present work, therefore, is to evaluate the adsorption behavior onto soil, under various temperature conditions, of three ionizable, polar compounds occurring in different forms: cationic (propranolol - PRO), anionic (sulfisoxazole - SSX) and neutral (sulfaguanidine - SGD). The adsorption thermodynamics of these compounds were investigated using the enthalpy change (ΔH°), the Gibbs free energy change (ΔG°) and the entropy change (ΔS°). These calculations reveal that sorption of PRO is exothermic, spontaneous and enthalpy driven; sorption of SGD is endothermic, spontaneous and entropy driven; whereas sorption of SSX is endothermic, spontaneous only above 303.15 K and entropy driven. Furthermore, we submit that the calculated values yield valuable information regarding the sorption mechanism of PRO, SGD and SSX onto soils. Copyright © 2014 Elsevier Ltd. All rights reserved.
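The thermodynamic parameters named in this record are commonly obtained from a van't Hoff analysis of sorption constants measured at several temperatures. A hedged sketch with purely hypothetical Kd values (not the paper's data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical sorption distribution coefficients at three temperatures
# (illustrative values only, chosen to mimic an exothermic trend).
T  = np.array([288.15, 303.15, 318.15])   # K
Kd = np.array([4.2, 3.1, 2.4])            # L/kg

# van't Hoff: ln K = -dH/(R*T) + dS/R, i.e. linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(Kd), 1)
dH = -slope * R          # J/mol; negative here -> exothermic sorption
dS = intercept * R       # J/(mol*K)
dG = dH - T * dS         # J/mol at each temperature (dG = dH - T*dS)

print(dH, dS)
print(dG)                # negative values indicate spontaneous sorption
```

With Kd decreasing as temperature rises, the fit gives a negative ΔH° (exothermic), matching the qualitative classification used for PRO in the abstract.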

  13. Improved sugar yields from biomass sorghum feedstocks: comparing low-lignin mutants and pretreatment chemistries

    USDA-ARS's Scientific Manuscript database

    Background: For biofuel production processes to be economically efficient, it is essential to maximize the production of monomeric carbohydrates from the structural carbohydrates of feedstocks. One strategy for maximizing carbohydrate production is to identify less recalcitrant feedstock cultivars b...

  14. Maximizing grain sorghum water use efficiency under deficit irrigation

    USDA-ARS's Scientific Manuscript database

    Development and evaluation of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages resulting from aquifer depletion, reduced base flows, and reallocation of water to non-agricultural sectors. Under a limited water supply, yield maximization may not b...

  15. On the Spatial Distribution of High Velocity Al-26 Near the Galactic Center

    NASA Technical Reports Server (NTRS)

    Sturner, Steven J.

    2000-01-01

    We present results of simulations of the distribution of 1809 keV radiation from the decay of Al-26 in the Galaxy. Recent observations of this emission line using the Gamma Ray Imaging Spectrometer (GRIS) have indicated that the bulk of the Al-26 must have a velocity of approx. 500 km/s. We have previously shown that a velocity this large could be maintained over the 10(exp 6) year lifetime of the Al-26 if it is trapped in dust grains that are reaccelerated periodically in the ISM. Here we investigate whether a dust grain velocity of approx. 500 km/s will produce a distribution of 1809 keV emission in latitude that is consistent with the narrow distribution seen by COMPTEL. We find that dust grain velocities in the range 275 - 1000 km/s are able to reproduce the COMPTEL 1809 keV emission maps reconstructed using the Richardson-Lucy and Maximum Entropy image reconstruction methods, while the emission map reconstructed using the Multiresolution Regularized Expectation Maximization algorithm is not well fit by any of our models. The Al-26 production rate that is needed to reproduce the observed 1809 keV intensity yields a Galactic mass of Al-26 of approx. 1.5 - 2 solar masses, which is in good agreement with both other observations and theoretical production rates.

  16. Information theory analysis of Australian humpback whale song.

    PubMed

    Miksis-Olds, Jennifer L; Buck, John R; Noad, Michael J; Cato, Douglas H; Stokes, M Dale

    2008-10-01

    Songs produced by migrating whales were recorded off the coast of Queensland, Australia, over six consecutive weeks in 2003. Forty-eight independent song sessions were analyzed using information theory techniques. The average length of the songs estimated by correlation analysis was approximately 100 units, with song sessions lasting from 300 to over 3100 units. Song entropy, a measure of structural constraints, was estimated using three different methodologies: (1) the independently identically distributed model, (2) a first-order Markov model, and (3) the nonparametric sliding window match length (SWML) method, as described by Suzuki et al. [(2006). "Information entropy of humpback whale song," J. Acoust. Soc. Am. 119, 1849-1866]. The analysis finds that the song sequences of migrating Australian whales are consistent with the hierarchical structure proposed by Payne and McVay [(1971). "Songs of humpback whales," Science 173, 587-597], and recently supported mathematically by Suzuki et al. (2006) for singers on the Hawaiian breeding grounds. Both the SWML entropy estimates and the song lengths for the Australian singers in 2003 were lower than that reported by Suzuki et al. (2006) for Hawaiian whales in 1976-1978; however, song redundancy did not differ between these two populations separated spatially and temporally. The average total information in the sequence of units in Australian song was approximately 35 bits/song. Aberrant songs (8%) yielded entropies similar to the typical songs.
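The first two entropy estimators mentioned in this record, the i.i.d. model and the first-order Markov model, can be sketched directly from a symbol sequence; the toy sequence below is illustrative, not whale song data:

```python
import numpy as np
from collections import Counter

def iid_entropy(seq):
    """Zeroth-order (i.i.d.) entropy estimate in bits per symbol."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def markov1_entropy(seq):
    """First-order Markov entropy rate H(X_t | X_{t-1}) in bits per symbol,
    from plug-in estimates of the pair and predecessor distributions."""
    pairs = Counter(zip(seq, seq[1:]))
    prev = Counter(seq[:-1])
    h = 0.0
    for (a, b), n in pairs.items():
        p_ab = n / (len(seq) - 1)     # joint P(X_{t-1}=a, X_t=b)
        p_b_given_a = n / prev[a]     # conditional P(X_t=b | X_{t-1}=a)
        h -= p_ab * np.log2(p_b_given_a)
    return h

song = list("abacabadabacabae")       # toy unit sequence
print(iid_entropy(song))              # 1.875 bits/symbol
print(markov1_entropy(song))          # ~0.933: structure lowers the rate
```

The drop from the i.i.d. estimate to the Markov estimate is exactly the kind of structural constraint the record quantifies; the SWML method goes further by conditioning on arbitrarily long contexts.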

  17. Qualitative breakdown of the noncrossing approximation for the symmetric one-channel Anderson impurity model at all temperatures

    NASA Astrophysics Data System (ADS)

    Sposetti, C. N.; Manuel, L. O.; Roura-Bas, P.

    2016-08-01

    The Anderson impurity model is studied by means of the self-consistent hybridization expansions in its noncrossing (NCA) and one-crossing (OCA) approximations. We have found that for the one-channel spin-1/2 particle-hole symmetric Anderson model, the NCA results are qualitatively wrong for any temperature, even when the approximation gives the exact threshold exponents of the ionic states. Actually, the NCA solution describes an overscreened Kondo effect, because it is the same as for the two-channel infinite-U single-level Anderson model. We explicitly show that the NCA is unable to distinguish between these two very different physical systems, independently of temperature. Using the impurity entropy as an example, we show that the low-temperature values of the NCA entropy for the symmetric case yield the limit S_imp(T = 0) → ln √2, which corresponds to the zero temperature entropy of the overscreened Kondo model. Similar pathologies are predicted for any other thermodynamic property. On the other hand, we have found that the OCA approach lifts the artificial mapping between the models and restores correct properties of the ground state, for instance, a vanishing entropy at low enough temperatures, S_imp(T = 0) → 0. Our results indicate that the very well known NCA should be used with caution close to the symmetric point of the Anderson model.

  18. Predicted harvest time effects on switchgrass moisture content, nutrient concentration, yield, and profitability

    USDA-ARS's Scientific Manuscript database

    Production costs change with harvest date of switchgrass (Panicum virgatum L.) as a result of nutrient recycling and changes in yield of this perennial crop. This study examines the range of cost of production from an early, yield-maximizing harvest date to a late winter harvest date at low moisture...

  19. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
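The informativeness argument in this record rests on the Shannon information entropy of a binary outcome, which peaks at half-maximal effects (p = 0.5) and vanishes at the extremes. A minimal sketch:

```python
import numpy as np

def sie(p):
    """Shannon information entropy (bits) of a binary outcome with
    probability p, e.g. functional inhibition by antibody vs none."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)   # guard the log at p = 0 or 1
    return float(-(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)))

# Half-maximal effects (p = 0.5, e.g. at an IC50) are maximally
# informative; undetectable or maximal effects carry almost none.
print(sie(0.5))              # 1.0 bit
print(sie(0.01), sie(0.99))  # both near zero
```

This is why dose-response data near median inhibitory concentrations are argued to be the most valuable benchmark points.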

  20. Production of ethanol and xylitol from corn cobs by yeasts.

    PubMed

    Latif, F; Rajoka, M I

    2001-03-01

    Saccharomyces cerevisiae and Candida tropicalis were used separately and as co-culture for simultaneous saccharification and fermentation (SSF) of 5-20% (w/v) dry corn cobs. A maximal ethanol concentration of 27, 23 and 21 g/l from 200 g/l (w/v) dry corn cobs was obtained by S. cerevisiae, C. tropicalis and the co-culture, respectively, after 96 h of fermentation. However, theoretical yields of 82%, 71% and 63% were observed from 50 g/l dry corn cobs for the above cultures, respectively. Maximal xylitol concentrations of 21, 20 and 15 g/l from 200 g/l (w/v) dry corn cobs were obtained by C. tropicalis, the co-culture, and S. cerevisiae, respectively. Maximum theoretical yields of 79.0%, 77.0% and 58% were observed from 50 g/l of corn cobs, respectively. The volumetric productivities for ethanol and xylitol increased with the increase in substrate concentration, whereas yield decreased. Glycerol and acetic acid were formed as minor by-products. S. cerevisiae and C. tropicalis gave better product yields (0.42 and 0.36 g/g) for ethanol and (0.52 and 0.71 g/g) for xylitol, respectively, whereas the co-culture showed a moderate level of ethanol (0.32 g/g) and almost maximal levels of xylitol (0.69 g/g).

  1. Light-cone velocities after a global quench in a noninteracting model

    NASA Astrophysics Data System (ADS)

    Najafi, K.; Rajabpour, M. A.; Viti, J.

    2018-05-01

    We study the light-cone velocity for global quenches in the noninteracting XY chain starting from a class of initial states that are eigenstates of the local z component of the spin. We point out how translation invariance of the initial state can affect the maximal speed at which correlations spread. As a consequence the light-cone velocity can be state dependent also for noninteracting systems: a new effect of which we provide clear numerical evidence and analytic predictions. Analogous considerations, based on numerical results, are drawn for the evolution of the entanglement entropy.

  2. Maximum entropy approach to fuzzy control

    NASA Technical Reports Server (NTRS)

    Ramer, Arthur; Kreinovich, Vladik YA.

    1992-01-01

    For the same expert knowledge, if one uses different &- and V-operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized and it is shown that it leads to the choice of min(a + b, 1) for V and min(a, b) for &. This choice was tried on the NASA Shuttle simulator; it leads to a maximally stable control.
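The two selected operations are simple enough to state directly; a minimal sketch of the bounded-sum V and minimum & described in the record:

```python
def fuzzy_or(a, b):
    """Bounded sum: the V-operation selected by the argument in the record."""
    return min(a + b, 1.0)

def fuzzy_and(a, b):
    """Minimum: the &-operation selected by the argument in the record."""
    return min(a, b)

print(fuzzy_or(0.6, 0.7))    # 1.0 (saturates at full membership)
print(fuzzy_or(0.2, 0.3))    # 0.5
print(fuzzy_and(0.6, 0.7))   # 0.6
```

Note that the bounded sum differs from the more common max(a, b) OR: it accumulates partial evidence until membership saturates at 1.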

  3. LEGO Materials.

    PubMed

    Talapin, Dmitri V

    2008-06-01

    Two papers in this issue report important developments in the field of inorganic nanomaterials. Chen and O'Brien discuss self-assembly of semiconductor nanocrystals into binary nanoparticle superlattices (BNSLs). They show that simple geometrical principles based on maximizing the packing density can determine BNSL symmetry in the absence of cohesive electrostatic interactions. This finding highlights the role of entropy as the driving force for ordering nanoparticles. The other paper, by Weller and co-workers, addresses an important problem related to device integration of nanoparticle assemblies. They employ the Langmuir-Blodgett technique to prepare long-range ordered monolayers of close-packed nanocrystals and transfer them to different substrates.

  4. Detecting recurrence domains of dynamical systems by symbolic dynamics.

    PubMed

    beim Graben, Peter; Hutt, Axel

    2013-04-12

    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
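A recurrence plot, the starting point of the algorithm in this record, is just a thresholded pairwise-distance matrix of the sampled trajectory. A minimal sketch for a scalar signal (the ball radius eps is an illustrative choice; the record's maximum entropy principle for selecting it is not implemented here):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i, j] = 1 iff samples i and j of the
    (here scalar) trajectory lie within distance eps of each other."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (d <= eps).astype(int)

t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_plot(np.sin(t), eps=0.1)   # periodic signal -> banded texture
print(R.shape)
```

For a periodic signal the matrix shows the diagonal-band texture whose checkerboard blocks the algorithm would then merge into phase space cells.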

  5. DNA Nanostructures as Models for Evaluating the Role of Enthalpy and Entropy in Polyvalent Binding

    PubMed Central

    Nangreave, Jeanette; Yan, Hao; Liu, Yan

    2011-01-01

    DNA nanotechnology allows the design and construction of nano-scale objects that have finely tuned dimensions, orientation, and structure with remarkable ease and convenience. Synthetic DNA nanostructures can be precisely engineered to model a variety of molecules and systems, providing the opportunity to probe very subtle biophysical phenomena. In this study, several such synthetic DNA nanostructures were designed to serve as models to study the binding behavior of polyvalent molecules and gain insight into how small changes to the ligand/receptor scaffolds, intended to vary their conformational flexibility, will affect their association equilibrium. This approach has yielded a quantitative identification of the roles of enthalpy and entropy in the affinity of polyvalent DNA nanostructure interactions, which exhibit an intriguing compensating effect. PMID:21381740

  6. A non-uniformly sampled 4D HCC(CO)NH-TOCSY experiment processed using maximum entropy for rapid protein sidechain assignment

    PubMed Central

    Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.

    2010-01-01

    One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives. PMID:20299257

  7. Interim heterogeneity changes measured using entropy texture features on T2-weighted MRI at 3.0 T are associated with pathological response to neoadjuvant chemotherapy in primary breast cancer.

    PubMed

    Henderson, Shelley; Purdie, Colin; Michie, Caroline; Evans, Andrew; Lerski, Richard; Johnston, Marilyn; Vinnicombe, Sarah; Thompson, Alastair M

    2017-11-01

    To investigate whether interim changes in heterogeneity (measured using entropy features) on MRI were associated with pathological residual cancer burden (RCB) at final surgery in patients receiving neoadjuvant chemotherapy (NAC) for primary breast cancer. This was a retrospective study of 88 consenting women (age: 30-79 years). Scanning was performed on a 3.0 T MRI scanner prior to NAC (baseline) and after 2-3 cycles of treatment (interim). Entropy was derived from the grey-level co-occurrence matrix, on slice-matched baseline/interim T2-weighted images. Response, assessed using the RCB score on surgically resected specimens, was compared statistically with entropy/heterogeneity changes, and ROC analysis was performed. The association of pCR within each tumour immunophenotype was evaluated. Mean entropy percent differences between examinations, by response category, were: pCR: 32.8%, RCB-I: 10.5%, RCB-II: 9.7% and RCB-III: 3.0%. Association of ultimate pCR with coarse entropy changes between baseline/interim MRI across all lesions yielded 85.2% accuracy (area under ROC curve: 0.845). Excellent sensitivity/specificity was obtained for pCR prediction within each immunophenotype: ER+: 100%/100%; HER2+: 83.3%/95.7%, TNBC: 87.5%/80.0%. Lesion T2 heterogeneity changes are associated with response to NAC using RCB scores, particularly for pCR, and can be useful across all immunophenotypes with good diagnostic accuracy. • Texture analysis provides a means of measuring lesion heterogeneity on MRI images. • Heterogeneity changes between baseline/interim MRI can be linked with ultimate pathological response. • Heterogeneity changes give good diagnostic accuracy of pCR response across all immunophenotypes. • Percentage reduction in heterogeneity is associated with pCR with good accuracy and NPV.
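The entropy feature in this record is derived from the grey-level co-occurrence matrix (GLCM). A minimal NumPy sketch for one pixel offset, on synthetic images (the quantisation scheme and offset are illustrative assumptions, not the study's protocol):

```python
import numpy as np

def glcm_entropy(img, levels, dx=1, dy=0):
    """Entropy of the grey-level co-occurrence matrix for offset (dy, dx).
    img must already be quantised to integers in [0, levels). Higher
    values indicate a more heterogeneous texture."""
    glcm = np.zeros((levels, levels), dtype=float)
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # offset neighbours
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)      # accumulate pairs
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

rng = np.random.default_rng(1)
uniform = np.zeros((64, 64), dtype=int)       # perfectly homogeneous
noisy = rng.integers(0, 8, size=(64, 64))     # heterogeneous
print(glcm_entropy(uniform, 8), glcm_entropy(noisy, 8))
```

A homogeneous region yields zero GLCM entropy, while a noisy one approaches the maximum of log2(levels²); the study tracks the baseline-to-interim change of this quantity within the lesion.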

  8. New Standard State Entropy for Sphene (Titanite)

    NASA Astrophysics Data System (ADS)

    Manon, M. R.; Dachs, E.; Essene, E. J.

    2004-12-01

    Several recent papers have questioned the accepted standard state (STP) entropy of sphene (CaTiSiO5), which had been considered to be in the range 129-132 J/mol.K (Berman, 1988: 129.3 J/mol.K; Robie and Hemingway, 1995: 129.2 J/mol.K; Holland and Powell, 1995: 131.2 J/mol.K). However, Xirouchakis and Lindsley (1998) recommended a much lower value of 106 J/mol.K for the STP entropy of sphene. Tangeman and Xirouchakis (2001) inferred a value less than 124 or 120 J/mol.K, based on enthalpy constraints combined with the tightly reversed reaction sphene+kyanite=rutile+anorthite by Bohlen and Manning (1991). Their recommendations are in conflict with the accepted values for the STP entropy of sphene, including values calculated by direct measurement of Cp from 50 to 300 K by King (1954). In order to resolve this discrepancy, we have collected new data on the Cp of sphene between 5 and 300 K. Our measurements were made in the PPMS at Salzburg on a 21.4 g sample of sphene generously furnished by Tangeman and Xirouchakis (2001), the same sample as used in their experiments. The Cp data are slightly lower than those of King (1954) but merge smoothly with the data of Tangeman and Xirouchakis (2001) from 330 to 483 K, where a transition is recorded in the Cp data as a lambda anomaly. Tangeman and Xirouchakis also obtained data above the transition up to 950 K. Integration of the new Cp data yields a STP entropy of 127.3 J/mol.K, lower than the generally accepted value by ca. 2 J/mol.K. A change in the STP entropy of sphene will have an effect on many Ti-bearing reactions which occur within the earth, although the magnitude of this change is not nearly as large as that suggested by Xirouchakis and Lindsley (1998). Above 700 K, the entropy calculated using the new STP entropy with the heat capacity equation of Tangeman and Xirouchakis (2001) is within 1 J/mol.K of the value tabulated in Robie and Hemingway (1995) and of that calculated from Berman (1988).
The effect on most phase equilibrium calculations will not be large except for reactions with small Δ S. The use of 127.2 J/mol.K as the standard entropy of sphene is recommended especially in calculations of geobarometers involving that phase.
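The entropy determination in this record follows the third law: integrate Cp/T from near 0 K up to 298.15 K. A numerical sketch using an illustrative Debye-like Cp curve in place of the actual calorimetric data:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol*K)
n_atoms = 8        # atoms per CaTiSiO5 formula unit

# Illustrative Debye-like heat capacity curve; a stand-in for the
# measured low-temperature Cp data, NOT the actual calorimetry.
T = np.linspace(5.0, 298.15, 1000)                # K
theta = 600.0                                     # assumed temperature scale, K
Cp = 3 * n_atoms * R * T**3 / (T**3 + theta**3)   # behaves as ~T^3 at low T

# Third-law standard entropy: S(298.15 K) = integral of (Cp / T) dT,
# here evaluated by the trapezoid rule on the sampled curve.
integrand = Cp / T
S_stp = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)))
print(round(S_stp, 1), "J/(mol*K)")
```

With real calorimetric Cp(T) in place of the toy curve (plus an extrapolation below the lowest measured temperature), this integral is exactly how the 127.3 J/mol.K figure in the record is obtained.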

  9. New approach of financial volatility duration dynamics by stochastic finite-range interacting voter system.

    PubMed

    Wang, Guochao; Wang, Jun

    2017-01-01

    We investigate the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which combines the maximal variation range of volatility intensity with the shortest passage time of duration, and can quantify the investment risk in financial markets. To study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed from the finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of the return time series and VTRI series are investigated. The complexity of the VTRI series of the real markets and the proposed model is then analyzed by fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity. In this process, we apply the cross-fuzzy entropy (C-FuzzyEn) to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model exhibits complex behaviors similar to those of the actual markets and indicate that the proposed VTRI series analysis and the financial model are meaningful and feasible to some extent.
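One of the complexity measures applied in this record, Lempel-Ziv complexity, can be sketched in a few lines; this is the classic LZ76 exhaustive-history parsing (phrase count), with the final, possibly incomplete phrase counted:

```python
def lz76_complexity(s):
    """Lempel-Ziv (1976) complexity: the number of phrases produced by
    the exhaustive-history parsing of the symbol string s. Higher counts
    indicate a less regular sequence."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # Grow the current phrase while it still occurs in the history
        # seen so far (the window includes the phrase's own prefix).
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

print(lz76_complexity("0101010101"))        # 3 phrases: 0 | 1 | 01010101
print(lz76_complexity("0110100110010110"))  # 7 phrases: less regular
```

Applied to a coarse-grained (e.g. binarised) VTRI series, a higher phrase count signals richer, less predictable fluctuation structure; comparing counts between model output and market data is the kind of test the record performs.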

  10. New approach of financial volatility duration dynamics by stochastic finite-range interacting voter system

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Wang, Jun

    2017-01-01

    We investigate the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which combines the maximal variation range of volatility intensity with the shortest passage time of duration, and can quantify the investment risk in financial markets. To study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed from the finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of the return time series and VTRI series are investigated. The complexity of the VTRI series of the real markets and the proposed model is then analyzed by fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity. In this process, we apply the cross-fuzzy entropy (C-FuzzyEn) to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model exhibits complex behaviors similar to those of the actual markets and indicate that the proposed VTRI series analysis and the financial model are meaningful and feasible to some extent.

  11. Ship Detection from Ocean SAR Image Based on Local Contrast Variance Weighted Information Entropy

    PubMed Central

    Huang, Yulin; Pei, Jifang; Zhang, Qian; Gu, Qin; Yang, Jianyu

    2018-01-01

    Ship detection from synthetic aperture radar (SAR) images is one of the crucial issues in maritime surveillance. However, due to the varying ocean waves and the strong echo of the sea surface, it is very difficult to detect ships against heterogeneous and strong clutter backgrounds. In this paper, an innovative ship detection method is proposed to effectively distinguish vessels from complex backgrounds in a SAR image. First, the input SAR image is pre-screened by the maximally stable extremal region (MSER) method, which can obtain the ship candidate regions with low computational complexity. Then, the proposed local contrast variance weighted information entropy (LCVWIE) is adopted to evaluate the complexity of those candidate regions and their dissimilarity from their neighborhoods. Finally, the LCVWIE values of the candidate regions are compared with an adaptive threshold to obtain the final detection result. Experimental results based on measured ocean SAR images have shown that the proposed method achieves stable detection performance in both strong clutter and heterogeneous backgrounds. Meanwhile, it has a low computational complexity compared with some existing detection methods. PMID:29652863
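    The exact LCVWIE formula is defined in the paper; as a rough illustration of the idea only, the sketch below (hypothetical helper names, not the authors' implementation) scores a candidate region by its grey-level histogram entropy weighted by the variance of its contrast against the surrounding neighbourhood:

```python
import numpy as np

def region_entropy(region, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of a region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def contrast_weighted_entropy(region, neighbourhood):
    """Toy stand-in for LCVWIE: histogram entropy of the candidate region,
    weighted by the variance of its contrast against the neighbourhood."""
    contrast = region.astype(float) / (neighbourhood.mean() + 1e-9)
    return float(np.var(contrast) * region_entropy(region))
```

    A bright, textured ship candidate then scores higher than a flat sea patch, which is the qualitative behaviour an LCVWIE-style threshold exploits.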

  12. Moderate point: Balanced entropy and enthalpy contributions in soft matter

    NASA Astrophysics Data System (ADS)

    He, Baoji; Wang, Yanting

    2017-03-01

    Various soft materials share some common features, such as significant entropic effect, large fluctuations, sensitivity to thermodynamic conditions, and mesoscopic characteristic spatial and temporal scales. However, no quantitative definitions have yet been provided for soft matter, and the intrinsic mechanisms leading to their common features are unclear. In this work, from the viewpoint of statistical mechanics, we show that soft matter works in the vicinity of a specific thermodynamic state named moderate point, at which entropy and enthalpy contributions among substates along a certain order parameter are well balanced or have a minimal difference. Around the moderate point, the order parameter fluctuation, the associated response function, and the spatial correlation length maximize, which explains the large fluctuation, the sensitivity to thermodynamic conditions, and mesoscopic spatial and temporal scales of soft matter, respectively. Possible applications to switching chemical bonds or allosteric biomachines determining their best working temperatures are also briefly discussed. Project supported by the National Basic Research Program of China (Grant No. 2013CB932804) and the National Natural Science Foundation of China (Grant Nos. 11274319 and 11421063).
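    The balance the authors describe can be illustrated with the simplest possible case, a two-level system: its heat capacity (the response function of the energy) peaks at an intermediate temperature where the enthalpic cost of excitation and the entropic gain are comparable, and it vanishes in both the enthalpy-dominated (low-T) and entropy-dominated (high-T) limits. A minimal numerical sketch (a textbook toy model, not taken from the paper, which treats general order parameters):

```python
import numpy as np

def two_level_heat_capacity(T, gap=1.0, k=1.0):
    """Heat capacity C(T) of a two-level system with energy gap `gap`
    (the Schottky anomaly), in units where k_B = k."""
    x = gap / (k * T)
    return k * x**2 * np.exp(x) / (1.0 + np.exp(x)) ** 2

T = np.linspace(0.05, 5.0, 500)
C = two_level_heat_capacity(T)
T_peak = T[np.argmax(C)]  # the "moderate point" of this toy model
```

    C is small at both ends of the temperature range and maximal near k_B T ≈ 0.42 gap, the textbook Schottky-peak location.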

  13. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L², while that for bipartition (ii) scales with L, with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  14. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE PAGES

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    2014-10-10

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L², while that for bipartition (ii) scales with L, with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.
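    For the non-interacting tight-binding comparison mentioned above, the correlation matrix method reduces the Rényi entropy to the eigenvalues of the ground-state correlation matrix restricted to the subsystem. A minimal sketch for a 1D open chain at half filling (a simplified setting chosen for illustration; the paper treats the 2D bilayer):

```python
import numpy as np

def renyi2_correlation_matrix(n_sites, n_sub):
    """Second Rényi entanglement entropy of the first `n_sub` sites of a
    half-filled open tight-binding chain, via the correlation matrix method:
    S_2 = -sum_i ln(nu_i^2 + (1 - nu_i)^2), where nu_i are eigenvalues of
    the correlation matrix C_ij = <c_i^dag c_j> restricted to the subsystem."""
    # Hopping Hamiltonian of an open chain
    H = -(np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    _, vecs = np.linalg.eigh(H)
    occ = vecs[:, : n_sites // 2]      # fill the lowest half of the modes
    C = occ @ occ.conj().T             # ground-state correlation matrix
    nu = np.linalg.eigvalsh(C[:n_sub, :n_sub])
    nu = np.clip(nu, 0.0, 1.0)
    return float(-np.sum(np.log(nu**2 + (1.0 - nu) ** 2)))
```

    Tracing out nothing gives zero entropy (the ground state is pure), while a half-chain cut gives a positive S_2 that grows with chain length for this critical chain.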

  15. Controlling of magnetocaloric effect in Gd2O3@SiO2 nanocomposites by substrate dimensionality and particles' concentration

    NASA Astrophysics Data System (ADS)

    Zeleňáková, Adriana; Hrubovčák, Pavol; Kapusta, Ondrej; Berkutova, Anna; Zeleňák, Vladimir; Franco, Victorino

    2018-04-01

    The magnetocaloric effect (MCE) of hybrid nanostructures consisting of fine gadolinium oxide (Gd2O3) nanoparticles with diameters of 7 nm and 12 nm loaded into the pores of periodically ordered mesoporous silica with hexagonal (SBA-15) or cubic (SBA-16) symmetry was investigated. The effects of the concentration of the added nanoparticles (NPs) and of the silica matrix dimensionality on the structural properties, magnetization M(H), magnetic entropy change ΔSM, and the parameters A(T) and B(T) derived from Arrott plots were studied in four samples. The examined nanocomposites exhibited reasonably high values of magnetic entropy change ΔSM, varying from 29 J/kgK for Gd2O3@SBA-15 up to 64 J/kgK for Gd2O3@SBA-16 at a maximal field change of 5 T at low temperatures. This suggests that the studied nanocomposites, in which the diamagnetic silica matrices serve as nanoreactors for the growth of Gd2O3 nanoparticles and whose symmetry strongly affects the magnetic properties of the whole composites, could be feasible for cryomagnetic refrigeration applications.

  16. Tackling Information Asymmetry in Networks: A New Entropy-Based Ranking Index

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo; Caldarelli, Guido; Squartini, Tiziano

    2018-06-01

    Information is a valuable asset in socio-economic systems, a significant part of which is embedded in the network of connections between agents. The different interlinkage patterns that agents establish may, in fact, lead to asymmetries in their knowledge of the network structure; since this entails a different ability to quantify relevant systemic properties (e.g. the risk of contagion in a network of liabilities), agents capable of better estimating (otherwise) inaccessible network properties ultimately have a competitive advantage. In this paper, we address the issue of quantifying the information asymmetry of nodes: to this aim, we define a novel index, InfoRank, intended to rank nodes according to their information content. In order to do so, each node's ego-network is enforced as a constraint of an entropy-maximization problem, and the subsequent uncertainty reduction is used to quantify the node-specific accessible information. We then test the performance of our ranking procedure in terms of reconstruction accuracy and show that it outperforms other centrality measures in identifying the "most informative" nodes. Finally, we discuss the socio-economic implications of network information asymmetry.
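    The construction can be caricatured in a few lines: in a maximum-entropy ensemble with independent edge probabilities p_ij, the ensemble entropy is a sum of per-edge binary entropies, and a node that reveals its ego-network removes exactly the entropy of its incident edges. Ranking nodes by that uncertainty reduction captures the spirit (though not the letter) of InfoRank; the helper names below are hypothetical:

```python
import numpy as np

def binary_entropy(p):
    """Entropy (bits) of a Bernoulli(p) edge variable."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def rank_by_information(P):
    """Rank nodes of a max-entropy ensemble with independent edge
    probabilities P[i, j] by the ensemble-entropy reduction achieved
    when node i reveals its ego-network (its incident edges)."""
    H = binary_entropy(P)
    np.fill_diagonal(H, 0.0)
    gain = H.sum(axis=1)   # bits resolved by fixing node i's edges
    return np.argsort(-gain), gain
```

    Nodes whose links are a priori most uncertain (p near 1/2) come out on top, matching the intuition that they carry the most information about the network structure.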

  17. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than about seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. Part of the answer is that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas in underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.

  18. Multipeak low-temperature behavior of specific heat capacity in frustrated magnetic systems: An exact theoretical analysis

    NASA Astrophysics Data System (ADS)

    Jurčišinová, E.; Jurčišin, M.

    2018-05-01

    We investigate in detail the process of formation of the multipeak low-temperature structure in the behavior of the specific heat capacity in frustrated magnetic systems, in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model with multisite interaction in the presence of an external magnetic field on the kagome-like Husimi lattice. The behavior of the entropy of the model is studied and exact values of the residual entropies of all ground states are found. It is shown that the multipeak structure in the behavior of the specific heat capacity is related to the formation of a multilevel hierarchical ordering in the system of all ground states of the model. A direct relation between the maximal number of peaks in the specific heat capacity and the number of independent interactions in the studied frustrated magnetic system is identified. The mechanism of the formation of the multipeak structure in the specific heat capacity is described and studied in detail, and it is generalized to frustrated magnetic systems with arbitrary numbers of independent interactions.
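    The simplest frustrated unit, a triangle of antiferromagnetically coupled Ising spins, already shows the key ingredient: a degenerate ground state with nonzero residual entropy, obtained here by exact enumeration (a minimal illustration; the paper works on the kagome-like Husimi lattice):

```python
import itertools
import math

def triangle_entropy(T, J=1.0):
    """Exact entropy S(T) (k_B = 1) of three Ising spins with
    antiferromagnetic coupling, E = J*(s1*s2 + s2*s3 + s3*s1), J > 0."""
    energies = [J * (s1 * s2 + s2 * s3 + s3 * s1)
                for s1, s2, s3 in itertools.product((-1, 1), repeat=3)]
    beta = 1.0 / T
    Z = sum(math.exp(-beta * E) for E in energies)       # partition function
    U = sum(E * math.exp(-beta * E) for E in energies) / Z
    F = -T * math.log(Z)
    return (U - F) / T   # S = (U - F)/T
```

    As T → 0 the entropy saturates at ln 6 ≈ 1.79 rather than 0: six of the eight states (all those with exactly one frustrated bond) are degenerate ground states, the exact-enumeration analogue of the residual entropies computed in the paper.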

  19. Fertilizer placement to maximize nitrogen use by fescue

    USDA-ARS?s Scientific Manuscript database

    The method of fertilizer nitrogen(N) application can affect N uptake in tall fescue and therefore its yield and quality. Subsurface-banding (knife) of fertilizer maximizes fescue N uptake in the poorly-drained clay–pan soils of southeastern Kansas. This study was conducted to determine if knifed N r...

  20. Assessment of risk of femoral neck fracture with radiographic texture parameters: a retrospective study.

    PubMed

    Thevenot, Jérôme; Hirvasniemi, Jukka; Pulkkinen, Pasi; Määttä, Mikko; Korpelainen, Raija; Saarakkala, Simo; Jämsä, Timo

    2014-07-01

    To investigate whether femoral neck fracture can be predicted retrospectively on the basis of clinical radiographs by using the combined analysis of bone geometry, textural analysis of trabecular bone, and bone mineral density (BMD). Formal ethics committee approval was obtained for the study, and all participants gave informed written consent. Pelvic radiographs and proximal femur BMD measurements were obtained in 53 women aged 79-82 years in 2006. By 2012, 10 of these patients had experienced a low-impact femoral neck fracture. A Laplacian-based semiautomatic custom algorithm was applied to the radiographs to calculate the texture parameters along the trabecular fibers in the lower neck area for all subjects. Intra- and interobserver reproducibility was calculated by using the root mean square average coefficient of variation to evaluate the robustness of the method. The best predictors of hip fracture were entropy (P = .007; reproducibility coefficient of variation < 1%), the neck-shaft angle (NSA) (P = .017), and the BMD (P = .13). For prediction of fracture, the area under the receiver operating characteristic curve was 0.753 for entropy, 0.608 for femoral neck BMD, and 0.698 for NSA. The area increased to 0.816 when entropy and NSA were combined and to 0.902 when entropy, NSA, and BMD were combined. Textural analysis of pelvic radiographs enables discrimination of patients at risk for femoral neck fracture, and our results show the potential of this conventional imaging method to yield better prediction than that achieved with dual-energy x-ray absorptiometry-based BMD. The combination of the entropy parameter with NSA and BMD can further enhance predictive accuracy. © RSNA, 2014.
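    The areas under the ROC curve reported above can be computed directly from per-patient scores via the Mann-Whitney statistic: the AUC is the probability that a randomly chosen fracture case scores higher than a randomly chosen control. A generic sketch of that computation (illustrative scores, not the study's data):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive (fracture) score
    exceeds a negative (control) score, counting ties as 1/2."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))
```

    Combining predictors (e.g. entropy with NSA and BMD, as in the study) amounts to recomputing the AUC on the scores of the combined model.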

  1. Irreversibility and entropy production in transport phenomena, IV: Symmetry, integrated intermediate processes and separated variational principles for multi-currents

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    2013-10-01

    The mechanism of entropy production in transport phenomena is discussed again by emphasizing the role of symmetry of non-equilibrium states and also by reformulating Einstein's theory of Brownian motion to derive entropy production from it. This yields conceptual reviews of the previous papers [M. Suzuki, Physica A 390 (2011) 1904; 391 (2012) 1074; 392 (2013) 314]. Separated variational principles of steady states for multiple external fields {X_i} and induced currents {J_i} are proposed by extending the principle of minimum integrated entropy production found by the present author for a single external field. The basic strategy of our theory on steady states is to take in all the intermediate processes from the equilibrium state to the final possible steady states in order to study the irreversible physics even in the steady states. As an application of this principle, the Glansdorff-Prigogine evolution criterion inequality (or stability condition) d_X P ≡ ∫ dr Σ_i J_i dX_i ≤ 0 is derived in the stronger form d_X Q_i ≡ ∫ dr J_i dX_i ≤ 0 for each individual force X_i and current J_i, even in nonlinear responses which depend on all the external forces {X_k} nonlinearly. This is called the "separated evolution criterion". Some explicit demonstrations of the present general theory for simple electric circuits with multiple external fields are given in order to clarify the physical essence of our new theory and to identify the condition of its validity concerning the existence of solutions of the simultaneous equations obtained by the separated variational principles. It is also instructive to compare the two results obtained by the new variational theory and by the old scheme based on the instantaneous entropy production. This seems suggestive even for the energy problem in the world.

  2. Comparison of Analytic Hierarchy Process, Catastrophe and Entropy techniques for evaluating groundwater prospect of hard-rock aquifer systems

    NASA Astrophysics Data System (ADS)

    Jenifer, M. Annie; Jha, Madan K.

    2017-05-01

    Groundwater is a treasured underground resource, which plays a central role in sustainable water management. However, since it is hidden and dynamic in nature, its sustainable development and management call for precise quantification of this precious resource at an appropriate scale. This study demonstrates the efficacy of three GIS-based multi-criteria decision analysis (MCDA) techniques, viz., Analytic Hierarchy Process (AHP), Catastrophe and Entropy, in evaluating groundwater potential through a case study in hard-rock aquifer systems. Using satellite imagery and relevant field data, eight thematic layers (rainfall, land slope, drainage density, soil, lineament density, geology, proximity to surface water bodies and elevation) of the factors having significant influence on groundwater occurrence were prepared. These thematic layers and their features were assigned suitable weights based on the conceptual frameworks of the AHP, Catastrophe and Entropy techniques, and they were then integrated in the GIS environment to generate an integrated raster layer depicting the groundwater potential index of the study area. The three groundwater prospect maps yielded by these MCDA techniques were verified using a novel approach (the concept of 'Dynamic Groundwater Potential'). The validation results revealed that the groundwater potential predicted by the AHP technique has a markedly higher accuracy of 87%, compared to the Catastrophe (46% accuracy) and Entropy (51% accuracy) techniques. It is concluded that the AHP technique is the most reliable for the assessment of groundwater resources, followed by the Entropy method. The developed groundwater potential maps can serve as a scientific guideline for the cost-effective siting of wells and the effective planning of groundwater development at a catchment or basin scale.
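    The Entropy technique referred to here is, in most MCDA formulations, the classical entropy weight method: criteria whose values vary more across alternatives carry more information and therefore receive larger weights. A generic sketch of that standard scheme (not necessarily the exact variant used in the study):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (alternatives x criteria) decision
    matrix X with non-negative entries: w_j is proportional to 1 - e_j,
    where e_j is the normalized Shannon entropy of column j."""
    m = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)
    P = np.clip(P, 1e-12, None)
    e = -np.sum(P * np.log(P), axis=0) / np.log(m)
    d = 1.0 - e                    # degree of diversification per criterion
    return d / d.sum()
```

    A criterion that is identical across all alternatives has maximal entropy and contributes nothing to the ranking, so it receives (near) zero weight.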

  3. Real topological entropy versus metric entropy for birational measure-preserving transformations

    NASA Astrophysics Data System (ADS)

    Abarenkova, N.; Anglès d'Auriac, J.-Ch.; Boukraa, S.; Maillard, J.-M.

    2000-10-01

    We consider a family of birational measure-preserving transformations of two complex variables, depending on one parameter, for which simple rational expressions for the dynamical zeta function have been conjectured, together with an equality between the topological entropy and the logarithm of the Arnold complexity (divided by the number of iterations). Similar results have been obtained for the adaptation of these two concepts to dynamical systems of real variables, leading to the introduction of a "real topological entropy" and a "real Arnold complexity". Here we compare the Kolmogorov-Sinai metric entropy with this real Arnold complexity, or real topological entropy, for this particular example of a one-parameter-dependent birational transformation of two variables. More precisely, we analyze, using an infinite-precision calculation, the Lyapunov characteristic exponents for various values of the parameter of the birational transformation, in order to compare these results with those for the real Arnold complexity. We find a quite surprising result: for this very birational example and, in fact, for a large set of birational measure-preserving mappings generated by involutions, the Lyapunov characteristic exponents seem to be equal to zero or, at least, extremely small, for all the orbits we have considered and for all values of the parameter. Birational measure-preserving transformations generated by involutions could thus allow a better understanding of the difference between the topological description and the probabilistic description of discrete dynamical systems. Many birational measure-preserving transformations generated by involutions seem to provide examples of discrete dynamical systems which can be topologically chaotic while being metrically almost quasi-periodic. Heuristically, this can be understood as a consequence of the fact that their orbits seem to form some kind of "transcendental foliation" of the two-dimensional space of variables.
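    The largest Lyapunov characteristic exponent used in such comparisons can be estimated for any 2D map by iterating a tangent vector along the orbit and renormalizing it (Benettin's method). A generic sketch on the area-preserving Chirikov standard map (a stand-in example, not the authors' birational family):

```python
import numpy as np

def lyapunov_standard_map(K, x0=0.1, p0=0.2, n_iter=2000):
    """Largest Lyapunov exponent of the area-preserving standard map
    p' = p + K*sin(x), x' = x + p' (mod 2*pi), via tangent-vector
    iteration with renormalization (Benettin's method)."""
    x, p = x0, p0
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n_iter):
        # Jacobian of the map, evaluated at the current point
        jac = np.array([[1.0 + K * np.cos(x), 1.0],
                        [K * np.cos(x), 1.0]])
        p = (p + K * np.sin(x)) % (2 * np.pi)
        x = (x + p) % (2 * np.pi)
        v = jac @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total / n_iter
```

    In the strongly chaotic regime (large K) the estimate is clearly positive, while on a regular orbit of the near-integrable map the finite-time estimate decays toward zero, the same signature of metric regularity the abstract reports for its birational mappings.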

  4. Statistical mechanics of scale-free gene expression networks

    NASA Astrophysics Data System (ADS)

    Gross, Eitan

    2012-12-01

    The gene co-expression networks of many organisms, including bacteria, mice and man, exhibit a scale-free distribution. This heterogeneous distribution of connections decreases the vulnerability of the network to random attacks and thus may confer on the genetic replication machinery an intrinsic resilience to such attacks, triggered by changing environmental conditions that the organism may be subject to during evolution. This resilience to random attacks comes at an energetic cost, however, reflected by the lower entropy of the scale-free distribution compared to the more homogeneous random network. In this study we found that the cell cycle-regulated gene expression pattern of the yeast Saccharomyces cerevisiae obeys a power-law distribution with an exponent α = 2.1 and an entropy of 1.58. The latter is very close to the maximal value of 1.65 obtained from linear optimization of the entropy function under the constraint of a constant cost function, determined by the average degree connectivity ⟨k⟩. We further show that the yeast's gene expression network can achieve a scale-free distribution in a process that does not involve growth but rather proceeds via re-wiring of the connections between nodes of an ordered network. Our results support the idea of an evolutionary selection which acts at the level of the protein sequence and is compatible with the notion of greater biological importance of highly connected nodes in the protein interaction network. Our constrained re-wiring model provides a theoretical framework for a putative thermodynamically driven evolutionary selection process.
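    The entropy gap described above can be reproduced for any finite degree support: a power-law degree distribution is necessarily less entropic than the uniform (maximum-entropy, unconstrained) distribution on the same support. A minimal sketch comparing a discrete power law with exponent α = 2.1 to the uniform distribution (illustrative support, not the yeast data, and without the paper's cost constraint):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

k = np.arange(1, 51, dtype=float)   # degrees 1..50 (illustrative support)
p_power = k ** -2.1
p_power /= p_power.sum()
p_uniform = np.full_like(p_power, 1.0 / k.size)

H_power = shannon_entropy(p_power)
H_uniform = shannon_entropy(p_uniform)
```

    The uniform distribution attains ln 50, the maximum on this support; the power law concentrates probability on low degrees and sits strictly below it, which is the energetic-cost trade-off the abstract describes.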

  5. How the Second Law of Thermodynamics Has Informed Ecosystem Ecology through Its History

    NASA Astrophysics Data System (ADS)

    Chapman, E. J.; Childers, D. L.; Vallino, J. J.

    2014-12-01

    Throughout the history of ecosystem ecology many attempts have been made to develop a general principle governing how systems develop and organize. We reviewed the historical developments that led to conceptualization of several goal-oriented principles in ecosystem ecology and the relationships among them. We focused our review on two prominent principles—the Maximum Power Principle and the Maximum Entropy Production Principle—and the literature that applies to both. While these principles have considerable conceptual overlap and both use concepts in physics (power and entropy), we found considerable differences in their historical development, the disciplines that apply these principles, and their adoption in the literature. We reviewed the literature using Web of Science keyword searches for the MPP, the MEPP, as well as for papers that cited pioneers in the MPP and the MEPP development. From the 6000 papers that our keyword searches returned, we limited our further meta-analysis to 32 papers by focusing on studies with a foundation in ecosystems research. Despite these seemingly disparate pasts, we concluded that the conceptual approaches of these two principles were more similar than dissimilar and that maximization of power in ecosystems occurs with maximum entropy production. We also found that these two principles have great potential to explain how systems develop, organize, and function, but there are no widely agreed upon theoretical derivations for the MEPP or the MPP, possibly hindering their broader use in ecological research. We end with recommendations for how ecosystems-level studies may better use these principles.

  6. Microstructural investigation of plastically deformed Ti20Zr20Hf20Nb20Ta20 high entropy alloy by X-ray diffraction and transmission electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dirras, G., E-mail: dirras@univ-paris13.fr; Gubicza, J.; Heczel, A.

    2015-10-15

    The microstructure evolution in body-centered cubic (bcc) Ti20Zr20Hf20Nb20Ta20 high entropy alloy during quasi-static compression tests was studied by X-ray line profile analysis (XLPA) and transmission electron microscopy (TEM). The average lattice constant and other important parameters of the microstructure, such as the mean crystallite size, the dislocation density and the edge/screw character of dislocations, were determined by XLPA. The elastic anisotropy factor required for the XLPA procedure was determined by nanoindentation. XLPA shows that the crystallite size decreased while the dislocation density increased with strain during compression, and their values reached about 39 nm and 15 × 10^14 m^-2, respectively, at a plastic strain of ~20%. It was revealed that with increasing strain the dislocation character became more screw-like. This can be explained by the reduced mobility of screw dislocations compared to edge dislocations in bcc structures. These observations are in line with the TEM investigations. The development of the dislocation density during compression was related to the evolution of the yield strength. - Highlights: • Ti20Zr20Hf20Nb20Ta20 high entropy alloy was processed by arc-melting. • The mechanical behavior was evaluated by an RT compression test. • The microstructure evolution was studied by XLPA and TEM. • With increasing strain the dislocation character became more screw-like. • The yield strength was related to the development of the dislocation density.

  7. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    NASA Astrophysics Data System (ADS)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction, because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point, regardless of the shape of the demand curves, for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes the profit of a trader who negotiates prices with the Rest of the World (a collective opponent) possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of a demand/supply profile. We therefore put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market except his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection to Fibonacci's classical works and to the search for the quickest algorithm for finding the extremum of a convex function: the profit intensity reaches its maximum when the probability of transaction is given by the golden ratio rule (√5 - 1)/2. This condition sets a sharp criterion of validity of the model and can be tested with real market data.
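    The algorithmic connection alluded to at the end is the golden-section search, the classical minimal-evaluation method for locating the extremum of a unimodal function, which shrinks its bracket by exactly the ratio (√5 - 1)/2 ≈ 0.618 per step. A generic sketch of that algorithm (the textbook method, not the trading model itself):

```python
import math

PHI = (math.sqrt(5) - 1) / 2   # the golden ratio rule from the abstract

def golden_section_min(f, a, b, tol=1e-8):
    """Locate the minimum of a unimodal function f on [a, b], shrinking
    the bracket by the factor (sqrt(5)-1)/2 at every iteration."""
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c            # minimum lies in [a, d]
            c = b - PHI * (b - a)
        else:
            a, c = c, d            # minimum lies in [c, b]
            d = a + PHI * (b - a)
    return (a + b) / 2
```

    The ratio works because φ² = 1 - φ, i.e. φ is the positive root of x² + x - 1 = 0, so one interior evaluation point can be reused at every step.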

  8. Explaining the density profile of self-gravitating systems by statistical mechanics

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Biao

    A self-gravitating system usually shows a quasi-universal density profile, such as the NFW profile of a simulated dark matter halo, the flat rotation curve of a spiral galaxy, the Sérsic profile of an elliptical galaxy, the King profile of a globular cluster and the exponential law of the stellar disk. It would be interesting if all of the above could be obtained from first principles. Based on the original work of White & Narayan (1987), we propose that if the self-bounded system is divided into infinitely many infinitesimal subsystems, the entropy of each subsystem can be maximized, but the whole system's gravity may just play the role of the wall, which may not increase the whole system's entropy S_t, and finally S_t may be the minimum among all of the locally maximized entropies (He & Kang 2010). For spherical systems with isotropic velocity dispersion, the form of the equation of state will be a hybrid of isothermal and adiabatic (Kang & He 2011). Hence this density profile can be approximated by a truncated isothermal sphere, which means that the total mass must be finite and our results can be consistent with observations (Kang & He 2011b). Our method requires that mass and energy be conserved, so we only compare our results with simulations of mild relaxation (i.e. the virial ratio is close to -1) of dissipationless collapse (Kang 2014), and the fit is also good. The heat capacity can be calculated and is found not to be always negative, unlike in previous works, and, combining this with calculations of the second-order variation of the entropy, we find that thermodynamical stability can still hold (Kang 2012) if the temperature tends to zero. However, the cusp in the center of dark matter halos cannot be explained, and further work is needed. The above work can be generalized to study the radial distribution of the disk (Kang 2015).
    The energy constraint automatically disappears in our variation, because angular momentum is much more important than energy for the disk-shaped system. To simplify this issue, a toy model is adopted: 2D gravity is used, so that at large scale it is consistent with a flat rotation curve; the bulge and the stellar disk are studied together. Then, with constraints of mass and angular momentum, the calculated surface density can be consistent with the truncated, up-bent or standard exponential law. Therefore the radial distribution of the stellar disk may be determined by both the random and orbital motions of stars. In our fittings the central gravity is set to be nonzero to include the effect of asymmetric components.

  9. Optical differentiation between malignant and benign lymphadenopathy by grey scale texture analysis of endobronchial ultrasound convex probe images.

    PubMed

    Nguyen, Phan; Bashirzadeh, Farzad; Hundloe, Justin; Salvado, Olivier; Dowson, Nicholas; Ware, Robert; Masters, Ian Brent; Bhatt, Manoj; Kumar, Aravind Ravi; Fielding, David

    2012-03-01

    Morphologic and sonographic features of endobronchial ultrasound (EBUS) convex probe images are helpful in predicting metastatic lymph nodes. Grey scale texture analysis is a well-established methodology that has been applied to ultrasound images in other fields of medicine. The aim of this study was to determine if this methodology could differentiate between benign and malignant lymphadenopathy in EBUS images. Lymph nodes from digital images of EBUS procedures were manually mapped to obtain a region of interest and were analyzed in a prediction set. The regions of interest were analyzed for the following grey scale texture features in MATLAB (version 7.8.0.347 [R2009a]): mean pixel value, difference between maximal and minimal pixel value, SEM of the pixel value, entropy, correlation, energy, and homogeneity. Significant grey scale texture features were used to assess a validation set, compared with fluorodeoxyglucose (FDG) PET-CT scan findings where available. Fifty-two malignant nodes and 48 benign nodes were in the prediction set. Malignant nodes had a greater difference between the maximal and minimal pixel values, SEM of the pixel value, entropy, and correlation, and a lower energy (P < .0001 for all values). Fifty-one lymph nodes were in the validation set; 44 of 51 (86.3%) were classified correctly. Eighteen of these lymph nodes also had FDG PET-CT scan assessment, which correctly classified 14 of 18 nodes (77.8%), compared with grey scale texture analysis, which correctly classified 16 of 18 nodes (88.9%). Grey scale texture analysis of EBUS convex probe images can be used to differentiate malignant and benign lymphadenopathy. Preliminary results are comparable to those of FDG PET-CT scanning.
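    The features listed (entropy, correlation, energy, homogeneity) are the standard Haralick statistics of the grey-level co-occurrence matrix (GLCM). A compact sketch of the core computation for horizontally adjacent pixel pairs (a generic illustration, not the authors' MATLAB pipeline):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Energy, entropy and homogeneity of the grey-level co-occurrence
    matrix for horizontally adjacent pixels, with `levels` grey bins."""
    q = np.minimum((img.astype(float) / 256.0 * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1            # count co-occurring grey-level pairs
    P = glcm / glcm.sum()
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "energy": float(np.sum(P**2)),
        "entropy": float(-np.sum(nz * np.log2(nz))),
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }
```

    A homogeneous (benign-looking) patch concentrates the GLCM in one cell, giving energy 1 and entropy 0; a speckled patch spreads it out, lowering the energy and raising the entropy, which is the direction of the differences reported above for malignant nodes.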

  10. Multipass Target Search in Natural Environments

    PubMed Central

    Otte, Michael W.; Sofge, Donald; Gupta, Satyandra K.

    2017-01-01

Consider a disaster scenario where search and rescue workers must search difficult-to-access buildings during an earthquake or flood. Often, finding survivors a few hours sooner results in a dramatic increase in saved lives, suggesting the use of drones for expedient rescue operations. Entropy can be used to quantify the generation and resolution of uncertainty. When searching for targets, maximizing the mutual information of future sensor observations minimizes the expected target location uncertainty by minimizing the entropy of the future estimate. Motion planning for multi-target autonomous search requires planning over an area with an imperfect sensor and may require multiple passes, which is hindered by the submodularity property of mutual information. Further, mission duration constraints must be handled accordingly: the planner must consider the vehicle’s dynamics to generate feasible trajectories and must plan trajectories spanning the entire mission duration, something most information-gathering algorithms are incapable of doing. If unanticipated changes occur in an uncertain environment, new plans must be generated quickly. In addition, planning multipass trajectories requires evaluating path-dependent rewards, which means planning in the space of all previously selected actions, compounding the problem. We present an anytime algorithm for autonomous multipass target search in natural environments. The algorithm is capable of generating long-duration, dynamically feasible multipass coverage plans that maximize mutual information, using a variety of techniques such as ϵ-admissible heuristics to speed up the search. To the authors’ knowledge this is the first attempt at efficiently solving multipass target search problems of such long duration. The proposed algorithm is based on best-first branch and bound and is benchmarked against state-of-the-art algorithms adapted to the problem in natural Simplex environments, gathering the most information in the given search time. PMID:29099087
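The entropy-reduction objective described above can be illustrated with a toy one-cell calculation: the mutual information between a Bernoulli target-presence variable and a binary observation from an imperfect sensor. The detection and false-alarm rates below are hypothetical, not the paper's sensor model.

```python
import math

def H(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(p, pd=0.9, pf=0.1):
    """I(target; observation) for a cell with prior presence probability p,
    detection probability pd and false-alarm probability pf."""
    p_det = p * pd + (1 - p) * pf              # P(sensor fires)
    post_det = p * pd / p_det if p_det else 0.0
    p_nodet = 1 - p_det
    post_nodet = p * (1 - pd) / p_nodet if p_nodet else 0.0
    # prior entropy minus expected posterior entropy
    return H(p) - (p_det * H(post_det) + p_nodet * H(post_nodet))

# greedy single-step choice: observe the cell with the highest expected gain;
# the most uncertain cell (p = 0.5) wins, as intuition suggests
priors = [0.05, 0.5, 0.9]
best = max(range(len(priors)), key=lambda i: mutual_information(priors[i]))
```

The multipass difficulty noted in the abstract shows up here as submodularity: once a cell has been observed, its posterior is less uncertain, so a second look at the same cell yields less information than the first.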

  11. Validating predictions from climate envelope models

    USGS Publications Warehouse

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
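Sensitivity and specificity as used here are standard confusion-matrix quantities; a minimal sketch of their computation from presence/absence records:

```python
def sensitivity_specificity(actual, predicted):
    """Sensitivity = TP/(TP+FN): correctly classified presences.
    Specificity = TN/(TN+FP): correctly classified absences."""
    pairs = list(zip(actual, predicted))
    tp = sum(1 for a, p in pairs if a and p)
    fn = sum(1 for a, p in pairs if a and not p)
    tn = sum(1 for a, p in pairs if not a and not p)
    fp = sum(1 for a, p in pairs if not a and p)
    return tp / (tp + fn), tn / (tn + fp)
```

A hybrid model that assumes range expansion but not contraction tends, by construction, to predict more presences, which raises sensitivity at the cost of specificity, consistent with the trade-off the authors report.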

  12. Black holes in vector-tensor theories and their thermodynamics

    NASA Astrophysics Data System (ADS)

    Fan, Zhong-Ying

    2018-01-01

In this paper, we study Einstein gravity either minimally or non-minimally coupled to a vector field which breaks the gauge symmetry explicitly in general dimensions. We first consider a minimal theory, which is simply the Einstein-Proca theory extended with a quartic self-interaction term for the vector field. We obtain its general static maximally symmetric black hole solution and study the thermodynamics using the Wald formalism. The solution behaves much like a Reissner-Nordström black hole, despite the fact that a global charge cannot be defined for the vector. For non-minimal theories, we obtain a number of exact black hole solutions, depending on the parameters of the theories. In particular, many of the solutions are general static and have maximal symmetry. However, there are some subtleties and ambiguities in the derivation of the first laws because the existence of an algebraic degree of freedom of the vector in general invalidates the Wald entropy formula. The thermodynamics of these solutions deserves further study.

  13. Maximally informative pairwise interactions in networks

    PubMed Central

    Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.

    2010-01-01

    Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests ways for how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153
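A minimal illustration of the pairwise maximum entropy (Ising) model for a small network, by exhaustive enumeration of states. The fields h and couplings J are free parameters; with all of them zero the distribution is uniform, i.e. maximal-entropy, matching the paper's result that optimal interactions vanish on average for simple input statistics.

```python
import itertools
import math

def ising_distribution(h, J):
    """P(s) ∝ exp(Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j), with s_i ∈ {-1, +1}."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))
    w = []
    for s in states:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        w.append(math.exp(e))
    z = sum(w)  # partition function
    return states, [x / z for x in w]

# zero fields and couplings: uniform distribution over the 2^3 states
states, p = ising_distribution([0.0, 0.0, 0.0], [[0.0] * 3 for _ in range(3)])
entropy = -sum(q * math.log2(q) for q in p)  # 3 bits for 3 independent units
```

Exhaustive enumeration only scales to small n; fitting couplings to data for larger networks requires the kinds of approximate methods the maximum entropy literature develops.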

  14. Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takumi; Sagawa, Takahiro

    2018-04-01

A sufficient statistic is a significant concept in statistics, meaning a random variable that carries all the information required for an inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity attains its maximum. Since maximal sensory capacity imposes the constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems, there exists an optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are optimized at the same time. We apply our general result to a model of sensory adaptation of E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.

  15. Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision

    NASA Astrophysics Data System (ADS)

    Hendrawan, Y.; Hawa, L. C.; Damayanti, R.

    2018-03-01

This study applied a machine vision-based monitoring system able to optimise the drying process of cassava chips. The objective of this study is to propose a fish swarm intelligent (FSI) optimization algorithm to find the most significant set of image features for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-Objective Optimization (MOO) was used in this study, consisting of prediction-accuracy maximization and feature-subset-size minimization. The results showed that the best feature subset was: grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and water content of cassava chips during drying, with an R2 of 0.9 between real and predicted data.
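The multi-objective criterion (maximize accuracy, minimize subset size) can be sketched with a simple greedy hill-climb standing in for the fish-swarm search; the penalty weight and the scoring function used below are illustrative stand-ins, not the paper's method.

```python
def greedy_feature_selection(features, score, lam=0.01):
    """Greedy forward search for the scalarized multi-objective criterion
    score(subset) - lam * len(subset). A simple stand-in for FSI search."""
    selected = []
    best = score(selected) - lam * len(selected)
    improved = True
    while improved:
        improved = False
        for f in features:
            if f in selected:
                continue
            cand = selected + [f]
            val = score(cand) - lam * len(cand)
            if val > best:  # accept any feature that improves the criterion
                best, selected, improved = val, cand, True
    return selected
```

In the study the `score` would be the cross-validated ANN prediction accuracy; swarm methods explore many subsets in parallel and are less prone to the local optima this greedy variant can get stuck in.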

  16. [Photosynthetic fluorescence characteristics of floating-leaved and submersed macrophytes commonly found in Taihu Lake].

    PubMed

    Song, Yu-zhi; Cai, Wei; Qin, Bo-qiang

    2009-03-01

Some aquatic macrophytes commonly found in Taihu Lake, including Trapa bispinosa, Nymphyoides peltatum, Vallisneria natans, and Hydrilla verticillata, were collected, and their maximal quantum yield of photosystem II (Fv/Fm) as well as their rapid light curves (RLCs) under conditions of light adaptation and dark adaptation were measured in situ using a submersible pulse-amplitude-modulated fluorometer (Diving-PAM). The results showed that the floating-leaved plants T. bispinosa and N. peltatum had a higher potential maximum photosynthetic capacity than the submerged macrophytes V. natans and H. verticillata. The measured maximal quantum yields of T. bispinosa, N. peltatum, V. natans, and H. verticillata were 0.837, 0.831, 0.684, and 0.764, respectively. Both the maximal relative electron transport rate and the half-saturation point of light intensity of T. bispinosa and N. peltatum were higher than those of V. natans and H. verticillata, especially under the condition of light adaptation.
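The maximal quantum yield reported by a PAM fluorometer is computed from the dark-adapted minimal (F0) and maximal (Fm) fluorescence as Fv/Fm = (Fm − F0)/Fm. A sketch, with hypothetical fluorescence readings chosen only to reproduce the reported 0.837:

```python
def max_quantum_yield(f0, fm):
    """Maximal PSII quantum yield: Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / fm

# hypothetical dark-adapted fluorescence readings (arbitrary units)
fv_fm = max_quantum_yield(f0=300.0, fm=1840.0)
```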

  17. Dynamics of yield-stress droplets: Morphology of impact craters

    NASA Astrophysics Data System (ADS)

    Neufeld, Jerome; Sohr, David; Ferrari, Leo; Dalziel, Stuart

    2017-11-01

Yield strength can play an important role in the dynamics of droplets impacting on surfaces, whether at the industrial or planetary scale, and can capture a zoo of impact crater morphologies, from simple parabolic craters to more complex forms with, for example, multiple rings or central peaks. Here we show that the morphology of planetary impact craters can be reproduced in the laboratory using carbopol, a transparent yield-stress fluid, as both impactor and bulk fluid. Using high-speed video photography, we characterise the universal, transient initial excavation stage of impact and show the dependence of the subsequent relaxation to final crater morphology on impactor size, impact speed and yield stress. To further interrogate our laboratory impacts, we dye our impactor to map its final distribution and use particle tracking within the bulk fluid to determine the flow fields during impact and the maximal extent of the yield surface. The results of these laboratory impacts are used to infer the properties of planetary impactors and to aid in interpreting planetary crater morphologies.

  18. An entropy and viscosity corrected potential method for rotor performance prediction

    NASA Technical Reports Server (NTRS)

    Bridgeman, John O.; Strawn, Roger C.; Caradonna, Francis X.

    1988-01-01

An unsteady Full-Potential Rotor code (FPR) has been enhanced with modifications directed at improving its drag prediction capability. The shock-generated entropy has been included to provide solutions comparable to the Euler equations. A weakly interacting integral boundary layer has also been coupled to FPR in order to estimate skin-friction drag. Pressure distributions, shock positions, and drag comparisons are made with various data sets derived from two-dimensional airfoil, hovering, and advancing high-speed rotor tests. In all these comparisons, the nonisentropic modification improves (i.e., weakens) the predicted shock strength and wave drag. In addition, the boundary layer method yields reasonable estimates of skin-friction drag. Airfoil drag and hover torque data comparisons are excellent, as are predicted shock strengths and positions for a high-speed advancing rotor.

  19. Finite temperature properties of clusters by replica exchange metadynamics: the water nonamer.

    PubMed

    Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xin-Gao

    2011-03-02

    We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. On the basis of a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic, or especially configurational, the latter often forgotten in many cluster studies, are automatically included. For the present demonstration, we choose the water nonamer (H(2)O)(9), an extremely simple cluster, which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.
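The reported free-energy crossing follows from F = E − TS: a structure with higher energy but higher (e.g. configurational) entropy is eventually favoured as temperature rises. A reduced-units sketch with hypothetical values, not the nonamer's actual energetics:

```python
def helmholtz(E, S, T):
    """Free energy F = E - T*S (reduced units)."""
    return E - T * S

# hypothetical reduced-unit values: structure 2 lies higher in energy
# but has more configurational entropy, so its F falls faster with T
E1, S1 = 0.0, 1.0
E2, S2 = 1.0, 3.0
T_cross = (E2 - E1) / (S2 - S1)  # temperature at which the free energies cross
```

Equivalently, replica exchange metadynamics measures equilibrium basin occupancies p_a, p_b, from which the same free-energy difference follows as F_b − F_a = −kT ln(p_b/p_a).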

  20. Finite Temperature Properties of Clusters by Replica Exchange Metadynamics: The Water Nonamer

    NASA Astrophysics Data System (ADS)

    Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xingao

    2012-02-01

    We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. Based on a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic and especially configurational -- the latter often forgotten in many cluster studies -- are automatically included. For the present demonstration we choose the water nonamer (H2O)9, an extremely simple cluster which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.

  1. Quantum information processing in the radical-pair mechanism: Haberkorn's theory violates the Ozawa entropy bound

    NASA Astrophysics Data System (ADS)

    Mouloudakis, K.; Kominis, I. K.

    2017-02-01

    Radical-ion-pair reactions, central for understanding the avian magnetic compass and spin transport in photosynthetic reaction centers, were recently shown to be a fruitful paradigm of the new synthesis of quantum information science with biological processes. We show here that the master equation so far constituting the theoretical foundation of spin chemistry violates fundamental bounds for the entropy of quantum systems, in particular the Ozawa bound. In contrast, a recently developed theory based on quantum measurements, quantum coherence measures, and quantum retrodiction, thus exemplifying the paradigm of quantum biology, satisfies the Ozawa bound as well as the Lanford-Robinson bound on information extraction. By considering Groenewold's information, the quantum information extracted during the reaction, we reproduce the known and unravel other magnetic-field effects not conveyed by reaction yields.

  2. Stability region maximization by decomposition-aggregation method. [Skylab stability

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Cuk, S. M.

    1974-01-01

The aim of this work is to improve the estimates of stability regions by formulating and resolving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.

  3. Heat, temperature and Clausius inequality in a model for active Brownian particles

    PubMed Central

    Marconi, Umberto Marini Bettolo; Puglisi, Andrea; Maggi, Claudio

    2017-01-01

    Methods of stochastic thermodynamics and hydrodynamics are applied to a recently introduced model of active particles. The model consists of an overdamped particle subject to Gaussian coloured noise. Inspired by stochastic thermodynamics, we derive from the system’s Fokker-Planck equation the average exchanges of heat and work with the active bath and the associated entropy production. We show that a Clausius inequality holds, with the local (non-uniform) temperature of the active bath replacing the uniform temperature usually encountered in equilibrium systems. Furthermore, by restricting the dynamical space to the first velocity moments of the local distribution function we derive a hydrodynamic description where local pressure, kinetic temperature and internal heat fluxes appear and are consistent with the previous thermodynamic analysis. The procedure also shows under which conditions one obtains the unified coloured noise approximation (UCNA): such an approximation neglects the fast relaxation to the active bath and therefore yields detailed balance and zero entropy production. In the last part, by using multiple time-scale analysis, we provide a constructive method (alternative to UCNA) to determine the solution of the Kramers equation and go beyond the detailed balance condition determining negative entropy production. PMID:28429787

  4. Heat, temperature and Clausius inequality in a model for active Brownian particles.

    PubMed

    Marconi, Umberto Marini Bettolo; Puglisi, Andrea; Maggi, Claudio

    2017-04-21

    Methods of stochastic thermodynamics and hydrodynamics are applied to a recently introduced model of active particles. The model consists of an overdamped particle subject to Gaussian coloured noise. Inspired by stochastic thermodynamics, we derive from the system's Fokker-Planck equation the average exchanges of heat and work with the active bath and the associated entropy production. We show that a Clausius inequality holds, with the local (non-uniform) temperature of the active bath replacing the uniform temperature usually encountered in equilibrium systems. Furthermore, by restricting the dynamical space to the first velocity moments of the local distribution function we derive a hydrodynamic description where local pressure, kinetic temperature and internal heat fluxes appear and are consistent with the previous thermodynamic analysis. The procedure also shows under which conditions one obtains the unified coloured noise approximation (UCNA): such an approximation neglects the fast relaxation to the active bath and therefore yields detailed balance and zero entropy production. In the last part, by using multiple time-scale analysis, we provide a constructive method (alternative to UCNA) to determine the solution of the Kramers equation and go beyond the detailed balance condition determining negative entropy production.

  5. A model for competitiveness level analysis in sports competitions: Application to basketball

    NASA Astrophysics Data System (ADS)

    de Saá Guerra, Y.; Martín González, J. M.; Sarmiento Montesdeoca, S.; Rodríguez Ruiz, D.; García-Rodríguez, A.; García-Manso, J. M.

    2012-05-01

The degree of overall competitiveness of a sport league is a complex phenomenon. It is difficult to assess and quantify all elements that yield the final standing. In this paper, we analyze the general behavior of the result matrices of each season and use the corresponding results as a probability density. Thus, the results of previous seasons are a way to investigate the probability that each team has of reaching a certain number of victories. We developed a model based on Shannon entropy using two extreme competitive structures (a hierarchical structure and a random structure), and applied this model to investigate the competitiveness of two of the best professional basketball leagues: the NBA (USA) and the ACB (Spain). Both leagues’ entropy levels are high (NBA mean 0.983; ACB mean 0.980), indicating high competitiveness, although the entropy of the ACB (from 0.986 to 0.972) demonstrated more seasonal variability than that of the NBA (from 0.985 to 0.990), a possible result of greater sporting gradients in the ACB. The use of this methodology has proven useful for investigating the competitiveness of sports leagues as well as their underlying variability across time.
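The entropy measure can be sketched as the Shannon entropy of the league's victory distribution, normalized by its maximum (uniform) value. This is a common normalization; the paper's exact construction from the season result matrices may differ.

```python
import math

def normalized_entropy(wins):
    """Shannon entropy of the victory distribution, normalized by log(n_teams).
    1.0 = perfectly balanced league, 0 = one team takes everything."""
    total = sum(wins)
    ps = [w / total for w in wins if w > 0]
    return -sum(p * math.log(p) for p in ps) / math.log(len(wins))
```

A perfectly random league (all teams with equal wins) gives 1.0, while a strongly hierarchical league gives a value well below 1, matching the two extreme competitive structures the model is built on.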

  6. On the nature of the excess heat capacity of mixing

    NASA Astrophysics Data System (ADS)

    Benisek, Artur; Dachs, Edgar

    2011-03-01

The excess vibrational entropies (ΔS_vib,ex) of several silicate solid solutions are found to be linearly correlated with the differences in end-member volumes (ΔV_i) and end-member bulk moduli (Δκ_i). If a substitution produces both larger and elastically stiffer polyhedra, then the substituted ion will find itself in a stiff, enlarged structure. The frequency of its vibration is decreased because of the increase in bond lengths. Lowering of frequencies produces larger heat capacities, which give rise to positive excess vibrational entropies. If a substitution produces larger but elastically softer polyhedra, then the increase and decrease of mean bond lengths may be similar in magnitude and their effects on the vibrational entropy tend to compensate. The empirical relationship between ΔS_vib,ex, ΔV_i and Δκ_i, described by ΔS_vib,ex = (ΔV_i + mΔκ_i)·f, was calibrated on six silicate solid solutions (analbite-sanidine, pyrope-grossular, forsterite-fayalite, analbite-anorthite, anorthite-sanidine, CaTs-diopside), yielding m = 0.0246 and f = 2.926. It allows the prediction of the ΔS_vib,ex behaviour of a solid solution based on its end-member volume and bulk modulus data.
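A direct transcription of the calibrated relationship, useful for quick estimates (ΔV_i and Δκ_i in the units of the original calibration):

```python
def excess_vibrational_entropy(dV, dkappa, m=0.0246, f=2.926):
    """Empirical estimate ΔS_vib,ex = (ΔV_i + m*Δκ_i) * f,
    with m and f as calibrated on the six silicate solid solutions."""
    return (dV + m * dkappa) * f
```

The compensation case described in the text is visible here: a sufficiently negative Δκ_i (softer polyhedra) cancels a positive ΔV_i, giving ΔS_vib,ex ≈ 0.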

  7. Flood control project selection using an interval type-2 entropy weight with interval type-2 fuzzy TOPSIS

    NASA Astrophysics Data System (ADS)

    Zamri, Nurnadiah; Abdullah, Lazim

    2014-06-01

Flood control project selection is a complex issue which takes economic, social, environmental and technical attributes into account. Selection of the best flood control project requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are made under uncertainty, it is relatively difficult for them to provide exact numerical values. The interval type-2 fuzzy set (IT2FS) is a strong tool which can deal with subjective, incomplete, and vague information, and it helps in situations where the information about criteria weights for alternatives is completely unknown. Therefore, this paper adopts the interval type-2 entropy concept in the weighting process of interval type-2 fuzzy TOPSIS. This entropy weight is believed to effectively balance the influence of uncertainty factors in evaluating attributes. Then, a modified ranking value is proposed in line with the interval type-2 entropy weight. Quantitative and qualitative factors normally linked with flood control projects are considered for ranking. Data in the form of interval type-2 linguistic variables were collected from three authorised personnel of three Malaysian Government agencies. The study covers the whole of Malaysia. The analysis shows that the diversion scheme yielded the highest closeness coefficient, at 0.4807. A ranking can be drawn using the magnitude of the closeness coefficient, and the diversion scheme ranked first among the five alternatives.
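The entropy-weighting idea can be illustrated in its classical crisp (type-1) form, where criteria whose scores vary more across alternatives receive larger weights; the interval type-2 fuzzy version in the paper generalizes this to linguistic, uncertain inputs.

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy criterion weights for an m x n decision matrix
    (crisp type-1 simplification of the interval type-2 scheme)."""
    m = len(matrix)
    cols = list(zip(*matrix))
    weights = []
    for col in cols:
        s = sum(col)
        ps = [x / s for x in col]
        # normalized column entropy: 1 means the criterion does not discriminate
        e = -sum(p * math.log(p) for p in ps if p > 0) / math.log(m)
        weights.append(1 - e)  # divergence: low entropy => discriminating criterion
    total = sum(weights)
    return [w / total for w in weights]
```

A criterion on which every alternative scores identically carries no ranking information and receives (near-)zero weight, which is exactly the balancing effect the abstract attributes to the entropy weight.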

  8. Coherent entropy induced and acoustic noise separation in compact nozzles

    NASA Astrophysics Data System (ADS)

    Tao, Wenjie; Schuller, Thierry; Huet, Maxime; Richecoeur, Franck

    2017-04-01

A method to separate entropy-induced noise from an acoustic pressure wave in a harmonically perturbed flow through a nozzle is presented. It is tested on an original experimental setup generating simultaneously acoustic and temperature fluctuations in an air flow that is accelerated by a convergent nozzle. The setup mimics the direct and indirect noise contributions to the acoustic pressure field in a confined combustion chamber by producing synchronized acoustic and temperature fluctuations, without dealing with the complexity of the combustion process. It allows generating temperature fluctuations with amplitudes up to 10 K in the frequency range from 10 to 100 Hz. The noise separation technique uses experiments with and without temperature fluctuations to determine the relative level of acoustic and entropy fluctuations in the system and to identify the nozzle response to these forcing waves. It requires multi-point measurements of acoustic pressure and temperature. The separation method is first validated with direct numerical simulations of the nonlinear Euler equations. These simulations are used to investigate the conditions under which the separation technique is valid, and they yield trends similar to the experiments for the investigated flow operating conditions. The separation method then successfully retrieves the acoustic reflection coefficient, but it does not recover the same entropy reflection coefficient as predicted by the compact nozzle theory, due to the sensitivity of the method to signal noise in the explored experimental conditions. This methodology provides a framework for the experimental investigation of direct and indirect combustion noise originating from synchronized perturbations.

  9. Tandem Lewis/Brønsted homogeneous acid catalysis: conversion of glucose to 5-hydroxymethylfurfural in an aqueous chromium(iii) chloride and hydrochloric acid solution

    DOE PAGES

    Swift, T. Dallas; Nguyen, Hannah; Anderko, Andrzej; ...

    2015-07-25

Here, a kinetic model for the tandem conversion of glucose to 5-hydroxymethylfurfural (HMF) through fructose in aqueous CrCl 3–HCl solution was developed by analyzing experimental data. We show that the coupling of Lewis and Brønsted acids in a single pot overcomes equilibrium limitations of the glucose–fructose isomerization leading to high glucose conversions and identify conditions that maximize HMF yield. Adjusting the HCl/CrCl 3 concentration has a more pronounced effect on HMF yield at constant glucose conversion than that of temperature or CrCl 3 concentration. This is attributed to the interactions between HCl and CrCl 3 speciation in solution that leads to HMF yield being maximized at moderate HCl concentrations for each CrCl 3 concentration. This volcano-like behavior is accompanied with a change in the rate-limiting step from fructose dehydration to glucose isomerization as the concentration of the Brønsted acid increases. The maximum HMF yield in a single aqueous phase is only modest and appears independent of catalysts’ concentrations as long as they are appropriately balanced. However, it can be further maximized in a biphasic system. Our findings are consistent with recent studies in other tandem reactions catalyzed by different catalysts.
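The tandem scheme and its volcano-like yield behaviour can be caricatured with a toy first-order cascade in which the Brønsted acid accelerates both HMF formation and HMF degradation. The rate constants below are hypothetical illustrations, not the paper's fitted model.

```python
def peak_hmf_yield(k_isom, k_dehyd, k_degr, t_end=10.0, dt=1e-3):
    """Forward-Euler integration of a toy first-order cascade
    glucose -> fructose -> HMF -> degradation products;
    returns the maximum HMF yield reached before t_end."""
    glu, fru, hmf = 1.0, 0.0, 0.0
    best, t = 0.0, 0.0
    while t < t_end:
        d_glu = -k_isom * glu
        d_fru = k_isom * glu - k_dehyd * fru
        d_hmf = k_dehyd * fru - k_degr * hmf
        glu += d_glu * dt
        fru += d_fru * dt
        hmf += d_hmf * dt
        best = max(best, hmf)
        t += dt
    return best

# toy Brønsted-acid proxy: HCl accelerates both dehydration and degradation,
# producing a volcano-like dependence of peak HMF yield on acid strength
yields = [peak_hmf_yield(1.0, c, 0.1 * c) for c in (0.1, 1.0, 50.0)]
```

At low acid strength dehydration is rate-limiting and HMF forms too slowly; at high acid strength degradation dominates; the yield peaks in between, mirroring the moderate-HCl optimum reported above.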

  10. Tandem Lewis/Brønsted homogeneous acid catalysis: conversion of glucose to 5-hydroxymethylfurfural in an aqueous chromium(iii) chloride and hydrochloric acid solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swift, T. Dallas; Nguyen, Hannah; Anderko, Andrzej

Here, a kinetic model for the tandem conversion of glucose to 5-hydroxymethylfurfural (HMF) through fructose in aqueous CrCl 3–HCl solution was developed by analyzing experimental data. We show that the coupling of Lewis and Brønsted acids in a single pot overcomes equilibrium limitations of the glucose–fructose isomerization leading to high glucose conversions and identify conditions that maximize HMF yield. Adjusting the HCl/CrCl 3 concentration has a more pronounced effect on HMF yield at constant glucose conversion than that of temperature or CrCl 3 concentration. This is attributed to the interactions between HCl and CrCl 3 speciation in solution that leads to HMF yield being maximized at moderate HCl concentrations for each CrCl 3 concentration. This volcano-like behavior is accompanied with a change in the rate-limiting step from fructose dehydration to glucose isomerization as the concentration of the Brønsted acid increases. The maximum HMF yield in a single aqueous phase is only modest and appears independent of catalysts’ concentrations as long as they are appropriately balanced. However, it can be further maximized in a biphasic system. Our findings are consistent with recent studies in other tandem reactions catalyzed by different catalysts.

  11. Sugarcane cultural practices to increase profits

    USDA-ARS?s Scientific Manuscript database

Multiple experiments were initiated in an effort to reduce costs, increase ratooning, and maximize profits. A new method of mechanical removal using a modified rake produced yields similar to burning, with both yielding an additional 1000 lbs/A compared with full retention. Where burning was not an option,...

  12. Simultaneous catalytic conversion of cellulose and corncob xylan under temperature programming for enhanced sorbitol and xylitol production.

    PubMed

    Ribeiro, Lucília Sousa; Órfão, José J de Melo; Pereira, Manuel Fernando Ribeiro

    2017-11-01

Sorbitol and xylitol yields can be improved by converting cellulose and xylan simultaneously, due to a synergetic effect between the two substrates. Furthermore, both yields can be greatly enhanced by simply adjusting the reaction conditions toward the optimum for each product, since xylitol (from xylan) and sorbitol (from cellulose) yields are maximized when the reaction is carried out at 170 and 205°C, respectively. Therefore, combining the simultaneous conversion of cellulose and xylan with a two-step temperature approach, in which the reaction temperature is raised from 170 to 205°C after 2 h, proved to be a good strategy for maximizing the production of sorbitol and xylitol directly from a mixture of cellulose and xylan. Using this new and environmentally friendly approach, sorbitol and xylitol yields of 75 and 77%, respectively, were obtained after 6 h of reaction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.

  14. Temporally dependent pollinator competition and facilitation with mass flowering crops affects yield in co-blooming crops

    PubMed Central

    Grab, Heather; Blitzer, Eleanor J.; Danforth, Bryan; Loeb, Greg; Poveda, Katja

    2017-01-01

    One of the greatest challenges in sustainable agricultural production is managing ecosystem services, such as pollination, in ways that maximize crop yields. Most efforts to increase services by wild pollinators focus on management of natural habitats surrounding farms or non-crop habitats within farms. However, mass flowering crops create resource pulses that may be important determinants of pollinator dynamics. Mass bloom attracts pollinators and it is unclear how this affects the pollination and yields of other co-blooming crops. We investigated the effects of mass flowering apple on the pollinator community and yield of co-blooming strawberry on farms spanning a gradient in cover of apple orchards in the landscape. The effect of mass flowering apple on strawberry was dependent on the stage of apple bloom. During early and peak apple bloom, pollinator abundance and yield were reduced in landscapes with high cover of apple orchards. Following peak apple bloom, pollinator abundance was greater on farms with high apple cover and corresponded with increased yields on these farms. Spatial and temporal overlap between mass flowering and co-blooming crops alters the strength and direction of these dynamics and suggests that yields can be optimized by designing agricultural systems that avoid competition while maximizing facilitation. PMID:28345653

  15. miRge - A Multiplexed Method of Processing Small RNA-Seq Data to Determine MicroRNA Entropy

    PubMed Central

    Myers, Jason R.; Gupta, Simone; Weng, Lien-Chun; Ashton, John M.; Cornish, Toby C.; Pandey, Akhilesh; Halushka, Marc K.

    2015-01-01

    Small RNA-seq for microRNAs (miRNAs) is a rapidly developing field where opportunities still exist to create better bioinformatics tools to process these large datasets and generate new, useful analyses. We built miRge to be a fast, smart small RNA-seq solution that processes samples in a highly multiplexed fashion. miRge employs a Bayesian alignment approach, whereby reads are sequentially aligned against customized mature miRNA, hairpin miRNA, noncoding RNA and mRNA sequence libraries. miRNAs are summarized at the level of raw reads in addition to reads per million (RPM). Reads for all other RNA species (tRNA, rRNA, snoRNA, mRNA) are provided, which is useful for identifying potential contaminants and optimizing small RNA purification strategies. miRge was designed to optimally identify miRNA isomiRs and employs an entropy-based statistical measurement to identify differential production of isomiRs. This allowed us to identify decreasing entropy in isomiRs as stem cells mature into retinal pigment epithelial cells. Conversely, we show that pancreatic tumor miRNAs have entropy similar to that of matched normal pancreatic tissues. In a head-to-head comparison with other miRNA analysis tools (miRExpress 2.0, sRNAbench, omiRAs, miRDeep2, Chimira, UEA small RNA Workbench), miRge was faster (4- to 32-fold) and was among the top two methods in maximally aligning miRNA reads per sample. Moreover, miRge has no inherent limits to its multiplexing. miRge was capable of simultaneously analyzing 100 small RNA-seq samples in 52 minutes, providing an integrated analysis of miRNA expression across all samples. As miRge was designed for analysis of single as well as multiple samples, it is an ideal tool for both high- and low-throughput users. miRge is freely available at http://atlas.pathology.jhu.edu/baras/miRge.html. PMID:26571139

  16. Thermal activation parameters of plastic flow reveal deformation mechanisms in the CrMnFeCoNi high-entropy alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laplanche, Guillaume; Bonneville, J.; Varvenne, C.

    To reveal the operating mechanisms of plastic deformation in an FCC high-entropy alloy, the activation volumes in CrMnFeCoNi have been measured as a function of plastic strain and temperature between 77 K and 423 K using repeated load relaxation experiments. At the yield stress, σy, the activation volume varies from ~60 b³ at 77 K to ~360 b³ at 293 K and scales inversely with yield stress. With increasing plastic strain, the activation volume decreases and the trends follow the Cottrell-Stokes law, according to which the inverse activation volume should increase linearly with σ − σy (Haasen plot). This is consistent with the notion that hardening due to an increase in the density of forest dislocations is naturally associated with a decrease in the activation volume because the spacing between dislocations decreases. The values and trends in activation volume agree with theoretical predictions that treat the HEA as a high-concentration solid-solution-strengthened alloy. Lastly, these results demonstrate that this HEA deforms by the mechanisms typical of solute strengthening in FCC alloys, and thus indicate that the high compositional/structural complexity does not introduce any new intrinsic deformation mechanisms.

  17. Thermal activation parameters of plastic flow reveal deformation mechanisms in the CrMnFeCoNi high-entropy alloy

    DOE PAGES

    Laplanche, Guillaume; Bonneville, J.; Varvenne, C.; ...

    2017-10-06

    To reveal the operating mechanisms of plastic deformation in an FCC high-entropy alloy, the activation volumes in CrMnFeCoNi have been measured as a function of plastic strain and temperature between 77 K and 423 K using repeated load relaxation experiments. At the yield stress, σy, the activation volume varies from ~60 b³ at 77 K to ~360 b³ at 293 K and scales inversely with yield stress. With increasing plastic strain, the activation volume decreases and the trends follow the Cottrell-Stokes law, according to which the inverse activation volume should increase linearly with σ − σy (Haasen plot). This is consistent with the notion that hardening due to an increase in the density of forest dislocations is naturally associated with a decrease in the activation volume because the spacing between dislocations decreases. The values and trends in activation volume agree with theoretical predictions that treat the HEA as a high-concentration solid-solution-strengthened alloy. Lastly, these results demonstrate that this HEA deforms by the mechanisms typical of solute strengthening in FCC alloys, and thus indicate that the high compositional/structural complexity does not introduce any new intrinsic deformation mechanisms.

  18. Tillage and Composting Strategies to Maximize Potentially Mineralizable Nitrogen in Maize-based Cropping Systems

    USDA-ARS?s Scientific Manuscript database

    Cereal crop yields vary drastically between developed and developing nations. In developing nations, a lack of synthetic nitrogen (N) fertilizer often limits yields. Low-cost soil management strategies that increase biologically available soil organic matter can reduce farmer reliance on synthetic N...

  19. Corn response to nitrogen management under fully-irrigated vs. water-stressed conditions

    USDA-ARS?s Scientific Manuscript database

    Characterizing corn grain yield response to nitrogen (N) fertilizer rate is critical for maximizing profits, optimizing N use efficiency and minimizing environmental impacts. Although a large data base of yield response to N has been compiled for highly productive soils in the upper Midwest U.S., f...

  20. Bold-Independent Computational Entropy Assesses Functional Donut-Like Structures in Brain fMRI Images

    PubMed Central

    Peters, James F.; Ramanna, Sheela; Tozzi, Arturo; İnan, Ebubekir

    2017-01-01

    We introduce a novel method for the measurement of information level in fMRI (functional Magnetic Resonance Imaging) neural data sets, based on image subdivision in small polygons equipped with different entropic content. We show how this method, called maximal nucleus clustering (MNC), is a novel, fast and inexpensive image-analysis technique, independent from the standard blood-oxygen-level dependent signals. MNC facilitates the objective detection of hidden temporal patterns of entropy/information in zones of fMRI images generally not taken into account by the subjective standpoint of the observer. This approach befits the geometric character of fMRIs. The main purpose of this study is to provide a computable framework for fMRI that not only facilitates analyses, but also provides an easily decipherable visualization of structures. This framework commands attention because it is easily implemented using conventional software systems. In order to evaluate the potential applications of MNC, we looked for the presence of a fourth dimension's distinctive hallmarks in a temporal sequence of 2D images taken during spontaneous brain activity. Indeed, recent findings suggest that several brain activities, such as mind-wandering and memory retrieval, might take place in the functional space of a four dimensional hypersphere, which is a double donut-like structure undetectable in the usual three dimensions. We found that the Rényi entropy is higher in MNC areas than in the surrounding ones, and that these temporal patterns closely resemble the trajectories predicted by the possible presence of a hypersphere in the brain. PMID:28203153

  1. Bold-Independent Computational Entropy Assesses Functional Donut-Like Structures in Brain fMRI Images.

    PubMed

    Peters, James F; Ramanna, Sheela; Tozzi, Arturo; İnan, Ebubekir

    2017-01-01

    We introduce a novel method for the measurement of information level in fMRI (functional Magnetic Resonance Imaging) neural data sets, based on image subdivision in small polygons equipped with different entropic content. We show how this method, called maximal nucleus clustering (MNC), is a novel, fast and inexpensive image-analysis technique, independent from the standard blood-oxygen-level dependent signals. MNC facilitates the objective detection of hidden temporal patterns of entropy/information in zones of fMRI images generally not taken into account by the subjective standpoint of the observer. This approach befits the geometric character of fMRIs. The main purpose of this study is to provide a computable framework for fMRI that not only facilitates analyses, but also provides an easily decipherable visualization of structures. This framework commands attention because it is easily implemented using conventional software systems. In order to evaluate the potential applications of MNC, we looked for the presence of a fourth dimension's distinctive hallmarks in a temporal sequence of 2D images taken during spontaneous brain activity. Indeed, recent findings suggest that several brain activities, such as mind-wandering and memory retrieval, might take place in the functional space of a four dimensional hypersphere, which is a double donut-like structure undetectable in the usual three dimensions. We found that the Rényi entropy is higher in MNC areas than in the surrounding ones, and that these temporal patterns closely resemble the trajectories predicted by the possible presence of a hypersphere in the brain.
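
    The Rényi entropy compared here between MNC areas and their surroundings is the standard one-parameter family H_α = log(Σ p_i^α)/(1 − α). A minimal sketch (the function name and example distributions are illustrative, not taken from the paper) is:

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha = log(sum p_i^alpha) / (1 - alpha).

    Recovers the Shannon entropy in the limit alpha -> 1.
    """
    if any(x < 0 for x in p) or abs(sum(p) - 1.0) > 1e-9:
        raise ValueError("p must be a probability distribution")
    q = [x for x in p if x > 0]
    if abs(alpha - 1.0) < 1e-12:  # Shannon limit
        return -sum(x * math.log(x) for x in q)
    return math.log(sum(x ** alpha for x in q)) / (1.0 - alpha)

# Hypothetical intensity histograms for two image patches: a patch with
# evenly spread entropic content carries more Rényi entropy than a
# peaked one, at any order alpha.
uniform = [0.25] * 4
peaked = [0.85, 0.05, 0.05, 0.05]
assert renyi_entropy(uniform, 2.0) > renyi_entropy(peaked, 2.0)
```

    Comparing this quantity inside and outside candidate regions is one way the higher entropy of MNC areas reported above could be quantified.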

  2. Microscopic entropy of the three-dimensional rotating black hole of Bergshoeff-Hohm-Townsend massive gravity

    NASA Astrophysics Data System (ADS)

    Giribet, Gaston; Oliva, Julio; Tempo, David; Troncoso, Ricardo

    2009-12-01

    Asymptotically anti-de Sitter rotating black holes for the Bergshoeff-Hohm-Townsend massive gravity theory in three dimensions are considered. In the special case when the theory admits a unique maximally symmetric solution, apart from the mass and the angular momentum, the black hole is described by an independent “gravitational hair” parameter, which provides a negative lower bound for the mass. This bound is saturated at the extremal case, and since the temperature and the semiclassical entropy vanish, it is naturally regarded as the ground state. The absence of a global charge associated with the gravitational hair parameter reflects itself through the first law of thermodynamics in the fact that the variation of this parameter can be consistently reabsorbed by a shift of the global charges, giving further support to consider the extremal case as the ground state. The rotating black hole fits within relaxed asymptotic conditions as compared with the ones of Brown and Henneaux, such that they are invariant under the standard asymptotic symmetries spanned by two copies of the Virasoro generators, and the algebra of the conserved charges acquires a central extension. Then it is shown that Strominger’s holographic computation for general relativity can also be extended to the Bergshoeff-Hohm-Townsend theory; i.e., assuming that the quantum theory could be consistently described by a dual conformal field theory at the boundary, the black hole entropy can be microscopically computed from the asymptotic growth of the number of states according to Cardy’s formula, in exact agreement with the semiclassical result.

  3. Gene Network for Identifying the Entropy Changes of Different Modules in Pediatric Sepsis.

    PubMed

    Yang, Jing; Zhang, Pingli; Wang, Lumin

    2016-01-01

    Pediatric sepsis is a disease that threatens life of children. The incidence of pediatric sepsis is higher in developing countries due to various reasons, such as insufficient immunization and nutrition, water and air pollution, etc. Exploring the potential genes via different methods is of significance for the prevention and treatment of pediatric sepsis. This study aimed to identify potential genes associated with pediatric sepsis utilizing analysis of gene network and entropy. The mRNA expression in the blood samples collected from 20 septic children and 30 healthy controls was quantified by using Affymetrix HG-U133A microarray. Two condition-specific protein-protein interaction networks (PINs), one for the healthy control and the other one for the children with sepsis, were deduced by combining the fundamental human PINs with gene expression profiles in the two phenotypes. Subsequently, distinct modules from the two conditional networks were extracted by adopting a maximal clique-merging approach. Delta entropy (ΔS) was calculated between sepsis and control modules. Then, key genes displaying changes in gene composition were identified by matching the control and sepsis modules. Two objective modules were obtained, in which ribosomal protein RPL4 and RPL9 as well as TOP2A were probably considered as the key genes differentiating sepsis from healthy controls. According to previous reports and this work, TOP2A is the potential gene therapy target for pediatric sepsis. The relationship between pediatric sepsis and RPL4 and RPL9 needs further investigation. © 2016 The Author(s) Published by S. Karger AG, Basel.
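
    The delta entropy ΔS between matched module pairs can be sketched as follows. The abstract does not specify the exact entropy definition used, so this sketch assumes Shannon entropy over a module's normalized expression profile, with hypothetical expression values:

```python
import math

def shannon_entropy(values):
    """Shannon entropy (bits) of a module's expression profile,
    after normalizing the non-negative values into a distribution."""
    total = sum(values)
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log2(p) for p in probs)

def delta_entropy(sepsis_module, control_module):
    """Delta entropy (ΔS) between a matched sepsis/control module pair."""
    return shannon_entropy(sepsis_module) - shannon_entropy(control_module)

# Hypothetical expression levels for genes shared by a matched module pair.
control = [5.0, 5.2, 4.8, 5.1]  # fairly even expression -> higher entropy
sepsis = [9.5, 1.0, 0.8, 0.7]   # dominated by one gene  -> lower entropy
assert delta_entropy(sepsis, control) < 0
```

    Modules whose |ΔS| is large between the two condition-specific networks are then candidates for closer inspection, as with the RPL4/RPL9 and TOP2A modules above.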

  4. Quantum engine efficiency bound beyond the second law of thermodynamics.

    PubMed

    Niedenzu, Wolfgang; Mukherjee, Victor; Ghosh, Arnab; Kofman, Abraham G; Kurizki, Gershon

    2018-01-11

    According to the second law, the efficiency of cyclic heat engines is limited by the Carnot bound that is attained by engines that operate between two thermal baths under the reversibility condition whereby the total entropy does not increase. Quantum engines operating between a thermal and a squeezed-thermal bath have been shown to surpass this bound. Yet, their maximum efficiency cannot be determined by the reversibility condition, which may yield an unachievable efficiency bound above unity. Here we identify the fraction of the exchanged energy between a quantum system and a bath that necessarily causes an entropy change and derive an inequality for this change. This inequality reveals an efficiency bound for quantum engines energised by a non-thermal bath. This bound does not imply reversibility, unless the two baths are thermal. It cannot be solely deduced from the laws of thermodynamics.

  5. Thermalizing Sterile Neutrino Dark Matter

    NASA Astrophysics Data System (ADS)

    Hansen, Rasmus S. L.; Vogl, Stefan

    2017-12-01

    Sterile neutrinos produced through oscillations are a well motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. We analyze the impact of new interactions on the evolution of keV sterile neutrino dark matter in the early Universe. Based on general considerations we find a mechanism which thermalizes the sterile neutrinos after an initial production by oscillations. The thermalization of sterile neutrinos is accompanied by dark entropy production which increases the yield of dark matter and leads to a lower characteristic momentum. This resolves the growing tensions with structure formation and x-ray observations and even revives simple nonresonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy and entropy conservation describes the mechanism well.

  6. The BCC/B2 morphologies in Al xNiCoFeCr high-entropy alloys

    DOE PAGES

    Ma, Yue; Jiang, Beibei; Li, Chunling; ...

    2017-02-15

    Here, the present work primarily investigates the morphological evolution of the body-centered-cubic (BCC)/B2 phases in AlxNiCoFeCr high-entropy alloys (HEAs) with increasing Al content. It is found that the BCC/B2 coherent morphology is closely related to the lattice misfit between these two phases, which is sensitive to Al. There are two types of microscopic BCC/B2 morphologies in this HEA series: one is the weave-like morphology induced by spinodal decomposition, and the other is a microstructure of spherical disordered BCC precipitates on an ordered B2 matrix that appears in HEAs with much higher Al content. The mechanical properties, including the compressive yield strength and microhardness of the AlxNiCoFeCr HEAs, are also discussed in light of the concept of the valence electron concentration (VEC).

  7. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.

  8. Improved one-dimensional area law for frustration-free systems

    NASA Astrophysics Data System (ADS)

    Arad, Itai; Landau, Zeph; Vazirani, Umesh

    2012-05-01

    We present a new proof of the 1D area law for frustration-free systems with a constant gap, which exponentially improves the entropy bound in Hastings' 1D area law and which is tight to within a polynomial factor. For particles of dimension d, spectral gap ε > 0, and interaction strength at most J, our entropy bound is S_1D ≤ O(1)·X³ log⁸ X, where X ≡ (J log d)/ε. Our proof is completely combinatorial, combining the detectability lemma with basic tools from approximation theory. In higher dimensions, when the bipartitioning area is |∂L|, we use additional local structure in the proof and show that S ≤ O(1)·|∂L|² log⁶|∂L| · X³ log⁸ X. This is at the cusp of being nontrivial in the 2D case, in the sense that any further improvement would yield a subvolume law.

  9. Thermalizing Sterile Neutrino Dark Matter.

    PubMed

    Hansen, Rasmus S L; Vogl, Stefan

    2017-12-22

    Sterile neutrinos produced through oscillations are a well motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. We analyze the impact of new interactions on the evolution of keV sterile neutrino dark matter in the early Universe. Based on general considerations we find a mechanism which thermalizes the sterile neutrinos after an initial production by oscillations. The thermalization of sterile neutrinos is accompanied by dark entropy production which increases the yield of dark matter and leads to a lower characteristic momentum. This resolves the growing tensions with structure formation and x-ray observations and even revives simple nonresonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy and entropy conservation describes the mechanism well.

  10. Information theory lateral density distribution for Earth inferred from global gravity field

    NASA Technical Reports Server (NTRS)

    Rubincam, D. P.

    1981-01-01

    Information Theory Inference, better known as the Maximum Entropy Method, was used to infer the lateral density distribution inside the Earth. The approach assumed that the Earth consists of indistinguishable Maxwell-Boltzmann particles populating infinitesimal volume elements, and followed the standard methods of statistical mechanics (maximizing the entropy function). The GEM 10B spherical harmonic gravity field coefficients, complete to degree and order 36, were used as constraints on the lateral density distribution. The spherically symmetric part of the density distribution was assumed to be known. The lateral density variation was assumed to be small compared to the spherically symmetric part. The resulting information theory density distributions for the cases of no crust removed, 30 km of compensated crust removed, and 30 km of uncompensated crust removed all gave broad density anomalies extending deep into the mantle, but with the density contrasts being greatest towards the surface (typically ±0.004 g cm⁻³ in the first two cases and ±0.04 g cm⁻³ in the third). None of the density distributions resemble classical organized convection cells. The information theory approach may be useful in choosing Standard Earth Models, but the inclusion of seismic data into the approach appears difficult.
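
    Maximizing entropy subject to linear constraints always yields an exponential-family distribution, p_i ∝ exp(λ·f(x_i)), with the multiplier λ fixed by the constraint. As a minimal one-dimensional analogue of the procedure above (a single mean constraint standing in for the degree-36 gravity coefficients; all names and values are illustrative):

```python
import math

def maxent_with_mean(values, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution over `values` whose mean equals
    `target_mean`: p_i proportional to exp(lam * x_i), with the
    Lagrange multiplier lam found by bisection (the tilted mean is
    monotonically increasing in lam)."""
    assert min(values) < target_mean < max(values)

    def mean_at(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# When the constraint equals the unconstrained mean, maximum entropy
# returns the uniform distribution (no information beyond the constraint).
p = maxent_with_mean([0.0, 1.0, 2.0, 3.0], 1.5)
assert all(abs(pi - 0.25) < 1e-6 for pi in p)
```

    The full problem replaces the single mean constraint with the set of gravity-coefficient constraints, solving for one multiplier per coefficient, but the exponential form of the solution is the same.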

  11. Entropy and optimality in river deltas

    NASA Astrophysics Data System (ADS)

    Tejedor, Alejandro; Longjas, Anthony; Edmonds, Douglas A.; Zaliapin, Ilya; Georgiou, Tryphon T.; Rinaldo, Andrea; Foufoula-Georgiou, Efi

    2017-10-01

    The form and function of river deltas are intricately linked to the evolving structure of their channel networks, which controls how effectively deltas are nourished with sediments and nutrients. Understanding the coevolution of deltaic channels and their flux organization is crucial for guiding maintenance strategies for these systems, which are highly stressed by a range of anthropogenic activities. To date, however, a unified theory explaining how deltas self-organize to distribute water and sediment up to the shoreline remains elusive. Here, we provide evidence for an optimality principle underlying the self-organized partition of fluxes in delta channel networks. By introducing a suitable nonlocal entropy rate (nER) and by analyzing field and simulated deltas, we suggest that delta networks achieve configurations that maximize the diversity of water and sediment flux delivery to the shoreline. We thus suggest that prograding deltas attain dynamically accessible optima of flux distributions on their channel network topologies, effectively decoupling the evolutionary time scales of geomorphology and hydrology. When interpreted in terms of delta resilience, high-nER configurations reflect an increased ability to withstand perturbations. However, the distributive mechanism responsible for both diversifying flux delivery to the shoreline and dampening possible perturbations might lead to catastrophic events when those perturbations exceed certain intensity thresholds.

  12. Quantum entanglement and informational activities of biomolecules

    NASA Astrophysics Data System (ADS)

    Al-Shargi, Hanan; Berkovich, Simon

    2009-03-01

    Our model of the holographic Universe [1] explains the surprising property of quantum entanglement and reveals its biological implications. The suggested holographic mechanism handles 2D slices of the physical world as a whole. Fitting this simple holistic process into the Procrustean bed of individual particle interactions leads to the intricacies of quantum theory, with an unintelligible protrusion of distant correlations. The holographic medium imposes a dependence of quantum effects on absolute positioning. Testing this prediction for a non-exponential radioactive decay could resolutely point to outside "memory." The essence of Life is in the sophistication of macromolecules. Distinctions in the biological information processing of nucleotides in DNA and amino acids in proteins are related to the entropies of their structures. Randomness of genetic configurations, as exposed by their maximal entropy, is characteristic of passive identification rather than active storage functionality. Structural redundancy of proteins shows their operability, of which the different foldings of prions are most indicative. Folding of one prion can reshape another prion without direct contact, appearing like "quantum entanglement" or "teleportation." Testing the surmised influence of absolute orientation on prion reshaping can uncover latency effects in the "mad cow" disease. 1. Simon Berkovich, TR-GWU-CS-07-006, http://www.cs.gwu.edu/research/reports.php

  13. Random SU(2) invariant tensors

    NASA Astrophysics Data System (ADS)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation that are invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states; an average over the ensemble is carried out when computing any physical quantity. The random tensor exhibits a phenomenon known as 'concentration of measure', which states that for any bipartition the average entanglement entropy of the reduced density matrix is asymptotically maximal as the local dimensions go to infinity. We show that this phenomenon also holds when the average is taken over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction to the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situations, the number can even be smaller than 1/2, which is the subleading correction for a random state over the entire Hilbert space of tensors.

  14. Closer look at time averages of the logistic map at the edge of chaos

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Tsallis, Constantino; Beck, Christian

    2009-05-01

    The probability distribution of sums of iterates of the logistic map at the edge of chaos has been recently shown [U. Tirnakli et al., Phys. Rev. E 75, 040106(R) (2007)] to be numerically consistent with a q-Gaussian, the distribution which, under appropriate constraints, maximizes the nonadditive entropy Sq, the basis of nonextensive statistical mechanics. That analysis was based on a study of the tails of the distribution. We now check the entire distribution, in particular its central part. This is important in view of a recent q-generalization of the central limit theorem, which states that for certain classes of strongly correlated random variables the rescaled sum approaches a q-Gaussian limit distribution. We numerically investigate, for the logistic map with a parameter in a small vicinity of the critical point, under which conditions there is convergence to a q-Gaussian both in the central region and in the tail region, and find a scaling law involving the Feigenbaum constant δ. Our results are consistent with a large body of available analytical and numerical evidence that the edge of chaos is well described in terms of the entropy Sq and its associated concepts.

  15. Using Maximum Entropy to Find Patterns in Genomes

    NASA Astrophysics Data System (ADS)

    Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
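
    The abstract does not spell out the algorithm, but the maximum-entropy form it implies is that each synonymous-codon choice gets probability p(codon) ∝ exp(λ·GC(codon)): with λ = 0 the choice is fully unbiased, and a single multiplier λ tilts the whole sequence toward a GC target while staying as close to uniform as the constraint allows. A sketch under these assumptions (the codon table is truncated and all names are illustrative):

```python
import math
import random

# Synonymous codons for a few amino acids (subset of the standard genetic code).
CODONS = {
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "K": ["AAA", "AAG"],
    "G": ["GGT", "GGC", "GGA", "GGG"],
}

def gc(codon):
    """Number of G or C bases in a codon."""
    return sum(base in "GC" for base in codon)

def codon_weights(aa, lam):
    """Maximum-entropy weights over the synonymous codons of `aa`:
    p(codon) proportional to exp(lam * gc(codon))."""
    w = [math.exp(lam * gc(c)) for c in CODONS[aa]]
    z = sum(w)
    return [wi / z for wi in w]

def random_cds(protein, lam, seed=0):
    """Random coding sequence with fixed amino acid content; overall GC
    is controlled through the single multiplier lam."""
    rng = random.Random(seed)
    return "".join(
        rng.choices(CODONS[aa], weights=codon_weights(aa, lam))[0]
        for aa in protein
    )

seq = random_cds("LKGKL", lam=0.0)
assert len(seq) == 15
```

    In practice λ would be tuned (e.g. by bisection) until the expected GC content of the generated sequences matches the pre-specified target.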

  16. Phylogenetic diversity measures based on Hill numbers.

    PubMed

    Chao, Anne; Chiu, Chun-Huo; Jost, Lou

    2010-11-27

    We propose a parametric class of phylogenetic diversity (PD) measures that are sensitive to both species abundance and species taxonomic or phylogenetic distances. This work extends the conventional parametric species-neutral approach (based on 'effective number of species' or Hill numbers) to take into account species relatedness, and also generalizes the traditional phylogenetic approach (based on 'total phylogenetic length') to incorporate species abundances. The proposed measure quantifies 'the mean effective number of species' over any time interval of interest, or the 'effective number of maximally distinct lineages' over that time interval. The product of the measure and the interval length quantifies the 'branch diversity' of the phylogenetic tree during that interval. The new measures generalize and unify many existing measures and lead to a natural definition of taxonomic diversity as a special case. The replication principle (or doubling property), an important requirement for species-neutral diversity, is generalized to PD. The widely used Rao's quadratic entropy and the phylogenetic entropy do not satisfy this essential property, but a simple transformation converts each to our measures, which do satisfy the property. The proposed approach is applied to forest data for interpreting the effects of thinning.
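    As a concrete anchor for the species-neutral building block used above, here is a short sketch (ours, following the standard definition) of Hill numbers of order q; the phylogenetic measures in the paper generalize this quantity over the branches of a tree.

```python
import math

def hill_number(abundances, q):
    """Effective number of species ('Hill number') of order q.
    q = 0 gives species richness, q -> 1 gives exp(Shannon entropy),
    and q = 2 gives the inverse Simpson concentration."""
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if abs(q - 1.0) < 1e-12:
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))
```

    For a community of equally abundant species the Hill number equals the species count for every q (the replication principle in its simplest form), while skewed abundances pull the effective number below the richness as q increases.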

  17. Maximized exoEarth candidate yields for starshades

    NASA Astrophysics Data System (ADS)

    Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.

    2016-10-01

    The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or η⊕ ≳ 0.3.

  18. Quantitative LC-MS of polymers: determining accurate molecular weight distributions by combined size exclusion chromatography and electrospray mass spectrometry with maximum entropy data processing.

    PubMed

    Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher

    2008-09-15

    We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.

  19. Depolymerization of cellulose into high-value chemicals by using synergy of zinc chloride hydrate and sulfate ion promoted titania catalyst.

    PubMed

    Wei, Weiqi; Wu, Shubin

    2017-10-01

    Experiments on cellulose depolymerization by the synergy of zinc chloride hydrate (ZnCl₂·RH₂O) and a sulfated titania catalyst (SO₄²⁻/TiO₂) were investigated in this study. The results showed that the introduction of sulfate into the TiO₂ significantly enhanced the acid amount of the catalyst, especially at Brønsted acid sites, which is beneficial for subsequent cellulose depolymerization. Only a narrow water composition range of the ZnCl₂·RH₂O hydrate, specifically 3.0 ≤ R ≤ 4.0, can dissolve cellulose, which finally yielded cellulose with low crystallinity and a weak intrachain and interchain hydrogen-bond network. Coupling the ZnCl₂·RH₂O hydrate and the SO₄²⁻/TiO₂ catalyst in a mixed reaction system promoted cellulose depolymerization, and the products could be adjusted by controlling the reaction conditions: low temperatures (80-100°C) were beneficial for glucose formation (maximal yield 50.5%), whereas high temperatures (120-140°C) favored the production of levulinic acid (maximal yield 43.1%). Besides, the addition of an organic co-solvent made HMF the main product (maximal yield 38.3%). Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Corn nitrogen fertilization rate tools compared over eight midwest states

    USDA-ARS?s Scientific Manuscript database

    Publicly-available nitrogen (N) rate recommendation tools are utilized to help maximize yield in corn production. These tools often fail, either when N is over-applied, resulting in excess N being lost to the environment, or when N is under-applied, resulting in decreased yield and economic returns...

  1. Nitrification Enhancement through pH Control with Rotating Biological Contactors

    DTIC Science & Technology

    1981-09-01

    source for growth (24). The generation of bacterial biomass per unit of ammonia oxidized, or yield, is quite small. The total yield for both Nitrosomonas...6.7 had the lowest performance level throughout most of the 69-day study and also developed the least amount of biofilm. The maximum ammonia-oxidation

  2. Antioxidants from slow pyrolysis bio-oil of birch wood: Application for biodiesel and biobased lubricants

    USDA-ARS?s Scientific Manuscript database

    Birch wood was slowly pyrolyzed to produce bio-oil and biochar. Slow pyrolysis conditions including reaction temperature, residence time, and particle size of the feed were optimized to maximize bio-oil yield. Particle size had an insignificant effect, whereas yields of up to 56% were achieved using...

  3. Conformational Entropy from Slowly Relaxing Local Structure Analysis of 15N-H Relaxation in Proteins: Application to Pheromone Binding to MUP-I in the 283-308 K Temperature Range.

    PubMed

    Žídek, Lukáš; Meirovitch, Eva

    2017-09-21

    The slowly relaxing local structure (SRLS) approach is applied to ¹⁵N-H relaxation from the major urinary protein I (MUP-I) and its complex with the pheromone 2-sec-butyl-4,5-dihydrothiazole. The objective is to elucidate dynamics and binding-induced changes in conformational entropy. Experimental data acquired previously in the 283-308 K temperature range are used. The N-H bond is found to reorient globally with correlation time τ₁,₀ and locally with correlation time τ₂,₀, where τ₁,₀ ≫ τ₂,₀. The local motion is restricted by the potential u = -c₀²D₀₀², where D₀₀² is the Wigner rotation matrix element for L = 2, K = 0, and c₀² evaluates the strength of the potential. u straightforwardly yields the order parameter, ⟨D₀₀²⟩, and the conformational entropy, S_k, both given by P_eq = exp(-u). The deviation of the local ordering/local diffusion axis from the N-H bond, given by the angle β, is also determined. We find that c₀² ≅ 18 ± 4 and τ₂,₀ = 0-170 ps for ligand-free MUP-I, whereas c₀² ≅ 15 ± 4 and τ₂,₀ = 20-270 ps for ligand-bound MUP-I; β is in the 0-10° range. c₀² and τ₂,₀ decrease, whereas β increases, when the temperature is increased from 283 to 308 K. Thus, SRLS provides physically well-defined structure-related (c₀² and ⟨D₀₀²⟩), motion-related (τ₂,₀), geometry-related (β), and binding-related (S_k) local parameters, and their temperature dependences. Intriguingly, upon pheromone binding the conformational entropy of MUP-I decreases at high temperature and increases at low temperature. The very same experimental data were analyzed previously with the model-free (MF) method, which yielded "global" (in this context, "relating to the entire 283-308 K range") amplitude (S²) and rate (τ_e) of the local motion, and a phenomenological exchange term (R_ex). S² is found to decrease (implying implicitly a "global" increase in S_k) upon pheromone binding.
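    To make these quantities concrete, here is a numerical sketch (our own illustration, not the SRLS fitting code) of an axial orienting potential of the form u(theta) = -c0^2 * D00(theta): averaging over the equilibrium density P_eq proportional to exp(-u) * sin(theta) yields the order parameter and a conformational entropy in units of k_B. A stronger potential raises the ordering and lowers the entropy, mirroring the trends discussed above.

```python
import math

def d00(theta):
    """Wigner rotation matrix element D^2_00(theta) = (3 cos^2(theta) - 1) / 2."""
    return 0.5 * (3.0 * math.cos(theta) ** 2 - 1.0)

def srls_averages(c0, n=20000):
    """Midpoint-rule averages over the axial potential u(theta) = -c0^2 * d00(theta),
    with equilibrium density P_eq(theta) proportional to exp(-u) * sin(theta).
    Returns (order parameter <D^2_00>, entropy S = <u> + ln Z in units of k_B,
    up to an additive constant)."""
    dth = math.pi / n
    z = num = ubar = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        u = -c0 ** 2 * d00(th)
        w = math.exp(-u) * math.sin(th) * dth
        z += w
        num += w * d00(th)
        ubar += w * u
    return num / z, ubar / z + math.log(z)
```

    With c0 = 0 the distribution is isotropic, the order parameter vanishes and the entropy is maximal; with c0^2 around 16 the ordering approaches 1 and the entropy drops sharply.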

  4. Thermodynamics of Aryl-Dihydroxyphenyl-Thiadiazole Binding to Human Hsp90

    PubMed Central

    Kazlauskas, Egidijus; Petrikaitė, Vilma; Michailovienė, Vilma; Revuckienė, Jurgita; Matulienė, Jurgita; Grinius, Leonas; Matulis, Daumantas

    2012-01-01

    The design of specific inhibitors against the Hsp90 chaperone and other enzymes relies on a detailed and correct understanding of both the thermodynamics of inhibitor binding and the structural features of the protein-inhibitor complex. Here we present a detailed thermodynamic study of the binding of an aryl-dihydroxyphenyl-thiadiazole inhibitor series to the recombinant human Hsp90 alpha isozyme. The inhibitors are highly potent, with intrinsic Kd values of approximately 1 nM as determined by isothermal titration calorimetry (ITC) and a thermal shift assay (TSA). Dissection of protonation contributions yielded the intrinsic thermodynamic parameters of binding: the enthalpy, entropy, Gibbs free energy, and heat capacity. The differences in binding thermodynamic parameters across the series of inhibitors revealed the contributions of individual functional groups, thus providing insight into the molecular reasons for improved or diminished binding efficiency. Inhibitor binding to Hsp90 alpha depended primarily on a large favorable enthalpic contribution combined with a smaller favorable entropic contribution, suggesting that binding was both enthalpically and entropically optimized. The enthalpy-entropy compensation phenomenon was highly evident when comparing the inhibitor binding enthalpies and entropies. This study illustrates how detailed thermodynamic analysis helps to understand the energetic reasons for binding efficiency and to develop more potent inhibitors that could be applied for therapeutic use as Hsp90 inhibitors. PMID:22655030

  5. Information Theory to Probe Intrapartum Fetal Heart Rate Dynamics

    NASA Astrophysics Data System (ADS)

    Granero-Belinchon, Carlos; Roux, Stéphane; Abry, Patrice; Doret, Muriel; Garnier, Nicolas

    2017-11-01

    Intrapartum fetal heart rate (FHR) monitoring constitutes a reference tool in clinical practice to assess the baby's health status and to detect fetal acidosis. It is usually analyzed by visual inspection grounded on FIGO criteria. Characterization of intrapartum FHR temporal dynamics remains a challenging task and continuously receives academic research efforts. Complexity measures, often implemented with tools referred to as Approximate Entropy (ApEn) or Sample Entropy (SampEn), have regularly been reported as significant features for intrapartum FHR analysis. We explore how Information Theory, and especially auto mutual information (AMI), is connected to ApEn and SampEn and can be used to probe FHR dynamics. Applied to a large (1404 subjects) and documented database of FHR data, collected in a French academic hospital, it is shown that i) auto mutual information outperforms ApEn and SampEn for acidosis detection in the first stage of labor and continues to yield the best performance in the second stage; ii) Shannon entropy increases as labor progresses, and is always much larger in the second stage; iii) babies suffering from fetal acidosis additionally show more structured temporal dynamics than healthy ones, and this progressive structuring can be used for early acidosis detection.
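    Of the complexity measures named above, Sample Entropy has a compact definition worth making explicit. The sketch below is a generic textbook implementation (not the study's pipeline): it counts template matches of length m and m+1 under a Chebyshev tolerance r*SD and returns the negative log of their ratio; more regular traces give relatively more length-(m+1) matches and hence lower SampEn.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy: -log(A / B), where B is the number of template pairs of
    length m within Chebyshev tolerance r * SD, and A the same for length m + 1.
    Self-matches are excluded; lower values indicate more regular dynamics."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def matches(mm):
        c = 0
        for i in range(n - mm + 1):
            for j in range(i + 1, n - mm + 1):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    c += 1
        return c

    return -math.log(matches(m + 1) / matches(m))
```

    A strictly periodic trace scores lower than an irregular one of the same amplitude, which is the property exploited when SampEn (or AMI) is used as an acidosis feature.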

  6. Simultaneous Strength-Ductility Enhancement of a Nano-Lamellar AlCoCrFeNi2.1 Eutectic High Entropy Alloy by Cryo-Rolling and Annealing.

    PubMed

    Bhattacharjee, T; Wani, I S; Sheikh, S; Clark, I T; Okawa, T; Guo, S; Bhattacharjee, P P; Tsuji, N

    2018-02-19

    A nano-lamellar (L1₂ + B2) AlCoCrFeNi2.1 eutectic high entropy alloy (EHEA) was processed by cryo-rolling and annealing. The EHEA developed a novel hierarchical microstructure featuring fine lamellar regions, consisting of FCC lamellae filled with ultrafine FCC grains (average size ~200-250 nm) and B2 lamellae, and coarse non-lamellar regions, consisting of ultrafine FCC grains (average size ~200-250 nm), a few coarse recrystallized FCC grains, and a rather coarse unrecrystallized B2 phase (~2.5 µm). This complex and hierarchical microstructure originated from differences in strain partitioning amongst the constituent phases, which affected the driving force for recrystallization. The hierarchical microstructure of the cryo-rolled and annealed material resulted in a simultaneous enhancement in strength (Yield Strength/YS: 1437 ± 26 MPa, Ultimate Tensile Strength/UTS: 1562 ± 33 MPa) and ductility (elongation to failure/e_f ~ 14 ± 1%) as compared to the as-cast as well as the cold-rolled and annealed materials. The present study demonstrates for the first time that cryo-deformation and annealing could be a novel microstructural design strategy for overcoming the strength-ductility trade-off in multiphase high entropy alloys.

  7. Query construction, entropy, and generalization in neural-network models

    NASA Astrophysics Data System (ADS)

    Sollich, Peter

    1994-05-01

    We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and "noninvertible" versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.

  8. Practical Aspects of Stabilized FEM Discretizations of Nonlinear Conservation Law Systems with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Saini, Subhash (Technical Monitor)

    1999-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem, with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations, assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.

  9. Entrofy: Participant Selection Made Easy

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela

    2016-03-01

    Selecting participants for a workshop out of a much larger applicant pool can be a difficult task, especially when the goal is to diversify over a range of criteria (e.g., academic seniority, research field, skill level, gender, etc.). In this talk I present our tool, Entrofy, aimed at aiding organizers in this task. Entrofy is an open-source tool using a maximum entropy-based algorithm that aims to select a set of participants out of the applicant pool such that a pre-defined range of criteria is globally maximized. This approach allows for a potentially more transparent and less biased selection process while encouraging organizers to think deeply about the goals and the process of their participant selection.
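    The idea can be sketched generically as follows (our own simplified greedy illustration, not Entrofy's actual algorithm or API): score a candidate cohort by how well each attribute's attained fraction covers its target fraction, and repeatedly add the applicant whose inclusion most improves the score.

```python
def coverage(selected, targets, k):
    """Score: for each attribute, how much of its target fraction the current
    cohort attains (capped at the target), summed over attributes."""
    score = 0.0
    for attr, t in targets.items():
        frac = sum(attr in cand for cand in selected) / k
        score += min(frac, t)
    return score

def greedy_select(candidates, targets, k):
    """Greedily grow a k-person cohort, always adding the applicant whose
    inclusion most improves coverage of the target fractions."""
    selected, pool = [], list(candidates)
    for _ in range(k):
        best = max(pool, key=lambda c: coverage(selected + [c], targets, k))
        selected.append(best)
        pool.remove(best)
    return selected
```

    On a pool split between, say, junior biologists and senior physicists with 50% targets on each attribute, the greedy procedure returns a balanced cohort, which is the qualitative behavior an organizer wants from such a tool.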

  10. Mass-deformed ABJM and black holes in AdS4

    NASA Astrophysics Data System (ADS)

    Bobev, Nikolay; Min, Vincent S.; Pilch, Krzysztof

    2018-03-01

    We find a class of new supersymmetric dyonic black holes in four-dimensional maximal gauged supergravity which are asymptotic to the SU(3) × U(1) invariant AdS4 Warner vacuum. These black holes can be embedded in eleven-dimensional supergravity where they describe the backreaction of M2-branes wrapped on a Riemann surface. The holographic dual description of these supergravity backgrounds is given by a partial topological twist on a Riemann surface of a three-dimensional N=2 SCFT that is obtained by a mass-deformation of the ABJM theory. We compute explicitly the topologically twisted index of this SCFT and show that it accounts for the entropy of the black holes.

  11. Deep neural network and noise classification-based speech enhancement

    NASA Astrophysics Data System (ADS)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and a Deep Neural Network (DNN) was proposed. A Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. The DNN was used to model the relationship between the noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. The GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. The noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
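    As one concrete ingredient, the spectrum-entropy VAD cue can be sketched as follows (a generic illustration under our own assumptions, not the paper's implementation): a flat, noise-like magnitude spectrum has normalized entropy near 1, while a frame that concentrates energy in a few bins (e.g. voiced speech) scores much lower.

```python
import math
import cmath

def spectral_entropy(frame):
    """Normalized Shannon entropy of the frame's power spectrum (naive DFT).
    Values near 1 indicate a flat, noise-like spectrum; values near 0 a
    strongly peaked (e.g. voiced) one."""
    n = len(frame)
    power = []
    for k in range(n // 2):
        s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    p = [v / total for v in power]
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(len(p))
```

    A VAD would threshold this value per frame: an impulse (perfectly flat spectrum) scores exactly 1, whereas a pure tone scores near 0.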

  12. Evidence for criticality in financial data

    NASA Astrophysics Data System (ADS)

    Ruiz, G.; de Marcos, A. F.

    2018-01-01

    We provide evidence that cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization uncover a critical behavior for different time scales Δt. Such cumulative distributions, in accordance with a variety of complex (and financial) systems, can be modeled by the cumulative distribution functions of q-Gaussians, the distribution that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely (q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.

  13. Time evolution of complexity in Abelian gauge theories

    NASA Astrophysics Data System (ADS)

    Hashimoto, Koji; Iizuka, Norihiro; Sugishita, Sotaro

    2017-12-01

    Quantum complexity is conjectured to probe the inside of black hole horizons (or wormholes) via the gauge/gravity correspondence. In order to have a better understanding of this correspondence, we study the time evolution of complexities for Abelian pure gauge theories. For this purpose, we discretize the U(1) gauge group as Z_N and the continuum spacetime as a lattice spacetime, which enables us to define a universal gate set for these gauge theories and to evaluate the time evolution of the complexities explicitly. We find that to achieve a large complexity ~exp(entropy), which is one of the conjectured criteria necessary to have a dual black hole, the Abelian gauge theory needs to be maximally nonlocal.

  14. Limits on entanglement from rotationally invariant scattering of spin systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harshman, N. L.

    2006-06-15

    This paper investigates the dynamical generation of entanglement in scattering systems, in particular two spin systems that interact via rotationally invariant scattering. The spin degrees of freedom of the in states are assumed to be in unentangled, pure states, as defined by the entropy of entanglement. Because of the restriction of rotationally symmetric interactions, perfectly entangling S matrices, i.e., those that lead to a maximally entangled out state, only exist for a certain class of separable in states. Using Clebsch-Gordan coefficients for the rotation group, the scattering phases that determine the S matrix are determined for the case of spin systems with σ = 1/2, 1, and 3/2.

  15. Optimizing cosmological surveys in a crowded market

    NASA Astrophysics Data System (ADS)

    Bassett, Bruce A.

    2005-04-01

    Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as Shannon entropy gain, which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
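    The Shannon-entropy-gain figure of merit admits a closed form for Gaussian parameter errors, sketched below (our own illustration of the standard formula, not the IPSO code): the gain is half the log-ratio of prior to posterior covariance determinants, so halving each of two parameter errors yields 2 ln 2 nats.

```python
import math

def gaussian_entropy(cov):
    """Differential entropy (nats) of a 2-parameter Gaussian:
    0.5 * ln((2*pi*e)^2 * det(cov))."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 0.5 * math.log((2.0 * math.pi * math.e) ** 2 * det)

def entropy_gain(prior_cov, post_cov):
    """Shannon information gained when a survey shrinks the parameter covariance
    from prior_cov to post_cov; equals 0.5 * ln(det prior / det post)."""
    return gaussian_entropy(prior_cov) - gaussian_entropy(post_cov)
```

    A survey design can then be ranked by the entropy gain its forecast covariance delivers over the current prior, which is precisely the kind of figure of merit IPSO extremizes.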

  16. Fewer not more leaves - Key to obtaining the needed jump in crop yield potential and water use efficiency

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Kumar, P.; Long, S.

    2013-12-01

    World food and feed supply needs to increase by 75% by 2050 to meet the increasing demands of our growing population. Soybean, the world's fourth most important crop in terms of total production at 250 million Mt/yr, is a key protein source and, together with rice and wheat, is experiencing declining global yield increases year on year. At present rates of improvement, 2050 targets cannot be reached without new innovations. In this study we demonstrate an innovative approach that could provide a yield jump. While natural selection favors individual plants that maximize leaf production, so as to maximize light interception and shade competitors, the presence of this trait in domestic crops could be disadvantageous. In addition, rising CO2 causes increased leaf production, further exacerbating the problem. Here, we show by mathematical model and field experiment that a modern cultivar growing at the center of US soy cultivation produces too many leaves, and that reduction to an optimal level would increase yield. Our model results indicate that an LAI of 3.5 and 3.8 produces maximal rates of net canopy assimilation under ambient and elevated CO2 conditions, respectively. However, observed peak LAI values are 6.9 and 7.5 under ambient and elevated CO2 conditions, respectively. This results in an NPP loss of 30% and 20% under ambient and elevated CO2 conditions, respectively. Furthermore, the optimal LAI results in a decreased transpiration of up to 30%, thus increasing water use efficiency. We show that, as LAI increases, the tradeoff between diminishing daytime gains in NPP and increasing losses to respiration is responsible for this effect. By designing a more optimal canopy, we can increase NPP, and this potentially translates to increased seed yield.
    To test this model result, we perform canopy manipulation experiments on soybean plants, in which we artificially decrease LAI by periodically removing young and emerging leaves throughout the growing season (after pod onset), and measure the seed yield under ambient and elevated CO2 conditions. Our experimental results show that an LAI reduction of 0.5 results in an increased seed yield of 8.1%, validating our model results. We propose that, by achieving a stronger LAI reduction, we can improve seed yields by up to 24%, providing the much needed jump in yield to achieve future food security.

  17. The best and worst of corn nitrogen rate recommendation tools used in the Midwest

    USDA-ARS?s Scientific Manuscript database

    Publicly-available nitrogen (N) rate recommendation tools are utilized to help maximize yield in corn production. These tools often fail when N is over-applied, resulting in excess N being lost to the environment, or when N is under-applied, resulting in decreased yield and economic returns. The p...

  18. Leaf and canopy scale drivers of genotypic variation in soybean response to elevated carbon dioxide concentration

    USDA-ARS?s Scientific Manuscript database

    The atmospheric [CO2] in which crops grow today is greater than at any point in their domestication history, and represents an opportunity for positive effects on seed yield that can counteract the negative effects of greater heat and drought this century. In order to maximize yields under future at...

  19. 76 FR 50143 - Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic; Reef Fish Fishery of the Gulf of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ... Science and Statistical Committee (SSC) recommended that the red snapper total allowable catch (TAC) be... the optimum yield for the fishery, thus enhancing social and economic benefits to the fishery. DATES... achieve the optimum yield for the fishery, thereby maximizing the social and economic benefits for...

  20. Response Surface Methodology Optimization of Ultrasonic-Assisted Extraction of Acer Truncatum Leaves for Maximal Phenolic Yield and Antioxidant Activity.

    PubMed

    Yang, Lingguang; Yin, Peipei; Fan, Hang; Xue, Qiang; Li, Ke; Li, Xiang; Sun, Liwei; Liu, Yujun

    2017-02-04

    This study is the first to report the use of response surface methodology to improve the phenolic yield and antioxidant activity of Acer truncatum leaf extracts (ATLs) obtained by ultrasonic-assisted extraction. The phenolic composition of ATLs extracted under the optimized conditions was characterized by UPLC-QTOF-MS/MS. Solvent and extraction time were selected based on preliminary experiments, and a four-factor, three-level central composite design was conducted to optimize solvent concentration (X₁), material-to-liquid ratio (X₂), and ultrasonic temperature (X₃) and power (X₄) for an optimal total phenol yield (Y₁) and DPPH• antioxidant activity (Y₂). The results showed that the optimal combination was ethanol:water (v:v) 66.21%, material-to-liquid ratio 1:15.31 g/mL, ultrasonic bath temperature 60 °C, power 267.30 W, and time 30 min with three extractions, giving a maximal total phenol yield of 7593.62 mg gallic acid equivalent/100 g d.w. and a maximal DPPH• antioxidant activity of 74,241.61 μmol Trolox equivalent/100 g d.w. Furthermore, 22 phenolics were identified for the first time in the ATL extract obtained under the optimized conditions, indicating that gallates, gallotannins, quercetin, myricetin and chlorogenic acid derivatives were the main phenolic components of ATL. Moreover, a gallotannin pathway in ATL, from gallic acid to penta-O-galloylglucoside, was proposed. All these results provide practical information aimed at the full utilization of phenolics in ATL, together with fundamental knowledge for further research.
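    In its simplest one-factor form, the response-surface workflow above reduces to fitting a quadratic surface and reading off its stationary point. The sketch below (our own generic illustration, not the study's model) fits y = b0 + b1*x + b2*x^2 by least squares and locates the optimum at x* = -b1 / (2*b2).

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    a = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        for i in range(3):
            rhs[i] += row[i] * y
            for j in range(3):
                a[i][j] += row[i] * row[j]
    m = [a[i] + [rhs[i]] for i in range(3)]  # augmented matrix
    for c in range(3):  # forward elimination
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, 3):
            f = m[r][c] / m[c][c]
            for k in range(c, 4):
                m[r][k] -= f * m[c][k]
    b = [0.0] * 3
    for c in (2, 1, 0):  # back substitution
        b[c] = (m[c][3] - sum(m[c][k] * b[k] for k in range(c + 1, 3))) / m[c][c]
    return b

def stationary_point(b):
    """x* = -b1 / (2*b2): the maximum of the fitted surface when b2 < 0."""
    return -b[1] / (2.0 * b[2])
```

    A multi-factor central composite design generalizes this to a quadratic in several variables, with the optimum found from the fitted second-order model in the same way.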

  1. Hydrodistillation extraction time effect on essential oil yield, composition, and bioactivity of coriander oil.

    PubMed

    Zheljazkov, Valtcho D; Astatkie, Tess; Schlegel, Vicki

    2014-01-01

    Coriander (Coriandrum sativum L.) is a major essential oil crop grown throughout the world. Coriander essential oil is extracted from coriander fruits via hydrodistillation, with the industry using 180-240 min of distillation time (DT), but the optimum DT for maximizing essential oil yield, constituent composition, and antioxidant activity is not known. This research was conducted to determine the effect of DT on coriander oil yield, composition, and bioactivity. The results show that essential oil yield at shorter DT was low and generally increased with increasing DT, with the maximum yields achieved at DT between 40 and 160 min. The concentrations of the low-boiling-point essential oil constituents α-pinene, camphene, β-pinene, myrcene, para-cymene, limonene, and γ-terpinene were higher at shorter DT (< 2.5 min) and decreased with increasing DT; the trend reversed for the high-boiling-point constituents geraniol and geranyl acetate. The concentration of the major essential oil constituent, linalool, was 51% at a DT of 1.15 min and increased steadily to 68% with increasing DT. In conclusion, a 40 min DT is sufficient to maximize the yield of essential oil, and different DTs can be used to obtain essential oils with differential composition. Antioxidant capacity was affected by the DT, with 20 and 240 min DT showing higher antioxidant activity. Comparisons of coriander essential oil composition must consider the length of the DT.

  2. Non-equilibrium thermodynamics, maximum entropy production and Earth-system evolution.

    PubMed

    Kleidon, Axel

    2010-01-13

    The present-day atmosphere is in a unique state far from thermodynamic equilibrium. This uniqueness is for instance reflected in the high concentration of molecular oxygen and the low relative humidity in the atmosphere. Given that the concentration of atmospheric oxygen has likely increased throughout Earth-system history, we can ask whether this trend can be generalized to a trend of Earth-system evolution that is directed away from thermodynamic equilibrium, why we would expect such a trend to take place and what it would imply for Earth-system evolution as a whole. The justification for such a trend could be found in the proposed general principle of maximum entropy production (MEP), which states that non-equilibrium thermodynamic systems maintain steady states at which entropy production is maximized. Here, I justify and demonstrate this application of MEP to the Earth at the planetary scale. I first describe the non-equilibrium thermodynamic nature of Earth-system processes and distinguish processes that drive the system's state away from equilibrium from those that are directed towards equilibrium. I formulate the interactions among these processes from a thermodynamic perspective and then connect them to a holistic view of the planetary thermodynamic state of the Earth system. In conclusion, non-equilibrium thermodynamics and MEP have the potential to provide a simple and holistic theory of Earth-system functioning. This theory can be used to derive overall evolutionary trends of the Earth's past, identify the role that life plays in driving thermodynamic states far from equilibrium, identify habitability in other planetary environments and evaluate human impacts on Earth-system functioning. This journal is © 2010 The Royal Society

  3. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  4. On Use of Multi-Chambered Fission Detectors for In-Core, Neutron Spectroscopy

    NASA Astrophysics Data System (ADS)

    Roberts, Jeremy A.

    2018-01-01

    Presented is a short, computational study on the potential use of multi-chambered fission detectors for in-core, neutron spectroscopy. Motivated by the development of very small fission chambers at CEA in France and at Kansas State University in the U.S., it was assumed in this preliminary analysis that devices can be made small enough to avoid flux perturbations and that uncertainties related to measurements can be ignored. It was hypothesized that a sufficient number of chambers with unique reactants can act as a real-time, foil-activation experiment. An unfolding scheme based on maximizing (Shannon) entropy was used to produce a flux spectrum from detector signals that requires no prior information. To test the method, integral detector responses were generated for single-isotope detectors of various Th, U, Np, Pu, Am, and Cs isotopes using a simplified pressurized-water reactor spectrum and flux-weighted microscopic fission cross sections, in the WIMS-69 multigroup format. An unfolded spectrum was found from subsets of these responses that had a maximum entropy while reproducing the responses considered and summing to one (that is, they were normalized). Several nuclide subsets were studied, and, as expected, the results indicate inclusion of more nuclides leads to better spectra but with diminishing improvements, with the best-case spectrum having an average, relative, group-wise error of approximately 51%. Furthermore, spectra found from minimum-norm and Tikhonov-regularization inversion were of lower quality than the maximum entropy solutions. Finally, the addition of thermal-neutron filters (here, Cd and Gd) provided substantial improvement over unshielded responses alone. The results, as a whole, suggest that in-core, neutron spectroscopy is at least marginally feasible.
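
    The entropy-maximizing unfolding step lends itself to a compact sketch: maximize the Shannon entropy of a normalized group flux subject to reproducing the measured detector signals. Everything numeric below (the 3×5 response matrix, the five-group structure) is an invented toy problem, not the WIMS-69 data of the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: 3 detectors, 5 energy groups. The response matrix and true
# spectrum are invented for illustration -- not the study's data.
rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=(3, 5))            # assumed response matrix
phi_true = np.array([0.10, 0.30, 0.35, 0.20, 0.05])
c = R @ phi_true                                  # simulated detector signals

def neg_entropy(p):
    # Negative Shannon entropy; the clip avoids log(0) on the simplex boundary.
    p = np.clip(p, 1e-12, None)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: R @ p - c},      # reproduce the responses
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # normalized spectrum
]
res = minimize(neg_entropy, np.full(5, 0.2), method="SLSQP",
               bounds=[(0.0, 1.0)] * 5, constraints=constraints)
phi = res.x   # maximum-entropy spectrum consistent with the signals
```

    With fewer detectors than groups the constraints leave a whole family of feasible spectra, and the entropy term picks the least-informative member; that is the sense in which no prior information is required.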

  5. Accurate Image Analysis of the Retina Using Hessian Matrix and Binarisation of Thresholded Entropy with Application of Texture Mapping

    PubMed Central

    Yin, Xiaoxia; Ng, Brian W-H; He, Jing; Zhang, Yanchun; Abbott, Derek

    2014-01-01

    In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest in the area of diagnostics for eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially, with a different threshold to incorporate the central reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy rates for the DRIVE images and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature. PMID:24781033
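
    The "threshold that maximizes entropy" used for binarisation can be sketched as a Kapur-style search over candidate thresholds; the 256-bin histogram and the synthetic bimodal image below are our assumptions, not the paper's data.

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur-style threshold: choose t maximizing the sum of the Shannon
    entropies of the background and foreground histograms (a sketch of the
    binarisation step; the bin count is our choice, not the paper's)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        pb, pf = P[t], 1.0 - P[t]
        if pb <= 0 or pf <= 0:
            continue
        b = p[: t + 1][p[: t + 1] > 0] / pb      # background distribution
        f = p[t + 1 :][p[t + 1 :] > 0] / pf      # foreground distribution
        h = -np.sum(b * np.log(b)) - np.sum(f * np.log(f))
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Demo on a synthetic bimodal "image" (not retinal data): the threshold
# should land between the two intensity modes.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 5, 5000),
                      rng.normal(190, 5, 5000)]).clip(0, 255)
t = max_entropy_threshold(img)
```

    Running the same search with a different (e.g. lower) threshold target is how the paper re-binarizes the original image to capture central-reflex vessels.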

  6. Cross-Approximate Entropy parallel computation on GPUs for biomedical signal analysis. Application to MEG recordings.

    PubMed

    Martínez-Zarzuela, Mario; Gómez, Carlos; Díaz-Pernas, Francisco Javier; Fernández, Alberto; Hornero, Roberto

    2013-10-01

    Cross-Approximate Entropy (Cross-ApEn) is a useful measure to quantify the statistical dissimilarity of two time series. In spite of the advantage of Cross-ApEn over its one-dimensional counterpart (Approximate Entropy), only a few studies have applied it to biomedical signals, mainly due to its high computational cost. In this paper, we propose a fast GPU-based implementation of Cross-ApEn that makes its use over large amounts of multidimensional data feasible. The scheme followed is fully scalable, thus maximizing the use of the GPU regardless of the number of neural signals being processed. The approach consists of processing many trials or epochs simultaneously, independently of their origin. In the case of MEG data, these trials can come from different input channels or subjects. The proposed implementation achieves an average speedup greater than 250× over a CPU parallel version running on a six-core processor. A dataset of 30 subjects containing 148 MEG channels (49 epochs of 1024 samples per channel) can be analyzed with our development in about 30 min. The same processing takes 5 days on six cores and 15 days on a single core. The speedup is much larger when compared with a basic sequential Matlab® implementation, which would need 58 days per subject. To our knowledge, this is the first reported computation of the Cross-ApEn measure on GPUs. This study demonstrates that this hardware is, to date, the best option for the signal processing of biomedical data with Cross-ApEn. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
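
    For reference, the measure itself can be sketched in plain NumPy (the paper's contribution is the GPU parallelization, which this CPU sketch does not attempt; m = 2 and r = 0.2 × SD are assumed conventional parameter choices):

```python
import numpy as np

def cross_apen(u, v, m=2, r=0.2):
    """Cross-Approximate Entropy of two equal-length series (plain NumPy
    reference sketch; m = 2 and r = 0.2 x SD are conventional choices)."""
    u = (u - u.mean()) / u.std()   # normalize, as is customary for Cross-ApEn
    v = (v - v.mean()) / v.std()
    def phi(mm):
        n = len(u) - mm + 1
        U = np.array([u[i:i + mm] for i in range(n)])
        V = np.array([v[i:i + mm] for i in range(n)])
        # Chebyshev distance between every u-template and every v-template.
        d = np.max(np.abs(U[:, None, :] - V[None, :, :]), axis=2)
        C = (d <= r).mean(axis=1)
        return np.log(C).mean()    # assumes every template finds some match
    return phi(m) - phi(m + 1)

# Self-comparison of a smooth signal gives a small, finite value.
x = np.sin(np.linspace(0, 8 * np.pi, 300))
value = cross_apen(x, x)
```

    The all-pairs distance matrix inside `phi` is exactly the embarrassingly parallel kernel that makes the measure a good fit for GPUs.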

  7. Multivariate Approach for Alzheimer's Disease Detection Using Stationary Wavelet Entropy and Predator-Prey Particle Swarm Optimization.

    PubMed

    Zhang, Yudong; Wang, Shuihua; Sui, Yuxiu; Yang, Ming; Liu, Bin; Cheng, Hong; Sun, Junding; Jia, Wenjuan; Phillips, Preetha; Gorriz, Juan Manuel

    2017-07-17

    The number of patients with Alzheimer's disease is increasing rapidly every year. Scholars often use computer vision and machine learning methods to develop automatic diagnosis systems. In this study, we developed a novel machine learning system that makes diagnoses automatically from brain magnetic resonance images. First, the brain images were preprocessed, including skull stripping and spatial normalization. Second, one axial slice was selected from the volumetric image, and stationary wavelet entropy (SWE) was applied to extract texture features. Third, a single-hidden-layer neural network was used as the classifier. Finally, a predator-prey particle swarm optimization was proposed to train the weights and biases of the classifier. Our method used a 4-level decomposition and yielded 13 SWE features. The classification yielded an overall accuracy of 92.73±1.03%, a sensitivity of 92.69±1.29%, and a specificity of 92.78±1.51%. The area under the curve was 0.95±0.02. Additionally, the method takes only 0.88 s to identify a subject in the online stage, once its volumetric image has been preprocessed. In terms of classification performance, our method outperforms 10 state-of-the-art approaches as well as human observers. Therefore, the proposed method is effective in the detection of Alzheimer's disease.

  8. Dietary protein quality and quantity affect lactational responses to corn distillers grains: a meta-analysis.

    PubMed

    Hollmann, M; Allen, M S; Beede, D K

    2011-04-01

    Diet fermentability influences lactational responses to feeding corn distillers grains (CDG) to dairy cows. However, some measures of diet fermentability are inherently related to the concentration and characteristics of corn-based ingredients in the ration. Corn-based feeds have poor protein quality and are unable to meet the essential AA requirements of lactating cows. We conducted a meta-analysis of treatment means (n=44) from the scientific literature to evaluate responses in milk yield (MY) and milk true protein concentration and yield to dietary CDG. The test variable was the difference in response between the CDG diet mean and the control diet mean (0% CDG) within each experiment. Fixed variables were CDG concentration of the diet [% of dietary dry matter (DM)] and crude protein (CP) concentration and fractions of CP based on origin (corn-based versus non-corn-based feeds) of control and CDG diets. Diets with CDG ranged from 4 to 42% CDG on a DM basis. Non-corn-based dietary CP averaged 6.3±3.32% of total DM. Milk yield and milk true protein yield responses to added CDG were maximized when approximately 8.5% of the total dietary DM was non-corn-based CP. Milk yield response peaked for higher-producing cows (>30.0 kg MY/cow per day) at 4.3% dietary corn-based CP, but decreased linearly for lower-producing cows (<30.0 kg MY/cow per day) as corn-based dietary CP increased. Milk true protein yield response decreased as corn-based dietary CP concentration increased, but milk true protein concentration response was not decreased when CDG diets had more than 6.5% dietary non-corn-based CP. Overall, 8.5% dietary non-corn-based CP was necessary in lactation diets to maximize lactational responses to dietary CDG. The necessity of dietary non-corn-based CP to maximize milk and milk protein yields limits the amount of dietary corn-based CP, including that from CDG, which can be included in rations without overfeeding N. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Biodiversity Hotspots, Climate Change, and Agricultural Development: Global Limits of Adaptation

    NASA Astrophysics Data System (ADS)

    Schneider, U. A.; Rasche, L.; Schmid, E.; Habel, J. C.

    2017-12-01

    Terrestrial ecosystems are threatened by climate and land management change. These changes result from complex and heterogeneous interactions of human activities and natural processes. Here, we study the potential change in pristine area in 33 global biodiversity hotspots within this century under four climate projections (representative concentration pathways) and associated population and income developments (shared socio-economic pathways). A coupled modelling framework computes the regional net expansion of crop and pasture lands as a result of changes in food production and consumption. We use a biophysical crop simulation model to quantify climate change impacts on agricultural productivity, water, and nutrient emissions for alternative crop management systems in more than 100,000 agricultural land polygons (homogeneous response units) and for each climate projection. The crop simulation model depicts detailed soil, weather, and management information and operates with a daily time step. We use time series of livestock statistics to link livestock production to feed and pasture requirements. On the food consumption side, we estimate national demand shifts in all countries by processing population and income growth projections through econometrically estimated Engel curves. Finally, we use a global agricultural sector optimization model to quantify the net change in pristine area in all biodiversity hotspots under different adaptation options. These options include the full-scale global implementation of i) crop-yield-maximizing management without additional irrigation, ii) crop-yield-maximizing management with additional irrigation, iii) food-yield-maximizing crop mix adjustments, iv) food-supply-maximizing trade flow adjustments, v) healthy diets, and vi) combinations of the individual options above. The results quantify the regional potentials and limits of major agricultural producer and consumer adaptation options for the preservation of pristine areas in biodiversity hotspots. They also quantify the conflicts between food and water security, biodiversity protection, and climate change mitigation.

  10. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.

    PubMed

    Bergeron, Dominic; Tremblay, A-M S

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  12. Diffusion Entropy: A Potential Neuroimaging Biomarker of Bipolar Disorder in the Temporal Pole.

    PubMed

    Spuhler, Karl; Bartlett, Elizabeth; Ding, Jie; DeLorenzo, Christine; Parsey, Ramin; Huang, Chuan

    2018-02-01

    Despite much research, bipolar depression remains poorly understood, with no clinically useful biomarkers for its diagnosis. The paralimbic system has become a target for biomarker research, with paralimbic structural connectivity commonly reported to distinguish bipolar patients from controls in tractography-based diffusion MRI studies, despite inconsistent findings in voxel-based studies. The purpose of this analysis was to validate existing findings with traditional diffusion MRI metrics and investigate the utility of a novel diffusion MRI metric, entropy of diffusion, in the search for bipolar depression biomarkers. We performed group-level analysis on 9 unmedicated (6 medication-naïve; 3 medication-free for at least 33 days) bipolar patients in a major depressive episode and 9 matched healthy controls to compare (1) average mean diffusivity (MD) and fractional anisotropy (FA), and (2) MD and FA histogram entropy, a statistical measure of distribution homogeneity, in the amygdala, hippocampus, orbitofrontal cortex and temporal pole. We also conducted classification analyses with leave-one-out and separate testing dataset (N = 11) approaches. We did not observe statistically significant differences in average MD or FA between the groups in any region. However, in the temporal pole, we observed significantly lower MD entropy in bipolar patients; this finding suggests a regional difference in MD distributions in the absence of an average difference. This metric allowed us to accurately distinguish bipolar patients from controls in leave-one-out (accuracy = 83%) and prediction (accuracy = 73%) analyses. This novel application of diffusion MRI yielded not only an interesting separation between bipolar patients and healthy controls, but also accurately classified bipolar patients from controls. © 2017 Wiley Periodicals, Inc.
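
    The histogram-entropy metric is simply the Shannon entropy of a regional value histogram; a minimal sketch, with the 32-bin choice and the synthetic samples being our assumptions rather than the paper's protocol:

```python
import numpy as np

def histogram_entropy(values, bins=32):
    """Shannon entropy of a value histogram (a sketch of the regional MD/FA
    'histogram entropy' statistic; the bin count is our assumption)."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins before taking logs
    return -np.sum(p * np.log(p))

# A flat (uniform) sample spreads mass over many bins and so has higher
# histogram entropy than a peaked (Gaussian) sample of the same size.
rng = np.random.default_rng(0)
h_uniform = histogram_entropy(rng.uniform(0, 1, 10000))
h_gauss = histogram_entropy(rng.normal(0, 1, 10000))
```

    Lower entropy thus indicates a more peaked, more homogeneous distribution of diffusivities within a region, which is the direction of the group difference the study reports in the temporal pole.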

  13. Multiplicity and entropy scaling of medium-energy protons emitted in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Abdelsalam, A.; Kamel, S.; Hafiz, M. E.

    2015-10-01

    The behavior and properties of medium-energy protons with kinetic energies in the range 26-400 MeV are derived from measurements of the particle yields and spectra in the final state of relativistic heavy-ion collisions (16O-AgBr interactions at 60 A and 200 A GeV and 32S-AgBr interactions at 3.7 A and 200 A GeV) and their interpretation in terms of the higher order moments. The multiplicity distributions are well fitted by a Gaussian distribution function. The data are also compared with the predictions of the modified FRITIOF model, showing that the FRITIOF model does not reproduce the trend or the magnitude of the data. Measurements of the ratio of the variance to the mean show that the production of target fragments at high energies cannot be considered a statistically independent process. Moreover, the deviation of each multiplicity distribution from a Poisson law provides evidence for correlations. The scaling behavior of two types of scaling functions (Koba-Nielsen-Olesen (KNO) scaling and Hegyi scaling) in terms of the multiplicity distribution is investigated. A simplified universal function has been used in each scaling to display the experimental data. An examination of the relationship between the entropy, the average multiplicity, and the KNO function is performed. Entropy production and subsequent scaling in nucleus-nucleus collisions are analyzed using the experimental data over a wide energy range (Dubna and SPS). Interestingly, the data points corresponding to various energies overlap and fall on a single curve, indicating the presence of a kind of entropy scaling.

  14. Using Statistical Mechanics and Entropy Principles to Interpret Variability in Power Law Models of the Streamflow Recession

    NASA Astrophysics Data System (ADS)

    Dralle, D.; Karst, N.; Thompson, S. E.

    2015-12-01

    Multiple competing theories suggest that power law behavior governs the observed first-order dynamics of streamflow recessions - the important process by which catchments dry out via the stream network, altering the availability of surface water resources and in-stream habitat. Frequently modeled as dq/dt = -aq^b, recessions typically exhibit a high degree of variability, even within a single catchment, as revealed by significant shifts in the values of "a" and "b" across recession events. One potential source of this variability lies in underlying, hard-to-observe fluctuations in how catchment water storage is partitioned amongst distinct storage elements, each having different discharge behaviors. Testing this and competing hypotheses with widely available streamflow time series, however, has been hindered by a power law scaling artifact that obscures meaningful covariation between the recession parameters "a" and "b". Here we briefly outline a technique that removes this artifact, revealing intriguing new patterns in the joint distribution of recession parameters. Using long-term flow data from catchments in Northern California, we explore temporal variations and find that the "a" parameter varies strongly with catchment wetness. Then we explore how the "b" parameter changes with "a" and find that measures of its variation are maximized at intermediate "a" values. We propose an interpretation of this pattern based on statistical mechanics, in which "b" can be viewed as an indicator of the catchment "microstate" - i.e., the partitioning of storage - and "a" as a measure of the catchment macrostate (i.e., the total storage). In statistical mechanics, entropy (i.e., microstate variance, here the variance of "b") is maximized for intermediate values of extensive variables (i.e., wetness, "a"), as observed in the recession data. This interpretation of "a" and "b" was supported by model runs using a multiple-reservoir catchment toy model, and lends support to the hypothesis that power law streamflow recession dynamics, and their variations, have their origin in the multiple modalities of storage partitioning.
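
    The recession parameters can be estimated per event by linear regression in log-log space, since dq/dt = -a*q^b implies log(-dq/dt) = log(a) + b*log(q). A sketch on synthetic data with invented parameter values (not the Northern California records):

```python
import numpy as np

# Synthetic recession generated from dq/dt = -a*q**b with invented values,
# then recovered by a straight-line fit in log-log space.
a_true, b_true, q0 = 0.05, 1.5, 10.0
t = np.linspace(0.0, 50.0, 500)
# Closed-form solution of dq/dt = -a*q**b for b != 1 and q(0) = q0:
q = (q0 ** (1 - b_true) + (b_true - 1) * a_true * t) ** (1.0 / (1 - b_true))
dqdt = np.gradient(q, t)                 # numerical -dq/dt from the series
mask = dqdt < 0                          # keep strictly receding points
b_fit, log_a_fit = np.polyfit(np.log(q[mask]), np.log(-dqdt[mask]), 1)
a_fit = np.exp(log_a_fit)
```

    Note that naive event-by-event fits like this one are exactly where the artifactual "a"-"b" covariation arises; the sketch only illustrates parameter recovery, not the authors' artifact-removal technique.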

  15. Singlet Orbital Ordering in Bilayer Sr₃Cr₂O₇.

    PubMed

    Jeanneau, Justin; Toulemonde, Pierre; Remenyi, Gyorgy; Sulpice, André; Colin, Claire; Nassif, Vivian; Suard, Emmanuelle; Salas Colera, Eduardo; Castro, Germán R; Gay, Frederic; Urdaniz, Corina; Weht, Ruben; Fevrier, Clement; Ralko, Arnaud; Lacroix, Claudine; Aligia, Armando A; Núñez-Regueiro, Manuel

    2017-05-19

    We perform an extensive study of Sr₃Cr₂O₇, the n=2 member of the Ruddlesden-Popper Srₙ₊₁CrₙO₃ₙ₊₁ system. An antiferromagnetic ordering is clearly visible in the magnetization and the specific heat, which yields a huge transition entropy, R ln(6). By neutron diffraction as a function of temperature we have determined the antiferromagnetic structure, which coincides with the one obtained from density functional theory calculations. It is accompanied by anomalous asymmetric distortions of the CrO₆ octahedra. Strong-coupling and Lanczos calculations on a derived Kugel-Khomskii Hamiltonian yield a simultaneous orbital and moment ordering. Our results favor an exotic ordered phase of orbital singlets not originating from frustration.

  16. The role of sympathetic and vagal cardiac control on complexity of heart rate dynamics.

    PubMed

    Silva, Luiz Eduardo Virgilio; Silva, Carlos Alberto Aguiar; Salgado, Helio Cesar; Fazan, Rubens

    2017-03-01

    Analysis of heart rate variability (HRV) by nonlinear approaches has been gaining interest due to their ability to extract additional information from heart rate (HR) dynamics that is not detectable by traditional approaches. Nevertheless, the physiological interpretation of nonlinear approaches remains unclear. Therefore, we propose long-term (60 min) protocols involving selective blockade of cardiac autonomic receptors to investigate the contribution of sympathetic and parasympathetic function to the nonlinear dynamics of HRV. Conscious male Wistar rats had their electrocardiogram (ECG) recorded under three distinct conditions: basal, selective (atenolol or atropine), or combined (atenolol plus atropine) pharmacological blockade of autonomic muscarinic or β₁-adrenergic receptors. Time series of RR intervals were assessed by multiscale entropy (MSE) and detrended fluctuation analysis (DFA). Entropy over short (1 to 5, MSE₁₋₅) and long (6 to 30, MSE₆₋₃₀) time scales was computed, as well as DFA scaling exponents at short (α_short, 5 ≤ n ≤ 15), mid (α_mid, 30 ≤ n ≤ 200), and long (α_long, 200 ≤ n ≤ 1700) window sizes. The results show that MSE₁₋₅ is reduced under atropine blockade and MSE₆₋₃₀ is reduced under atropine, atenolol, or combined blockade. In addition, while atropine expressed its maximal effect at scale six, the effect of atenolol on MSE increased with scale. For DFA, α_short decreased during atenolol blockade, while α_mid increased under atropine blockade. Double blockade decreased α_short and increased α_long. Results with surrogate data show that the dynamics during combined blockade are not random. In summary, sympathetic and vagal control differently affect the entropy (MSE) and fractal properties (DFA) of HRV. These findings are important to guide future studies.
NEW & NOTEWORTHY Although multiscale entropy (MSE) and detrended fluctuation analysis (DFA) are recognizably useful prognostic/diagnostic methods, their physiological interpretation remains unclear. The present study clarifies the effect of the cardiac autonomic control on MSE and DFA, assessed during long periods (1 h). These findings are important to help the interpretation of future studies. Copyright © 2017 the American Physiological Society.
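
    MSE coarse-grains the series at each scale and computes sample entropy of the result; a compact sketch under the conventional parameter choices (m = 2, r = 0.2 × SD), with white noise standing in for real RR data:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn sketch: the per-scale statistic used by MSE (m = 2 and
    r = 0.2 x SD are the conventional choices, assumed here)."""
    r = r_frac * np.std(x)
    def match_count(mm):
        n = len(x) - mm
        X = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between all template pairs.
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)      # exclude self-matches
        return (d <= r).sum()
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

def multiscale_entropy(x, scales=(1, 2, 3)):
    """Coarse-grain the series at each scale, then compute SampEn."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

# White noise stands in for a real RR-interval series here.
rng = np.random.default_rng(2)
rr = rng.normal(1.0, 0.05, 1200)    # surrogate "RR intervals", seconds
mse = multiscale_entropy(rr)        # entropy at scales 1, 2, 3
```

    The split into MSE₁₋₅ and MSE₆₋₃₀ in the study corresponds to averaging this per-scale curve over the short and long scale ranges, respectively.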

  17. Cryoradiolytic reduction of heme proteins: Maximizing dose-dependent yield

    NASA Astrophysics Data System (ADS)

    Denisov, Ilia G.; Victoria, Doreen C.; Sligar, Stephen G.

    2007-04-01

    Radiolytic reduction in frozen solutions and crystals is a useful method for the generation of trapped intermediates in protein-based radical reactions. In this communication we define the conditions which provide the maximum yield of one-electron-reduced myoglobin at 77 K using ⁶⁰Co γ-irradiation in aqueous glycerol glass. The yield reached 50% after 20 kGy, was almost complete at ~160 kGy total dose, and does not depend on the protein concentration in the range 0.01-5 mM.
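
    The reported dose dependence is consistent with a simple single-exponential saturation model, Y(D) = 1 - exp(-D/D0); this functional form is our assumption, not the paper's, with D0 fixed by the reported 50% yield at 20 kGy:

```python
import numpy as np

# Assumed saturation model for the radiolytic reduction yield:
#   Y(D) = 1 - exp(-D / D0)
# The reported half-yield dose (50% at 20 kGy) fixes D0 = 20 / ln 2.
D0 = 20.0 / np.log(2.0)   # about 28.9 kGy

def reduced_fraction(dose_kGy):
    """Fraction of one-electron-reduced protein at a given total dose."""
    return 1.0 - np.exp(-dose_kGy / D0)
```

    The model then predicts a reduced fraction of about 0.996 at 160 kGy, consistent with the abstract's "almost complete" reduction at that dose; real dose curves need not be single-exponential.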

  18. Separation and purification of hydrolyzable tannin from Geranium wilfordii Maxim by reversed-phase and normal-phase high-speed counter-current chromatography.

    PubMed

    Liu, Dan; Su, Zhiguo; Wang, Changhai; Gu, Ming; Xing, Siliang

    2010-08-01

    Three hydrolyzable tannins, geraniin, corilagin and gallic acid, the main active components of Geranium wilfordii Maxim, have been separated and purified in one step by both reversed-phase and normal-phase high-speed counter-current chromatography. Gallic acid, corilagin and geraniin were purified from a 70% aqueous acetone extract of G. wilfordii Maxim with the solvent system n-hexane-ethyl acetate-methanol-acetic acid-water (1:10:0.2:0.2:20) by reversed-phase high-speed counter-current chromatography at purities of 94.2, 91.0 and 91.3%, and yields of 89.3, 82.9 and 91.7%, respectively. Gallic acid, corilagin and geraniin were purified with the solvent system n-hexane-ethyl acetate-methanol-acetic acid-water (0.2:10:2:1:5) by normal-phase high-speed counter-current chromatography at purities of 85.9, 92.2 and 87.6%, and yields of 87.4, 94.6 and 94.3%, respectively. Both reversed-phase and normal-phase high-speed counter-current chromatography thus succeeded in separating these high-polarity, low-molecular-weight substances.

  19. The Value Versus Volume Yield Problem for Live-Sawn Hardwood Sawlogs

    Treesearch

    Philip H. Steele; Francis G. Wagner; Lalit Kumar; Philip A. Araman

    1993-01-01

    The potential conflict between value and volume maximization in sawing hardwood sawlogs by the live sawing method was analyzed. Twenty-four digitally described red oak sawlogs were sawn at the log orientation of highest value yield. Five opening-face sawlines were iteratively placed in the sawlog at 1/4-inch intervals, and lumber grades, volumes, and values from...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamoureux, Louis-Philippe; Navez, Patrick; Cerf, Nicolas J.

    It is shown that any quantum operation that perfectly clones the entanglement of all maximally entangled qubit pairs cannot preserve separability. This 'entanglement no-cloning' principle naturally suggests that some approximate cloning of entanglement is nevertheless allowed by quantum mechanics. We investigate a separability-preserving optimal cloning machine that duplicates all maximally entangled states of two qubits, resulting in 0.285 bits of entanglement per clone, while a local cloning machine only yields 0.060 bits of entanglement per clone.
