Science.gov

Sample records for maximum entropy models

  1. Maximum entropy model for business cycle synchronization

    NASA Astrophysics Data System (ADS)

    Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui

    2014-11-01

    The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
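
    A pairwise maximum entropy model of this kind can be sketched numerically. The following is a hedged illustration, not the paper's procedure: the binary expansion(+1)/recession(−1) samples are synthetic stand-ins for the G7 data, and exact enumeration of states is only feasible for small systems (here n = 4). Fields and couplings are fitted by gradient ascent on the log-likelihood until the model reproduces the empirical means and pairwise correlations.

```python
# Sketch: fit a pairwise maximum entropy (Ising-like) model to binary data.
import itertools
import numpy as np

def fit_pairwise_maxent(samples, lr=0.1, steps=2000):
    n = samples.shape[1]
    states = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)
    m_emp = samples.mean(axis=0)                      # empirical means
    C_emp = (samples.T @ samples) / len(samples)      # empirical correlations
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()                                  # model distribution
        m = p @ states                                # model means
        C = np.einsum('s,si,sj->ij', p, states, states)
        h += lr * (m_emp - m)                         # ascent step: match means
        dJ = lr * (C_emp - C)
        np.fill_diagonal(dJ, 0.0)
        J += dJ                                       # ascent step: match correlations
    return h, J, states, p

rng = np.random.default_rng(0)
samples = rng.choice([-1.0, 1.0], size=(500, 4))      # synthetic, not G7 series
h, J, states, p = fit_pairwise_maxent(samples)
```

    At the fitted point the model moments coincide with the empirical ones, which is the defining property of the maximum entropy solution.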

  2. Random versus maximum entropy models of neural population activity

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry

    2017-04-01

    The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.

  3. Random versus maximum entropy models of neural population activity.

    PubMed

    Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry

    2017-04-01

    The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.

  4. Maximum entropy models of ecosystem functioning

    NASA Astrophysics Data System (ADS)

    Bertram, Jason

    2014-12-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanical theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information-theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense, using a savanna plant ecology model as an example.

  5. Maximum entropy models of ecosystem functioning

    SciTech Connect

    Bertram, Jason

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanical theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information-theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense, using a savanna plant ecology model as an example.

  6. Maximum-entropy principle as Galerkin modelling paradigm

    NASA Astrophysics Data System (ADS)

    Noack, Bernd R.; Niven, Robert K.; Rowley, Clarence W.

    2012-11-01

    We show how the empirical Galerkin method, leading e.g. to POD models, can be derived from maximum-entropy principles building on Noack & Niven 2012 JFM. In particular, principles are proposed (1) for the Galerkin expansion, (2) for the Galerkin system identification, and (3) for the probability distribution of the attractor. Examples will illustrate the advantages of the entropic modelling paradigm. Partially supported by the ANR Chair of Excellence TUCOROM and an ADFA/UNSW Visiting Fellowship.
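
    The empirical Galerkin / POD starting point mentioned here is easy to sketch: POD modes are the right singular vectors of the mean-subtracted snapshot matrix, ordered by energy. This is an illustrative sketch on synthetic data with two planted coherent structures plus weak noise, not the authors' derivation or flows.

```python
# Sketch: proper orthogonal decomposition (POD) of a snapshot matrix via SVD.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 200)           # snapshot times
x = np.linspace(0.0, 1.0, 64)                    # spatial grid
snapshots = (np.outer(np.sin(t), np.sin(np.pi * x))
             + 0.5 * np.outer(np.cos(t), np.sin(2.0 * np.pi * x))
             + 0.01 * rng.standard_normal((200, 64)))

fluct = snapshots - snapshots.mean(axis=0)       # subtract the mean field
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
pod_modes = Vt                                   # rows: spatial modes, by energy
energy = s**2 / np.sum(s**2)                     # energy fraction per mode
```

    The two planted structures capture nearly all of the fluctuation energy, so a two-mode Galerkin expansion would suffice for this toy field.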

  7. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339

  8. A maximum entropy model for opinions in social groups

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo

    2014-04-01

    We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
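
    The Metropolis sampling used in such studies can be sketched generically. The Hamiltonian, couplings, and fields below are invented for illustration, not the authors' agent model: a pairwise spin Hamiltonian H(s) = −h·s − (1/2) s·J·s of the kind maximum entropy inference produces once the network is fixed.

```python
# Sketch: Metropolis Monte Carlo for a pairwise spin Hamiltonian.
import numpy as np

rng = np.random.default_rng(7)
n, beta = 30, 0.8
J = 0.1 * rng.standard_normal((n, n))
J = 0.5 * (J + J.T)
np.fill_diagonal(J, 0.0)                  # symmetric couplings, no self-coupling
h = 0.1 * rng.standard_normal(n)

s = rng.choice([-1, 1], size=n)
accepted = 0
for _ in range(20000):
    i = rng.integers(n)
    dE = 2.0 * s[i] * (h[i] + J[i] @ s)   # energy change if s[i] flips
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]                      # accept the flip
        accepted += 1
rate = accepted / 20000
```

    Long runs of such a chain sample the Boltzmann distribution of the Hamiltonian, which is how phase diagrams like the one described above are mapped out.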

  9. Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling.

    PubMed

    Barnhart, Paul R; Gillam, Erin H

    2016-01-01

    Individuals along the periphery of a species' distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, giving regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mist-netting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, need updating. Maximum-entropy modeling showed that temperature, not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research into how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use, compared to more centralized populations, as a result of urban expansion, habitat loss, and climate change.

  10. Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling

    PubMed Central

    Barnhart, Paul R.; Gillam, Erin H.

    2016-01-01

    Individuals along the periphery of a species' distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, giving regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mist-netting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, need updating. Maximum-entropy modeling showed that temperature, not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research into how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use, compared to more centralized populations, as a result of urban expansion, habitat loss, and climate change. PMID:27935936

  11. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions, and one can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. 1. Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique for finding the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. 2. The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. 3. We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks, and show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC advanced Grant ``Bridges''; BC: KEOPS ANR-CONICYT, Renvision; CM: CONICYT-FONDECYT No. 3140572.
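
    The irreversibility theme can be illustrated in miniature. For a stationary Markov chain, a standard measure of time irreversibility is the entropy production rate, sigma = sum_ij pi_i P_ij log(pi_i P_ij / (pi_j P_ji)), which is non-negative and vanishes exactly when detailed balance holds. The three-state chain below is a toy, not the spatio-temporal maximum entropy distributions of the paper.

```python
# Sketch: entropy production rate of a stationary Markov chain.
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6],
              [0.7, 0.2, 0.1]])               # visibly cyclic, hence irreversible

w, v = np.linalg.eig(P.T)                     # stationary pi: left eigenvector
pi = np.real(v[:, np.argmax(np.real(w))])     # eigenvalue 1 has the largest real part
pi = pi / pi.sum()

sigma = sum(pi[i] * P[i, j] * np.log(pi[i] * P[i, j] / (pi[j] * P[j, i]))
            for i in range(3) for j in range(3))
```

    A symmetric (detailed-balance) transition matrix would give sigma = 0; the cyclic bias above gives a strictly positive rate.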

  12. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
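
    The pole/frequency correspondence described above can be checked in a few lines. This is a minimal sketch on a synthetic series, using Yule-Walker estimation (one of several AR estimators, not necessarily the paper's): fit an AR(2) model to a noisy sinusoid, then read the oscillation frequency off the angle of the complex pole pair.

```python
# Sketch: AR(2) fit by Yule-Walker; pole angle gives the peak frequency.
import numpy as np

rng = np.random.default_rng(2)
n, f0 = 4096, 0.1                      # samples, true frequency (cycles/sample)
t = np.arange(n)
series = np.sin(2.0 * np.pi * f0 * t) + 0.05 * rng.standard_normal(n)

order = 2
r = np.array([series[:n - k] @ series[k:] / n for k in range(order + 1)])
R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
a = np.linalg.solve(R, r[1:])          # x_t ~ a[0] x_{t-1} + a[1] x_{t-2}
poles = np.roots(np.concatenate(([1.0], -a)))
peak_freq = np.abs(np.angle(poles[0])) / (2.0 * np.pi)   # pole angle -> frequency
```

    The recovered frequency sits close to f0; the additive noise pulls the poles slightly inside the unit circle, which is what broadens the ME/AR spectral peak.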

  13. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Treesearch

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  14. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and demonstrate its effectiveness using the publicly available benchmark data sets.
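
    The count-regression setting the paper builds on can be sketched with plain maximum-likelihood Poisson regression fitted by Newton's method; the paper's contribution replaces exactly this likelihood-only step with a max-margin Bayesian treatment. Data and the ground-truth coefficients below are invented for illustration.

```python
# Sketch: maximum-likelihood Poisson regression via Newton's method.
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 2))
beta_true = np.array([0.5, -0.3])      # invented ground truth
y = rng.poisson(np.exp(X @ beta_true)) # synthetic defect counts

beta = np.zeros(2)
for _ in range(25):                    # Newton ascent on the log-likelihood
    mu = np.exp(X @ beta)              # mean counts under the current fit
    grad = X.T @ (y - mu)
    H = X.T @ (X * mu[:, None])        # Fisher information matrix
    beta += np.linalg.solve(H, grad)
```

    With 300 observations the Newton iteration recovers the planted coefficients to within sampling error; the Bayesian max-margin variant additionally quantifies the uncertainty this point estimate ignores.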

  15. ACE: adaptive cluster expansion for maximum entropy graphical model inference.

    PubMed

    Barton, J P; De Leonardis, E; Coucke, A; Cocco, S

    2016-10-15

    Graphical models are often employed to interpret patterns of correlations observed in data through a network of interactions between the variables. Recently, Ising/Potts models, also known as Markov random fields, have been productively applied to diverse problems in biology, including the prediction of structural contacts from protein sequence data and the description of neural activity patterns. However, inference of such models is a challenging computational problem that cannot be solved exactly. Here, we describe the adaptive cluster expansion (ACE) method to quickly and accurately infer Ising or Potts models based on correlation data. ACE avoids overfitting by constructing a sparse network of interactions sufficient to reproduce the observed correlation data within the statistical error expected due to finite sampling. When convergence of the ACE algorithm is slow, we combine it with a Boltzmann Machine Learning algorithm (BML). We illustrate this method on a variety of biological and artificial datasets and compare it to state-of-the-art approximate methods such as Gaussian and pseudo-likelihood inference. We show that ACE accurately reproduces the true parameters of the underlying model when they are known, and yields accurate statistical descriptions of both biological and artificial data. Models inferred by ACE more accurately describe the statistics of the data, including both the constrained low-order correlations and unconstrained higher-order correlations, compared to those obtained by faster Gaussian and pseudo-likelihood methods. These alternative approaches can recover the structure of the interaction network but typically not the correct strength of interactions, resulting in less accurate generative models. The ACE source code, user manual and tutorials with the example data and filtered correlations described herein are freely available on GitHub at https://github.com/johnbarton/ACE. Contacts: jpbarton@mit.edu, cocco

  16. Galerkin POD Model Closure with Triadic Interactions by the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Hérouard, Nicolas; Niven, Robert K.; Noack, Bernd R.; Abel, Markus W.; Schlegel, Michael

    2016-11-01

    The maximum entropy method of Jaynes provides a method to infer the expected or most probable state of a system, by maximizing the relative entropy subject to physical constraints such as conservation of mass, energy and power. A maximum entropy closure for reduced-order models of fluid flows based on principal orthogonal decomposition (POD) is developed, to infer the probability density function for the POD modal amplitudes. This closure takes into account energy transfers by triadic interactions between modes, by extension of a theoretical model of these interactions in incompressible flow. The framework is applied to several incompressible flow systems including the cylinder wake, both at low and high Reynolds number (oscillatory and turbulent flow conditions), with important implications for the triadic structure and power balance (energy cascade) in the system. Australian Research Council Discovery Projects Grant DP140104402.

  17. Using maximum entropy modeling to identify and prioritize red spruce forest habitat in West Virginia

    Treesearch

    Nathan R. Beane; James S. Rentch; Thomas M. Schuler

    2013-01-01

    Red spruce forests in West Virginia are found in island-like distributions at high elevations and provide essential habitat for the endangered Cheat Mountain salamander and the recently delisted Virginia northern flying squirrel. Therefore, it is important to identify restoration priorities of red spruce forests. Maximum entropy modeling was used to identify areas of...

  18. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  19. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
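
    The "classic MaxEnt" being generalized here fits in a few lines: Jaynes' die whose mean is constrained to 4.5 (the Brandeis dice problem). The entropy-maximizing distribution has the exponential form p_i proportional to exp(lam * i), and lam can be found by bisection since the implied mean increases monotonically with lam.

```python
# Sketch: classic MaxEnt point probabilities for a die with mean 4.5.
import numpy as np

faces = np.arange(1, 7)
target_mean = 4.5

def mean_for(lam):
    w = np.exp(lam * faces)
    p = w / w.sum()
    return p @ faces

lo_, hi_ = -5.0, 5.0
for _ in range(200):                   # bisection on the Lagrange multiplier
    mid = 0.5 * (lo_ + hi_)
    if mean_for(mid) < target_mean:
        lo_ = mid
    else:
        hi_ = mid
lam = 0.5 * (lo_ + hi_)
w = np.exp(lam * faces)
p_maxent = w / w.sum()                 # point probabilities; constraint exact
```

    In the generalized approach of the paper, the constraint value 4.5 would itself carry a density, and this construction would be propagated through it to yield a density over such p vectors.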

  20. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    PubMed

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-02

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, which experimentally would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem, a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
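
    The Glauber dynamics in question is straightforward to write down. This is a generic sketch with invented parameters, not the paper's fitted models: each update draws a 0/1 unit and sets it on with its conditional probability given the rest. With uniform excitatory couplings like these, the sampled population activity can become bimodal, which is exactly the pathology discussed above.

```python
# Sketch: Glauber dynamics for a pairwise model over 0/1 units.
import numpy as np

rng = np.random.default_rng(3)
n = 50
J = np.full((n, n), 0.1)
np.fill_diagonal(J, 0.0)               # uniform coupling, no self-interaction
h = np.full(n, -2.45)                  # bias toward silence

s = rng.integers(0, 2, size=n).astype(float)
activity = []
steps = 20000
for step in range(steps):
    i = rng.integers(n)
    field = h[i] + J[i] @ s
    p_on = 1.0 / (1.0 + np.exp(-field))   # P(s_i = 1 | all other units)
    s[i] = 1.0 if rng.random() < p_on else 0.0
    if step > steps // 2:                 # record after a burn-in period
        activity.append(s.mean())
activity = np.array(activity)
```

    With these couplings the self-consistency map has two stable fixed points, so long runs dwell near a low-activity and a high-activity mode, instead of producing realistic surrogate data.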

  1. Maximum Entropies Copulas

    NASA Astrophysics Data System (ADS)

    Pougaza, Doriano-Boris; Mohammad-Djafari, Ali

    2011-03-01

    New families of copulas are obtained in a two-step process: first, solving the inverse problem of finding a joint distribution with given marginals as the constrained maximization of some entropy (Shannon, Rényi, Burg, Tsallis-Havrda-Charvát), and then using Sklar's theorem to define the corresponding copula.
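
    The Shannon case is easy to verify numerically: with only the two marginals constrained, the maximum entropy joint is the independent one, so Sklar's theorem yields the product copula; the richer copula families of the paper require further constraints or the other listed entropies. The marginals below are arbitrary, and the joint is found by iterative proportional fitting from a uniform start.

```python
# Sketch: max-Shannon-entropy joint with fixed marginals is the product joint.
import numpy as np

p = np.array([0.2, 0.3, 0.5])          # marginal of X
q = np.array([0.1, 0.4, 0.2, 0.3])     # marginal of Y

M = np.ones((3, 4)) / 12.0             # start from the uniform joint
for _ in range(50):                    # iterative proportional fitting
    M *= (p / M.sum(axis=1))[:, None]  # rescale rows to match the X marginal
    M *= (q / M.sum(axis=0))[None, :]  # rescale columns to match the Y marginal
```

    The fit lands exactly on the outer product of the marginals, i.e. the independence copula.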

  2. The Research on Chinese Coreference Resolution Based on Maximum Entropy Model and Rules

    NASA Astrophysics Data System (ADS)

    Zhang, Yihao; Guo, Jianyi; Yu, Zhengtao; Zhang, Zhikun; Yao, Xianming

    Coreference resolution is an important research topic in natural language processing, covering the resolution of proper nouns, common nouns, and pronouns. In this paper, a coreference resolution algorithm for Chinese noun phrases and pronouns is proposed, based on a maximum entropy model and rules. The maximum entropy model effectively integrates a variety of separate features; on this basis, rule-based methods improve the recall of resolution, and filtering rules then remove "noise" to further improve its precision. Experiments show that the F value of the algorithm reaches 85.2% in a closed test and 76.2% in an open test, improvements of about 12.9 and 7.8 percentage points, respectively, over the rules-only method.
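
    The statistical core of such feature-based systems is the maximum entropy classifier, which for binary decisions is logistic regression over candidate-pair features. This is a generic sketch on synthetic data: the feature meanings and the planted weight vector are invented, not the paper's feature set.

```python
# Sketch: binary maximum entropy (logistic regression) classifier.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((400, 3))      # e.g. distance, gender match, number match
true_w = np.array([1.5, -2.0, 0.5])    # invented ground-truth weights
y = (1.0 / (1.0 + np.exp(-(X @ true_w))) > rng.random(400)).astype(float)

w = np.zeros(3)
for _ in range(1000):                  # gradient ascent on the log-likelihood
    prob = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.5 * X.T @ (y - prob) / len(y)

pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
```

    A full system would wrap such a classifier in candidate generation, the rule-based filters described above, and clustering of the pairwise decisions into coreference chains.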

  3. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models

    PubMed Central

    Stein, Richard R.; Marks, Debora S.; Sander, Chris

    2015-01-01

    Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866

  4. Photosynthetic models with maximum entropy production in irreversible charge transfer steps.

    PubMed

    Juretić, Davor; Zupanović, Pasko

    2003-12-01

    Steady-state bacterial photosynthesis is modelled as a cyclic chemical reaction and is examined with respect to overall efficiency, power transfer efficiency, and entropy production. A nonlinear flux-force relationship is assumed. The simplest two-state kinetic model bears complete analogy with the performance of an ideal (zero ohmic resistance of the P-N junction) solar cell. In both cases power transfer to the external load is much higher than the 50% allowed by the impedance matching theorem for the linear flux-force relationship. When maximum entropy production is required in the transition with a load, one obtains a high optimal photochemical yield of 97% and power transfer efficiency of 91%. In more complex photosynthetic models, entropy production is maximized in all irreversible electron/proton (non-slip) transitions in an iterative procedure. The resulting steady-state is stable with respect to an extremely wide range of initial values for forward rate constants. Optimal proton current increases proportionally to light intensity and decreases with an increase in the proton-motive force (the backpressure effect). Optimal affinity transfer efficiency is very high and nearly perfectly constant for different light absorption rates and for different electrochemical proton gradients. Optimal overall efficiency (of solar into proton-motive power) ranges from 10% (bacteriorhodopsin) to 19% (chlorophyll-based bacterial photosynthesis). Optimal time constants in a photocycle span a wide range from nanoseconds to milliseconds, just as corresponding experimental constants do. We conclude that photosynthetic proton pumps operate close to the maximum entropy production mode, connecting biological to thermodynamic evolution in a coupled self-amplifying process.

  5. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  6. Convex accelerated maximum entropy reconstruction.

    PubMed

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.
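
    The core of any MaxEnt reconstruction is the trade-off between a data-fidelity term, weighted by the Lagrange multiplier, and an entropy regularizer. The sketch below illustrates that trade-off on a toy nonnegative "spectrum" with a random sampling operator, plain gradient descent, and a Shannon-type entropy relative to a flat default level; it is not the CAMERA algorithm (no spin-½ entropy, no accelerated first-order scheme), just the shared objective structure.

```python
import numpy as np

# Toy MaxEnt reconstruction (illustrative only, not CAMERA): recover a
# nonnegative "spectrum" x from undersampled linear measurements y = A x by
# minimizing  ||A x - y||^2 / (2*lam) + sum(x * log(x / d)),
# i.e. data fidelity weighted by a Lagrange multiplier lam plus a negative
# entropy measured relative to a flat default level d.
rng = np.random.default_rng(0)
n, m = 64, 24
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, 0.6, 0.8]           # three sparse peaks
A = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in sampling operator
y = A @ x_true

lam, d = 0.01, 0.01                              # multiplier, default level
x = np.full(n, 0.1)
res0 = np.linalg.norm(A @ x - y)
for _ in range(5000):                            # plain (not accelerated) descent
    grad = A.T @ (A @ x - y) / lam + np.log(x / d)
    x = np.clip(x - 0.001 * grad, 1e-8, None)    # keep the spectrum positive

print(round(np.linalg.norm(A @ x - y) / res0, 3))  # residual shrinks
```

In CAMERA the same two ingredients appear, but the entropy functional is the NMR-appropriate one and the descent is replaced by accelerated proximal steps, which is where the fast, tunable convergence comes from.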

  8. Modeling the Multiple-Antenna Wireless Channel Using Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Guillaud, M.; Debbah, M.; Moustakas, A. L.

    2007-11-01

    Analytical descriptions of the statistics of wireless channel models are desirable tools for communication systems engineering. When multiple antennas are available at the transmit and/or the receive side (the Multiple-Input Multiple-Output, or MIMO, case), the statistics of the matrix H representing the gains between the antennas of a transmit and a receive antenna array, and in particular the correlation between its coefficients, are known to be of paramount importance for the design of such systems. However, these characteristics depend on the operating environment, since the electromagnetic propagation paths are dictated by the surroundings of the antenna arrays, and little knowledge about these is available at the time of system design. An approach using the Maximum Entropy principle to derive probability density functions for the channel matrix, based on various degrees of knowledge about the environment, is presented. The general idea is to apply the maximum entropy principle to obtain the distribution of each parameter of interest (e.g. correlation), and then to marginalize them out to obtain the full channel distribution. It was shown in previous works, using sophisticated integrals from statistical physics, that by using the full spatial correlation matrix E{vec(H)vec(H)^H} as the intermediate modeling parameter, this method can yield surprisingly concise channel descriptions. In this case, the joint probability density function is shown to be merely a function of the Frobenius norm of the channel matrix, ||H||_F. In the present paper, we investigate the case where information about the average covariance matrix is available (e.g. through measurements). The maximum entropy distribution of the covariance is derived under this constraint. Furthermore, we consider also the doubly correlated case, where the intermediate modeling parameters are chosen as the transmit- and receive-side channel covariance matrices (respectively E{H^H H} and E{HH^H}). We compare the

  9. Unifying catchment water balance models for different time scales through the maximum entropy production principle

    NASA Astrophysics Data System (ADS)

    Zhao, Jianshi; Wang, Dingbao; Yang, Hanbo; Sivapalan, Murugesu

    2016-09-01

    The paper presents a thermodynamic basis for water balance partitioning at the catchment scale, through the formulation of flux-force relationships for the constituent hydrological processes, leading to the derivation of optimality conditions that satisfy the principle of Maximum Entropy Production (MEP). Application of these optimality principles at three different time scales leads to the derivation of water balance equations that mimic widely used empirical models, i.e., the Budyko-type model at the long-term scale, the "abcd" model at the monthly scale, and the SCS model at the event scale. The applicability of MEP in each case helps to draw connections between the water balances at the three different time scales, and to demonstrate a common thermodynamic basis for the otherwise empirical water balance models. In particular, it is concluded that the long-time-scale Budyko-type model and the event-scale SCS model are both special cases of the monthly "abcd" model.
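
    For concreteness, the two empirical "end members" that the MEP framework recovers can be written down directly. The sketch below evaluates the standard textbook forms of the Budyko curve and the SCS curve-number runoff equation; the numbers are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative only: the long-term Budyko curve and the event-scale SCS
# curve-number runoff equation, in their standard textbook forms.

def budyko_evaporation_ratio(aridity):
    """E/P as a function of the aridity index PET/P (Budyko 1974 form)."""
    phi = np.asarray(aridity, dtype=float)
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

def scs_runoff(P, S, ia_ratio=0.2):
    """SCS-CN event runoff depth for rainfall P and retention S (same units)."""
    Ia = ia_ratio * S                      # initial abstraction
    Pe = max(P - Ia, 0.0)                  # effective rainfall
    return Pe ** 2 / (Pe + S)

print(round(float(budyko_evaporation_ratio(1.0)), 3))   # E/P at PET = P: 0.694
print(round(scs_runoff(50.0, 25.0), 1))                 # 50 mm storm: 28.9 mm
```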

  10. Maximum Entropy Guide for BSS

    NASA Astrophysics Data System (ADS)

    Górriz, J. M.; Puntonet, C. G.; Medialdea, E. G.; Rojas, F.

    2005-11-01

    This paper proposes a novel method for blindly separating unobservable independent component (IC) signals (BSS) based on the use of a maximum entropy guide (MEG). The paper also includes a formal proof of the convergence of the proposed algorithm using the guiding operator, a new concept in the genetic algorithm (GA) scenario. The Guiding GA (GGA) presented in this work is able to extract ICs at a faster rate than previous ICA algorithms based on maximum entropy contrast functions as the input-space dimension increases. It also shows greater accuracy and robustness than previous approaches in every case.

  11. Steepest entropy ascent model for far-nonequilibrium thermodynamics: unified implementation of the maximum entropy production principle.

    PubMed

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which the length of a trajectory in state space is measured. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium.

  12. Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which the length of a trajectory in state space is measured. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium.

  13. Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed

    NASA Astrophysics Data System (ADS)

    Pitchford, J.; Strager, M.; Riley, A.; Lin, L.; Anderson, J.

    2015-03-01

    We used maximum entropy to model streambank erosion potential (SEP) in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs), and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.
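
    The core computation inside Maxent-style modelling is moment matching: find feature weights so that an exponential-family distribution over grid cells reproduces the mean feature values observed at presence locations. The sketch below demonstrates that core on synthetic features and simulated presence points; it is not the streambank data or the Maxent software itself.

```python
import numpy as np

# Maxent core, sketched: fit weights w so that p(cell) ~ exp(F @ w) over all
# grid cells matches the mean feature values at presence sites. Features and
# presence points here are synthetic.
rng = np.random.default_rng(1)
n_cells, n_feat = 500, 3
F = rng.standard_normal((n_cells, n_feat))         # environmental covariates
w_true = np.array([1.0, -0.5, 0.0])                # generating weights
p_gen = np.exp(F @ w_true); p_gen /= p_gen.sum()
presence = rng.choice(n_cells, size=400, p=p_gen)  # simulated presence cells
target = F[presence].mean(axis=0)                  # empirical feature means

w = np.zeros(n_feat)
for _ in range(2000):                              # gradient ascent on log-lik.
    p = np.exp(F @ w); p /= p.sum()
    w += 0.3 * (target - F.T @ p)                  # moment-matching gradient

print(np.round(w, 2))                              # close to w_true
```

Real Maxent adds feature classes (quadratic, hinge, threshold) and L1 regularization on top of exactly this moment-matching loop.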

  14. Maximum entropy production in daisyworld

    NASA Astrophysics Data System (ADS)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  15. Maximum entropy beam diagnostic tomography

    SciTech Connect

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs.

  16. Maximum entropy analysis of transport networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2017-06-01

    The maximum entropy method is used to derive an alternative gravity model for a transport network. The proposed method builds on previous methods which assign the discrete value of a maximum entropy distribution to equal the traffic flow rate. The proposed method, however, uses a distribution to represent each flow rate. It is shown to handle uncertainty more elegantly and to give results similar to those of traditional methods. It can incorporate more of the observed data through the entropy function, prior distribution, and integration limits, potentially allowing better inferences to be made.

  17. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    NASA Astrophysics Data System (ADS)

    Colas, Francis; Bessière, Pierre; Girard, Benoît

    2011-03-01

    In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on that representation. We compare different decision models based on different features of the representation and show that taking uncertainty into account helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.

  18. Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett

    The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.

  19. Maximum Entropy Inferences on the Axion Mass in Models with Axion-Neutrino Interaction

    NASA Astrophysics Data System (ADS)

    Alves, Alexandre; Dias, Alex Gomes; da Silva, Roberto

    2017-08-01

    In this work, we use the maximum entropy principle (MEP) to infer the mass of an axion which interacts with photons and neutrinos in an effective low energy theory. The Shannon entropy function to be maximized is defined in terms of the axion branching ratios. We show that MEP strongly constrains the axion mass, taking into account the current experimental bounds on the neutrino masses. Assuming that the axion is massive enough to decay into all three neutrinos and that MEP fixes all the free parameters of the model, the inferred axion mass is in the interval 0.1 eV < m_A < 0.2 eV, which can be tested by forthcoming experiments such as IAXO. However, even in the case where MEP fixes just the axion mass and no other parameter, we find that 0.1 eV < m_A < 6.3 eV in the DFSZ model with right-handed neutrinos. Moreover, a light axion, allowed to decay to photons and the lightest neutrino only, is determined by MEP to be a viable dark matter candidate.

  20. Tissue Radiation Response with Maximum Tsallis Entropy

    SciTech Connect

    Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J. C.; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  1. Tissue radiation response with maximum Tsallis entropy.

    PubMed

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
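
    The limit relationship between the two survival laws can be checked numerically. The sketch below uses the cutoff form f(D) = (1 - D/D0)^gamma for D < D0 (zero beyond) and shows it approaching the Boltzmann-Gibbs exponential exp(-alpha*D) as the cutoff dose D0 recedes with gamma = alpha*D0; the parameter values are invented for illustration, not fitted to any tumor data.

```python
import numpy as np

# Tsallis-derived survival fraction with a dose cutoff D0:
#   f(D) = (1 - D/D0)**gamma  for D < D0,  0 beyond.
# With gamma = alpha*D0, it tends to the Boltzmann-Gibbs limit exp(-alpha*D)
# as D0 -> infinity. Parameter values are illustrative only.
def survival_tsallis(D, D0, gamma):
    frac = np.clip(1.0 - np.asarray(D, dtype=float) / D0, 0.0, None)
    return frac ** gamma

alpha = 0.5
D = np.linspace(0.0, 10.0, 6)
for D0 in (20.0, 200.0, 2000.0):
    err = np.max(np.abs(survival_tsallis(D, D0, alpha * D0) - np.exp(-alpha * D)))
    print(D0, err)       # discrepancy shrinks as the cutoff recedes
```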

  2. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks

    NASA Astrophysics Data System (ADS)

    Merchan, Lina; Nemenman, Ilya

    2016-03-01

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p>2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of densely interacting networks in certain regimes, and not necessarily as a special property of living systems. By connecting our analysis to the theory of random constraint satisfaction problems, we suggest a reason for why some biological systems may operate in this regime.
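
    For a small network, the pairwise maximum entropy (Ising) model discussed here can be fitted exactly by enumerating all states and matching means and pairwise correlations. The sketch below does this for a synthetic 5-spin system (target moments generated from a known model, then recovered); it is a toy demonstration of the fitting procedure, not the paper's p-spin simulations.

```python
import numpy as np
from itertools import product

# Exact pairwise maxent (Ising) fit for a synthetic 5-spin system: enumerate
# all 2**N states, generate target moments from a known (h_true, J_true),
# then recover the fields and couplings by moment-matching gradient ascent.
N = 5
states = np.array(list(product([-1.0, 1.0], repeat=N)))   # all 32 states

def moments(h, J):
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max()); p /= p.sum()
    return states.T @ p, (states.T * p) @ states          # <s_i>, <s_i s_j>

rng = np.random.default_rng(2)
h_true = 0.3 * rng.standard_normal(N)
J_true = 0.2 * rng.standard_normal((N, N))
J_true = J_true + J_true.T                                # symmetric couplings
np.fill_diagonal(J_true, 0.0)
m_t, C_t = moments(h_true, J_true)

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(3000):
    m, C = moments(h, J)
    h += 0.2 * (m_t - m)                                  # match means
    J += 0.2 * (C_t - C)                                  # match correlations

print(np.round(np.abs(h - h_true).max(), 4))              # fields recovered
```

For larger networks the enumeration is replaced by Monte Carlo estimates of the model moments, but the moment-matching update is the same.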

  3. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    SciTech Connect

    Urniezius, Renaldas

    2011-03-14

    The principle of maximum relative entropy optimization was analyzed for dead reckoning localization of a rigid body from observation data collected by two attached accelerometers. Model constraints were derived from the relationships between the sensors. The experiment's results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time-series data. Data from an autocalibration experiment were revisited, removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach can be used for online dead reckoning localization.

  4. On the sufficiency of pairwise interactions in maximum entropy models of networks

    NASA Astrophysics Data System (ADS)

    Nemenman, Ilya; Merchan, Lina

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of some densely interacting networks in certain regimes, and not necessarily as a special property of living systems. This work was supported in part by James S. McDonnell Foundation Grant No. 220020321.

  5. Maximum entropy principle for transportation

    SciTech Connect

    Bilich, F.; Da Silva, R.

    2008-11-06

    In this work we deal with the modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i depends on the probability of a trip ending at destination j, given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived from conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, they encode information that cannot be expressed in the form of constraints for practical reasons, namely computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as the allowance of supersonic flights inside the U.S. and user surcharges at noise-impacted airports) on air travel are performed.
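
    The standard formulation referred to above (objective function with constraints) has a classic concrete instance: the doubly-constrained, entropy-maximizing trip matrix T_ij = a_i b_j exp(-beta c_ij), balanced so row sums match origin totals and column sums match destination totals. The sketch below uses an invented three-zone example; the paper's dependence formulation replaces these explicit constraints with dependence coefficients.

```python
import numpy as np

# Doubly-constrained entropy-maximizing trip distribution (synthetic data):
#   T_ij = a_i * b_j * exp(-beta * c_ij)
# with sum_j T_ij = O_i and sum_i T_ij = D_j, solved by Furness balancing.
O = np.array([400.0, 300.0, 300.0])      # trips produced by each origin zone
D = np.array([350.0, 350.0, 300.0])      # trips attracted by each destination
c = np.array([[1.0, 3.0, 5.0],
              [3.0, 1.0, 3.0],
              [5.0, 3.0, 1.0]])          # generalized travel costs
beta = 0.4                               # cost-sensitivity parameter
K = np.exp(-beta * c)
a = np.ones(3)
for _ in range(200):                     # iterative proportional fitting
    b = D / (K.T @ a)
    a = O / (K @ b)
T = a[:, None] * K * b[None, :]
print(np.round(T, 1))                    # balanced trip matrix
```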

  6. Economics and Maximum Entropy Production

    NASA Astrophysics Data System (ADS)

    Lorenz, R. D.

    2003-04-01

    Price differentials, sales volume, and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws, and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  7. Modeling Non-Equilibrium Dynamics of a Discrete Probability Distribution: General Rate Equation for Maximal Entropy Generation in a Maximum-Entropy Landscape with Time-Dependent Constraints

    NASA Astrophysics Data System (ADS)

    Beretta, Gian P.

    2008-09-01

    A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
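
    A minimal numerical sketch of such a rate equation, with energy levels and initial state invented for illustration: the entropy gradient is projected onto the subspace that conserves normalization and a mean "energy", so the distribution relaxes along the steepest-ascent path to the Gibbs/maximum-entropy state compatible with both linear constraints.

```python
import numpy as np

# Steepest entropy ascent, sketched for a four-state distribution: follow the
# gradient of S = -sum p ln p projected so that sum(p) and sum(p*eps) are
# conserved at every step. Levels and initial state are invented.
eps = np.array([0.0, 1.0, 2.0, 3.0])          # energy levels
p = np.array([0.7, 0.05, 0.05, 0.2])          # initial distribution
A = np.stack([np.ones_like(eps), eps])        # constraint gradients (rows)
dt = 0.01
for _ in range(20000):
    g = -(np.log(p) + 1.0)                    # entropy gradient
    lam = np.linalg.solve(A @ A.T, A @ g)     # Lagrange multipliers
    p = p + dt * (g - A.T @ lam)              # projected ascent step
print(np.round(p, 3))                         # Gibbs-like: p_i ~ exp(-beta*eps_i)
```

Because the constraints are linear, each Euler step conserves them exactly, and the fixed point is the exponential-family distribution the abstract describes.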

  8. A Maximum Entropy Model of the Bearded Capuchin Monkey Habitat Incorporating Topography and Spectral Unmixing Analysis

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.

    2012-07-01

    Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate the environmental features influencing movement patterns and to predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral unmixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the study area was geometrically and atmospherically corrected and used as input in a Minimum Noise Fraction (MNF) procedure, and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our model's predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and ease of movement.

  9. Revisiting the global surface energy budgets with maximum-entropy-production model of surface heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng

    2016-10-01

    The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The MEP model predicted surface fluxes automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradient, wind speed and surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation, temperature data from National Aeronautics and Space Administration-Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.

  10. Revisiting the global surface energy budgets with maximum-entropy-production model of surface heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng

    2017-09-01

    The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The MEP model predicted surface fluxes automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradient, wind speed and surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation, temperature data from National Aeronautics and Space Administration-Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.

  11. [Prediction of potential geographic distribution of Lyme disease in Qinghai province with Maximum Entropy model].

    PubMed

    Zhang, Lin; Hou, Xuexia; Liu, Huixin; Liu, Wei; Wan, Kanglin; Hao, Qin

    2016-01-01

    To predict the potential geographic distribution of Lyme disease in Qinghai by using the Maximum Entropy model (MaxEnt). The sero-diagnosis data of Lyme disease in 6 counties (Huzhu, Zeku, Tongde, Datong, Qilian and Xunhua) and environmental and anthropogenic data, including altitude, human footprint, normalized difference vegetation index (NDVI) and temperature in Qinghai province since 1990, were collected. Using the data from Huzhu, Zeku and Tongde, the potential distribution of Lyme disease in Qinghai was predicted with MaxEnt. The prediction results were compared with the human sero-prevalence of Lyme disease in Datong, Qilian and Xunhua counties. Three hot spots of Lyme disease were predicted in Qinghai, all in the eastern forest areas. NDVI played the most important role in the model prediction, followed by human footprint. Datong, Qilian and Xunhua counties are all in eastern Qinghai: Xunhua lies in hot spot area Ⅱ, Datong is close to the north of hot spot area Ⅲ, while Qilian, with the lowest sero-prevalence of Lyme disease, is not in any hot spot area. The data were well modeled by MaxEnt (Area Under Curve = 0.980). The actual distribution of Lyme disease in Qinghai was consistent with the model prediction. MaxEnt could be used to predict the potential distribution patterns of Lyme disease. The distribution of vegetation and the range and intensity of human activity might be related to the Lyme disease distribution.

  12. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1987-01-01

    A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification is presented, based on the Maximum Entropy (ME) principle adopted in Hyland's method. Using ORACLS, a computer program was implemented for this purpose. Four models were then tested and the results are presented.

  13. Zipf's law, power laws and maximum entropy

    NASA Astrophysics Data System (ADS)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
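The single-constraint derivation can be sketched numerically: maximizing Shannon entropy with only the mean of log k fixed yields a pure power law p_k ∝ k^(-alpha), with alpha playing the role of a Lagrange multiplier. A minimal sketch, in which the cutoff kmax and the target mean are illustrative choices, not values from the paper:

```python
import math

def zipf_pmf(alpha, kmax):
    """Maximum-entropy distribution with <log k> fixed: p_k proportional to k^(-alpha)."""
    w = [k ** -alpha for k in range(1, kmax + 1)]
    z = sum(w)
    return [x / z for x in w]

def mean_log(p):
    """The constrained observable: average of log k under p."""
    return sum(pk * math.log(k) for k, pk in enumerate(p, start=1))

def solve_alpha(target, kmax, lo=0.01, hi=10.0, tol=1e-10):
    """Bisection on alpha: <log k> decreases monotonically as alpha grows."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_log(zipf_pmf(mid, kmax)) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = solve_alpha(1.0, 10_000)
p = zipf_pmf(alpha, 10_000)
print(round(mean_log(p), 6))  # ≈ 1.0, matching the imposed constraint
```

The exponent is thus fixed entirely by the single constraint value, which is the simplification the paper argues for over the RGF cost function.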

  14. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    PubMed

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapirs' native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species, indeed, has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia.
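At its core, a MaxEnt distribution model assigns each grid cell a Gibbs weight, p(cell) ∝ exp(λ · f(cell)), with the coefficients λ fitted to occurrence records. A toy sketch with hypothetical features and weights (not the study's fitted model):

```python
import math

# Hypothetical environmental features per grid cell: (elevation in km, forest cover 0-1).
cells = {
    "lowland_forest": (0.1, 0.9),
    "hill_forest":    (0.6, 0.8),
    "oil_palm":       (0.1, 0.2),
    "montane":        (1.8, 0.6),
}
weights = (-1.2, 3.0)  # hypothetical fitted MaxEnt coefficients

def raw(features):
    """Unnormalized Gibbs weight exp(lambda . f)."""
    return math.exp(sum(w * f for w, f in zip(weights, features)))

z = sum(raw(f) for f in cells.values())
suitability = {name: raw(f) / z for name, f in cells.items()}
for name, p in sorted(suitability.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {p:.3f}")
```

With these made-up weights, heavily forested lowland cells score highest, illustrating how fitted coefficients translate environmental layers into a relative suitability surface.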

  15. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity

    PubMed Central

    Marcatili, Paolo; Pagnani, Andrea

    2016-01-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models. PMID:27074145
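A scoring function of the kind described, a pairwise maximum-entropy (Potts-like) model with single-site fields and pairwise couplings, can be sketched as follows. The alphabet, sequence length and parameters here are random placeholders; in the paper they are inferred from repertoire sequences:

```python
import random

ALPHABET = "ACDE"  # toy 4-letter alphabet standing in for amino acids
L = 4              # toy sequence length
random.seed(0)
# Hypothetical fields h_i(a) and couplings J_ij(a, b); the real model infers
# these by maximum entropy from the sequenced immune repertoire.
h = [{a: random.gauss(0, 1) for a in ALPHABET} for _ in range(L)]
J = {(i, j): {(a, b): random.gauss(0, 0.3) for a in ALPHABET for b in ALPHABET}
     for i in range(L) for j in range(i + 1, L)}

def score(seq):
    """Pairwise maximum-entropy score; higher means more repertoire-like."""
    s = sum(h[i][seq[i]] for i in range(L))
    s += sum(J[(i, j)][(seq[i], seq[j])]
             for i in range(L) for j in range(i + 1, L))
    return s

candidates = ["ACDE", "AAAA", "EDCA", "CCEE"]
ranked = sorted(candidates, key=score, reverse=True)
print(ranked[0], round(score(ranked[0]), 3))
```

The paper's contribution is correlating such a score with measured neutralization titers; the ranking step itself is just an evaluation of the inferred energy function.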

  16. A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition

    PubMed Central

    2016-01-01

    Research in video-based facial expression recognition (FER) systems has exploded in the past decade. However, most previous methods work well only when trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and the physical characteristics of the subjects differ from one dataset to another. Considering a single dataset keeps the variance resulting from these differences to a minimum. A robust FER system that works across several datasets is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. The major contribution is the recognition module, which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, human expressions are modeled as the states of an MEMM, with the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm generates the most probable expression-state sequence from these observations. Lastly, an algorithm predicts the expression from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets. PMID:27635654
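The decoding step can be illustrated with a toy MEMM: a conditional distribution P(state | previous state, observation) plus a Viterbi pass over it. The transition scores below are hypothetical stand-ins for the learned maximum-entropy model:

```python
import math

STATES = ["neutral", "happy", "surprised"]

def memm_prob(prev, state, obs):
    """Toy stand-in for the learned model P(state | prev, obs); a real MEMM
    computes exp(w . f(state, prev, obs)) normalized over states."""
    def raw(s):
        return (2.0 if obs == s else 0.1) + (1.0 if prev == s else 0.5)
    return raw(state) / sum(raw(s) for s in STATES)

def viterbi(observations, init="neutral"):
    """Most probable expression-state sequence under the toy MEMM."""
    paths = {init: (0.0, [])}  # state -> (log-prob, best path so far)
    for obs in observations:
        nxt = {}
        for s in STATES:
            lp, prev = max(
                (p[0] + math.log(memm_prob(q, s, obs)), q) for q, p in paths.items())
            nxt[s] = (lp, paths[prev][1] + [s])
        paths = nxt
    return max(paths.values(), key=lambda v: v[0])[1]

print(viterbi(["neutral", "happy", "happy", "surprised"]))
```

Because each transition distribution is locally normalized given the observation, decoding is a straightforward dynamic program, which is what distinguishes an MEMM from a generative HMM.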

  17. Predictive Modeling and Mapping of Malayan Sun Bear (Helarctos malayanus) Distribution Using Maximum Entropy

    PubMed Central

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping geographical distributions and potentially suitable habitats is species distribution modeling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal threshold of bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear's potentially suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population. PMID:23110182

  18. Computational design of hepatitis C vaccines using maximum entropy models and population dynamics

    NASA Astrophysics Data System (ADS)

    Hart, Gregory; Ferguson, Andrew

    Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.

  19. Application of Maximum Entropy principle to modeling torsion angle probability distribution in proteins

    NASA Astrophysics Data System (ADS)

    Rowicka, Małgorzata; Otwinowski, Zbyszek

    2004-04-01

    Using the Maximum Entropy principle, we find probability distribution of torsion angles in proteins. We estimate parameters of this distribution numerically, by implementing the conjugate gradient method in Polak-Ribiere variant. We investigate practical approximations of the theoretical distribution. We discuss the information content of these approximations and compare them with standard histogram method. Our data are pairs of main chain torsion angles for a selected subset of high resolution non-homologous protein structures from Protein Data Bank.

  20. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
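The iterative choice of the most environmentally dissimilar site can be sketched as a greedy max-min search in standardized environmental space. This is a simplification of the MaxEnt-based procedure, and the site features below are hypothetical:

```python
# Greedy sketch: repeatedly pick the candidate site most dissimilar
# (max-min squared Euclidean distance in standardized environmental space)
# from the sites already chosen.
def standardize(sites):
    """Z-score each environmental dimension across all candidate sites."""
    dims = len(next(iter(sites.values())))
    cols = list(zip(*sites.values()))
    means = [sum(c) / len(c) for c in cols]
    sds = [max((sum((x - m) ** 2 for x in c) / len(c)) ** 0.5, 1e-9)
           for c, m in zip(cols, means)]
    return {k: tuple((v[i] - means[i]) / sds[i] for i in range(dims))
            for k, v in sites.items()}

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_sites(sites, n):
    z = standardize(sites)
    # start with the most environmentally extreme site
    chosen = [max(z, key=lambda k: dist2(z[k], (0,) * len(z[k])))]
    while len(chosen) < n:
        rest = [k for k in z if k not in chosen]
        chosen.append(max(rest, key=lambda k: min(dist2(z[k], z[c]) for c in chosen)))
    return chosen

# site -> (mean annual temperature °C, precipitation mm, elevation m); made-up values
sites = {"A": (10, 400, 1600), "B": (12, 450, 1500), "C": (18, 900, 300),
         "D": (17, 850, 350), "E": (14, 600, 900)}
print(select_sites(sites, 3))
```

Each iteration adds the site that expands the covered environmental envelope the most, mirroring the paper's stepwise capture of environmental variability.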

  1. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman Filter method is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of data in parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the groundwater model MODFLOW, implementing both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach may extend the conventional filtering method while also considering the uncertainty of data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides ideal real-time parameter estimates.
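For context, the real-time calibration idea behind the Kalman branch of this comparison reduces, in the scalar case, to a two-line predict/correct recursion. A minimal sketch with synthetic well observations (all numbers hypothetical):

```python
# Minimal scalar Kalman filter: predict the parameter with a random-walk
# model, then correct it with a noisy well measurement.
def kalman_step(x, P, z, Q=0.01, R=0.25):
    """One predict/correct cycle. Q: process noise, R: measurement noise."""
    x_pred, P_pred = x, P + Q          # random-walk forecast of the parameter
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # blend forecast with measurement z
    return x_new, (1 - K) * P_pred

x, P = 0.0, 1.0                        # initial guess and its variance
for z in [1.2, 0.9, 1.1, 1.0]:         # synthetic well observations near 1.0
    x, P = kalman_step(x, P, z)
print(round(x, 3), round(P, 3))
```

Each measurement pulls the estimate toward the data while shrinking its variance; the BME filter discussed in the abstract generalizes this by additionally admitting soft (uncertain) data.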

  2. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    NASA Astrophysics Data System (ADS)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time needed to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit from a single transaction, because gross profit/income is the adopted/recommended benchmark. To investigate activities with different durations we define, following queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has the unique property of attaining its maximum at a fixed point, regardless of the shape of the demand curves, for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes the profit of a trader who negotiates prices with the Rest of the World (a collective opponent) possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World, and in extreme cases questions the very idea of a demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market except for his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. 
As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  3. Maximum Entropy Production Modeling of Evapotranspiration Partitioning on Heterogeneous Terrain and Canopy Cover: advantages and limitations.

    NASA Astrophysics Data System (ADS)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.

    2015-12-01

    Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge using traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB) method. MEP-ET transpiration shows remarkable agreement with sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows plot-scale quantification of evaporation and tree-scale quantification of transpiration. This study confirms for the first time that the MEP-ET model, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.

  4. Maximum entropy spectral analysis for streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Cui, Huijuan; Singh, Vijay P.

    2016-01-01

    Configurational entropy spectral analysis (CESAS) is developed with spectral power as a random variable for streamflow forecasting. It is found that the CESAS derived by maximizing the configurational entropy yields the same solution as the Burg entropy spectral analysis (BESA). Comparison of streamflows forecasted by CESAS and BESA shows less than 0.001% difference between the two analyses, and thus the two entropy spectral analyses are concluded to be identical. The Burg entropy spectral analysis and the two configurational entropy spectral analyses together form the maximum entropy spectral analysis.
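Burg's method, the computational core of BESA, fits an all-pole (autoregressive) model by minimizing combined forward and backward prediction errors; the maximum-entropy spectrum then follows from the AR coefficients. A compact sketch on a synthetic AR(1) series standing in for streamflow data:

```python
import cmath
import random

def burg(x, order):
    """Burg's method: AR coefficients and prediction-error power for a
    maximum-entropy (all-pole) spectral estimate of the series x."""
    n = len(x)
    f, b = x[:], x[:]                      # forward / backward prediction errors
    a = []                                 # AR coefficients
    e = sum(v * v for v in x) / n          # prediction-error power
    for m in range(1, order + 1):
        num = 2 * sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = num / den                      # reflection coefficient
        a = [a[i] - k * a[m - 2 - i] for i in range(m - 1)] + [k]
        e *= 1 - k * k
        for i in range(n - 1, m - 1, -1):  # tuple assignment keeps old values on RHS
            f[i], b[i] = f[i] - k * b[i - 1], b[i - 1] - k * f[i]
    return a, e

def spectrum(a, e, w):
    """Maximum-entropy power spectrum at angular frequency w."""
    denom = 1 - sum(ai * cmath.exp(-1j * w * (i + 1)) for i, ai in enumerate(a))
    return e / abs(denom) ** 2

random.seed(1)
x = [0.0]
for _ in range(2000):                      # synthetic AR(1) series, true coefficient 0.9
    x.append(0.9 * x[-1] + random.gauss(0, 1))
a, e = burg(x, order=1)
print(round(a[0], 3))                      # should be close to 0.9
```

The fitted coefficients also drive the forecasting step: the same AR model used for the spectrum extrapolates the series forward.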

  5. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  6. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  7. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has been widely used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds on the skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to achieve consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess error variance, in the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. 
In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
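The skewness and kurtosis diagnostics applied to innovations can be computed directly from samples. A small sketch contrasting Gaussian errors with positively skewed (lognormal) ones, on synthetic data:

```python
import math
import random

def moments(xs):
    """Sample skewness and excess kurtosis: the non-Gaussianity measures
    applied to innovations (observation minus background)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    skew = sum((x - m) ** 3 for x in xs) / n / var ** 1.5
    kurt = sum((x - m) ** 4 for x in xs) / n / var ** 2 - 3
    return skew, kurt

random.seed(0)
gaussian = [random.gauss(0, 1) for _ in range(20000)]
# lognormal: positive-valued and right-skewed, like moisture-type observables
lognormal = [math.exp(random.gauss(0, 0.5)) for _ in range(20000)]
print([round(v, 2) for v in moments(gaussian)])   # both near 0 for Gaussian errors
print([round(v, 2) for v in moments(lognormal)])  # clearly positive skewness
```

Departures of these two statistics from zero are what signal, in the abstract's framework, that the BLUE is sub-optimal relative to the MVUE.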

  8. Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models

    NASA Astrophysics Data System (ADS)

    Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.

    2015-11-01

    Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.

  9. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  10. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    PubMed

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the day:night temperature contrast observed on the extrasolar planet HD 189733b.
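The two-box MEP calculation itself is simple enough to sketch: choose the inter-box heat flow that maximizes entropy production subject to each box's energy balance. The absorbed-shortwave values below are illustrative round numbers, not those of any planet in the paper:

```python
# Grid-search sketch of the two-box MEP idea: choose the equator-to-pole heat
# flow F (W m^-2) that maximizes entropy production F*(1/T_pole - 1/T_eq),
# with each box in blackbody energy balance (absorbed solar = emitted + export).
SB = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
S_EQ, S_POLE = 300.0, 150.0  # absorbed shortwave in each box, W m^-2 (illustrative)

def temps(flow):
    """Box temperatures from energy balance given the meridional flow."""
    t_eq = ((S_EQ - flow) / SB) ** 0.25
    t_pole = ((S_POLE + flow) / SB) ** 0.25
    return t_eq, t_pole

def entropy_production(flow):
    t_eq, t_pole = temps(flow)
    return flow * (1 / t_pole - 1 / t_eq)

f_mep = max((f * 0.1 for f in range(751)), key=entropy_production)
t_eq, t_pole = temps(f_mep)
print(f"F={f_mep:.1f} W/m2, T_eq={t_eq:.1f} K, T_pole={t_pole:.1f} K")
```

Entropy production vanishes both at zero transport and at the transport that equalizes the boxes, so the MEP state sits at an intermediate flow, which is the selection principle the abstract tests against observations.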

  12. Analyzing Trade Dynamics from Incomplete Data in Spatial Regional Models: a Maximum Entropy Approach

    NASA Astrophysics Data System (ADS)

    Papalia, Rosa Bernardini

    2008-11-01

    Flow data are viewed as cross-classified data, and spatial interaction models are reformulated as log-linear models. According to this view, we introduce a spatial panel data model and derive a Generalized Maximum Entropy-based estimation approach. The estimator has the advantage of being consistent with the underlying data generation process and, where available, with restrictions implied by non-sample information or past empirical evidence, while also controlling for collinearity and endogeneity problems.

  13. Statistical optimization for passive scalar transport: maximum entropy production versus maximum Kolmogorov-Sinai entropy

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Faranda, D.; Dubrulle, B.; Paillard, D.

    2015-03-01

    We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy for a Markov model of the passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of a parameter f connected to the jump probability, admit a unique maximum denoted fmaxEP and fmaxKS. The behaviour of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this paper is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists working on maximum entropy production (N ≍ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed to the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and of the optimal number of degrees of freedom (resolution) to describe the system.

  14. Maximum entropy production - Full steam ahead

    NASA Astrophysics Data System (ADS)

    Lorenz, Ralph D.

    2012-05-01

    The application of a principle of Maximum Entropy Production (MEP, or less ambiguously MaxEP) to planetary climate is discussed. This idea suggests that if sufficiently free of dynamical constraints, the atmospheric and oceanic heat flows across a planet may conspire to maximize the generation of mechanical work, or entropy. Thermodynamic and information-theoretic aspects of this idea are discussed. These issues are also discussed in the context of dust devils, convective vortices found in strongly-heated desert areas.

  15. Maximum Entropy-Based Ecological Niche Model and Bio-Climatic Determinants of Lone Star Tick (Amblyomma americanum) Niche

    PubMed Central

    Raghavan, Ram K.; Goodin, Douglas G.; Hanzlicek, Gregg A.; Zolnerowich, Gregory; Dryden, Michael W.; Anderson, Gary A.; Ganta, Roman R.

    2016-01-01

    Abstract The potential distribution of Amblyomma americanum ticks in Kansas was modeled using maximum entropy (MaxEnt) approaches based on museum and field-collected species occurrence data. Various bioclimatic variables were used in the model as potentially influential factors affecting the A. americanum niche. Following reduction of dimensionality among predictor variables using principal components analysis, which revealed that the first two principal axes explain over 87% of the variance, the model indicated that suitable conditions for this medically important tick species cover a larger area in Kansas than currently believed. Soil moisture, temperature, and precipitation were highly correlated with the first two principal components and were influential factors in the A. americanum ecological niche. Assuming that the niche estimated in this study covers the occupied distribution, which needs to be further confirmed by systematic surveys, human exposure to this known disease vector may be considerably under-appreciated in the state. PMID:26824880

  16. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  17. Maximum entropy spherical deconvolution for diffusion MRI.

    PubMed

    Alexander, Daniel C

    2005-01-01

    This paper proposes a maximum entropy method for spherical deconvolution. Spherical deconvolution arises in various inverse problems. This paper uses the method to reconstruct the distribution of microstructural fibre orientations from diffusion MRI measurements. Analysis shows that the PASMRI algorithm, one of the most accurate diffusion MRI reconstruction algorithms in the literature, is a special case of the maximum entropy spherical deconvolution. Experiments compare the new method to linear spherical deconvolution, used previously in diffusion MRI, and to the PASMRI algorithm. The new method compares favourably both in simulation and on standard brain-scan data.

  18. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

    The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability of T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas.

  19. Deep-sea benthic megafaunal habitat suitability modelling: A global-scale maximum entropy model for xenophyophores

    NASA Astrophysics Data System (ADS)

    Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.

    2014-12-01

    Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of

  20. Maximum entropy analysis of flow networks

    NASA Astrophysics Data System (ADS)

    Niven, Robert K.; Abel, Markus; Schlegel, Michael; Waldrip, Steven H.

    2014-12-01

    This study examines a generalised maximum entropy (MaxEnt) analysis of a flow network, involving flow rates and potential differences on the network, connected by resistance functions. The analysis gives a generic derivation based on an explicit form of the resistance functions. Accounting for the constraints also leads to an extended form of Gibbs' phase rule, applicable to flow networks.

  1. Maximum entropy analysis of hydraulic pipe networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2014-12-01

    A Maximum Entropy (MaxEnt) method is developed to infer mean external and internal flow rates and mean pressure gradients (potential differences) in hydraulic pipe networks, with or without sufficient constraints to render the system deterministic. The proposed method substantially extends existing methods for the analysis of flow networks (e.g. Hardy-Cross), which are applicable only to deterministic networks.

  2. Maximum entropy modeling risk of anthrax in the Republic of Kazakhstan.

    PubMed

    Abdrakhmanov, S K; Mukhanbetkaliyev, Y Y; Korennoy, F I; Sultanov, A A; Kadyrov, A S; Kushubaev, D B; Bakishev, T G

    2017-09-01

    The objective of this study was to zone the territory of the Republic of Kazakhstan (RK) into risk categories according to the probability of anthrax emergence in farm animals as stipulated by the re-activation of preserved natural foci. We used historical data on anthrax morbidity in farm animals during the period 1933 - 2014, collected by the veterinary service of the RK. The database covers the entire territory of the RK and contains 4058 anthrax outbreaks tied to 1798 unique locations. Considering the strongly pronounced natural focality of anthrax, we employed environmental niche modeling (Maxent) to reveal patterns in the outbreaks' linkages to specific combinations of environmental factors. The set of bioclimatic factors BIOCLIM, derived from remote sensing data, the altitude above sea level, the land cover type, the maximum green vegetation fraction (MGVF) and the soil type were examined as explanatory variables. The model demonstrated good predictive ability, while the MGVF, the bioclimatic variables reflecting precipitation level and humidity, and the soil type were found to contribute most significantly to the model. A continuous probability surface was obtained that reflects the suitability of the study area for the emergence of anthrax outbreaks. The surface was turned into a categorical risk map by averaging the probabilities within the administrative divisions at the 2nd level and putting them into four categories of risk, namely: low, medium, high and very high risk zones, where very high risk refers to more than 50% suitability to the disease re-emergence and low risk refers to less than 10% suitability. The map indicated increased risk of anthrax re-emergence in the districts along the northern, eastern and south-eastern borders of the country. 
It was recommended that the national veterinary service use the risk map for the development of contra-epizootic measures aimed at the prevention of anthrax re-emergence in historically affected regions of

  3. Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2017-04-01

    Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under the constraint on generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to generalized Pareto family.

  4. Soil Moisture and Vegetation Controls on Surface Energy Balance Using the Maximum Entropy Production Model of Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Wang, J.; Parolari, A.; Huang, S. Y.

    2014-12-01

    The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations of surface energy fluxes, computed from net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve the MEP model's predictions of ET under water stress, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress functions also provide a means to infer ecosystem influence on the land surface state. Challenges associated with sampling model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.

  5. Maximum entropy production principle for geostrophic turbulence

    NASA Astrophysics Data System (ADS)

    Sommeria, J.; Bouchet, F.; Chavanis, P. H.

    2003-04-01

    In 2D turbulence, complex stirring leads to the formation of steady organized states, once fine scale fluctuations have been filtered out. This self-organization can be explained in terms of statistical equilibrium for vorticity, as the most likely outcome of vorticity parcel rearrangements under the constraints of the conservation laws. A mixing entropy describing the vorticity rearrangements is introduced. Extension to the shallow water system has been proposed by Chavanis P.H. and Sommeria J. (2002), Phys. Rev. E. Generalization to multi-layer geostrophic flows is formally straightforward. Outside equilibrium, eddy fluxes should drive the system toward equilibrium, in the spirit of nonequilibrium linear thermodynamics. This can be formalized in terms of a principle of maximum entropy production (MEP), as shown by Robert and Sommeria (1991), Phys. Rev. Lett. 69. A parameterization of eddy fluxes is then obtained, involving an eddy diffusivity plus a drift term acting at larger scale. These two terms balance each other at equilibrium, resulting in a nontrivial steady flow, which is the mean state of the statistical equilibrium. Applications of this eddy parametrization will be presented in the context of oceanic circulation and Jupiter's Great Red Spot. Quantitative tests, obtained by comparison with direct numerical simulations, will be discussed. Kinetic models, inspired from plasma physics, provide a more precise description of the relaxation toward equilibrium, as shown by Chavanis P.H. 2000 ``Quasilinear theory of the 2D Euler equation'', Phys. Rev. Lett. 84. This approach provides relaxation equations with a form similar to the MEP, but not identical. In conclusion, the MEP provides the right trends of the system but its precise justification remains elusive.

  6. Joint Modeling of Multiple Social Networks to Elucidate Primate Social Dynamics: I. Maximum Entropy Principle and Network-Based Interactions

    PubMed Central

    Chan, Stephanie; Fushing, Hsieh; Beisner, Brianne A.; McCowan, Brenda

    2013-01-01

    In a complex behavioral system, such as an animal society, the dynamics of the system as a whole represent the synergistic interaction among multiple aspects of the society. We constructed multiple single-behavior social networks for the purpose of approximating from multiple aspects a single complex behavioral system of interest: rhesus macaque society. Instead of analyzing these networks individually, we describe a new method for jointly analyzing them in order to gain comprehensive understanding of the system dynamics as a whole. This method of jointly modeling multiple networks becomes a valuable analytical tool for studying the complex nature of the interaction among multiple aspects of any system. Here we develop a bottom-up, iterative modeling approach based upon the maximum entropy principle. This principle is applied to a multi-dimensional link-based distributional framework, which is derived by jointly transforming the multiple directed behavioral social network data, for extracting patterns of synergistic inter-behavioral relationships. Using a rhesus macaque group as a model system, we jointly modeled and analyzed four different social behavioral networks at two different time points (one stable and one unstable) from a rhesus macaque group housed at the California National Primate Research Center (CNPRC). We report and discuss the inter-behavioral dynamics uncovered by our joint modeling approach with respect to social stability. PMID:23468833

  7. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: An Application for Attainment Demonstration in North Carolina

    PubMed Central

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L.

    2010-01-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of non-attainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State’s ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the State. PMID:20590110

  8. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    PubMed

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.

  9. Maximum entropy production rate in quantum thermodynamics

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schrödinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible, well

  10. An adaptive meshfree method for phase-field models of biomembranes. Part I: Approximation with maximum-entropy basis functions

    NASA Astrophysics Data System (ADS)

    Rosolen, A.; Peco, C.; Arroyo, M.

    2013-09-01

    We present an adaptive meshfree method to approximate phase-field models of biomembranes. In such models, the Helfrich curvature elastic energy, the surface area, and the enclosed volume of a vesicle are written as functionals of a continuous phase-field, which describes the interface in a smeared manner. Such functionals involve up to second-order spatial derivatives of the phase-field, leading to fourth-order Euler-Lagrange partial differential equations (PDE). The solutions develop sharp internal layers in the vicinity of the putative interface, and are nearly constant elsewhere. Thanks to the smoothness of the local maximum-entropy (max-ent) meshfree basis functions, we approximate numerically this high-order phase-field model with a direct Ritz-Galerkin method. The flexibility of the meshfree method allows us to easily adapt the grid to resolve the sharp features of the solutions. Thus, the proposed approach is more efficient than common tensor product methods (e.g. finite differences or spectral methods), and simpler than unstructured C0 finite element methods, applicable by reformulating the model as a system of second-order PDE. The proposed method, implemented here under the assumption of axisymmetry, allows us to show numerical evidence of convergence of the phase-field solutions to the sharp interface limit as the regularization parameter approaches zero. In a companion paper, we present a Lagrangian method based on the approximants analyzed here to study the dynamics of vesicles embedded in a viscous fluid.

  11. Pareto versus lognormal: A maximum entropy test

    NASA Astrophysics Data System (ADS)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.
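    The paper's test is a maximum entropy procedure; as a much simpler point of comparison, a Pareto tail can be checked with the classical Hill estimator. The sketch below uses synthetic data with arbitrary parameter choices, not any of the datasets discussed:

```python
import numpy as np

# Synthetic Pareto sample via inverse-CDF sampling (alpha and x_min are
# arbitrary choices for the illustration, not values from the paper).
rng = np.random.default_rng(42)
alpha_true, x_min = 2.5, 1.0
u = rng.uniform(size=50_000)
sample = x_min * u ** (-1.0 / alpha_true)

def hill_estimator(x, x_min):
    """Maximum likelihood estimate of the Pareto tail index above x_min."""
    tail = x[x > x_min]
    return tail.size / np.sum(np.log(tail / x_min))

alpha_hat = hill_estimator(sample, x_min)
print(alpha_hat)  # close to the true index 2.5
```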

  12. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Schick, L. H.

    This volume contains a selection of papers presented at the Tenth Annual Workshop on Maximum Entropy and Bayesian Methods. The thirty-six papers included cover a wide range of applications in areas such as economics and econometrics, astronomy and astrophysics, general physics, complex systems, image reconstruction, and probability and mathematics. Together they give an excellent state-of-the-art overview of fundamental methods of data analysis.

  13. Systemic risk, maximum entropy and interbank contagion

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    2016-06-01

    We discuss the systemic risk implied by the interbank exposures reconstructed with the maximum entropy (ME) method. The ME method severely underestimates the risk of interbank contagion by assuming a fully connected network, while in reality the structure of the interbank network is sparsely connected. Here, we formulate an algorithm for sparse network reconstruction, and we show numerically that it provides a more reliable estimation of the systemic risk.
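    The dense maximum entropy benchmark criticized in this abstract is commonly computed with the RAS / iterative proportional fitting algorithm, which spreads exposures as uniformly as the row and column totals allow. A minimal sketch with hypothetical balance-sheet figures:

```python
import numpy as np

# Hypothetical interbank totals: each bank's interbank assets (row sums)
# and liabilities (column sums); the two totals are consistent by construction.
assets = np.array([30.0, 20.0, 25.0, 25.0])
liabilities = np.array([25.0, 25.0, 30.0, 20.0])

n = assets.size
X = np.ones((n, n)) - np.eye(n)  # uniform prior, zero diagonal (no self-loans)

for _ in range(500):  # RAS / iterative proportional fitting
    X *= (assets / X.sum(axis=1))[:, None]        # match row sums
    X *= (liabilities / X.sum(axis=0))[None, :]   # match column sums

# Every off-diagonal entry ends up positive: the reconstruction is fully
# connected, which is exactly the unrealistic density the abstract criticizes.
print(np.round(X, 2))
```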

  14. Automatic salient object detection via maximum entropy estimation.

    PubMed

    Chen, Xiao; Zhao, Hongwei; Liu, Pingping; Zhou, Baoyu; Ren, Weiwu

    2013-05-15

    This Letter proposes a rapid method for automatic salient object detection inspired by the idea that an image consists of redundant information and novelty fluctuations. We believe object detection can be achieved by removing the nonsalient parts and focusing on the salient object. Considering the relation between the composition of the image and the aim of object detection, we constructed what we believe is a more reliable saliency map to evaluate the image composition. The local energy feature is combined with a simple biologically inspired model (color, intensity, orientation) to strengthen the integrity of the object in the saliency map. We estimated the entropy of the object via the maximum entropy method. Then, we removed pixels of minimal intensity from the original image and computed the entropy of the resulting images, correlating this entropy with the object entropy. Our experimental results show that the algorithm outperforms state-of-the-art methods and is more suitable for real-time applications.

  15. Automatically quantifying the scientific quality and sensationalism of news records mentioning pandemics: validating a maximum entropy machine-learning model.

    PubMed

    Hoffman, Steven J; Justicz, Victoria

    2016-07-01

    To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them to the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65% of records, and the sensationalism of 73% of records, as compared to human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but coverage was not too sensationalizing. Coverage slightly improved between the two pandemics. Automated methods can evaluate news records faster, cheaper, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities. Copyright © 2016 Elsevier Inc. All rights reserved.
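    A maximum entropy classifier of this kind is equivalent to (multinomial) logistic regression. The sketch below trains a tiny binary version on hypothetical bag-of-words counts; it illustrates the model family only and is not the authors' implementation:

```python
import numpy as np

# Hypothetical bag-of-words counts for ["study", "evidence", "shocking",
# "panic"]; label 1 = scientifically sound, 0 = sensational.
X = np.array([[3, 2, 0, 0], [2, 3, 0, 1], [4, 1, 1, 0],
              [0, 0, 3, 2], [1, 0, 2, 3], [0, 1, 4, 1]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)

def predict_proba(X, w, b):
    """Maxent / logistic model: P(label = 1 | features)."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

w, b = np.zeros(4), 0.0
for _ in range(1000):  # plain gradient descent on the log-loss
    p = predict_proba(X, w, b)
    w -= 0.1 * (X.T @ (p - y))
    b -= 0.1 * np.sum(p - y)

pred = (predict_proba(X, w, b) > 0.5).astype(float)
print(pred)  # recovers the labels on this small separable set
```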

  16. Inferring global wind energetics from a simple Earth system model based on the principle of maximum entropy production

    NASA Astrophysics Data System (ADS)

    Karkar, S.; Paillard, D.

    2015-03-01

    The question of the total available wind power in the atmosphere is highly debated, as is the effect that large-scale wind farms would have on the climate. Bottom-up approaches, such as those proposed by wind turbine engineers, often lead to non-physical results (non-conservation of energy, mostly), while top-down approaches have proven to give physically consistent results. This paper proposes an original method for the calculation of mean annual wind energetics in the atmosphere, without resorting to heavy numerical integration of the entire dynamics. The proposed method is derived from a model based on the Maximum Entropy Production (MEP) principle, which has proven to efficiently describe the annual mean temperature and energy fluxes, despite its simplicity. Because the atmosphere is represented with only one vertical layer and there is no vertical wind component, the model fails to represent general circulation patterns such as cells or trade winds. However, interestingly, global energetic diagnostics are well captured by the mere combination of a simple MEP model and a flux inversion method.

  17. Maximum entropy modeling of invasive plants in the forests of Cumberland Plateau and Mountain Region

    Treesearch

    Dawn Lemke; Philip Hulme; Jennifer Brown; Wubishet. Tadesse

    2011-01-01

    As anthropogenic influences on the landscape change the composition of 'natural' areas, it is important that we apply spatial technology in active management to mitigate human impact. This research explores the integration of geographic information systems (GIS) and remote sensing with statistical analysis to assist in modeling the distribution of invasive...

  18. Predicting Changes in Macrophyte Community Structure from Functional Traits in a Freshwater Lake: A Test of Maximum Entropy Model

    PubMed Central

    Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te

    2015-01-01

    Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and abundance of 17 macrophyte species. Study results showed that maxent model can be highly robust (99.8%) in predicting the species relative abundance of macrophytes with observed community-weighted mean (CWM) traits as the constraints, while relative low (about 30%) with CWM traits fitted from water depth gradient as the constraints. The measured traits showed notably distinct importance in predicting species abundances, with lowest for perennial growth form and highest for leaf dry mass content. For tuber and leaf nitrogen content, there were significant shifts in their effects on species relative abundance from positive in shallow water to negative in deep water. This result suggests that macrophyte species with tuber organ and greater leaf nitrogen content would become more abundant in shallow water, but would become less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology. PMID:26167856
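    The constraint machinery described above can be sketched for a single trait: maximizing the entropy of species relative abundances subject to a fixed community-weighted mean (CWM) yields abundances exponential in the trait, with the Lagrange multiplier found here by bisection. The trait values below are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical trait values (e.g. leaf dry mass content) for five species,
# and an observed community-weighted mean to be matched.
traits = np.array([0.8, 1.1, 1.9, 2.4, 3.0])
cwm_target = 2.2

def abundances(lam):
    """Maximum entropy relative abundances are exponential in the trait."""
    p = np.exp(lam * traits)
    return p / p.sum()

def cwm(lam):
    return np.dot(abundances(lam), traits)

# cwm(lam) increases monotonically with lam, so bisection finds the
# Lagrange multiplier that matches the observed CWM.
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if cwm(mid) < cwm_target else (lo, mid)

p = abundances(0.5 * (lo + hi))
print(np.round(p, 3), np.dot(p, traits))  # abundances whose CWM is 2.2
```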

  19. Dynamics of the Anderson model for dilute magnetic alloys: A quantum Monte Carlo and maximum entropy study

    SciTech Connect

    Silver, R.N.; Gubernatis, J.E.; Sivia, D.S.; Jarrell, M.

    1990-01-01

    In this article we describe the results of a new method for calculating the dynamical properties of the Anderson model. Quantum Monte Carlo (QMC) simulation generates data about the Matsubara Green's functions in imaginary time. To obtain dynamical properties, one must analytically continue these data to real time. This is an extremely ill-posed inverse problem similar to the inversion of a Laplace transform from incomplete and noisy data. Our method is a general one, applicable to the calculation of dynamical properties from a wide variety of quantum simulations. We use Bayesian methods of statistical inference to determine the dynamical properties based on both the QMC data and any prior information we may have, such as sum rules, symmetry, high frequency limits, etc. This provides a natural means of combining perturbation theory and numerical simulations in order to understand dynamical many-body problems. Specifically we use the well-established maximum entropy (ME) method for image reconstruction. We obtain the spectral density and transport coefficients over the entire range of model parameters accessible by QMC, with data having much larger statistical error than required by other proposed analytic continuation methods.
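    The ill-posed inversion described here can be sketched in miniature: recover a non-negative spectrum A(ω) from noisy Laplace-transform data G(τ) = ∫ e^(−τω) A(ω) dω by minimizing χ²/2 − αS, where S is the entropy of A relative to a flat default model. Everything below (grids, noise level, the fixed α) is an invented toy, not the authors' setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
omega = np.linspace(0.01, 6, 60); dw = omega[1] - omega[0]
tau = np.linspace(0.05, 3, 30)
K = np.exp(-tau[:, None] * omega[None, :]) * dw     # discretised Laplace kernel

A_true = np.exp(-0.5 * ((omega - 2.0) / 0.4) ** 2)  # toy spectral density
sigma = 1e-3
G = K @ A_true + rng.normal(0, sigma, size=tau.size)  # noisy synthetic data

m = np.full_like(omega, A_true.mean())              # flat default model
alpha = 1.0                                         # fixed regularisation weight

def Q(u):                                           # A = exp(u) enforces positivity
    A = np.exp(u)
    chi2 = np.sum((K @ A - G) ** 2) / sigma ** 2
    S = np.sum(A - m - A * np.log(A / m)) * dw      # Shannon-Jaynes entropy (<= 0)
    return 0.5 * chi2 - alpha * S

res = minimize(Q, np.log(m), method="L-BFGS-B")
A_mem = np.exp(res.x)                               # entropy-regularised spectrum
```

    In the full Bayesian treatment α is not fixed by hand but inferred from the data; the toy keeps it constant for brevity.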

  20. Predicting the Current and Future Potential Distributions of Lymphatic Filariasis in Africa Using Maximum Entropy Ecological Niche Modelling

    PubMed Central

    Slater, Hannah; Michael, Edwin

    2012-01-01

    Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from west to east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk of LF may range from 543 to 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future depending on the climate scenario used and the thresholds applied to signify infection presence. PMID:22359670
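    Presence-only niche modelling of this kind is closely related to discriminating presence records from randomly sampled "background" points with a log-linear model. A toy sketch of that idea with one invented covariate (this is not the authors' data, nor the Maxent software itself):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Hypothetical single covariate (a temperature index) over the study region
background = rng.uniform(20, 40, size=2000)       # random background sample
presence = rng.normal(30, 2, size=300)            # presences cluster near an optimum

X = np.concatenate([presence, background])
y = np.concatenate([np.ones(presence.size), np.zeros(background.size)])

def feats(t):
    z = (t - 30.0) / 10.0                         # centre/scale chosen for this toy
    return np.column_stack([np.ones_like(z), z, z ** 2])

F = feats(X)

def nll(w):                                       # logistic-regression log-loss
    s = F @ w
    return np.sum(np.logaddexp(0.0, s)) - s[y == 1].sum()

w = minimize(nll, np.zeros(3), method="BFGS").x

def suit(t):                                      # relative habitat suitability
    s = feats(np.atleast_1d(np.asarray(t, dtype=float))) @ w
    return (1.0 / (1.0 + np.exp(-s)))[0]

assert suit(30) > suit(38)                        # suitability peaks at the optimum
```

    The quadratic feature lets the fitted response curve peak at an interior optimum, which is the qualitative behaviour niche models are designed to capture.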

  2. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has emerged as an epidemic infectious disease in Taiwan, especially in the southern area, where incidence is high every year. For disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, transmission of dengue fever is a complex interactive process, and its composite space-time effects have mostly been understated. This study develops a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 continuous weeks, associated with variation in dengue cases under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for spatio-temporal prediction of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  3. Maximum entropy deconvolution of low-count nuclear medicine images

    NASA Astrophysics Data System (ADS)

    McGrath, Deirdre Maria

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
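    For Poisson count data of this kind, the maximum-likelihood expectation-maximisation comparator mentioned in the abstract reduces to the well-known Richardson-Lucy iteration. A 1-D toy sketch (point sources, PSF, and counts are invented, not the thesis data):

```python
import numpy as np

rng = np.random.default_rng(4)
# 1-D toy: two point sources blurred by a Gaussian PSF, observed with Poisson noise
truth = np.zeros(64)
truth[20], truth[40] = 200.0, 100.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()

blur = lambda f: np.convolve(f, psf, mode="same")
data = rng.poisson(blur(truth)).astype(float)

# Richardson-Lucy / ML-EM iteration for Poisson-distributed counts
est = np.full(64, data.mean())
for _ in range(200):
    ratio = data / np.maximum(blur(est), 1e-12)
    est = est * np.convolve(ratio, psf[::-1], mode="same")   # psf[::-1]: the adjoint
```

    The multiplicative update keeps the estimate non-negative automatically, which is one reason EM-style iterations suit low-count data.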

  4. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Fougère, P. F.

    Bayesian probability theory and maximum entropy are the twin foundations of consistent inductive reasoning about the physical world. This volume contains thirty-two papers devoted to both foundations and applications, combining tutorial presentations with more research-oriented contributions. Together these provide a state-of-the-art account of the latest developments in such diverse areas as coherent imaging, regression analysis, tomography, neural networks, plasma theory, quantum mechanics, and others. The methods described will be of great interest to mathematicians, physicists, astronomers, crystallographers, engineers and those involved in all aspects of signal processing.

  5. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
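    The binary/non-binary relation described, extreme aggregate increments co-occurring with high sign coherence, is easy to reproduce in a toy one-factor model of returns (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 50                                  # days and 'stocks' (invented)
market = rng.normal(size=T)                      # common market mode
returns = 0.5 * market[:, None] + rng.normal(size=(T, N))

signs = np.sign(returns)                         # binary projection of increments
coherence = np.abs(signs.mean(axis=1))           # how aligned the signs are each day
magnitude = np.abs(returns.sum(axis=1))          # size of the aggregate increment

r = np.corrcoef(coherence, magnitude)[0, 1]      # strongly positive in a factor model
assert r > 0.5
```

    Days when most series move together are also the days with the largest aggregate moves, which is the nonlinear binary/non-binary relation the paper quantifies.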

  6. Predicting the spatiotemporal distributions of marine fish species utilizing earth system data in a maximum entropy modeling framework

    NASA Astrophysics Data System (ADS)

    Wang, L.; Kerr, L. A.; Bridger, E.

    2016-12-01

    Changes in species distributions have been widely associated with climate change. Understanding how ocean conditions influence marine fish distributions is critical for elucidating the role of climate in ecosystem change and forecasting how fish may be distributed in the future. Species distribution models (SDMs) enable estimation of the likelihood of encountering species in space or time as a function of environmental conditions. Traditional SDMs are applied to scientific-survey data that include both presences and absences. Maximum entropy (MaxEnt) models are promising tools as they can be applied to presence-only data, such as those collected from fisheries or citizen science programs. We used MaxEnt to relate occurrence records of marine fish species (e.g. Atlantic herring, Atlantic mackerel, and butterfish) from the NOAA Northeast Fisheries Observer Program to environmental conditions. Environmental variables from earth system data, such as sea surface temperature (SST), sea bottom temperature (SBT), Chlorophyll-a, bathymetry, the North Atlantic oscillation (NAO), and the Atlantic multidecadal oscillation (AMO), were matched with species occurrences for MaxEnt modeling of the fish distributions in the Northeast Shelf area. We developed habitat suitability maps for these species, and assessed the relative influence of environmental factors on their distributions. Overall, SST and Chlorophyll-a had the greatest influence on monthly distributions, with bathymetry and SBT having moderate influence and climate indices (NAO and AMO) having little influence. Across months, Atlantic herring distribution was most related to the SST 10th percentile, and Atlantic mackerel and butterfish distributions were most related to the previous month's SST. The fish distributions were most affected by the previous month's Chlorophyll-a in summer months, which may indirectly indicate the cumulative impact of primary productivity. Results highlighted the importance of spatial and temporal scales when using

  7. Maximum entropy method helps study multifractal processes

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2011-11-01

    Many natural phenomena exhibit scaling behavior, in which parts of the system resemble the whole. Topography is one example—in some landscapes, shapes seen on a small scale look similar to shapes seen at larger scales. Some processes with scaling behavior are multifractal processes, in which the scaling parameters are described by probability distributions. Nieves et al. show that a method known as the maximum entropy method, which has been applied in information theory and statistical mechanics, can be applied generally to study the statistics of multifractal processes. The authors note that the method, which could be applied to a wide variety of geophysical systems, makes it possible to infer information on multifractal processes even beyond scales where observations are available. (Geophysical Research Letters, doi:10.1029/2011GL048716, 2011)

  8. Traffic network and distribution of cars: Maximum-entropy approach

    SciTech Connect

    Das, N.C.; Chakrabarti, C.G.; Mazumder, S.K.

    2000-02-01

    An urban transport system plays a vital role in the modeling of the modern cosmopolis. Great emphasis is needed on the proper development of a transport system, particularly the traffic network and flow, to meet possible future demand. There are various mathematical models of traffic network and flow. The role of Shannon entropy in the modeling of traffic network and flow was stressed by Tomlin and Tomlin (1968) and Tomlin (1969). In the present note the authors study the role of the maximum-entropy principle in the solution of an important problem associated with traffic network flow. The maximum-entropy principle initiated by Jaynes is a powerful optimization technique for determining the distribution of a random system when only partial or incomplete information or data are available about the system. This principle has since been broadened and extended and has found wide applications in different fields of science and technology. In the present note the authors show how Jaynes' maximum-entropy principle, slightly modified, can be successfully applied to determine the flow or distribution of cars on different paths of a traffic network when incomplete information is available about the network.
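    The Jaynes procedure described here can be illustrated directly: given only the mean travel cost across a network, the maxent split of cars over paths is a Boltzmann distribution in path cost, with the multiplier fixed by the constraint. A toy sketch (path costs and the observed mean are invented):

```python
import numpy as np
from scipy.optimize import brentq

cost = np.array([10.0, 12.0, 15.0, 20.0])   # hypothetical path travel times
mean_cost = 12.5                            # the only aggregate datum assumed known

def gap(beta):
    p = np.exp(-beta * cost)
    p /= p.sum()
    return p @ cost - mean_cost             # constraint residual at multiplier beta

beta = brentq(gap, -5.0, 10.0)              # Lagrange multiplier for the constraint
p = np.exp(-beta * cost)
p /= p.sum()                                # maxent share of cars on each path
assert abs(p @ cost - mean_cost) < 1e-8
```

    Because the observed mean cost is below the uniform average here, the solved multiplier is positive and cheaper paths carry proportionally more cars.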

  9. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  10. BIOSMILE: A semantic role labeling system for biomedical verbs using a maximum-entropy model with automatically generated template features

    PubMed Central

    Tsai, Richard Tzong-Han; Chou, Wen-Chi; Su, Ying-Shan; Lin, Yu-Chun; Sung, Cheng-Lung; Dai, Hong-Jie; Yeh, Irene Tzu-Hsuan; Ku, Wei; Sung, Ting-Yi; Hsu, Wen-Lian

    2007-01-01

    Background Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE) systems that incorporate natural language processing (NLP) techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL) is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME) machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. Results To evaluate the performance of BIOSMILE, we conducted two experiments to (1) compare the performance of SRL systems trained on newswire and biomedical corpora; and (2) examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. Conclusion We demonstrate the necessity of using a biomedical proposition bank for training SRL systems in the
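    A maximum entropy model of the kind used here is, concretely, a multinomial logistic (softmax) classifier trained so that model feature expectations match the empirical ones. A minimal sketch with invented binary features and role labels (these are not the BioProp features):

```python
import numpy as np

# Invented binary features and role labels; columns might encode cues such as
# 'precedes verb', 'inside PP', 'has location word' (purely illustrative)
X = np.array([[1,0,0],[1,0,1],[0,1,0],[0,1,1],[0,0,1],[1,1,0]], dtype=float)
y = np.array([0, 0, 1, 1, 2, 0])            # 0=Arg0, 1=Arg1, 2=ArgM-LOC (illustrative)
K = 3
W = np.zeros((X.shape[1], K))

for _ in range(500):                        # gradient ascent on the log-likelihood
    scores = X @ W
    P = np.exp(scores - scores.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    grad = X.T @ (np.eye(K)[y] - P) / len(y)   # empirical minus model expectations
    W += 0.5 * grad

pred = (X @ W).argmax(axis=1)
assert (pred == y).all()                    # the toy training set is fit exactly
```

    The gradient is exactly "empirical feature counts minus expected feature counts", which is the defining constraint of a maximum entropy classifier.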

  11. Maximum entropy principle and partial probability weighted moments

    NASA Astrophysics Data System (ADS)

    Deng, Jian; Pandey, M. D.; Xie, W. C.

    2012-05-01

    The maximum entropy principle (MaxEnt) is usually used for estimating the probability density function under specified moment constraints. The density function is then integrated to obtain the cumulative distribution function, which needs to be inverted to obtain a quantile corresponding to some specified probability. In such analysis, consideration of higher order moments is important for accurate modelling of the distribution tail. There are three drawbacks to this conventional methodology: (1) Estimates of higher order (>2) moments from a small sample of data tend to be highly biased; (2) It can only cope with complete or non-censored samples; (3) Only probability weighted moments of integer orders have been utilized. These difficulties inevitably induce bias and inaccuracy in the resultant quantile estimates and have therefore been the main impediments to the application of the MaxEnt principle in extreme quantile estimation. This paper attempts to overcome these problems and presents a distribution-free method for estimating the quantile function of a non-negative random variable using the principle of maximum partial entropy subject to constraints on the partial probability weighted moments estimated from a censored sample. The main contributions include: (1) New concepts, i.e., partial entropy, fractional partial probability weighted moments, and the partial Kullback-Leibler measure, are elegantly defined; (2) The maximum entropy principle is re-formulated to be constrained by fractional partial probability weighted moments; (3) New distribution-free quantile functions are derived. Numerical analyses are performed to assess the accuracy of extreme value estimates computed from censored samples.

  12. Determining Dynamical Path Distributions using Maximum Relative Entropy

    DTIC Science & Technology

    2015-05-31

    MaxCal is just the Principle of Maximum Entropy (MaxEnt) where the constraints change in time. This simply amounts to an additional...

  13. Maximum entropy distribution of stock price fluctuations

    NASA Astrophysics Data System (ADS)

    Bartiromo, Rosario

    2013-04-01

    In this paper we propose to use the principle of absence of arbitrage opportunities in its entropic interpretation to obtain the distribution of stock price fluctuations by maximizing its information entropy. We show that this approach leads to a physical description of the underlying dynamics as a random walk characterized by a stochastic diffusion coefficient and constrained to a given value of the expected volatility, in this way taking into account the information provided by the existence of an option market. The model is validated by a comprehensive comparison with observed distributions of both price return and diffusion coefficient. Expected volatility is the only parameter in the model and can be obtained by analysing option prices. We give an analytic formulation of the probability density function for price returns which can be used to extract expected volatility from stock option data.
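    The construction the paper builds on, maximizing information entropy subject to a volatility constraint, can be checked numerically in its textbook form: constraining only E[x²] = σ² yields a Gaussian with multiplier β = 1/(2σ²). A grid-based sketch:

```python
import numpy as np
from scipy.optimize import brentq

x = np.linspace(-10, 10, 4001); dx = x[1] - x[0]
sigma2 = 1.0                                 # target second moment E[x^2]

def gap(beta):
    w = np.exp(-beta * x ** 2)
    p = w / (w.sum() * dx)                   # normalised maxent candidate
    return (p * x ** 2).sum() * dx - sigma2  # constraint residual

beta = brentq(gap, 1e-3, 100.0)              # Lagrange multiplier of the constraint
p = np.exp(-beta * x ** 2); p /= p.sum() * dx

gauss = np.exp(-x ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
assert abs(beta - 1 / (2 * sigma2)) < 1e-3   # maxent recovers the Gaussian form
```

    The paper's point is that constraining expected volatility rather than the second moment itself changes this picture, producing the fat-tailed return distributions seen in data; the sketch above only verifies the classical baseline.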

  14. A multiscale maximum entropy moment closure for locally regulated space-time point process models of population dynamics.

    PubMed

    Raghib, Michael; Hill, Nicholas A; Dieckmann, Ulf

    2011-05-01

    The prevalence of structure in biological populations challenges fundamental assumptions at the heart of continuum models of population dynamics based only on mean densities (local or global). Individual-based models (IBMs) were introduced during the last decade in an attempt to overcome this limitation by following explicitly each individual in the population. Although the IBM approach has been quite useful, the capability to follow each individual usually comes at the expense of analytical tract ability, which limits the generality of the statements that can be made. For the specific case of spatial structure in populations of sessile (and identical) organisms, space-time point processes with local regulation seem to cover the middle ground between analytical tract ability and a higher degree of biological realism. This approach has shown that simplified representations of fecundity, local dispersal and density-dependent mortality weighted by the local competitive environment are sufficient to generate spatial patterns that mimic field observations. Continuum approximations of these stochastic processes try to distill their fundamental properties, and they keep track of not only mean densities, but also higher order spatial correlations. However, due to the non-linearities involved they result in infinite hierarchies of moment equations. This leads to the problem of finding a 'moment closure'; that is, an appropriate order of (lower order) truncation, together with a method of expressing the highest order density not explicitly modelled in the truncated hierarchy in terms of the lower order densities. We use the principle of constrained maximum entropy to derive a closure relationship for truncation at second order using normalisation and the product densities of first and second orders as constraints, and apply it to one such hierarchy. The resulting 'maxent' closure is similar to the Kirkwood superposition approximation, or 'power-3' closure, but it is

  15. NOTE FROM THE EDITOR: Bayesian and Maximum Entropy Methods Bayesian and Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Dobrzynski, L.

    2008-10-01

    The Bayesian and Maximum Entropy Methods are now standard routines in various data analyses, irrespective of one's own preference for the more conventional approach based on the so-called frequentist understanding of the notion of probability. It is not the purpose of the Editor to show all achievements of these methods in various branches of science, technology and medicine. In the case of condensed matter physics, most of the oldest examples of Bayesian analysis can be found in the excellent tutorial textbooks by Sivia and Skilling [1] and Bretthorst [2], while applications of the Maximum Entropy Methods were described in `Maximum Entropy in Action' [3]. On the list of questions addressed one finds such problems as deconvolution and reconstruction of complicated spectra, e.g. counting the number of lines hidden within a spectrum observed with always finite resolution, reconstruction of charge, spin and momentum density distributions from incomplete sets of data, etc. On the theoretical side one might find problems like estimation of interatomic potentials [4], application of the MEM to quantum Monte Carlo data [5], the Bayesian approach to inverse quantum statistics [6], and a very general approach to statistical mechanics [7]. Obviously, in spite of the power of the Bayesian and Maximum Entropy Methods, it is not possible for everything to be solved in a unique way by application of these particular methods of analysis, and one of the problems often raised is connected not only with the uniqueness of a reconstruction of a given distribution (map) but also with its accuracy (error maps). In this `Comments' section we present a few papers showing more recent advances and views, and highlighting some of the aforementioned problems. References [1] Sivia D S and Skilling J 2006 Data Analysis: A Bayesian Tutorial 2nd edn (Oxford: Oxford University Press) [2] Bretthorst G L 1988 Bayesian Spectrum Analysis and Parameter Estimation (Berlin: Springer) [3] Buck B and

  16. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  17. Combining experiments and simulations using the maximum entropy principle.

    PubMed

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-02-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights into the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
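    The maximum entropy perturbation discussed here has a compact operational form: reweight simulation frames by exp(λ·observable), with λ tuned so the reweighted average matches the experimental value; this is the minimal change to the simulated ensemble consistent with the data. A toy sketch with an invented observable and target:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
obs = rng.normal(1.0, 0.5, size=5000)       # observable per simulation frame (invented)
target = 1.2                                # 'experimental' average to be matched

def gap(lam):
    w = np.exp(lam * (obs - obs.mean()))    # centred for numerical stability
    w /= w.sum()
    return w @ obs - target                 # mismatch at multiplier lam

lam = brentq(gap, -50.0, 50.0)
w = np.exp(lam * (obs - obs.mean()))
w /= w.sum()                                # minimally perturbed frame weights
assert abs(w @ obs - target) < 1e-8
```

    Among all reweightings that reproduce the experimental average, this exponential form has the smallest relative entropy to the original ensemble, which is precisely the maximum entropy criterion.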

  19. Maximum-Entropy Inference with a Programmable Annealer.

    PubMed

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A

    2016-03-03

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
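    The decoding idea described above can be illustrated at toy scale: for a small Ising model one can enumerate all states exactly, weight them by the Boltzmann factor, and decode each bit from the sign of its finite-temperature marginal rather than from the ground state alone. This is a minimal sketch with exact enumeration standing in for the paper's annealer hardware; the fields and couplings below are made-up values, not the authors' problem instances.

```python
import itertools
import numpy as np

def boltzmann_decode(h, J, beta):
    """Finite-temperature maximum-entropy decoding sketch for a tiny Ising
    chain: enumerate all 2^n states, weight each by exp(-beta * E), and
    decode bit i from the sign of its thermal marginal <s_i>. As beta
    grows large this reduces to maximum-likelihood (ground-state) decoding.
    """
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    # E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j
    E = -(states @ h) - np.einsum('ki,ij,kj->k', states, np.triu(J, 1), states)
    w = np.exp(-beta * (E - E.min()))
    w /= w.sum()
    marginals = w @ states              # <s_i> under the Boltzmann distribution
    return np.sign(marginals).astype(int)

n = 6
h = np.array([0.5, -0.2, 0.3, 0.4, -0.1, 0.2])  # noisy local fields
J = np.zeros((n, n))
for i in range(n - 1):
    J[i, i + 1] = 0.8                            # ferromagnetic chain coupling
print(boltzmann_decode(h, J, beta=2.0))
```

    With these couplings the thermal marginals all point the same way as the ground state; the interest of finite-temperature decoding is in noisier instances, where the excited-state average can flip weakly determined bits relative to the maximum-likelihood answer.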

  2. Modeling loop entropy.

    PubMed

    Chirikjian, Gregory S

    2011-01-01

    Proteins fold from a highly disordered state into a highly ordered one. Traditionally, the folding problem has been stated as one of predicting "the" tertiary structure from sequential information. However, new evidence suggests that the ensemble of unfolded forms may not be as disordered as once believed, and that the native form of many proteins may not be described by a single conformation, but rather an ensemble of its own. Quantifying the relative disorder in the folded and unfolded ensembles as an entropy difference may therefore shed light on the folding process. One issue that clouds discussions of "entropy" is that many different kinds of entropy can be defined: entropy associated with overall translational and rotational Brownian motion, configurational entropy, vibrational entropy, conformational entropy computed in internal or Cartesian coordinates (which can even be different from each other), conformational entropy computed on a lattice, each of the above with different solvation and solvent models, thermodynamic entropy measured experimentally, etc. The focus of this work is the conformational entropy of coil/loop regions in proteins. New mathematical modeling tools for the approximation of changes in conformational entropy during transition from unfolded to folded ensembles are introduced. In particular, models for computing lower and upper bounds on entropy for polymer models of polypeptide coils both with and without end constraints are presented. The methods reviewed here include kinematics (the mathematics of rigid-body motions), classical statistical mechanics, and information theory.

  3. Maximum power entropy method for ecological data analysis

    NASA Astrophysics Data System (ADS)

    Komori, Osamu; Eguchi, Shinto

    2015-01-01

    In ecology, predictive models of the geographical distribution of certain species are widely used to capture spatial diversity. Recently, the Maxent method, based on the Gibbs distribution, has frequently been employed to estimate the distribution of a target species at a site with reasonable accuracy, using environmental features such as temperature, precipitation, and elevation. It requires only presence data, which is a major advantage when absence data are unavailable or unreliable. It also incorporates our limited knowledge about the target distribution into the model, such that the expected values of environmental features are equal to the empirical average. Moreover, the inhabiting probability of species is easily visualized with the aid of geographical coordinate information from the Global Biodiversity Information Facility (GBIF) in the statistical software R. However, the maximum entropy distribution in Maxent is derived from the Boltzmann-Gibbs-Shannon entropy, which causes unstable estimation of the model parameters when outliers are present in the data. To overcome this weak point, and to gain a deeper understanding of the relation among the total number of species, the Boltzmann-Gibbs-Shannon entropy, and Simpson's index, we propose a maximum power entropy method based on beta-divergence, which is a special case of U-divergence. It includes the Boltzmann-Gibbs-Shannon entropy as a special case, so it can achieve better estimation of the target distribution when the value of the power index beta is chosen appropriately. We demonstrate the performance of the proposed method in simulation studies as well as on publicly available real data.

  4. Application of maximum entropy to neutron tunneling spectroscopy

    SciTech Connect

    Mukhopadhyay, R. (Bhabha Atomic Research Centre, Bombay, Solid State Physics Div.); Carlile, C.J.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs.

  5. The equivalence of minimum entropy production and maximum thermal efficiency in endoreversible heat engines.

    PubMed

    Haseli, Y

    2016-05-01

    The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines in the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
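    For reference, the two classical efficiencies compared in studies of this kind have simple closed forms and are easy to evaluate numerically: the Carnot limit, eta = 1 - Tc/Th, and the Curzon-Ahlborn efficiency at maximum power, eta = 1 - sqrt(Tc/Th). These are standard endoreversible-engine results; the temperatures below are arbitrary illustrative values, not figures from the paper.

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible (Carnot) efficiency: 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency of an endoreversible engine operated at maximum power
    output (Curzon-Ahlborn): 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

th, tc = 600.0, 300.0  # kelvin, illustrative reservoir temperatures
print(round(carnot_efficiency(th, tc), 3))          # 0.5
print(round(curzon_ahlborn_efficiency(th, tc), 3))  # 0.293
```

    The gap between the two values is the price of operating at finite power rather than quasi-statically.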

  7. Microcanonical origin of the maximum entropy principle for open systems.

    PubMed

    Lee, Julian; Pressé, Steve

    2012-10-01

    There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.

  8. Twenty-five years of maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  9. Maximum Entropy for the International Division of Labor.

    PubMed

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
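    Maximizing the entropy of export shares under a single expected-complexity constraint yields Boltzmann-form shares, p_i ∝ exp(-beta c_i), with the one tunable parameter beta fixed by the constraint. The sketch below shows this construction on assumed complexity values; the function and data are illustrative, not the paper's code or dataset.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_shares(complexity, target_mean):
    """Maximum-entropy shares subject to a fixed expected product
    complexity: p_i proportional to exp(-beta * c_i), with beta chosen so
    that sum_i p_i c_i equals target_mean (the single free parameter).
    """
    c = np.asarray(complexity, dtype=float)

    def shares(beta):
        p = np.exp(-beta * (c - c.mean()))  # centred for numerical stability
        return p / p.sum()

    def gap(beta):
        return shares(beta) @ c - target_mean

    beta = brentq(gap, -100.0, 100.0)  # assumes target lies strictly inside (min c, max c)
    return shares(beta), beta

c = np.linspace(0.0, 1.0, 50)           # hypothetical product complexities
p, beta = maxent_shares(c, target_mean=0.3)
print(round(float(p @ c), 3))  # constraint is met: expected complexity 0.3
```

    A country's single fitted parameter beta then controls how steeply its export shares decay with product complexity.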

  11. Distribution of the Habitat Suitability of the Main Malaria Vector in French Guiana Using Maximum Entropy Modeling.

    PubMed

    Moua, Yi; Roux, Emmanuel; Girod, Romain; Dusfour, Isabelle; de Thoisy, Benoit; Seyler, Frédérique; Briolant, Sébastien

    2017-05-01

    Malaria is an important health issue in French Guiana. Its principal mosquito vector in this region is Anopheles darlingi Root. Knowledge of the spatial distribution of this species is still very incomplete due to the extent of French Guiana and the difficulty of accessing most of the territory. Species distribution modeling based on the maximal entropy procedure was used to predict the spatial distribution of An. darlingi using 39 presence sites. The resulting model provided significantly high prediction performance (a mean 10-fold cross-validated partial area under the curve of 1.11, with an omission error level of 20%, and a continuous Boyce index of 0.42). The model also provided a habitat suitability map and environmental response curves in accordance with the known entomological situation. Several environmental characteristics that had a positive correlation with the presence of An. darlingi were highlighted: nonpermanent anthropogenic changes of the natural environment, the presence of roads and tracks, and opening of the forest. Some geomorphological landforms and high-altitude landscapes appear to be unsuitable for An. darlingi. The species distribution modeling was able to reliably predict the distribution of suitable habitats for An. darlingi in French Guiana. The results allowed completion of the knowledge of the spatial distribution of the principal malaria vector in this Amazonian region, and identification of the main factors that favor its presence. They should contribute to the definition of a necessary targeted vector control strategy in a malaria pre-elimination stage, and allow extrapolation of the acquired knowledge to other Amazonian or malaria-endemic contexts.

  12. Multi-site, multivariate weather generator using maximum entropy bootstrap

    NASA Astrophysics Data System (ADS)

    Srivastav, Roshan K.; Simonovic, Slobodan P.

    2014-05-01

    Weather generators are increasingly becoming viable alternative models to assess the effects of future climate change scenarios on water resources systems. In this study, a new multisite, multivariate maximum entropy bootstrap weather generator (MEBWG) is proposed for generating daily weather variables, which can mimic both the spatial and temporal dependence structure in addition to other historical statistics. The maximum entropy bootstrap (MEB) involves two main steps: (1) random sampling from the empirical cumulative distribution function with endpoints selected to allow limited extrapolation and (2) reordering of the random series to respect the rank ordering of the original time series (temporal dependence structure). To capture the multi-collinear structure between the weather variables and between the sites, we combine orthogonal linear transformation with MEB. Daily weather data, which include precipitation, maximum temperature and minimum temperature from 27 years of record from the Upper Thames River Basin in Ontario, Canada, are used to evaluate the MEBWG-based weather generator. Results indicate that the statistics from the synthetic replicates were not significantly different from the observed data and the model is able to preserve the 27 CLIMDEX indices very well. The MEBWG model shows better performance in terms of extrapolation and computational efficiency when compared to a multisite, multivariate K-nearest neighbour model.
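    The two MEB steps named above (sampling from the empirical distribution with limited endpoint extrapolation, then restoring the original rank ordering) can be sketched for a single series as follows. This is a simplified, hypothetical rendering of the bootstrap, not the authors' MEBWG implementation, and it omits the orthogonal transformation used for multisite data.

```python
import numpy as np

def meb_replicate(x, rng):
    """Simplified maximum entropy bootstrap sketch for one series.

    Step 1: draw uniforms and map them through a piecewise-linear quantile
    function built on the sorted data, with the endpoints extended to
    allow limited extrapolation beyond the observed range.
    Step 2: reorder the draws to the rank order of the original series,
    preserving its temporal dependence structure.
    """
    n = len(x)
    order = np.argsort(x)
    xs = np.sort(x)
    trim = 0.1 * (xs[-1] - xs[0])  # limited endpoint extrapolation (assumed 10%)
    grid = np.concatenate([[xs[0] - trim], (xs[:-1] + xs[1:]) / 2, [xs[-1] + trim]])
    probs = np.linspace(0.0, 1.0, n + 1)
    u = np.sort(rng.uniform(size=n))
    draws = np.interp(u, probs, grid)  # piecewise-linear quantile function
    # step 2: restore the original rank ordering
    replicate = np.empty(n)
    replicate[order] = draws
    return replicate

rng = np.random.default_rng(1)
x = np.array([3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.6, 5.3])
y = meb_replicate(x, rng)
# ranks preserved: the replicate rises and falls with the original series
print(bool((np.argsort(y) == np.argsort(x)).all()))  # True
```

    Because only the values are resampled while the ranks are kept, the replicate tracks the shape of the original series, which is what lets MEB handle dependent (even non-stationary) data.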

  13. Consistent maximum entropy representations of pipe flow networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2017-06-01

    The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.

  14. Maximum entropy production in environmental and ecological systems.

    PubMed

    Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M

    2010-05-12

    The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.

  15. Violent relaxation: difficulties with maximum entropy states.

    NASA Astrophysics Data System (ADS)

    Wiechen, H.; Ziegler, H. J.

    The authors prove that, for isolated self-gravitating systems in R^3, stationary points of entropy-like concave functionals do not exist for arbitrarily fixed total mass and energy. Only for a special choice of mass and energy are solutions of the variational problem possible.

  16. Improving predictability of time series using maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.

    2015-04-01

    We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.

  17. Implementation of the maximum entropy method for analytic continuation

    NASA Astrophysics Data System (ADS)

    Levy, Ryan; LeBlanc, J. P. F.; Gull, Emanuel

    2017-06-01

    We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv3 and extensively documented. This paper shows the use of the programs in detail.

  18. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes' maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as SEXT for extensive entropy, SIT for the source information rate in information theory, and SMEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.

  19. The maximum entropy production principle: two basic questions.

    PubMed

    Martyushev, Leonid M

    2010-05-12

    The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.

  20. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
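    As a quick numerical check on the p = 2 row of such a tabulation: with a fixed second moment, the maximum-entropy density on the real line is the Gaussian, so any other density with the same variance, e.g. a Laplace, must have strictly lower differential entropy. The closed forms below are standard results, not expressions taken from the paper.

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of N(0, sigma^2): 0.5 * log(2*pi*e*sigma^2).
    Note the straight-line relationship in log sigma (the L_2 norm)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def laplace_entropy(b):
    """Differential entropy of Laplace(0, b): 1 + log(2b); its variance is 2*b^2."""
    return 1.0 + math.log(2 * b)

sigma = 1.0
b = sigma / math.sqrt(2)  # Laplace scale giving the same variance sigma^2
print(gaussian_entropy(sigma) > laplace_entropy(b))  # True
```

    The Gaussian entropy, 0.5 log(2*pi*e) + log(sigma), also illustrates the abstract's remark that the maximum differential entropy is linear in the logarithm of the L_p norm.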

  1. Maximum entropy regularization of the geomagnetic core field inverse problem

    NASA Astrophysics Data System (ADS)

    Jackson, Andrew; Constable, Catherine; Gillet, Nicolas

    2007-12-01

    The maximum entropy technique is an accepted method of image reconstruction when the image is made up of pixels of unknown positive intensity (e.g. a grey-scale image). The problem of reconstructing the magnetic field at the core-mantle boundary from surface data is a problem where the target image, the value of the radial field Br, can be of either sign. We adopt a known extension of the usual maximum entropy method that can be applied to images consisting of pixels of unconstrained sign. We find that we are able to construct images which have high dynamic ranges, but which still have very simple structure. In the spherical harmonic domain they have smoothly decreasing power spectra. It is also noteworthy that these models have far less complex null flux curve topology (lines on which the radial field vanishes) than do models which are quadratically regularized. Problems such as the one addressed are ubiquitous in geophysics, and it is suggested that the applications of the method could be much more widespread than is currently the case.

  2. Possible dynamical explanations for Paltridge's principle of maximum entropy production

    NASA Astrophysics Data System (ADS)

    Virgo, Nathaniel; Ikegami, Takashi

    2014-12-01

    Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.

  4. The maximum entropy formalism and the idiosyncratic theory of biodiversity.

    PubMed

    Pueyo, Salvador; He, Fangliang; Zillio, Tommaso

    2007-11-01

    Why does the neutral theory, which is based on unrealistic assumptions, predict diversity patterns so accurately? Answering questions like this requires a radical change in the way we tackle them. The large number of degrees of freedom of ecosystems poses a fundamental obstacle to mechanistic modelling. However, there are tools of statistical physics, such as the maximum entropy formalism (MaxEnt), that allow transcending particular models to simultaneously work with immense families of models with different rules and parameters, sharing only well-established features. We applied MaxEnt allowing species to be ecologically idiosyncratic, instead of constraining them to be equivalent as the neutral theory does. The answer we found is that neutral models are just a subset of the majority of plausible models that lead to the same patterns. Small variations in these patterns naturally lead to the main classical species abundance distributions, which are thus unified in a single framework.

  5. The maximum entropy formalism and the idiosyncratic theory of biodiversity

    PubMed Central

    Pueyo, Salvador; He, Fangliang; Zillio, Tommaso

    2007-01-01

    Why does the neutral theory, which is based on unrealistic assumptions, predict diversity patterns so accurately? Answering questions like this requires a radical change in the way we tackle them. The large number of degrees of freedom of ecosystems poses a fundamental obstacle to mechanistic modelling. However, there are tools of statistical physics, such as the maximum entropy formalism (MaxEnt), that allow transcending particular models to simultaneously work with immense families of models with different rules and parameters, sharing only well-established features. We applied MaxEnt allowing species to be ecologically idiosyncratic, instead of constraining them to be equivalent as the neutral theory does. The answer we found is that neutral models are just a subset of the majority of plausible models that lead to the same patterns. Small variations in these patterns naturally lead to the main classical species abundance distributions, which are thus unified in a single framework. PMID:17692099

  6. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
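    To make the constrained-maximization idea concrete, here is a minimal, self-contained sketch (an illustration, not from the paper) of the classic MEP computation: the maximum entropy distribution over a die's six faces subject to a prescribed mean. The solution has the exponential form p_i ∝ exp(-λi), and the multiplier λ is found by bisection; the function name and the target mean 4.5 are illustrative choices.

```python
import math

def maxent_die(target_mean, tol=1e-12):
    """Maximum entropy distribution over die faces 1..6 with a fixed
    mean: p_i ∝ exp(-lam*i), with the Lagrange multiplier lam found
    by bisection (the implied mean is decreasing in lam)."""
    faces = range(1, 7)

    def implied_mean(lam):
        w = [math.exp(-lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if implied_mean(mid) > target_mean:
            lo = mid        # mean too high -> need larger lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # Jaynes' "Brandeis dice" constraint
```

    With a target mean above 3.5 the multiplier is negative, so the probabilities increase monotonically toward face 6, as the MEP predicts.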

  7. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems

    PubMed Central

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-01-01

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann–Gibbs–Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon–Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process. PMID:24782541

  8. Application of Maximum Entropy reconstruction to PISEMA spectra.

    PubMed

    Jones, D H; Opella, S J

    2006-03-01

    Maximum Entropy reconstruction is applied to two-dimensional PISEMA spectra of stationary samples of peptide crystals and proteins in magnetically aligned virus particles and membrane bilayers. Improvements in signal-to-noise ratios were observed with minimal distortion of the spectra when Maximum Entropy reconstruction was applied to non-linearly sampled data in the indirect dimension. Maximum Entropy reconstruction was also applied in the direct dimension by selecting sub-sets of data from the free induction decays. Because the noise is uncorrelated in the spectra obtained by Maximum Entropy reconstruction of data with different non-linear sampling schedules, it is possible to improve the signal-to-noise ratios by co-addition of multiple spectra derived from one experimental data set. The combined application of Maximum Entropy to data in the indirect and direct dimensions has the potential to lead to substantial reductions in the total amount of experimental time required for acquisition of data in multidimensional NMR experiments.

  9. Maximum entropy Eddington factors in flux-limited neutrino diffusion

    NASA Astrophysics Data System (ADS)

    Cernohorsky, Jan; van den Horn, L. J.; Cooperstein, J.

    A neutrino transport scheme for use in dense stellar environments and collapsing stars is constructed. The maximum entropy principle is used to establish the general form of the angular neutrino distribution functions. The two Lagrange multipliers introduced by this procedure are determined by using the Flux-limited Diffusion Theory (FDT) of Levermore and Pomraning. The anisotropic scattering contribution is taken into account. Its inclusion leads to a modification of the Levermore-Pomraning approach. The transition from a multigroup to an energy integrated transport scheme for FDT is investigated. The link to the two-fluid model of Cooperstein et al. is made. This extended two-fluid model parametrizes the thermal and chemical disequilibrium between matter and neutrinos. The variable Eddington factors are now self-consistently determined through a local dimensionless quantity, rather than by macroscopic geometrical prescription.

  10. Approximate maximum-entropy moment closures for gas dynamics

    NASA Astrophysics Data System (ADS)

    McDonald, James G.

    2016-11-01

    Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime have proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.

  11. Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study

    NASA Astrophysics Data System (ADS)

    Koike, Carla Cavalcante

    2008-11-01

    Some problems found in robotics systems, such as avoiding obstacles, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could best be applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of Maximum Entropy in automatic learning for robot control systems. This work presents a preliminary study on automatic learning of proscriptive robot control using Maximum Entropy and Bayesian Programming. It is verified whether Maximum Entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.

  12. Maximum-entropy distributions of correlated variables with prespecified marginals.

    PubMed

    Larralde, Hernán

    2012-12-01

    The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the "most unbiased" choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.
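    The Gaussian case mentioned above admits a closed form: with standard-normal marginals and a prescribed correlation ρ, the maximum-entropy joint is the bivariate normal, whose differential entropy is a textbook formula. A small sketch (illustrative, not the paper's derivation):

```python
import math

def bivariate_gaussian_entropy(rho):
    """Differential entropy (nats) of the maximum-entropy joint for two
    standard-normal marginals with prescribed correlation rho:
    H(rho) = ln(2*pi*e) + 0.5*ln(1 - rho**2)."""
    if not -1.0 < rho < 1.0:
        raise ValueError("correlation must lie in (-1, 1)")
    return math.log(2 * math.pi * math.e) + 0.5 * math.log(1.0 - rho ** 2)
```

    At ρ = 0 the joint factorizes and the entropy equals the sum of the two marginal entropies, ln(2πe); imposing any nonzero correlation constraint can only lower it.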

  13. Maximum-entropy distributions of correlated variables with prespecified marginals

    NASA Astrophysics Data System (ADS)

    Larralde, Hernán

    2012-12-01

    The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the “most unbiased” choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.

  14. Influence of Pareto optimality on the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.

    2017-07-01

    Galerkin meshfree schemes are emerging as a viable substitute for the finite element method in solving partial differential equations for large-deformation and crack-propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered-data approximation has changed the way approximation functions are defined, resulting in maximum entropy approximants. In addition, an objective functional that controls the degree of locality yields local maximum entropy approximants. These are based on an information-theoretic Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of a locality parameter and a prior (weight) function, and the proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the effect of the locality parameter, which defines the degree of locality, and of the priors (Gaussian, cubic spline, and quartic spline functions) on the behavior of local maximum entropy approximants.
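    As an illustration of how the locality parameter enters, here is a minimal 1-D sketch (not from the paper) of local maximum entropy shape functions with a Gaussian prior: a scalar Newton iteration enforces the first-order consistency condition, and β sets the entropy-locality trade-off. Function and variable names are illustrative.

```python
import numpy as np

def local_maxent_shape(x, nodes, beta, iters=50):
    """1-D local maximum entropy shape functions at an interior point x:
    p_a ∝ exp(-beta*(x_a - x)**2 + lam*(x_a - x)), with the scalar
    multiplier lam found by Newton iteration so that the first-order
    consistency condition sum_a p_a*(x_a - x) = 0 holds.  beta sets
    the degree of locality (the entropy-locality trade-off)."""
    d = np.asarray(nodes, float) - x
    lam = 0.0
    for _ in range(iters):
        w = np.exp(-beta * d ** 2 + lam * d)
        p = w / w.sum()
        r = p @ d                            # constraint residual
        if abs(r) < 1e-14:
            break
        lam -= r / (p @ d ** 2 - r ** 2)     # Newton step; denominator = Var of d
    w = np.exp(-beta * d ** 2 + lam * d)
    return w / w.sum()

p = local_maxent_shape(0.3, [0.0, 0.5, 1.0], beta=4.0)
```

    The resulting weights form a partition of unity and reproduce linear fields exactly, which is what the consistency condition buys.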

  15. Maximum entropy, word-frequency, Chinese characters, and multiple meanings.

    PubMed

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k(max)). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, k(max)) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, k(max)), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon-model for texts and the present results are discussed.
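    The three a priori values of the RGF prediction can be read off any text with a simple word count; a minimal sketch (illustrative, not the authors' code):

```python
from collections import Counter

def rgf_inputs(text):
    """The three a priori values of the RGF prediction:
    M = total number of words, N = number of distinct words,
    kmax = number of repetitions of the most common word."""
    counts = Counter(text.lower().split())
    return sum(counts.values()), len(counts), max(counts.values())

print(rgf_inputs("the cat sat on the mat the end"))  # → (8, 6, 3)
```

    Taking a shorter slice of the same text changes (M, N, kmax) and hence, per the RGF prediction, the shape of the frequency distribution.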

  16. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    PubMed Central

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175

  17. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    NASA Technical Reports Server (NTRS)

    Hsia, Wei Shen

    1989-01-01

    A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher order control plant was addressed. The computer program was improved for the maximum entropy principle adopted in Hyland's MEOP method. The program was tested against the testing problem. It resulted in a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed in order to obtain the required projection matrix, the program was successful up to the finding of the Linear Quadratic Gaussian solution but not beyond. Numerical results along with computer programs which employed ORACLS are presented.

  18. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In the crowd security research area, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and to supervise crowd behaviors. In physics, entropy is used to describe the macro state of a self-organizing system; a change in entropy indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction). An entropy model was then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd's macro state leads to a change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing the entropy mutation, it is possible to detect the crowd behavior macro state automatically with cameras. The results will provide data support for crowd emergency prevention and manual emergency intervention.
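    The ordered/disordered contrast can be sketched from the direction histogram alone; the following toy example (illustrative, not the paper's model) shows the ordered state collapsing to near-zero entropy while the disordered state approaches the theoretical maximum, the log of the number of bins.

```python
import math
import random

def direction_entropy(angles, nbins=8):
    """Shannon entropy of the histogram of movement directions,
    a simple proxy for the crowd's macro state."""
    counts = [0] * nbins
    for a in angles:
        counts[int((a % (2 * math.pi)) / (2 * math.pi) * nbins) % nbins] += 1
    total = len(angles)
    return -sum(c / total * math.log(c / total) for c in counts if c)

rng = random.Random(0)
disordered = [rng.uniform(0.0, 2 * math.pi) for _ in range(2000)]
ordered = [0.1] * 2000          # everyone moving the same way
h_max = math.log(8)             # theoretical maximum for 8 bins
```

    A sudden drop of the entropy from near h_max toward zero would flag a macro-state transition of the kind the paper detects.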

  19. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  20. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
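    For intuition about the learning dynamics, here is a toy sketch (plain gradient ascent with exact enumeration, not the paper's rectified, Gibbs-sampled algorithm) of fitting a pairwise maximum entropy model on three ±1 spins by matching means and pairwise correlations; all names and the target moments are illustrative.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(target_mean, target_corr, n=3, lr=0.2, steps=5000):
    """Gradient ascent on the log-likelihood of a pairwise model
    p(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j),  s_i = ±1.
    The gradient is (data moment - model moment); model moments are
    computed by exact enumeration over the 2^n states (no sampling)."""
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    pair_prod = np.array([[s[i] * s[j] for i, j in pairs] for s in states])
    h, J = np.zeros(n), np.zeros(len(pairs))

    def moments():
        logits = states @ h + pair_prod @ J
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return p @ states, p @ pair_prod

    for _ in range(steps):
        mean, corr = moments()
        h += lr * (target_mean - mean)   # ascent direction of log-likelihood
        J += lr * (target_corr - corr)
    return (h, J) + moments()

h, J, mean, corr = fit_pairwise_maxent(
    np.array([0.2, -0.1, 0.3]), np.array([0.1, 0.0, -0.2]))
```

    The curvature problem the paper addresses shows up here as the fixed learning rate: a rate tuned to the stiffest direction of the moment space slows convergence along the soft ones, which is what the rectification is designed to cure.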

  1. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  2. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't

    PubMed Central

    Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.

    2009-01-01

    One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487

  3. Quantifying evaporation and transpiration fluxes of an Eucalyptus woodland in complex terrain with varying tree cover using the Maximum Entropy Production model of evapotranspiration

    NASA Astrophysics Data System (ADS)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, H.; Wang, J.; Bras, R. L.; Simmons, C. T.

    2013-12-01

    The measurement of evapotranspiration (ET) fluxes in areas with complex terrain and non-uniform vegetation cover poses a challenge to traditional techniques with fetch constraints, such as the Eddy Covariance method. In this study, we report the results of a field monitoring design based on the Maximum Entropy Production model of ET (MEP-ET), which quantifies evaporation and transpiration from soil and vegetation, respectively, using a limited number of measurements of temperature, humidity and net radiation above soil and canopies. Following the MEP-ET model requirements, we instrumented a catchment with complex terrain and native vegetation (Eucalyptus leucoxylon) in South Australia. We deployed vertical through-canopy and near-soil temperature and humidity transects on two opposing slopes (north- and south-facing) with contrasting canopy cover and understory conditions to measure transpiration from two eucalyptus trees and soil evaporation of the area under their canopies. We compare the results with transpiration measurements from sapflow data on the same trees and with soil evaporation estimates from the Bowen Ratio Energy Balance (BREB) method. Our results show good agreement between the MEP-ET derived transpiration and evaporation and the sapflow and BREB estimates, respectively. Using a LiDAR-derived canopy cover, we upscale the MEP-ET fluxes on each slope and explore the effect of terrain and vegetation cover on the partitioning of ET and the water budgets across the catchment.

  4. Predicting the potential distribution of main malaria vectors Anopheles stephensi, An. culicifacies s.l. and An. fluviatilis s.l. in Iran based on maximum entropy model.

    PubMed

    Pakdad, Kamran; Hanafi-Bojd, Ahmad Ali; Vatandoost, Hassan; Sedaghat, Mohammad Mehdi; Raeisi, Ahmad; Moghaddam, Abdolreza Salahi; Foroushani, Abbas Rahimi

    2017-05-01

    Malaria is considered a major public health problem in southern areas of Iran. The goal of this study was to predict the best ecological niches of three main malaria vectors of Iran: Anopheles stephensi, Anopheles culicifacies s.l. and Anopheles fluviatilis s.l. A databank was created which included all published data about Anopheles species of Iran from 1961 to 2015. The suitable environmental niches for the three above-mentioned Anopheles species were predicted using the maximum entropy model (MaxEnt). AUC (area under the ROC curve) values were 0.943, 0.974 and 0.956 for An. stephensi, An. culicifacies s.l. and An. fluviatilis s.l. respectively, indicating the high predictive power of the model for species niches. The biggest bioclimatic contributor for An. stephensi and An. fluviatilis s.l. was bio 15 (precipitation seasonality), at 25.5% and 36.1% respectively, followed by bio 1 (annual mean temperature) at 20.8% for An. stephensi, and bio 4 (temperature seasonality) with a 49.4% contribution for An. culicifacies s.l. This is the first step in the mapping of the country's malaria vectors; future climate conditions may change the dispersal maps of Anopheles. Iran is in the malaria elimination phase, so such spatio-temporal studies are essential and could provide guidelines for decision makers on IVM strategies in problematic areas.

  5. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2 to the 18th data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2 to the 18th points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2 to the 10th. It is found that over this region the MEM estimate with 2 to the 16th data samples is in close agreement with the FFT estimate using 2 to the 18th samples.
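    The Burg recursion itself is compact; below is a minimal sketch (illustrative, not the code used in the study) that estimates the AR coefficients and evaluates the MEM power spectral density, recovering the frequency of a noisy sinusoid.

```python
import numpy as np

def burg(x, order):
    """Burg's algorithm: AR coefficients a (a[0] = 1) and the final
    prediction-error power e, which together define the maximum
    entropy spectral estimate consistent with the data."""
    x = np.asarray(x, float)
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)
    f, b = x.copy(), x.copy()          # forward / backward prediction errors
    for _ in range(order):
        fk, bk = f[1:], b[:-1]
        k = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))
        pad = np.concatenate([a, [0.0]])
        a = pad + k * pad[::-1]        # Levinson-style coefficient update
        f, b = fk + k * bk, bk + k * fk
        e *= 1.0 - k * k
    return a, e

def mem_psd(a, e, freqs):
    """MEM power spectral density P(f) = e / |A(exp(2j*pi*f))|**2."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return e / np.abs(z @ a) ** 2

# Example: a noisy sinusoid at normalized frequency 0.1
rng = np.random.default_rng(0)
n = np.arange(256)
x = np.cos(2 * np.pi * 0.1 * n) + 0.01 * rng.standard_normal(256)
a, e = burg(x, 4)
freqs = np.linspace(0.01, 0.49, 481)
peak = freqs[np.argmax(mem_psd(a, e, freqs))]
```

    Because the AR poles can sit arbitrarily close to the unit circle, the MEM spectrum resolves narrow lines from far shorter records than the FFT, which is the effect the comparison above exploits.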

  7. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Inhaled 222Rn is likely the second leading cause of lung cancer after cigarette smoking; however, exposure through untreated groundwater also contributes to both the inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation such that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model achieves a leave-one-out cross-validation r² of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled 222Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock 238U, defined on the basis of overlying stream-sediment 238U concentrations, a widely distributed and consistently analyzed data source.

  8. Inverse Spin Glass and Related Maximum Entropy Problems

    NASA Astrophysics Data System (ADS)

    Castellana, Michele; Bialek, William

    2014-09-01

    If we have a system of binary variables and we measure the pairwise correlations among these variables, then the least structured or maximum entropy model for their joint distribution is an Ising model with pairwise interactions among the spins. Here we consider inhomogeneous systems in which we constrain, for example, not the full matrix of correlations, but only the distribution from which these correlations are drawn. In this sense, what we have constructed is an inverse spin glass: rather than choosing coupling constants at random from a distribution and calculating correlations, we choose the correlations from a distribution and infer the coupling constants. We argue that such models generate a block structure in the space of couplings, which provides an explicit solution of the inverse problem. This allows us to generate a phase diagram in the space of (measurable) moments of the distribution of correlations. We expect that these ideas will be most useful in building models for systems that are nonequilibrium statistical mechanics problems, such as networks of real neurons.
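
    The abstract's starting point, that pairwise correlations determine an Ising model, can be made concrete for a system small enough to enumerate. The sketch below is a toy of our own (not the authors' method): it fits fields and couplings by gradient ascent on the concave maximum entropy objective until the model reproduces prescribed means and pairwise correlations.

```python
import itertools
import numpy as np

def ising_moments(h, J):
    """Exact <s_i> and <s_i s_j> for p(s) ~ exp(h.s + 0.5 s.J.s), s_i = +/-1."""
    n = len(h)
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    energy = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ states, np.einsum('k,ki,kj->ij', p, states, states)

def fit_ising(target_means, target_corrs, steps=2000, lr=0.2):
    """Ascend the (concave) max-entropy objective until moments match."""
    n = len(target_means)
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        means, corrs = ising_moments(h, J)
        h += lr * (target_means - means)
        dJ = target_corrs - corrs
        np.fill_diagonal(dJ, 0.0)        # <s_i^2> = 1 always; not a constraint
        J += lr * dJ
    return h, J

# Forward problem: pick couplings and compute moments; inverse problem:
# recover an Ising model with exactly those moments.
rng = np.random.default_rng(0)
h_true = rng.normal(scale=0.3, size=3)
J_true = np.triu(rng.normal(scale=0.3, size=(3, 3)), 1)
J_true = J_true + J_true.T
m_t, c_t = ising_moments(h_true, J_true)
h_fit, J_fit = fit_ising(m_t, c_t)
m_f, c_f = ising_moments(h_fit, J_fit)
```

    The "inverse spin glass" of the abstract replaces the fixed target correlations here with a distribution over correlations, but the underlying fitting problem is the same.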

  9. A maximum entropy method for MEG source imaging

    SciTech Connect

    Khosla, D.; Singh, M.

    1996-12-31

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects the image, from the set of feasible images, which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.

  10. Triadic conceptual structure of the maximum entropy approach to evolution.

    PubMed

    Herrmann-Pillath, Carsten; Salthe, Stanley N

    2011-03-01

    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this on Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.

  11. Nonparametric supervised learning by linear interpolation with maximum entropy.

    PubMed

    Gupta, Maya R; Gray, Robert M; Olshen, Richard A

    2006-05-01

    Nonparametric neighborhood methods for learning entail estimation of class conditional probabilities based on relative frequencies of samples that are "near-neighbors" of a test point. We propose and explore the behavior of a learning algorithm that uses linear interpolation and the principle of maximum entropy (LIME). We consider some theoretical properties of the LIME algorithm: LIME weights have exponential form; the estimates are consistent; and the estimates are robust to additive noise. In relation to bias reduction, we show that near-neighbors contain a test point in their convex hull asymptotically. The common linear interpolation solution used for regression on grids or look-up-tables is shown to solve a related maximum entropy problem. LIME simulation results support use of the method, and performance on a pipeline integrity classification problem demonstrates that the proposed algorithm has practical value.

  12. On the maximum entropy distributions of inherently positive nuclear data

    NASA Astrophysics Data System (ADS)

    Taavitsainen, A.; Vanhanen, R.

    2017-05-01

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution - whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity.
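
    The claim is easy to check numerically in one dimension. In the sketch below (the distribution parameters are arbitrary choices of ours), a lognormal is moment-matched to a normal distribution truncated at zero; the truncated normal carries the larger differential entropy, consistent with it being the constrained entropy maximizer.

```python
import numpy as np
from scipy import stats

# A positive quantity: normal(mean=1, sd=1) truncated to (0, inf).
tn = stats.truncnorm(a=(0.0 - 1.0) / 1.0, b=np.inf, loc=1.0, scale=1.0)
mean, var = (float(v) for v in tn.stats(moments='mv'))

# Lognormal with the same mean and variance (standard moment matching):
# sigma^2 = ln(1 + var/mean^2), mu = ln(mean) - sigma^2/2.
sigma2 = np.log(1.0 + var / mean**2)
mu = np.log(mean) - 0.5 * sigma2
ln = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
```

    With these parameters the truncated normal's entropy (about 1.10 nats) exceeds the moment-matched lognormal's (about 0.94 nats), so the lognormal is not the maximum entropy choice even though both are positive with identical first two moments.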

  13. Maximum information entropy: a foundation for ecological theory.

    PubMed

    Harte, John; Newman, Erica A

    2014-07-01

    The maximum information entropy (MaxEnt) principle is a successful method of statistical inference that has recently been applied to ecology. Here, we show how MaxEnt can accurately predict patterns such as species-area relationships (SARs) and abundance distributions in macroecology and be a foundation for ecological theory. We discuss the conceptual foundation of the principle, why it often produces accurate predictions of probability distributions in science despite not incorporating explicit mechanisms, and how mismatches between predictions and data can shed light on driving mechanisms in ecology. We also review possible future extensions of the maximum entropy theory of ecology (METE), a potentially important foundation for future developments in ecological theory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    SciTech Connect

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.
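
    A maximum entropy classifier of the kind used in such systems is equivalent to multinomial logistic regression: p(sense | context) ∝ exp(w_sense · f(context)). The self-contained toy below (the data, feature set, and training loop are illustrative inventions, far simpler than the paper's system) disambiguates two senses of "bank" from bag-of-words context features.

```python
import numpy as np

# Toy "bank" sense disambiguation: context-word features, two senses.
contexts = [
    ("river water shore mud", "GEO"),
    ("water river fishing shore", "GEO"),
    ("money loan deposit interest", "FIN"),
    ("loan money account interest", "FIN"),
]
vocab = sorted({w for text, _ in contexts for w in text.split()})
labels = sorted({lab for _, lab in contexts})

def featurize(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            v[vocab.index(w)] += 1.0
    return v

X = np.array([featurize(t) for t, _ in contexts])
y = np.array([labels.index(lab) for _, lab in contexts])

# Maximum entropy classifier == multinomial logistic regression:
# fit the weights by gradient ascent on the average log-likelihood.
W = np.zeros((len(labels), len(vocab)))
for _ in range(500):
    scores = X @ W.T
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(len(labels))[y]
    W += 0.5 * (onehot - p).T @ X / len(y)

pred = (X @ W.T).argmax(axis=1)
```

    Real systems differ mainly in scale: thousands of sparse contextual features per instance, with feature selection (here, the paper's Information Gain step) pruning the set first.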

  15. Maximum entropy distributions of scale-invariant processes.

    PubMed

    Nieves, Veronica; Wang, Jingfeng; Bras, Rafael L; Wood, Elizabeth

    2010-09-10

    Organizations of many variables in nature such as soil moisture and topography exhibit patterns with no dominant scales. The maximum entropy (ME) principle is proposed to show how these variables can be statistically described using their scale-invariant properties and geometric mean. The ME principle predicts with great simplicity the probability distribution of a scale-invariant process in terms of macroscopic observables. The ME principle offers a universal and unified framework for characterizing such multiscaling processes.
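
    A one-line instance of this idea: among all densities on [1, ∞) with a fixed geometric mean (equivalently, fixed E[ln x]), the entropy maximizer is the power law p(x) = α x^(-α-1) with α = 1/E[ln x], i.e. a Pareto distribution. A quick Monte Carlo check (our own illustration; the constraint value is arbitrary):

```python
import numpy as np
from scipy import stats

log_mean = 0.5                 # the "macroscopic observable" E[ln x]
alpha = 1.0 / log_mean         # MaxEnt exponent for support [1, inf)

# Sampling from the predicted power law recovers the imposed constraint.
rng = np.random.default_rng(0)
samples = stats.pareto(b=alpha).rvs(size=200_000, random_state=rng)
```

    The scale-free (power-law) form is forced by the choice of constraint alone, which is the sense in which the ME principle predicts the distribution from macroscopic observables.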

  16. Chebyshev recursion methods: Kernel polynomials and maximum entropy

    SciTech Connect

    Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.

    1995-10-01

    The authors describe two Chebyshev recursion methods for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are especially applicable to physical properties involving large numbers of eigenstates, which include densities of states, spectral functions, thermodynamics, total energies, as well as forces for molecular dynamics and Monte Carlo simulations. The authors apply Chebyshev methods to the electronic structure of Si, the thermodynamics of Heisenberg antiferromagnets, and a polaron problem.
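
    The kernel polynomial side of this program can be sketched in a few lines of NumPy (a generic KPM demonstration of our own, not the authors' code): Chebyshev moments of a small random Hamiltonian, damped with the Jackson kernel, reconstruct a smooth density of states that is nonnegative and normalized.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small random symmetric "Hamiltonian", rescaled so its spectrum lies in [-1, 1].
n = 200
H = rng.normal(size=(n, n)) / np.sqrt(n)
H = (H + H.T) / 2
H /= 1.1 * np.max(np.abs(np.linalg.eigvalsh(H)))

# Chebyshev moments mu_m = Tr T_m(H) / n via T_m(H) = 2 H T_{m-1} - T_{m-2}.
M = 60
T_prev, T_cur = np.eye(n), H.copy()
mu = [np.trace(T_prev) / n, np.trace(T_cur) / n]
for _ in range(2, M):
    T_prev, T_cur = T_cur, 2.0 * H @ T_cur - T_prev
    mu.append(np.trace(T_cur) / n)
mu = np.array(mu)

# Jackson damping kernel: suppresses Gibbs oscillations, keeps the DOS >= 0.
m = np.arange(M)
g = ((M - m + 1) * np.cos(np.pi * m / (M + 1))
     + np.sin(np.pi * m / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# Reconstruct the density of states on a grid, using T_m(x) = cos(m arccos x).
x = np.linspace(-0.99, 0.99, 4001)
Tm = np.cos(np.outer(np.arccos(x), m))
dos = (g[0] * mu[0] + 2.0 * Tm[:, 1:] @ (g * mu)[1:]) / (np.pi * np.sqrt(1 - x**2))
```

    For large sparse Hamiltonians the trace is estimated stochastically with a few random vectors instead of dense matrix recursions, which is what makes the method scale.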

  17. Maximum Entropy Calculations on a Discrete Probability Space

    DTIC Science & Technology

    1986-01-01

    …Shannon then proved [see also Tribus (1961, 1969)] that this measure of information has the form H = -K Σ_i p_i log p_i, and furthermore that… Tribus and Motroni (1972), Gage and Hestenes (1973), and Hobson (1972); see also Friedman (1973) and Shimony (1973) for their replies. …Frieden's… replies to Tribus and Motroni and to Gage and Hestenes, J. Stat. Phys. 9, 265. Friedman, K. and A. Shimony (1971), Jaynes' Maximum Entropy Prescription and Probability…

  18. Maximum entropy reconstruction of the configurational density of states from microcanonical simulations

    NASA Astrophysics Data System (ADS)

    Davis, Sergio

    2013-02-01

    In this work we develop a method for inferring the underlying configurational density of states of a molecular system by combining information from several microcanonical molecular dynamics or Monte Carlo simulations at different energies. This method is based on Jaynes' Maximum Entropy formalism (MaxEnt) for Bayesian statistical inference under known expectation values. We present results of its application to measure thermodynamic entropy and free energy differences in embedded-atom models of metals.

  19. Predicting the current potential and future world wide distribution of the onion maggot, Delia antiqua using maximum entropy ecological niche modeling

    PubMed Central

    Feng, Jinian

    2017-01-01

    Climate change will markedly impact the biology, population ecology, and spatial distribution patterns of insect pests because of the influence of future greenhouse effects on insect development and population dynamics. Onion maggot, Delia antiqua, larvae are subterranean pests with limited mobility that directly feed on bulbs of Allium sp. and render them completely unmarketable. Modeling the spatial distribution of such a widespread and damaging pest is crucial not only to identify currently suitable climatic areas but also to predict where the pest is likely to spread in the future so that appropriate monitoring and management programs can be developed. In this study, Maximum Entropy Niche Modeling was used to estimate the current potential distribution of D. antiqua and to predict the future distribution of this species in 2030, 2050, 2070 and 2080 under emission scenario A2 with 7 climate variables. The results of this study show that currently highly suitable habitats for D. antiqua occur throughout most of East Asia, some regions of North America, Western Europe, and Western Asian countries near the Caspian Sea and Black Sea. In the future, we predict an even broader distribution of this pest, spreading more extensively throughout Asia, North America and Europe, particularly in most European countries, central regions of the United States, and much of East Asia. Our present-day and future predictions can enhance strategic planning of agricultural organizations by identifying regions that will need to develop Integrated Pest Management programs to manage the onion maggot. The distribution forecasts will also help governments to optimize economic investments in management programs for this pest by identifying regions that are or will become less suitable for current and future infestations. PMID:28158259

  20. Predicting the current potential and future world wide distribution of the onion maggot, Delia antiqua using maximum entropy ecological niche modeling.

    PubMed

    Ning, Shuoying; Wei, Jiufeng; Feng, Jinian

    2017-01-01

    Climate change will markedly impact the biology, population ecology, and spatial distribution patterns of insect pests because of the influence of future greenhouse effects on insect development and population dynamics. Onion maggot, Delia antiqua, larvae are subterranean pests with limited mobility that directly feed on bulbs of Allium sp. and render them completely unmarketable. Modeling the spatial distribution of such a widespread and damaging pest is crucial not only to identify currently suitable climatic areas but also to predict where the pest is likely to spread in the future so that appropriate monitoring and management programs can be developed. In this study, Maximum Entropy Niche Modeling was used to estimate the current potential distribution of D. antiqua and to predict the future distribution of this species in 2030, 2050, 2070 and 2080 under emission scenario A2 with 7 climate variables. The results of this study show that currently highly suitable habitats for D. antiqua occur throughout most of East Asia, some regions of North America, Western Europe, and Western Asian countries near the Caspian Sea and Black Sea. In the future, we predict an even broader distribution of this pest, spreading more extensively throughout Asia, North America and Europe, particularly in most European countries, central regions of the United States, and much of East Asia. Our present-day and future predictions can enhance strategic planning of agricultural organizations by identifying regions that will need to develop Integrated Pest Management programs to manage the onion maggot. The distribution forecasts will also help governments to optimize economic investments in management programs for this pest by identifying regions that are or will become less suitable for current and future infestations.

  1. Quasiparticle density of states by inversion with maximum entropy method

    NASA Astrophysics Data System (ADS)

    Sui, Xiao-Hong; Wang, Han-Ting; Tang, Hui; Su, Zhao-Bin

    2016-10-01

    We propose to extract the quasiparticle density of states (DOS) of a superconductor directly from experimentally measured superconductor-insulator-superconductor junction tunneling data by applying the maximum entropy method to nonlinear systems. It offers the advantage of model independence with minimal a priori assumptions. Various components of the proposed method have been carefully investigated, including the meaning of the targeting function, the mock function, and the role and designation of the input parameters. The validity of the developed scheme is shown by two kinds of tests for systems with known DOS. As a preliminary application to a Bi2Sr2CaCu2O8+δ sample with critical temperature Tc=89 K, we extract the DOS from the measured intrinsic Josephson junction current data at temperatures of T=4.2 K, 45 K, 55 K, 95 K, and 130 K. The energy gap decreases with increasing temperature below Tc, while above Tc a kind of energy gap survives, which provides an angle from which to investigate the pseudogap phenomenon in high-Tc superconductors. The developed method itself might be a useful tool for future applications in various fields.

  2. Estimating Thermal Inertia with a Maximum Entropy Boundary Condition

    NASA Astrophysics Data System (ADS)

    Nearing, G.; Moran, M. S.; Scott, R.; Ponce-Campos, G.

    2012-04-01

    Thermal inertia, P [Jm-2s-1/2K-1], is a physical property of the land surface which determines resistance to temperature change under seasonal or diurnal heating. It is a function of the volumetric heat capacity, c [Jm-3K-1], and thermal conductivity, k [Wm-1K-1], of the soil near the surface: P=√(ck). Thermal inertia of soil varies with moisture content due to the difference between the thermal properties of water and air, and a number of studies have demonstrated that it is feasible to estimate soil moisture given thermal inertia (e.g. Lu et al., 2009; Murray and Verhoef, 2007). We take the common approach to estimating thermal inertia using measurements of surface temperature by modeling the Earth's surface as a 1-dimensional homogeneous diffusive half-space. In this case, surface temperature is a function of the ground heat flux (G) boundary condition and thermal inertia, and a daily value of P was estimated by matching measured and modeled diurnal surface temperature fluctuations. The difficulty is in measuring G; we demonstrate that the new maximum entropy production (MEP) method for partitioning net radiation into surface energy fluxes (Wang and Bras, 2011) provides a suitable boundary condition for estimating P. Adding the diffusion representation of heat transfer in the soil reduces the number of free parameters in the MEP model from two to one, and a sensitivity analysis suggests that, for the purpose of estimating P, it is preferable to parameterize the coupled MEP-diffusion model by the ratio of the thermal inertia of the soil to the effective thermal inertia of convective heat transfer to the atmosphere. We used this technique to estimate thermal inertia at two semiarid, non-vegetated locations in the Walnut Gulch Experimental Watershed in southeast AZ, USA and compared these estimates to estimates of P made using the Xue and Cracknell (1995) solution for a linearized ground heat flux boundary condition, and we found that the MEP-diffusion model produced

  3. Quantum maximum entropy principle for a system of identical particles

    SciTech Connect

    Trovato, M.; Reggiani, L.

    2010-02-15

    By introducing a functional of the reduced density matrix, we generalize the definition of a quantum entropy which incorporates the indistinguishability principle of a system of identical particles. With the present definition, the principle of quantum maximum entropy permits us to solve the closure problem for a quantum hydrodynamic set of balance equations corresponding to an arbitrary number of moments in the framework of extended thermodynamics. The determination of the reduced Wigner function for equilibrium and nonequilibrium conditions is found to become possible only by assuming that the Lagrange multipliers can be expanded in powers of (Planck constant/2pi){sup 2}. Quantum contributions are expressed in powers of (Planck constant/2pi){sup 2} while classical results are recovered in the limit (Planck constant/2pi)->0.

  4. Plasma self-organization by maximum entropy production

    NASA Astrophysics Data System (ADS)

    Kim, Y.-B.

    2005-10-01

    Understanding the turbulence saturation mechanism in magnetically confined plasma is one of the most important but unsolved problems in plasma physics research. The following hypothesis has been proposed as a possible turbulence saturation mechanism in confined plasma: a confined system filled with plasma, turbulent electromagnetic fields, and a trace amount of neutral particles, e.g., a magnetically confined thermonuclear system, will approach the state of global maximum entropy production. This hypothesis determines unique equilibrium plasma profiles without knowledge of the detailed underlying turbulence dynamics in certain cases. This approach differs from the conventional picture of transport, in which sources are balanced by linear thermodynamic forces and transport coefficients are determined from either microscopic theory or experiment. The definition and evolution of entropy in this complex system is introduced, and the global entropy production rate is maximized under the constraints of particle, momentum, and energy conservation. Results from analytical and numerical calculus of variations will be discussed.

  5. Gravitational entropies in LTB dust models

    NASA Astrophysics Data System (ADS)

    Sussman, Roberto A.; Larena, Julien

    2014-04-01

    We consider generic Lemaître-Tolman-Bondi (LTB) dust models to probe the gravitational entropy proposals of Clifton, Ellis and Tavakol (CET) and of Hosoya and Buchert (HB). We also consider a variant of the HB proposal based on a suitable quasi-local scalar weighted average. We show that the conditions for entropy growth for all proposals are directly related to a negative correlation of similar fluctuations of the energy density and Hubble scalar. While this correlation is evaluated locally for the CET proposal, it must be evaluated in a non-local domain dependent manner for the two HB proposals. By looking at the fulfilment of these conditions at the relevant asymptotic limits we are able to provide a well grounded qualitative description of the full time evolution and radial asymptotic scaling of the three entropies in generic models. The following rigorous analytic results are obtained for the three proposals: (i) entropy grows when the density growing mode is dominant; (ii) all ever-expanding hyperbolic models reach a stable terminal equilibrium characterized by an inhomogeneous entropy maximum in their late time evolution; (iii) regions with decaying modes and collapsing elliptic models exhibit unstable equilibria associated with an entropy minimum; (iv) near singularities the CET entropy diverges while the HB entropies converge; (v) the CET entropy converges for all models in the radial asymptotic range, whereas the HB entropies only converge for models asymptotic to a Friedmann-Lemaître-Robertson-Walker background. The fact that different independent proposals yield fairly similar conditions for entropy production, time evolution and radial scaling in generic LTB models seems to suggest that their common notion of a ‘gravitational entropy’ may be a theoretically robust concept applicable to more general spacetimes.

  6. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.

  7. On estimating distributions with the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Zhigunov, V. P.; Kostkina, T. B.; Spiridonov, A. A.

    1988-12-01

    The possibility to use the maximum entropy principle to estimate distributions from measurements with known resolution functions has been considered. The general analytical form of the distribution estimate has been obtained. The statistical properties of this estimate, i.e. the error matrix and bias, have been analyzed. The method is generalized for the case when the unknown distribution is considered to be close to a certain known one. The proposed method is illustrated by a number of numerical experiments. The results are compared with those obtained by other methods.

  8. Jarzynski equality in the context of maximum path entropy

    NASA Astrophysics Data System (ADS)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
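
    The equality itself can be verified in a few lines for the simplest nonequilibrium protocol, an instantaneous quench (a standard textbook check, not the paper's derivation). Switching a harmonic potential U0(x) = x²/2 to U1(x) = (x-a)²/2 does work W = U1(x) - U0(x) on each equilibrium sample, and Jarzynski's equality ⟨exp(-βW)⟩ = exp(-βΔF) must hold with ΔF = 0 here, since both wells have identical partition functions:

```python
import numpy as np

beta, a = 1.0, 1.0
rng = np.random.default_rng(42)

# Equilibrium samples under U0, then an instantaneous switch to U1.
x = rng.normal(loc=0.0, scale=1.0 / np.sqrt(beta), size=200_000)
work = 0.5 * (x - a) ** 2 - 0.5 * x ** 2

jarzynski_avg = np.exp(-beta * work).mean()   # should equal exp(-beta*dF) = 1
mean_work = work.mean()                       # <W> >= dF = 0: dissipation
```

    Note that the average work is strictly positive (here a²/2) even though ΔF = 0: the exponential average, dominated by rare low-work trajectories, is what recovers the free energy difference exactly.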

  9. Nuclear-weighted X-ray maximum entropy method - NXMEM.

    PubMed

    Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens; Brummerstedt Iversen, Bo

    2015-01-01

    Subtle structural features such as disorder and anharmonic motion may be accurately characterized from nuclear density distributions (NDDs). As a viable alternative to neutron diffraction, this paper introduces a new approach named the nuclear-weighted X-ray maximum entropy method (NXMEM) for reconstructing pseudo NDDs. It calculates an electron-weighted nuclear density distribution (eNDD), exploiting that X-ray diffraction delivers data of superior quality, requires smaller sample volumes and has higher availability. NXMEM is tested on two widely different systems: PbTe and Ba(8)Ga(16)Sn(30). The first compound, PbTe, possesses a deceptively simple crystal structure on the macroscopic level that is unable to account for its excellent thermoelectric properties. The key mechanism involves local distortions, and the capability of NXMEM to probe this intriguing feature is established with simulated powder diffraction data. In the second compound, Ba(8)Ga(16)Sn(30), disorder among the Ba guest atoms is analysed with both experimental and simulated single-crystal diffraction data. In all cases, NXMEM outperforms the maximum entropy method by substantially enhancing the nuclear resolution. The induced improvements correlate with the amount of available data, rendering NXMEM especially powerful for powder and low-resolution single-crystal diffraction. The NXMEM procedure can be implemented in existing software and facilitates widespread characterization of disorder in functional materials.

  10. LIBOR troubles: Anomalous movements detection based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. These procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  11. Conjugate variables in continuous maximum-entropy inference.

    PubMed

    Davis, Sergio; Gutiérrez, Gonzalo

    2012-11-01

    For a continuous maximum-entropy distribution (obtained from an arbitrary number of simultaneous constraints), we derive a general relation connecting the Lagrange multipliers and the expectation values of certain particularly constructed functions of the states of the system. From this relation, an estimator for a given Lagrange multiplier can be constructed from derivatives of the corresponding constraining function. These estimators sometimes lead to the determination of the Lagrange multipliers by way of solving a linear system, and, in general, they provide another tool to widen the applicability of Jaynes's formalism. This general relation, especially well suited for computer simulation techniques, also provides some insight into the interpretation of the hypervirial relations known in statistical mechanics and the recently derived microcanonical dynamical temperature. We illustrate the usefulness of these new relations with several applications in statistics.

  12. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing the computational complexity of tomographic PIV. The proposed approach is an algebraic reconstruction technique termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical computational performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.

  13. Test images for the maximum entropy image restoration method

    NASA Technical Reports Server (NTRS)

    Mackey, James E.

    1990-01-01

One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the collected data free from the influence of the design and operation of the data-gathering instrument, or from the ever-present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of constructing these images is presented.
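To make this concrete, here is a minimal 1-D sketch (not the paper's images) of how a test signal of known characteristics can be constructed: a known "truth" with two Gaussian features, blurred by a normalized Gaussian PSF via FFT convolution and degraded with additive noise. All shapes and parameters below are invented:

```python
import numpy as np

def make_test_data(n=256, noise_sigma=0.01, seed=0):
    """Known truth -> blur with a flux-conserving PSF -> add noise."""
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    truth = np.exp(-0.5*((x - 80)/4.0)**2) + 0.6*np.exp(-0.5*((x - 150)/10.0)**2)
    psf = np.exp(-0.5*((x - n//2)/6.0)**2)
    psf /= psf.sum()                        # kernel sums to 1: flux conserved
    blurred = np.real(np.fft.ifft(np.fft.fft(truth) *
                                  np.fft.fft(np.fft.ifftshift(psf))))
    return truth, psf, blurred + noise_sigma*rng.standard_normal(n)

truth, psf, data = make_test_data()
```

Because the truth and PSF are known exactly, any restoration applied to `data` can be scored quantitatively rather than judged cosmetically.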

  14. Maximum entropy algorithm with inexact upper entropy bound based on Fup basis functions with compact support

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Gotovac, Blaž

    2009-12-01

The maximum entropy (MaxEnt) principle is a versatile tool for statistical inference of the probability density function (pdf) from its moments as the least-biased estimate among all possible pdfs. It maximizes the Shannon entropy subject to the moment constraints. The MaxEnt algorithm thus transforms the original constrained optimization problem into an unconstrained dual optimization problem using Lagrangian multipliers. The Classic Moment Problem (CMP) uses algebraic power moments, causing typical conventional numerical methods to fail for higher-order moments (m > 5-10) due to the differing sensitivities of the Lagrangian multipliers and unbalanced nonlinearities. Classic MaxEnt algorithms overcome these difficulties by using orthogonal polynomials, which give roughly the same sensitivity to all Lagrangian multipliers. In this paper, we employ an idea based on a different principle, using Fupn basis functions with compact support, which can exactly describe algebraic polynomials, but only if the Fup order n is greater than or equal to the polynomial's order. Our algorithm solves the CMP with respect to the moments of only low-order Fup2 basis functions, finding a Fup2 optimal pdf with better-balanced Lagrangian multipliers. The algorithm is numerically very efficient due to the localized properties of the Fup2 basis functions, which imply a weaker dependence between Lagrangian multipliers and faster convergence. The only consequences are an iterative scheme, in which the power moments are a sum of Fup2 and residual moments, and an inexact entropy upper bound. However, due to the small residual moments, the algorithm converges very quickly, as demonstrated on two continuous pdf examples - the beta distribution and a bi-modal pdf - and two discontinuous pdf examples - the step and double Dirac pdfs. Finally, these examples show that the Fup MaxEnt algorithm yields a smaller entropy value than the classic MaxEnt algorithm, but the differences are negligible for all practical engineering purposes.

  15. In Vivo potassium-39 NMR spectra by the burg maximum-entropy method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takanori; Minamitani, Haruyuki

The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra show linearity between peak areas and potassium concentrations as good as that of spectra obtained by fast Fourier transform, and they give a better estimate of intracellular potassium concentration. Potassium uptake and loss processes in mung bean root tips can therefore be traced more clearly by the maximum-entropy method.
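A compact sketch of Burg's recursion and the resulting maximum-entropy (autoregressive) spectrum; the test signal, model order, and frequency grid below are invented for illustration:

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients a (with a[0] = 1) by Burg's recursion."""
    f = np.asarray(x[1:], dtype=float)   # forward prediction errors
    b = np.asarray(x[:-1], dtype=float)  # backward prediction errors
    a = np.array([1.0])
    for _ in range(order):
        k = -2.0 * f.dot(b) / (f.dot(f) + b.dot(b))   # reflection coefficient
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + k * a_ext[::-1]                   # Levinson update
        f, b = (f + k * b)[1:], (b + k * f)[:-1]      # shrink error arrays
    return a

def ar_spectrum(a, freqs):
    """Power spectrum (up to a constant) of the AR model at `freqs`."""
    ks = np.arange(len(a))
    A = np.exp(-2j*np.pi*np.outer(freqs, ks)) @ a
    return 1.0 / np.abs(A)**2

rng = np.random.default_rng(1)
n = np.arange(200)
sig = np.cos(2*np.pi*0.2*n) + 0.01*rng.standard_normal(n.size)
a = burg_ar(sig, 4)
freqs = np.linspace(0.0, 0.5, 2001)
peak = freqs[np.argmax(ar_spectrum(a, freqs))]
```

On a nearly pure sinusoid the AR(4) maximum-entropy spectrum places a sharp peak at the true frequency, which is the sharpness the abstract exploits for concentration estimates.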

  16. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

The maximum entropy production (MEP) principle holds that nonequilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the rate of potential energy destruction. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems can store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and to orchestrate their metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase the acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework that describes microbial biogeochemistry based on the MEP conjecture, constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  17. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. Phrase structure detection through prosodic break index labeling achieves accuracies of 84% and 87% on the two corpora, respectively. These results are significantly better than those previously reported and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
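For binary decisions with real-valued features, a maximum entropy classifier of this kind is equivalent to logistic regression trained on the conditional log-likelihood. A minimal sketch (the toy "accented vs. unaccented" data and hyperparameters are invented; the real system's supertag and prosodic features are far richer):

```python
import numpy as np

def train_maxent(X, y, n_iter=5000, lr=0.5):
    """Binary maximum-entropy (logistic) classifier: gradient ascent on the
    conditional log-likelihood; X is assumed to include a bias column."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # P(label = 1 | x)
        w += lr * X.T @ (y - p) / len(y)    # observed minus expected features
    return w

# columns: bias, a pitch-like feature, a duration-like feature (invented)
X = np.array([[1., -1.2, -0.7], [1., -0.8, -1.1], [1., 0.9, 0.8],
              [1., 1.1, 0.6], [1., -1.0, 0.4], [1., 0.7, 1.2]])
y = np.array([0., 0., 1., 1., 0., 1.])
w = train_maxent(X, y)
pred = (1.0/(1.0 + np.exp(-X @ w)) > 0.5).astype(float)
```

The gradient step makes the model's expected feature counts match the observed ones, which is exactly the maximum entropy condition under the exponential-family form.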

  18. Local image statistics: maximum-entropy constructions and perceptual salience

    PubMed Central

    Victor, Jonathan D.; Conte, Mary M.

    2012-01-01

The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also of how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for constructing stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pairwise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397

  19. Analytical continuation of imaginary axis data using maximum entropy

    NASA Astrophysics Data System (ADS)

    Gunnarsson, O.; Haverkort, M. W.; Sangiovanni, G.

    2010-04-01

We study the maximum entropy (MaxEnt) approach for analytical continuation of spectral data from imaginary times to real frequencies. The total error is divided into a statistical error, due to the noise in the input data, and a systematic error, due to deviations of the default function used in the MaxEnt approach from the exact spectrum. We find that the MaxEnt approach in its classical formulation can lead to a nonoptimal balance between the two types of errors, leading to an unnecessarily large statistical error. The statistical error can be reduced by splitting the data into several batches, performing a MaxEnt calculation for each batch, and averaging. This can outweigh an increase in the systematic error resulting from this approach. The output from a MaxEnt calculation can be used as the default function for a new MaxEnt calculation. Such iterations often lead to worse results due to an increase in the statistical error. By splitting the data into batches, the statistical error is reduced, and the increase resulting from iterations can be outweighed by a decrease in the systematic error. Finally, we consider a linearized version to obtain a better understanding of the method.

  20. Characterization of a maximum entropy image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lyon, Richard G.; Dorband, John E.; Hollis, Jan M.

    1995-08-01

Maximum entropy methods (MEM) can be used to restore imagery within a region of corrupted data, the necessary condition being that the point spread function (PSF) is sufficiently large with respect to the corrupted region. In most cases MEM will give a result that may not be the result desired: in general the error assessment is qualitative, and the restored image merely appears cosmetically more pleasing to the eye. This paper presents a characterization of one MEM algorithm, which estimates an object consistent with Boltzmann statistics within a corrupted region of the detector array. Two pre-repair Hubble Space Telescope (HST) Faint Object Camera (FOC) images are chosen as examples. The characterization consists of an assessment of photometric accuracy, precision, and resolution. The results presented here are parameterized in terms of signal-to-noise ratio and the size of the corrupted region. This study is conducted with a set of simulated data that closely match those of the HST FOC. As a demonstration, we apply these techniques to an actual data set obtained from the HST FOC. These FOC data are corrupted in a region which we restore using a synthetic PSF.

  1. Application of the maximum relative entropy method to the physics of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan

    2016-08-01

It is known that the Maximum relative Entropy (MrE) method can be used both to update and to approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.

  2. Comparison between experiments and predictions based on maximum entropy for sprays from a pressure atomizer

    NASA Astrophysics Data System (ADS)

    Li, X.; Chin, L. P.; Tankin, R. S.; Jackson, T.; Stutrud, J.; Switzer, G.

    1991-07-01

Measurements were made of the droplet size and velocity distributions in a hollow cone spray from a pressure atomizer using a phase/Doppler particle analyzer. The maximum entropy principle is used to predict these distributions. The constraints imposed in this model involve conservation of mass, momentum, and energy. Estimates of the source terms associated with these constraints are made based on physical reasoning. Agreement between the measurements and the predictions is very good.

  3. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation, including the effect of parameter uncertainties, are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  4. Maximum entropy principle based estimation of performance distribution in queueing theory.

    PubMed

    He, Dayi; Li, Ran; Huang, Qi; Lei, Ping

    2014-01-01

In related research on queuing systems, it is widespread practice to assume, in order to determine the system state, that the system is stable and that the distributions of the customer arrival ratio and service ratio are known. In this study, the queuing system is treated as a black box: no assumptions are made about the distributions of the arrival and service ratios, and only the assumption that the queuing system is stable is retained. By applying the principle of maximum entropy, the performance distribution of queuing systems is derived from some easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness-of-fit test, the accuracy and practical generality of the maximum entropy approach are demonstrated.
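A minimal sketch of the idea for a single index: given only a finite capacity and the mean number in the system, the MaxEnt pmf has the form p(n) ∝ exp(-λn), with λ fixed by the mean constraint. The capacity, target mean, and bisection bounds below are invented for illustration:

```python
import numpy as np

def maxent_queue_pmf(capacity, mean_n):
    """MaxEnt pmf p(n), n = 0..capacity, matching a mean-queue-length
    constraint: p(n) ~ exp(-lam*n); lam is found by bisection, since the
    mean is strictly decreasing in lam."""
    n = np.arange(capacity + 1)
    def mean_of(lam):
        w = np.exp(-lam * n)
        return (n * w).sum() / w.sum()
    lo, hi = -5.0, 5.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_of(mid) > mean_n else (lo, mid)
    w = np.exp(-0.5 * (lo + hi) * n)
    return w / w.sum()

p = maxent_queue_pmf(capacity=10, mean_n=4.0)
```

The resulting truncated-geometric shape is the least-biased distribution consistent with the two observables; richer constraint sets (utilization, higher moments) extend the same recipe with more multipliers.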

  5. Maximum Entropy Principle Based Estimation of Performance Distribution in Queueing Theory

    PubMed Central

    He, Dayi; Li, Ran; Huang, Qi; Lei, Ping

    2014-01-01

In related research on queuing systems, it is widespread practice to assume, in order to determine the system state, that the system is stable and that the distributions of the customer arrival ratio and service ratio are known. In this study, the queuing system is treated as a black box: no assumptions are made about the distributions of the arrival and service ratios, and only the assumption that the queuing system is stable is retained. By applying the principle of maximum entropy, the performance distribution of queuing systems is derived from some easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness-of-fit test, the accuracy and practical generality of the maximum entropy approach are demonstrated. PMID:25207992

  6. Multifield stochastic particle production: beyond a maximum entropy ansatz

    NASA Astrophysics Data System (ADS)

    Amin, Mustafa A.; Garcia, Marcos A. G.; Xie, Hong-Yi; Wen, Osmond

    2017-09-01

We explore non-adiabatic particle production for Nf coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross-couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed-form results (up to quadratures) for the asymptotic particle production rates for the Nf = 1 and Nf = 2 cases. We also present results for the general Nf > 2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin & Baumann (2015) and the underlying statistical distribution of the self- and cross-couplings. We provide and justify a simple-to-use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when Nf ≫ 1.

  7. Multi-dimensional validation of a maximum-entropy-based interpolative moment closure

    NASA Astrophysics Data System (ADS)

    Tensuda, Boone R.; McDonald, James G.; Groth, Clinton P. T.

    2016-11-01

The performance of a novel maximum-entropy-based 14-moment interpolative closure is examined for multi-dimensional flows via validation of the closure on several established benchmark problems. Despite its consideration of heat transfer, this 14-moment closure contains closed-form expressions for the closing fluxes, unlike the maximum-entropy models on which it is based. While still retaining singular behaviour in some regions of realizable moment space, the interpolative closure proves to have a large region of hyperbolicity while remaining computationally tractable. Furthermore, the singular nature has been shown to be advantageous for practical simulations. The multi-dimensional cases considered here include Couette flow, heat transfer between infinite parallel plates, subsonic flow past a circular cylinder, and lid-driven cavity flow. The 14-moment predictions are compared to analytical, DSMC, and experimental results as well as the results of other closures. For each case, a range of Knudsen numbers is explored in order to assess the validity and accuracy of the closure in different regimes. For Couette flow and heat transfer between flat plates, it is shown that the closure predictions are consistent with the expected analytical solutions in all regimes. In the cases of flow past a circular cylinder and lid-driven cavity flow, the closure is found to give more accurate results than the related lower-order maximum-entropy Gaussian and maximum-entropy-based regularized Gaussian closures. The ability to predict important non-equilibrium phenomena, such as a counter-gradient heat flux, is also established.

  8. Maximum entropy analytic continuation for frequency-dependent transport coefficients with nonpositive spectral weight

    NASA Astrophysics Data System (ADS)

    Reymbaut, A.; Gagnon, A.-M.; Bergeron, D.; Tremblay, A.-M. S.

    2017-03-01

The computation of transport coefficients, even in linear response, is a major challenge for theoretical methods that rely on analytic continuation of correlation functions obtained numerically in Matsubara space. While maximum entropy methods can be used for certain correlation functions, this is not possible in general, important examples being the Seebeck, Hall, Nernst, and Righi-Leduc coefficients. Indeed, positivity of the spectral weight on the positive real-frequency axis is not guaranteed in these cases. The spectral weight can even be complex in the presence of broken time-reversal symmetry. Various workarounds, such as the neglect of vertex corrections or the study of the infinite-frequency or Kelvin limits, have been proposed. Here, we show that one can define auxiliary response functions that allow one to extract the desired real-frequency susceptibilities from maximum entropy methods in the most general multiorbital cases with no particular symmetry. As a benchmark case, we study the longitudinal thermoelectric response and corresponding Onsager coefficient in the single-band two-dimensional Hubbard model treated with dynamical mean-field theory and continuous-time quantum Monte Carlo. We thereby extend the maximum entropy analytic continuation with auxiliary functions (MaxEntAux method), developed for the study of the superconducting pairing dynamics of correlated materials, to transport coefficients.

  9. Vertical and horizontal processes in the global atmosphere and the maximum entropy production conjecture

    NASA Astrophysics Data System (ADS)

    Pascale, S.; Gregory, J. M.; Ambaum, M. H. P.; Tailleux, R.; Lucarini, V.

    2012-01-01

The objective of this paper is to reconsider the Maximum Entropy Production conjecture (MEP) in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under the condition of fixed insolation, a MEP solution is found with reasonably realistic temperatures and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and vertical entropy production terms are independently involved in the maximisation, so MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure is unrealistic and presents seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but that the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of the contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided using two different methods. In both cases we find that approximately 40 mW m-2 K-1 of material entropy production is due to vertical heat transport and 5-7 mW m-2 K-1 to horizontal heat transport.
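A toy two-box version of such a horizontal MEP calculation can be sketched directly: box 1 (tropics) exports a meridional heat flux F to box 2, each box balances absorbed solar against a linearized OLR A + B·Tc, and the entropy production F(1/T2 − 1/T1) is maximized over F. All numbers below are illustrative Budyko-style assumptions, not the paper's values:

```python
import numpy as np

A, B = 203.3, 2.09           # W m^-2 and W m^-2 K^-1 (assumed linearization)
I1, I2 = 330.0, 200.0        # absorbed shortwave per box, W m^-2 (assumed)

def temperatures(F):
    """Steady-state box temperatures (K) given a meridional flux F."""
    T1 = (I1 - F - A) / B + 273.15   # box 1 exports F
    T2 = (I2 + F - A) / B + 273.15   # box 2 imports F
    return T1, T2

def entropy_production(F):
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)   # W m^-2 K^-1

F_grid = np.linspace(0.0, 60.0, 6001)
F_mep = F_grid[np.argmax(entropy_production(F_grid))]
```

The maximum sits at an intermediate flux: F = 0 produces no dissipation, while a flux large enough to equalize the temperatures also drives the production to zero; the full four-box and higher-resolution models in the paper maximize over several such fluxes at once.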

  10. Maximum entropy principle for stationary states underpinned by stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Ford, Ian J.

    2015-11-01

    The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.

  11. Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.

    PubMed

    Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S

    2014-02-18

NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently, multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (a 3D experiment in one ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
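The claim that resolution is set by the longest evolution time sampled can be demonstrated with a synthetic 1-D FID (all parameters below are invented): truncating the record broadens the line, and zero-filling cannot recover the lost resolution:

```python
import numpy as np

def fwhm_hz(n_points, dt=0.001, f0=100.0, t2=0.5, nfft=65536):
    """FWHM of the magnitude spectrum of a decaying complex exponential
    recorded for n_points samples and zero-filled to nfft."""
    t = np.arange(n_points) * dt
    fid = np.exp(2j*np.pi*f0*t - t/t2)
    spec = np.abs(np.fft.fft(fid, nfft))
    # count frequency bins above half maximum, convert to Hz
    return (spec > 0.5*spec.max()).sum() * (1.0/dt) / nfft

short, long_ = fwhm_hz(128), fwhm_hz(2048)
```

The short record's linewidth is dominated by the ~1/T truncation broadening, which is what motivates sampling a few large evolution times nonuniformly instead of uniformly extending the whole grid.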

  12. Maximum Entropy Reconstruction and Nonuniform Sampling in Multidimensional NMR

    PubMed Central

    HOCH, JEFFREY C.; MACIEJEWSKI, MARK W.; MOBLI, MEHDI; SCHUYLER, ADAM D.; STERN, ALAN S.

    2014-01-01

    CONSPECTUS NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for complex biomolecules, but it suffers from well-known limitations. Chief among these is the difficulty of obtaining high-resolution spectral estimates from short data records. For multidimensional NMR experiments, this imposes a sampling burden, because the time required to perform an experiment is proportional to the number of data samples. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently multidimensional NMR experiments that rely on the DFT either must sacrifice resolution in order to be completed in reasonable time, or they must use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis capable of providing high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that attend the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include careful design of the sampling scheme and special methods for computing the spectrum. Time savings of a factor of three for each of the N-1 indirect dimensions of an N-dimensional NMR experiment are now routinely reported, making practical high-resolution 3- and 4-dimensional experiments that were previously prohibitively time consuming. 
Conversely, tailored

  13. Maximum-entropy closure for a Galerkin system of incompressible shear flow

    NASA Astrophysics Data System (ADS)

    Noack, Bernd R.; Niven, Robert K.

    2011-11-01

    A statistical physics closure is proposed for Galerkin models of incompressible shear flows. This closure employs a maximum entropy (MaxEnt) principle to infer the probability distribution in Galerkin state space using exact statistical balance equations as side constraints. Application to an empirical Galerkin model of the periodic cylinder wake predicts mean amplitude values and modal energy levels in good agreement with direct numerical simulation. Recipes for more complicated Galerkin systems are provided. Partially funded by the ANR Chaires d'Excellence TUCOROM.

  14. Application of maximum entropy method for droplet size distribution prediction using instability analysis of liquid sheet

    NASA Astrophysics Data System (ADS)

    Movahednejad, E.; Ommi, F.; Hosseinalipour, S. M.; Chen, C. P.; Mahdavi, S. A.

    2011-12-01

This paper describes the implementation of an instability analysis of wave growth on a liquid jet surface, together with the maximum entropy principle (MEP), for prediction of the droplet diameter distribution in the primary breakup region. The early stage of the primary breakup, which contains the growth of waves on the liquid-gas interface, is deterministic, whereas the droplet formation stage at the end of primary breakup is random and stochastic. The droplet formation stage after the liquid bulk breakup can be modeled by statistical means based on the maximum entropy principle. The MEP provides a formulation that predicts the atomization process while satisfying constraint equations based on conservation of mass, momentum and energy. The deterministic aspect considers the instability of wave motion on the jet surface before the liquid bulk breakup using linear instability analysis, which provides the maximum growth rate and the corresponding wavelength of instabilities in the breakup zone. The two sub-models are coupled together through a momentum source term and the mean diameter of the droplets. The model is also capable of accounting for drag force on the droplets through gas-liquid interaction. The predicted results compare favorably with experimentally measured droplet size distributions for hollow-cone sprays.

  15. Predicting protein β-sheet contacts using a maximum entropy-based correlated mutation measure.

    PubMed

    Burkoff, Nikolas S; Várnai, Csilla; Wild, David L

    2013-03-01

The problem of ab initio protein folding is one of the most difficult in modern computational biology. The prediction of residue contacts within a protein provides a more tractable intermediate step. Recently introduced maximum entropy-based correlated mutation measures (CMMs), such as direct information, have been successful in predicting residue contacts. However, most correlated mutation studies focus on proteins with large, good-quality multiple sequence alignments (MSAs), because the power of correlated mutation analysis falls as the size of the MSA decreases. Even with small, automatically generated MSAs, however, maximum entropy-based CMMs contain information. To make use of this information, in this article we focus not on general residue contacts but on contacts between residues in β-sheets. The strong constraints and prior knowledge associated with β-contacts are ideally suited to prediction with a method that incorporates an often noisy CMM. Using contrastive divergence, a statistical machine learning technique, we have calculated a maximum entropy-based CMM. We have integrated this measure with a new probabilistic model for β-contact prediction, which is used to predict both residue- and strand-level contacts. Using our model on a standard non-redundant dataset, we significantly outperform a 2D recurrent neural network architecture, achieving a 5% improvement in true positives at the 5% false-positive rate at the residue level. At the strand level, our approach is competitive with the state-of-the-art single methods, achieving a precision of 61.0% and a recall of 55.4%, while not requiring residue solvent accessibility as an input. http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/

  16. Application of the method of maximum entropy in the mean to classification problems

    NASA Astrophysics Data System (ADS)

    Gzyl, Henryk; ter Horst, Enrique; Molina, German

    2015-11-01

In this note we propose an application of the method of maximum entropy in the mean to solve a class of inverse problems comprising classification problems and feasibility problems that appear in optimization. Such problems may be thought of as linear inverse problems with convex constraints imposed on both the solution and the data. The method of maximum entropy in the mean proves to be a very useful tool for dealing with problems of this type.

  17. Application of a maximum-entropy technique to spherical geometry in radiative transfer and reactor physics

    NASA Astrophysics Data System (ADS)

    Degheidy, A. R.; Madkour, M. A.

    1993-05-01

The maximum-entropy technique is used to solve three problems in radiative transfer and reactor physics involving spherical geometry: (1) the luminosity, or total energy emitted by a sphere; (2) the neutron capture probability; and (3) the albedo problem. Numerical calculations are performed and compared with the exact values as well as with Padé approximant results. The comparisons show that the maximum-entropy results are very accurate and converge to the exact results.

  18. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  19. Maximum entropy analysis of flow and reaction networks

    NASA Astrophysics Data System (ADS)

    Niven, Robert K.; Abel, Markus; Schlegel, Michael; Waldrip, Steven H.

    2015-01-01

    We present a generalised MaxEnt method to infer the stationary state of a flow network, subject to "observable" constraints on expectations of various parameters, as well as "physical" constraints arising from frictional properties (resistance functions) and conservation laws (Kirchhoff laws). The method invokes an entropy defined over all uncertainties in the system, in this case the internal and external flow rates and potential differences. The proposed MaxEnt framework is readily extendable to the analysis of networks with uncertainty in the network structure itself.

  20. Exploration of the Maximum Entropy/Optimal Projection Approach to Control Design Synthesis for Large Space Structures.

    DTIC Science & Technology

    1985-02-01

Energy Analysis, a branch of dynamic modal analysis developed for analyzing acoustic vibration problems, its present stage of development embodies a ... Maximum Entropy Stochastic Modelling and Reduced-Order Design Synthesis is a rigorous new approach to this class of problems. Inspired by Statistical

  1. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
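As a concrete baseline for what "maximum entropy estimation" means here, the sketch below computes a discrete MaxEnt density constrained by its first two moments, which recovers a (discretized) Gaussian. The grid, the target moments, and the convex dual formulation are generic textbook choices, not details of the paper's field-theory construction.

```python
# MaxEnt density estimation on a grid: constrain <x> and <x^2> and solve
# the convex dual for the Lagrange multipliers. Grid and moments assumed.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-5, 5, 501)
targets = np.array([0.0, 1.0])   # desired <x> and <x^2>
F = np.vstack([x, x**2])         # constraint features, one row per moment

def dual(lam):
    # Convex dual of the MaxEnt problem: log-partition plus lam . targets.
    logZ = np.log(np.exp(-lam @ F).sum())
    return logZ + lam @ targets

res = minimize(dual, x0=np.zeros(2), method="BFGS")
w = np.exp(-res.x @ F)
p = w / w.sum()                  # discretized Gaussian with mean 0, variance 1
```

At the optimum the gradient of the dual vanishes, which is exactly the statement that the fitted density reproduces the target moments.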

  2. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan, resulting in significant loss of human life and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 had one of the most severe impacts on record. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a given region, characterizing typhoon rainfall types is advantageous when estimating rainfall quantities. This study develops a rainfall prediction model in three parts. First, extended empirical orthogonal functions (EEOFs) are used to classify typhoon events, decomposing the standardized rainfall pattern over all stations for each event into EOFs and principal components (PCs), so that events that vary similarly in time and space can be grouped into similar typhoon types. Next, based on this classification, probability density functions (PDFs) are constructed in space and time by means of multivariate maximum entropy using the first through fourth statistical moments, giving a probability for each station at each time. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and could assist the government in typhoon disaster prevention.

  3. Entropy-based portfolio models: Practical issues

    NASA Astrophysics Data System (ADS)

    Shirazi, Yasaman Izadparast; Sabiruzzaman, Md.; Hamzah, Nor Aishah

    2015-10-01

Entropy is a nonparametric alternative to variance and has been used as a measure of risk in portfolio analysis. In this paper, the computation of entropy risk for a given data set is discussed with illustration, and a comparison between entropy-based portfolio models is made. We propose a natural extension of the mean-entropy portfolio that makes it more general and diversified. In terms of performance, the new model is similar to the mean-entropy portfolio when applied to real and simulated data, offers a higher return if no constraint is set on the desired return, and is found to be the most diversified portfolio model.
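As an illustration of entropy as a risk measure, the sketch below bins returns on a fixed grid and computes the Shannon entropy of the empirical distribution; wider dispersion then yields higher entropy, mirroring variance. The bin grid and the simulated return series are assumptions for the example only.

```python
# Entropy as a nonparametric risk proxy: Shannon entropy of the empirical
# return distribution on a fixed bin grid. Bin width and data are assumed.
import numpy as np

def entropy_risk(returns, bin_width=0.005):
    edges = np.arange(-0.25, 0.25 + bin_width, bin_width)  # fixed grid
    counts, _ = np.histogram(returns, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]                 # drop empty bins (0 * log 0 = 0)
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
narrow = rng.normal(0.0, 0.01, 5000)   # low-dispersion asset
wide = rng.normal(0.0, 0.05, 5000)     # high-dispersion asset
# Higher dispersion spreads mass over more bins, so entropy risk rises.
```

A fixed bin grid matters here: binning each series over its own data range would make the entropy nearly scale-invariant and hide the dispersion difference.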

  4. Automatic feature template generation for maximum entropy based intonational phrase break prediction

    NASA Astrophysics Data System (ADS)

    Zhou, You

    2013-03-01

The prediction of intonational phrase (IP) breaks is important for both the naturalness and intelligibility of Text-to-Speech (TTS) systems. In this paper, we propose a maximum entropy (ME) model to predict IP breaks from unrestricted text, and evaluate various keyword selection approaches in different domains. Furthermore, we design a hierarchical clustering algorithm for the automatic generation of feature templates, which minimizes the need for human supervision during ME model training. Comparative experiments show that, for the task of IP break prediction, the ME model clearly outperforms classification and regression trees (CART); that the log-likelihood ratio is the best scoring measure for keyword selection; and that, compared with manual templates, templates automatically generated by our approach greatly improve the F-score of ME-based IP break prediction while significantly reducing the size of the ME model.

  5. A novel impact identification algorithm based on a linear approximation with maximum entropy

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Meruane, V.; Ortiz-Bernardin, A.

    2016-09-01

This article presents a novel impact identification algorithm that uses a linear approximation handled by a statistical inference model based on the maximum-entropy principle, termed linear approximation with maximum entropy (LME). Unlike other regression algorithms such as artificial neural networks (ANNs) and support vector machines, the proposed algorithm requires only one parameter to be selected, and the impact is identified by solving a convex optimization problem with a unique solution. In addition, LME processes the data in a time comparable to that of the other algorithms. The performance of the proposed methodology is validated on an experimental aluminum plate. Time-varying strain data are measured using four piezoceramic sensors bonded to the plate. To demonstrate the potential of the proposed approach over existing ones, results obtained via LME are compared with those of ANNs and least-squares support vector machines. The results demonstrate that with a small number of sensors it is possible to accurately locate and quantify impacts on a structure, and that LME outperforms the other impact identification algorithms.

  6. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

In environmental and other scientific applications, a certain understanding of the geological lithological composition is required, yet practical restrictions mean that only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applies the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine hard data with soft data to improve estimation. Since lithological classification data are discrete and categorical, this research applies categorical BME to establish a complete three-dimensional lithological estimation model. The limited hard data from cores, together with soft data generated from geological dating data and virtual wells, are used to estimate the three-dimensional lithological classification of the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  7. Generalized maximum entropy approach to quasistationary states in long-range systems

    NASA Astrophysics Data System (ADS)

    Martelloni, Gabriele; Martelloni, Gianluca; de Buyl, Pierre; Fanelli, Duccio

    2016-02-01

    Systems with long-range interactions display a short-time relaxation towards quasistationary states (QSSs) whose lifetime increases with the system size. In the paradigmatic Hamiltonian mean-field model (HMF) out-of-equilibrium phase transitions are predicted and numerically detected which separate homogeneous (zero magnetization) and inhomogeneous (nonzero magnetization) QSSs. In the former regime, the velocity distribution presents (at least) two large, symmetric bumps, which cannot be self-consistently explained by resorting to the conventional Lynden-Bell maximum entropy approach. We propose a generalized maximum entropy scheme which accounts for the pseudoconservation of additional charges, the even momenta of the single-particle distribution. These latter are set to the asymptotic values, as estimated by direct integration of the underlying Vlasov equation, which formally holds in the thermodynamic limit. Methodologically, we operate in the framework of a generalized Gibbs ensemble, as sometimes defined in statistical quantum mechanics, which contains an infinite number of conserved charges. The agreement between theory and simulations is satisfying, both above and below the out-of-equilibrium transition threshold. A previously unaccessible feature of the QSSs, the multiple bumps in the velocity profile, is resolved by our approach.

  8. Most likely maximum entropy for population analysis: A case study in decompression sickness prevention

    NASA Astrophysics Data System (ADS)

    Bennani, Youssef; Pronzato, Luc; Rendas, Maria João

    2015-01-01

We estimate the density of a set of biophysical parameters from region-censored observations. We propose a new maximum entropy (maxent) estimator, formulated as finding the most likely constrained maxent density. By using the Rényi entropy of order two instead of the Shannon entropy, we are led to a quadratic optimization problem with linear inequality constraints, which has an efficient numerical solution. We compare the proposed estimator to the NPMLE and to the best-fitting maxent solutions on real data from hyperbaric diving, showing that the resulting distribution has better generalization performance than NPMLE or maxent alone.
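The key computational point, that order-2 Rényi MaxEnt reduces to a quadratic program, can be illustrated directly: maximizing H2(p) = -log Σ p_i² under linear constraints is equivalent to minimizing Σ p_i². The grid, the single moment constraint and the use of a general-purpose SLSQP solver below are illustrative assumptions; the paper's censored-data constraints are more involved.

```python
# Order-2 Renyi MaxEnt as a quadratic program: minimize sum(p^2) subject to
# normalization, nonnegativity and an assumed first-moment constraint.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 51)
mean_target = 0.3                # assumed linear (first-moment) constraint

res = minimize(
    lambda p: (p ** 2).sum(),    # min sum(p^2)  ==  max Renyi-2 entropy
    x0=np.full(x.size, 1.0 / x.size),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * x.size,
    constraints=[
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: (p * x).sum() - mean_target},
    ],
)
p = res.x
```

Because the objective is a strictly convex quadratic and the constraints are linear, the solution is unique, which is what makes the order-2 formulation numerically attractive.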

  9. Entropy modeling of sustainable development of megacities

    NASA Astrophysics Data System (ADS)

    Tyrsin, A. N.; Gevorgyan, G. G.

    2017-06-01

    The entropy approach of modeling multidimensional stochastic systems is described. It is based on the system representation as a multidimensional random vector and on the use of its differential entropy as a mathematical model. The possibilities of using this entropy model are considered for problems of monitoring the state of complex systems, including megacities, regions and critical infrastructure. Examples of practical implementation of the model are presented for the study of the sustainable development of megacities and regional environmental protection systems.

  10. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
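The canonical form described above can be sketched in a few lines: given error-function values for a set of candidate model solutions, the MaxEnt distribution is p_m ∝ exp(-β E_m), with the sensitivity β fixed by the expectation constraint ⟨E⟩ = Ē. The error values and the constraint value below are made-up illustrative numbers, not data from the paper.

```python
# Canonical MaxEnt distribution over model solutions: p_m ~ exp(-beta * E_m),
# with beta determined by an assumed expected-error constraint.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.8, 1.0, 1.3, 1.9, 2.5, 3.2])  # hypothetical error values
E_bar = 1.2                                   # assumed expectation constraint

def mean_error(beta):
    w = np.exp(-beta * (E - E.min()))         # shift for numerical stability
    p = w / w.sum()
    return (p * E).sum()

# At beta = 0 the mean error is the plain average (too large here); increasing
# beta concentrates mass on low-error solutions, so a root exists.
beta = brentq(lambda b: mean_error(b) - E_bar, 0.0, 100.0)
w = np.exp(-beta * (E - E.min()))
p = w / w.sum()
```

Marginals for individual parameters would then follow by summing this distribution over the remaining parameters, as the abstract describes.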

  11. Jaynes' MaxEnt, Steady State Flow Systems and the Maximum Entropy Production Principle

    NASA Astrophysics Data System (ADS)

    Niven, Robert K.

    2009-12-01

    Jaynes' maximum entropy (MaxEnt) principle was recently used to give a conditional, local derivation of the "maximum entropy production" (MEP) principle, which states that a flow system with fixed flow(s) or gradient(s) will converge to a steady state of maximum production of thermodynamic entropy (R. K. Niven, Phys. Rev. E, 80(2) (2009) 021113). The analysis provides a steady state analog of the MaxEnt formulation of equilibrium thermodynamics, applicable to many complex flow systems at steady state. The present study examines the classification of physical systems, with emphasis on the choice of constraints in MaxEnt. The discussion clarifies the distinction between equilibrium, fluid flow, source/sink, flow/reactive and other systems, leading into an appraisal of the application of MaxEnt to steady state flow and reactive systems.

  12. A Bayes-Maximum Entropy method for multi-sensor data fusion

    SciTech Connect

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  13. Ecosystem biogeochemistry considered as a distributed metabolic network ordered by maximum entropy production

    PubMed Central

    Vallino, Joseph J.

    2010-01-01

    We examine the application of the maximum entropy production principle for describing ecosystem biogeochemistry. Since ecosystems can be functionally stable despite changes in species composition, we use a distributed metabolic network for describing biogeochemistry, which synthesizes generic biological structures that catalyse reaction pathways, but is otherwise organism independent. Allocation of biological structure and regulation of biogeochemical reactions is determined via solution of an optimal control problem in which entropy production is maximized. However, because synthesis of biological structures cannot occur if entropy production is maximized instantaneously, we propose that information stored within the metagenome allows biological systems to maximize entropy production when averaged over time. This differs from abiotic systems that maximize entropy production at a point in space–time, which we refer to as the steepest descent pathway. It is the spatio-temporal averaging that allows biological systems to outperform abiotic processes in entropy production, at least in many situations. A simulation of a methanotrophic system is used to demonstrate the approach. We conclude with a brief discussion on the implications of viewing ecosystems as self-organizing molecular machines that function to maximize entropy production at the ecosystem level of organization. PMID:20368260

  14. Maximum entropy restoration of blurred and oversaturated Hubble Space Telescope imagery.

    PubMed

    Bonavito, N L; Dorband, J E; Busse, T

    1993-10-10

    A brief introduction to image reconstruction is made and the basic concepts of the maximum entropy method are outlined. A statistical inference algorithm based on this method is presented. The algorithm is tested on simulated data and applied to real data. The latter is from a 1024 × 1024 Hubble Space Telescope image of the binary stellar system R Aquarii, which suffers from both spherical aberration and detector saturation. Under these constraints the maximum entropy method produces an image that agrees closely with observed results. The calculations were performed on the MasPar MP-1 single-instruction/multiple-data computer.

  15. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of pixels with the same gray level according to their position, as measured by their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The proposed method not only achieves better segmentation results but also computes faster than traditional 2D histogram-based segmentation methods.
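A minimal version of the 1D maximum entropy (Kapur-style) thresholding step, assuming a synthetic 16-level bimodal histogram: the threshold is chosen to maximize the sum of the entropies of the background and foreground gray-level distributions. The paper's spatial coherence weighting is omitted here.

```python
# 1D maximum entropy thresholding on a gray-level histogram (Kapur-style).
# The synthetic bimodal histogram below is an illustrative assumption.
import numpy as np

def class_entropy(q):
    """Shannon entropy of one gray-level class, normalized by its mass."""
    s = q.sum()
    if s <= 0:
        return 0.0
    q = q[q > 0] / s
    return -(q * np.log(q)).sum()

def max_entropy_threshold(hist):
    p = hist.astype(float) / hist.sum()
    scores = [class_entropy(p[:t]) + class_entropy(p[t:])
              for t in range(1, len(p))]
    return 1 + int(np.argmax(scores))

# Synthetic 16-level histogram: dark background near level 3,
# brighter target near level 12.
levels = np.arange(16)
hist = (400 * np.exp(-0.5 * ((levels - 3) / 1.5) ** 2)
        + 150 * np.exp(-0.5 * ((levels - 12) / 1.5) ** 2))
t = max_entropy_threshold(hist)   # threshold lands between the two modes
```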

  16. Maximum joint entropy and information-based collaboration of automated learning machines

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.; Lary, D. J.

    2012-05-01

We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention. To accomplish this, an intelligent agent is viewed as a question-asking machine, which is designed by coupling the processes of inference and inquiry to form a model-based learning unit. In order to select maximally informative queries, the intelligent agent needs to be able to compute the relevance of a question. This is accomplished by employing the inquiry calculus, which is dual to the probability calculus and extends information theory by explicitly requiring context. Here, we consider the interaction between two question-asking intelligent agents, and note that there is a potential information redundancy with respect to the two questions that the agents may choose to pose. We show that the information redundancy is minimized by maximizing the joint entropy of the questions, which simultaneously maximizes the relevance of each question while minimizing the mutual information between them. Maximum joint entropy is therefore an important principle of information-based collaboration, which enables intelligent agents to learn together efficiently.
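The identity underlying this argument, H(X,Y) = H(X) + H(Y) - I(X;Y), can be checked numerically: for fixed marginals, the joint entropy of two binary questions is maximized exactly when their mutual information vanishes. The 2x2 joint distributions below are illustrative.

```python
# Joint entropy vs. mutual information for two binary "questions":
# with the marginals held fixed, the independent pairing maximizes H(X,Y).
import numpy as np

def H(p):
    """Shannon entropy of a probability vector (zero bins contribute 0)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_info(joint):
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return H(px) + H(py) - H(joint.ravel())

px = np.array([0.6, 0.4])
py = np.array([0.7, 0.3])
independent = np.outer(px, py)        # zero-redundancy pairing of questions
correlated = np.array([[0.55, 0.05],  # same marginals, overlapping questions
                       [0.15, 0.25]])
```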

  17. Determination of zero-coupon and spot rates from treasury data by maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Gzyl, Henryk; Mayoral, Silvia

    2016-08-01

An interesting and important inverse problem in finance consists of determining spot rates or prices of zero-coupon bonds when the only information available is the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method for treating such problems, which neither imposes an analytic form on the rates or bond prices nor assumes a model for the (random) evolution of the yields. The procedure consists of transforming the determination of the zero-coupon bond prices into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
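A deliberately small sketch of the linear-inverse-problem formulation (not the full maximum entropy in the mean machinery): recover one discount factor per cash-flow date from two coupon bond prices, picking the maximum entropy solution among the infinitely many that reprice the bonds exactly. The cash-flow matrix, prices and entropy objective below are illustrative assumptions.

```python
# Underdetermined bond repricing A d = P solved by a MaxEnt selection rule:
# maximize -sum(d log d) over discount factors d in (0, 1]. All numbers
# here are made up for illustration.
import numpy as np
from scipy.optimize import minimize

# Cash flows of 2 bonds over 4 coupon dates (coupon + principal at maturity).
A = np.array([[3.0, 103.0, 0.0, 0.0],    # 2-period bond, 3% coupon
              [4.0, 4.0, 4.0, 104.0]])   # 4-period bond, 4% coupon
P = np.array([101.0, 105.0])             # observed coupon bond prices

def neg_entropy(d):
    return (d * np.log(d)).sum()         # minimizing this maximizes entropy

res = minimize(neg_entropy,
               x0=np.full(4, 0.9),
               method="SLSQP",
               bounds=[(1e-6, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda d: A @ d - P}])
d = res.x                                # MaxEnt discount curve
```

With two equations and four unknowns the repricing constraints alone do not pin down the curve; the entropy objective supplies the missing selection criterion without imposing a parametric yield model.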

  18. Frequency-domain localization of alpha rhythm in humans via a maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Patel, Pankaj; Khosla, Deepak; Al-Dayeh, Louai; Singh, Manbir

    1997-05-01

Generators of spontaneous human brain activity such as the alpha rhythm may be easier and more accurate to localize in the frequency domain than in the time domain, since these generators are characterized by a specific frequency range. We carried out a frequency-domain analysis of synchronous alpha sources by generating equivalent potential maps from the Fourier transform of each channel of electroencephalographic (EEG) recordings. Since the alpha rhythm recorded by EEG scalp measurements is probably produced by several independent generators, a distributed source imaging approach was considered more appropriate than a model based on a single equivalent current dipole. We used an imaging approach based on a Bayesian maximum entropy technique. Reconstructed sources were superposed on the corresponding anatomy from magnetic resonance imaging. Results from human studies suggest that the reconstructed sources responsible for the alpha rhythm are mainly located in the occipital and parieto-occipital lobes.

  19. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  20. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram is represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduces the peak separation that can be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details of the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. In this case, the correct choice of peak shape function was found to be critical to the sensitivity of maximum entropy in reconstructing these chromatograms.

  1. Maximum-entropy expectation-maximization algorithm for image reconstruction and sensor field estimation.

    PubMed

    Hong, Hunsop; Schonfeld, Dan

    2008-06-01

In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm and use it for density estimation. The maximum-entropy constraint is imposed to ensure smoothness of the estimated density function. The derivation of the MEEM algorithm requires determining the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We therefore derive the MEEM algorithm by optimizing a lower bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We extend this approach to image recovery from randomly sampled data and to sensor field estimation from randomly scattered sensor networks. Computer simulation experiments demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods for density estimation, image recovery and sensor field estimation.

  2. Semisupervised learning for a hybrid generative/discriminative classifier based on the maximum entropy principle.

    PubMed

    Fujino, Akinori; Ueda, Naonori; Saito, Kazumi

    2008-03-01

    This paper presents a method for designing semi-supervised classifiers trained on labeled and unlabeled samples. We focus on probabilistic semi-supervised classifier design for multi-class and single-labeled classification problems, and propose a hybrid approach that takes advantage of generative and discriminative approaches. In our approach, we first consider a generative model trained by using labeled samples and introduce a bias correction model, where these models belong to the same model family, but have different parameters. Then, we construct a hybrid classifier by combining these models based on the maximum entropy principle. To enable us to apply our hybrid approach to text classification problems, we employed naive Bayes models as the generative and bias correction models. Our experimental results for four text data sets confirmed that the generalization ability of our hybrid classifier was much improved by using a large number of unlabeled samples for training when there were too few labeled samples to obtain good performance. We also confirmed that our hybrid approach significantly outperformed generative and discriminative approaches when the performance of the generative and discriminative approaches was comparable. Moreover, we examined the performance of our hybrid classifier when the labeled and unlabeled data distributions were different.

  3. Maximum entropy, fractal dimension and lacunarity in quantification of cellular rejection in myocardial biopsy of patients submitted to heart transplantation

    NASA Astrophysics Data System (ADS)

    Neves, L. A.; Oliveira, F. R.; Peres, F. A.; Moreira, R. D.; Moriel, A. R.; de Godoy, M. F.; Murta Junior, L. O.

    2011-03-01

    This paper presents a method for the quantification of cellular rejection in endomyocardial biopsies of patients submitted to heart transplant. The model is based on automatic multilevel thresholding, which employs histogram quantification techniques, histogram slope percentage analysis and the calculation of maximum entropy. The structures were quantified with the aid of the multi-scale fractal dimension and lacunarity for the identification of behavior patterns in myocardial cellular rejection in order to determine the most adequate treatment for each case.

  4. Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods

    SciTech Connect

    Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias

    2007-11-13

    We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of the complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information.

  5. Ecosystem functioning and maximum entropy production: a quantitative test of hypotheses

    PubMed Central

    Meysman, Filip J. R.; Bruers, Stijn

    2010-01-01

    The idea that entropy production puts a constraint on ecosystem functioning is quite popular in ecological thermodynamics. Yet, until now, such claims have received little quantitative verification. Here, we examine three ‘entropy production’ hypotheses that have been put forward in the past. The first states that increased entropy production serves as a fingerprint of living systems. The other two hypotheses invoke stronger constraints. The state selection hypothesis states that when a system can attain multiple steady states, the stable state will show the highest entropy production rate. The gradient response principle requires that when the thermodynamic gradient increases, the system's new stable state should always be accompanied by a higher entropy production rate. We test these three hypotheses by applying them to a set of conventional food web models. Each time, we calculate the entropy production rate associated with the stable state of the ecosystem. This analysis shows that the first hypothesis holds for all the food webs tested: the living state always shows increased entropy production over the abiotic state. In contrast, the state selection and gradient response hypotheses break down when the food web incorporates more than one trophic level, indicating that they are not generally valid. PMID:20368259

  6. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  7. Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production

    NASA Astrophysics Data System (ADS)

    Belkin, A.; Hubler, A.; Bezryadin, A.

    2015-02-01

    While the behavior of equilibrium systems is well understood, the evolution of nonequilibrium ones is much less clear. Yet, many researchers have suggested that the principle of maximum entropy production is of key importance in complex systems away from equilibrium. Here, we present a quantitative study of large ensembles of carbon nanotubes suspended in a non-conducting, non-polar fluid subject to a strong electric field. Being driven out of equilibrium, the suspension spontaneously organizes into an electrically conducting state under a wide range of parameters. Such self-assembly allows the Joule heating, and therefore the entropy production in the fluid, to be maximized. Curiously, we find that emerging self-assembled structures can start to wiggle. The wiggling takes place only until the entropy production in the suspension reaches its maximum, at which time the wiggling stops and the structure becomes quasi-stable. Thus, we provide strong evidence that the maximum entropy production principle plays an essential role in the evolution of self-organizing systems far from equilibrium.

  8. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems

    PubMed Central

    Schymanski, Stanislaus J.; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin

    2010-01-01

    Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models run at larger scales that neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 × 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed. PMID:20368263

  9. Estimation of Groundwater Radon in North Carolina Using Land Use Regression and Bayesian Maximum Entropy.

    PubMed

    Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L

    2015-08-18

    Radon ((222)Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium ((238)U), which is ubiquitous in rocks and soils worldwide. Inhaled (222)Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater also contributes via both inhalation and ingestion routes. A land use regression (LUR) model for groundwater (222)Rn with anisotropic geological and (238)U based explanatory variables is developed, which helps elucidate the factors contributing to elevated (222)Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater (222)Rn across North Carolina including prediction uncertainty. The LUR-BME model of groundwater (222)Rn results in a leave-one-out cross-validation r(2) of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled (222)Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock (238)U estimated on the basis of overlying stream-sediment (238)U concentrations, a widely distributed and consistently analyzed point-source data set.

  10. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    PubMed

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  11. Structural damage assessment using linear approximation with maximum entropy and transmissibility data

    NASA Astrophysics Data System (ADS)

    Meruane, V.; Ortiz-Bernardin, A.

    2015-03-01

    Supervised learning algorithms have been proposed as a suitable alternative to model updating methods in structural damage assessment, with Artificial Neural Networks being the most frequently used. Nevertheless, their slow learning speed and the large number of parameters that must be tuned during the training stage have been a major bottleneck in their application. This article presents a new algorithm for real-time damage assessment that uses a linear approximation method in conjunction with antiresonant frequencies that are identified from transmissibility functions. The linear approximation is handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data is processed in a period of time that is comparable to that of Neural Networks. The performance of the proposed methodology is validated by considering three experimental structures: an eight-degree-of-freedom (DOF) mass-spring system, a beam, and an exhaust system of a car. To demonstrate the potential of the proposed algorithm over existing ones, the obtained results are compared with those of a model updating method based on parallel genetic algorithms and a multilayer feedforward neural network approach.

  12. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    PubMed

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.

  13. Entanglement entropy in top-down models

    NASA Astrophysics Data System (ADS)

    Jones, Peter A. R.; Taylor, Marika

    2016-08-01

    We explore holographic entanglement entropy in ten-dimensional supergravity solutions. It has been proposed that entanglement entropy can be computed in such top-down models using minimal surfaces which asymptotically wrap the compact part of the geometry. We show explicitly in a wide range of examples that the holographic entanglement entropy thus computed agrees with the entanglement entropy computed using the Ryu-Takayanagi formula from the lower-dimensional Einstein metric obtained from reduction over the compact space. Our examples include not only consistent truncations but also cases in which no consistent truncation exists and Kaluza-Klein holography is used to identify the lower-dimensional Einstein metric. We then give a general proof, based on the Lewkowycz-Maldacena approach, of the top-down entanglement entropy formula.

  14. Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity

    SciTech Connect

    Ortiz, A; Puso, M A; Sukumar, N

    2009-09-04

    Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
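    As a rough illustration of maximum-entropy basis functions (a simplified one-dimensional sketch under our own assumptions, not the formulation used in the paper): the nodal weights maximize Shannon entropy subject to the partition-of-unity and first-order consistency constraints, which yields an exponential form whose Lagrange multiplier can be found by Newton iteration.

```python
import numpy as np

def maxent_weights(nodes, x, tol=1e-10, max_iter=100):
    """Maximum-entropy interpolation weights in 1D (illustrative sketch).

    Finds w_a >= 0 with sum(w) = 1 and sum(w * nodes) = x by maximizing
    Shannon entropy; the solution is w_a proportional to exp(lam * nodes_a),
    with the multiplier lam found by Newton iteration on the residual.
    Assumes x lies inside the convex hull of the nodes.
    """
    nodes = np.asarray(nodes, float)
    lam = 0.0
    for _ in range(max_iter):
        e = np.exp(lam * (nodes - x))       # shift by x for numerical stability
        w = e / e.sum()
        r = w @ (nodes - x)                 # first-order constraint residual
        if abs(r) < tol:
            break
        J = w @ (nodes - x) ** 2 - r ** 2   # dr/dlam (a variance, >= 0)
        lam -= r / J
    return w
```

    In the meshfree setting, weights of this kind play the role of shape functions evaluated at the point x.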

  15. In-medium dispersion relations of charmonia studied by the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsuro; Asakawa, Masayuki; Kitazawa, Masakiyo

    2017-01-01

    We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. We also find that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at T ≃1.6 Tc for all the channels we analyze.

  16. Maximum information entropy principle and the interpretation of probabilities in statistical mechanics - a short review

    NASA Astrophysics Data System (ADS)

    Kuić, Domagoj

    2016-05-01

    In this paper an alternative approach to statistical mechanics based on the maximum information entropy principle (MaxEnt) is examined, specifically its close relation with the Gibbs method of ensembles. It is shown that the MaxEnt formalism is the logical extension of the Gibbs formalism of equilibrium statistical mechanics that is entirely independent of the frequentist interpretation of probabilities only as factual (i.e. experimentally verifiable) properties of the real world. Furthermore, we show that, consistently with the law of large numbers, the relative frequencies of the ensemble of systems prepared under identical conditions (i.e. identical constraints) actually correspond to the MaxEnt probabilities in the limit of a large number of systems in the ensemble. This result implies that the probabilities in statistical mechanics can be interpreted, independently of the frequency interpretation, on the basis of the maximum information entropy principle.
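    The relation between MaxEnt and the Gibbs formalism can be made concrete with a small numerical sketch (an illustrative example of ours, not taken from the paper): maximizing the Shannon entropy of a discrete distribution subject to a mean-energy constraint yields the canonical form p_i proportional to exp(-beta*E_i), with the multiplier beta fixed by the constraint.

```python
import numpy as np

def maxent_canonical(E, E_mean, lo=-50.0, hi=50.0):
    """MaxEnt distribution over states with energies E subject to <E> = E_mean.

    Illustrative sketch: the entropy maximizer under a mean-energy constraint
    is the Gibbs form p_i ~ exp(-beta * E_i); beta is found by bisection,
    using the fact that the mean energy is monotonically decreasing in beta.
    """
    E = np.asarray(E, float)

    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))   # shift energies for stability
        p = w / w.sum()
        return p @ E

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > E_mean:
            lo = mid                        # mean too high: increase beta
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta
```

    For a symmetric three-level system with the constraint at the middle energy, the construction recovers the uniform distribution (beta = 0), as MaxEnt with no effective bias requires.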

  17. Maximum entropy deconvolution of the optical jet of 3C 273

    NASA Technical Reports Server (NTRS)

    Evans, I. N.; Ford, H. C.; Hui, X.

    1989-01-01

    The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.

  18. Reconstruction of motional states of neutral atoms via maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Drobný, Gabriel; Bužek, Vladimír

    2002-05-01

    We present a scheme for the reconstruction of states of quantum systems from incomplete tomographic-like data. The proposed scheme is based on the Jaynes principle of maximum entropy. We apply our algorithm to the reconstruction of motional quantum states of neutral atoms. As an example we analyze the experimental data obtained by Salomon and co-workers and we reconstruct Wigner functions of motional quantum states of Cs atoms trapped in an optical lattice.

  19. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    SciTech Connect

    Barletti, Luigi

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  20. Charmonium spectra and dispersion relations with maximum entropy method in extended vector space

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsuro

    2014-09-01

    We study charmonium properties at finite temperature and finite momentum in quenched lattice QCD with an extended maximum entropy method. We analyze the spectral functions and the dispersion relations of charmonia in an extended vector space, which is a product space of two different lattice correlators. We find that there is a mass shift of charmonium in the pseudoscalar and vector channels at finite temperature. Our result shows that the dispersion relations are nevertheless consistent with the Lorentz-invariant form even near the dissociation temperature.

  1. Maximum-Entropy Inference and Inverse Continuity of the Numerical Range

    NASA Astrophysics Data System (ADS)

    Weis, Stephan

    2016-04-01

    We study the continuity of the maximum-entropy inference map for two observables in finite dimensions. We prove that the continuity is equivalent to the strong continuity of the set-valued inverse numerical range map. This gives a continuity condition in terms of analytic eigenvalue functions which implies that discontinuities are very rare. It shows also that the continuity of the MaxEnt inference method is independent of the prior state.

  2. REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.

    SciTech Connect

    UMEDA, T.; MATSUFURU, H.

    2005-07-25

    We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, as at finite temperature. Taking these points into account, we suggest several tests that one should carry out to ensure the reliability of the results, and we apply them using mock and lattice QCD data.

  3. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  4. Scour development around submarine pipelines due to current based on the maximum entropy theory

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Shi, Bing; Guo, Yakun; Xu, Weilin; Yang, Kejun; Zhao, Enjin

    2016-10-01

    This paper presents the results from laboratory experiments and theoretical analysis to investigate the development of scour around a submarine pipeline under steady current conditions. Experiments show that the scour process takes place in two stages: an initial rapid scour stage and a subsequent gradual scour development stage. An empirical formula for calculating the equilibrium scour depth (the maximum scour depth) is developed by using the regression method. This formula, together with the maximum entropy theory, can be applied to establish a formula to predict the scour process for a given water depth, pipeline diameter and flow velocity. Good agreement between the predicted and measured scour depth is obtained.

  5. From maximum power to a trade-off optimization of low-dissipation heat engines: Influence of control parameters and the role of entropy generation

    NASA Astrophysics Data System (ADS)

    Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.

    2017-02-01

    For a low-dissipation heat engine model we present the role of the partial contact times and the total operational time as control parameters to switch from the maximum power state to the maximum Ω trade-off state. The symmetry of the dissipation coefficients may be used in the design of the heat engine to offer, in such switching, a suitable compromise between efficiency gain, power losses, and entropy change. Bounds for entropy production, efficiency, and power output are presented for transitions between both regimes. In the maximum power and maximum Ω trade-off cases the relevant space of parameters is analyzed together with the configuration of minimum entropy production. A detailed analysis of the parameter space shows physically prohibited regions in which there is no longer a heat engine and another region that is physically well behaved but is not suitable for possible optimization criteria.

  6. From maximum power to a trade-off optimization of low-dissipation heat engines: Influence of control parameters and the role of entropy generation.

    PubMed

    Gonzalez-Ayala, Julian; Calvo Hernández, A; Roco, J M M

    2017-02-01

    For a low-dissipation heat engine model we present the role of the partial contact times and the total operational time as control parameters to switch from the maximum power state to the maximum Ω trade-off state. The symmetry of the dissipation coefficients may be used in the design of the heat engine to offer, in such switching, a suitable compromise between efficiency gain, power losses, and entropy change. Bounds for entropy production, efficiency, and power output are presented for transitions between both regimes. In the maximum power and maximum Ω trade-off cases the relevant space of parameters is analyzed together with the configuration of minimum entropy production. A detailed analysis of the parameter space shows physically prohibited regions in which there is no longer a heat engine and another region that is physically well behaved but is not suitable for possible optimization criteria.

  7. Understanding frequency distributions of path-dependent processes with non-multinomial maximum entropy approaches

    NASA Astrophysics Data System (ADS)

    Hanel, Rudolf; Corominas-Murtra, Bernat; Thurner, Stefan

    2017-03-01

    Path-dependent stochastic processes are often non-ergodic and observables can no longer be computed within the ensemble picture. The resulting mathematical difficulties pose severe limits to the analytical understanding of path-dependent processes. Their statistics is typically non-multinomial in the sense that the multiplicities of the occurrence of states are not given by a multinomial factor. The maximum entropy principle is tightly related to multinomial processes, non-interacting systems, and to the ensemble picture; it loses its meaning for path-dependent processes. Here we show that an equivalent to the ensemble picture exists for path-dependent processes, such that the non-multinomial statistics of the underlying dynamical process, by construction, is captured correctly in a functional that plays the role of a relative entropy. We demonstrate this for self-reinforcing Pólya urn processes, which explicitly generalize multinomial statistics. We demonstrate the adequacy of this constructive approach towards non-multinomial entropies by computing frequency and rank distributions of Pólya urn processes. We show how the microscopic update rules of a path-dependent process allow us to explicitly construct a non-multinomial entropy functional that, when maximized, predicts the time-dependent distribution function.
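    The path dependence of the Pólya urn processes mentioned above is easy to see in a minimal simulation (an illustrative sketch; the function and parameter names are ours): each drawn color is reinforced, so early draws are amplified and the final frequencies differ strongly between runs.

```python
import numpy as np

def polya_urn(n_draws, n_colors=2, reinforcement=1, rng=None):
    """Simulate a self-reinforcing Pólya urn (illustrative sketch).

    Start with one ball of each color; each draw returns the drawn ball
    plus `reinforcement` extra balls of the same color, so early draws
    are amplified (path dependence).
    """
    rng = np.random.default_rng(rng)
    counts = np.ones(n_colors)
    for _ in range(n_draws):
        color = rng.choice(n_colors, p=counts / counts.sum())
        counts[color] += reinforcement
    return counts / counts.sum()   # final frequency of each color
```

    Repeating the simulation with different seeds produces widely scattered final frequencies, the hallmark of non-ergodic, non-multinomial statistics.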

  8. Two dimensional IR-FID-CPMG acquisition and adaptation of a maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Rondeau-Mouro, C.; Kovrlija, R.; Van Steenberge, E.; Moussaoui, S.

    2016-04-01

    By acquiring the FID signal in two-dimensional TD-NMR spectroscopy, it is possible to characterize mixtures or complex samples composed of solid and liquid phases. We have developed a new sequence for this purpose, called IR-FID-CPMG, making it possible to correlate spin-lattice T1 and spin-spin T2 relaxation times, including both liquid and solid phases in samples. We demonstrate here the potential of a new algorithm for the 2D inverse Laplace transformation of IR-FID-CPMG data based on an adapted reconstruction of the maximum entropy method, combining the standard decreasing exponential decay function with an additional term drawn from Abragam's FID function. The results show that the proposed IR-FID-CPMG sequence and its related inversion model allow accurate characterization and quantification of both solid and liquid phases in multiphasic and compartmentalized systems. Moreover, it makes it possible to distinguish between solid phases with different T1 relaxation times and to highlight cross-relaxation phenomena.

  9. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    PubMed

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences, insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission both in the bivariate model and in the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.
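    The maximum entropy bootstrap can be sketched in greatly simplified form (our reading of the general Meboot idea, not the authors' implementation): a replicate is drawn from a smooth density fitted to the sorted observations and then re-ordered to follow the original series' rank structure, which preserves its time dependence without any stationarity transformation.

```python
import numpy as np

def meboot_replicate(x, rng=None):
    """One maximum-entropy bootstrap replicate of a time series (simplified sketch).

    Unlike an i.i.d. bootstrap, the replicate is drawn from a smooth
    density supported on intervals around the sorted values and is then
    re-ordered to follow the original series' rank ordering, which
    preserves its time-dependence structure.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, float)
    n = len(x)
    order = np.argsort(x)
    s = x[order]
    # intermediate points between sorted values; extend the end intervals
    z = (s[:-1] + s[1:]) / 2.0
    lo = s[0] - (z[0] - s[0])
    hi = s[-1] + (s[-1] - z[-1])
    grid = np.concatenate([[lo], z, [hi]])
    # draw sorted uniforms and map through a piecewise-linear quantile function
    u = np.sort(rng.uniform(size=n))
    knots = np.linspace(0.0, 1.0, n + 1)
    y = np.interp(u, knots, grid)
    # restore the original rank ordering of the series
    rep = np.empty(n)
    rep[order] = y
    return rep
```

    Causality tests are then re-run on an ensemble of such replicates to build the sampling distribution of the test statistic.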

  10. Maximum entropy inference of seabed attenuation parameters using ship radiated broadband noise.

    PubMed

    Knobles, D P

    2015-12-01

    The received acoustic field generated by a single passage of a research vessel on the New Jersey continental shelf is employed to infer probability distributions for the parameter values representing the frequency dependence of the seabed attenuation and the source levels of the ship. The statistical inference approach employed in the analysis is a maximum entropy methodology. The average value of the error function, needed to uniquely specify a conditional posterior probability distribution, is estimated with data samples from time periods in which the ship-receiver geometry is dominated by either the stern or bow aspect. The existence of ambiguities between the source levels and the environmental parameter values motivates an attempt to partially decouple these parameter values. The main result is the demonstration that parameter values for the attenuation (α and the frequency exponent), the sediment sound speed, and the source levels can be resolved through a model space reduction technique. The multi-step statistical inference approach developed for ship-radiated noise is then tested by processing towed-source data over the same bandwidth and source track to estimate continuous wave source levels that were measured independently with a reference hydrophone on the tow body.

  11. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  12. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    PubMed

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is applied to account for uncertainty caused by lack of information. In this approach, different time lag values are tested against another source of information, the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  13. Two dimensional IR-FID-CPMG acquisition and adaptation of a maximum entropy reconstruction.

    PubMed

    Rondeau-Mouro, C; Kovrlija, R; Van Steenberge, E; Moussaoui, S

    2016-04-01

    By acquiring the FID signal in two-dimensional TD-NMR spectroscopy, it is possible to characterize mixtures or complex samples composed of solid and liquid phases. We have developed a new sequence for this purpose, called IR-FID-CPMG, making it possible to correlate spin-lattice T1 and spin-spin T2 relaxation times, including both liquid and solid phases in samples. We demonstrate here the potential of a new algorithm for the 2D inverse Laplace transformation of IR-FID-CPMG data based on an adapted reconstruction of the maximum entropy method, combining the standard decreasing exponential decay function with an additional term drawn from Abragam's FID function. The results show that the proposed IR-FID-CPMG sequence and its related inversion model allow accurate characterization and quantification of both solid and liquid phases in multiphasic and compartmentalized systems. Moreover, it makes it possible to distinguish between solid phases having different T1 relaxation times and to highlight cross-relaxation phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
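The combined decay model described above, a standard exponential for the liquid phase plus an Abragam-type term for the solid phase, can be sketched as a forward model. The parameter names and the exact placement of the Abragam term are assumptions for illustration; the paper adapts Abragam's FID function inside a 2D maximum entropy Laplace inversion rather than in a simple closed form.

```python
import math

def abragam(t, a, b):
    """Abragam FID line shape: Gaussian envelope times a sinc term,
    commonly used for rigid (solid) proton signals."""
    if t == 0.0:
        return 1.0
    return math.exp(-(a * t) ** 2 / 2.0) * math.sin(b * t) / (b * t)

def ir_fid_cpmg_model(t1, t2, T1, T2, frac_solid, a, b):
    """Sketch of a forward model behind IR-FID-CPMG data:
    inversion-recovery weighting in t1 times a two-component decay in t2
    (exponential for the liquid phase, Abragam-like for the solid phase)."""
    recovery = 1.0 - 2.0 * math.exp(-t1 / T1)
    liquid = (1.0 - frac_solid) * math.exp(-t2 / T2)
    solid = frac_solid * abragam(t2, a, b)
    return recovery * (liquid + solid)
```

The inversion step would fit a distribution of (T1, T2) amplitudes to measured signals through this kernel, with the maximum entropy functional as the regularizer.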

  14. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production.

    PubMed

    Kleidon, A

    2010-05-12

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.

  15. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production

    PubMed Central

    Kleidon, A.

    2010-01-01

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion. PMID:20368248
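The toy-model style of argument in the two abstracts above can be made concrete with the classic two-box energy-balance example: a heat flux F between a warm and a cold box produces entropy at rate F(1/T_c - 1/T_h), which vanishes both at F = 0 and at the flux that equalizes the two temperatures, so an intermediate MEP flux exists. A sketch with illustrative textbook parameter values (not Kleidon's numbers):

```python
def entropy_production(F, I_h=300.0, I_c=160.0, A=203.3, B=2.09):
    """Two-box toy climate: heat flux F (W m^-2) carried from the warm
    to the cold box, linearized longwave emission O(T) = A + B*T with T
    in Celsius. Returns the entropy production of the heat transport."""
    T_h = (I_h - A - F) / B + 273.15   # warm-box temperature (K)
    T_c = (I_c - A + F) / B + 273.15   # cold-box temperature (K)
    return F * (1.0 / T_c - 1.0 / T_h)

# maximize over the flux by brute-force grid search (0 to 70 W m^-2;
# F = 70 is where the two box temperatures become equal here)
fluxes = [0.1 * k for k in range(0, 701)]
F_mep = max(fluxes, key=entropy_production)
```

The MEP hypothesis is that the real system selects a transport rate near this interior maximum, which is the "flexible boundary condition" case the abstract classifies.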

  16. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    SciTech Connect

    Trovato, M.; Reggiani, L.

    2011-12-15

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  17. Applications of the principle of maximum entropy: from physics to ecology.

    PubMed

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
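The prescription in the abstract above, maximizing the relative entropy subject to known constraints given the system's behavior in the absence of those constraints, has a closed form for discrete distributions: the constrained optimum is an exponential tilt of the prior. A sketch in which the prior, feature values, and target mean are illustrative:

```python
import math

def min_relative_entropy(q, f, target):
    """Minimize KL(p || q) subject to E_p[f] = target. The solution is
    the exponential tilt p_i ∝ q_i * exp(lam * f_i), with the multiplier
    lam found by bisection. The prior q encodes the unconstrained
    behavior of the system, as the abstract emphasizes."""
    def mean(lam):
        w = [qi * math.exp(lam * fi) for qi, fi in zip(q, f)]
        z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / z
    lo, hi = -50.0, 50.0        # assumes target is attainable in this range
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [qi * math.exp(lam * fi) for qi, fi in zip(q, f)]
    z = sum(w)
    return [wi / z for wi in w]
```

With a uniform prior this reduces to ordinary maximum entropy; a different prior (for example, one reflecting distinguishable rather than indistinguishable individuals) gives a different answer for the same constraint, which is the abstract's central point.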

  18. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism.

    PubMed

    Trovato, M; Reggiani, L

    2011-12-01

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  19. A Maximum-Entropy approach for accurate document annotation in the biomedical domain

    PubMed Central

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and moves one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm’s performance is resilient to terms’ ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) in the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-Measure in both ambiguous and monosemous MeSH terms. PMID:22541593

  20. A Maximum-Entropy approach for accurate document annotation in the biomedical domain.

    PubMed

    Tsatsaronis, George; Macari, Natalia; Torge, Sunna; Dietze, Heiko; Schroeder, Michael

    2012-04-24

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and moves one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to terms' ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) in the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-Measure in both ambiguous and monosemous MeSH terms.
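A maximum entropy classifier over binary word features is equivalent to logistic regression trained by maximum likelihood. A toy stdlib sketch of the idea behind the annotator described in the two records above; the example documents and labels are invented, and the paper's system operates over MeSH terms with far richer features:

```python
import math

def train_maxent(docs, labels, epochs=200, lr=0.5):
    """Tiny binary maximum entropy (logistic regression) text
    classifier trained by stochastic gradient ascent on the
    log-likelihood, with bag-of-words features."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    W = [0.0] * len(vocab)
    b = 0.0
    feats = [[idx[w] for w in d.split()] for d in docs]
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            p = 1.0 / (1.0 + math.exp(-(b + sum(W[i] for i in f))))
            g = y - p                  # gradient of the log-likelihood
            b += lr * g
            for i in f:
                W[i] += lr * g
    def predict(doc):
        s = b + sum(W[idx[w]] for w in doc.split() if w in idx)
        return 1 if s > 0 else 0
    return predict

# label 1 = "biomedical", label 0 = "not biomedical" (invented data)
predict = train_maxent(
    ["protein binding assay", "gene expression protein",
     "galaxy cluster survey", "stellar cluster dynamics"],
    [1, 1, 0, 0])
```

Multi-label MeSH annotation would train one such classifier per term (or a multinomial model), which is why the approach scales to thousands of terms.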

  1. Application of a multiscale maximum entropy image restoration algorithm to HXMT observations

    NASA Astrophysics Data System (ADS)

    Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi

    2016-08-01

    This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), which is a collimated scan X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is adapted to the deconvolution task of HXMT for diffuse source detection and the improved method could suppress noise and improve the correlation and signal-to-noise ratio, thus proving itself a better algorithm for image restoration. Through one all-sky survey, HXMT could reach a capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and National Natural Science Foundation of China (11403014)

  2. An Entropy Model for Artificial Grammar Learning

    PubMed Central

    Pothos, Emmanuel M.

    2010-01-01

    A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning which also postulate that cognitive processing is geared towards the reduction of entropy. PMID:21607072
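The model's core quantity, how predictable a test item is from the training items, can be approximated by the item's per-symbol surprisal under an n-gram model of the training strings: lower surprisal means more predictable and, on the entropy model, more likely to be endorsed. A bigram sketch in which the smoothing scheme, boundary padding, and vocabulary size are assumptions; Pothos's formulation differs in detail:

```python
import math
from collections import Counter

def bigram_model(training):
    """Bigram counts (with '^' start and '$' end padding) from
    training strings."""
    pairs = Counter()
    ctx = Counter()
    for s in training:
        padded = "^" + s + "$"
        for a, b in zip(padded, padded[1:]):
            pairs[(a, b)] += 1
            ctx[a] += 1
    return pairs, ctx

def surprisal(item, model, alpha=1.0, vocab=27):
    """Mean per-symbol surprisal (bits) of a test item under the
    training bigrams, with add-alpha smoothing (vocab size is an
    illustrative constant)."""
    pairs, ctx = model
    padded = "^" + item + "$"
    total, n = 0.0, 0
    for a, b in zip(padded, padded[1:]):
        p = (pairs[(a, b)] + alpha) / (ctx[a] + alpha * vocab)
        total -= math.log2(p)
        n += 1
    return total / n

# training strings stand in for grammatical AGL items (invented examples)
model = bigram_model(["MTV", "MTTV", "MVT"])
```

Items built from familiar transitions score low surprisal; items violating the training regularities score high, mirroring the endorsement pattern the model predicts.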

  3. Reply to ``Comment on `Mobility spectrum computational analysis using a maximum entropy approach' ''

    NASA Astrophysics Data System (ADS)

    Mironov, O. A.; Myronov, M.; Kiatgamolchai, S.; Kantser, V. G.

    2004-03-01

    In their Comment [J. Antoszewski, D. D. Redfern, L. Faraone, J. R. Meyer, I. Vurgaftman, and J. Lindemuth, Phys. Rev. E 69, 038701 (2004)] on our paper [S. Kiatgamolchai, M. Myronov, O. A. Mironov, V. G. Kantser, E. H. C. Parker, and T. E. Whall, Phys. Rev. E 66, 036705 (2002)] the authors present computational results obtained with the improved quantitative mobility spectrum analysis technique implemented in the commercial software of Lake Shore Cryotronics. We suggest that this is simply additional information on mobility spectrum analysis (MSA) in general, without any direct relation to our maximum entropy MSA (ME-MSA) algorithm.

  4. A study of the maximum entropy technique for phase space tomography

    NASA Astrophysics Data System (ADS)

    Hock, K. M.; Ibison, M. G.

    2013-02-01

    We study a problem with the Maximum Entropy Technique (MENT) when applied to tomographic measurements of the transverse phase space of electron beams, and suggest some ways to improve its reliability. We show that the outcome of a phase space reconstruction can be highly sensitive to the choice of projection angles. It is quite possible to obtain reconstructed distributions of the phase space that are obviously different from the actual distributions. We propose a method to obtain a ``good'' choice of projection angles using a normalised phase space. We demonstrate that the resulting reconstructions of the phase space can be significantly improved.

  5. The industrial use of filtered back projection and maximum entropy reconstruction algorithms

    SciTech Connect

    Kruger, R.P.; London, J.R.

    1982-11-01

    Industrial tomography involves applications where experimental conditions may vary greatly. Some applications resemble more conventional medical tomography because a large number of projections are available. However, in other situations, scan time restrictions, object accessibility, or equipment limitations will reduce the number and/or angular range of the projections. This paper presents results from studies where both experimental conditions exist. The use of two algorithms, the more conventional filtered back projection (FBP) and the maximum entropy (MENT), are discussed and applied to several examples.

  6. An Application of the Finite Element Method to Maximum Entropy Tomography Image Reconstruction.

    DTIC Science & Technology

    1987-04-07

    [The DTIC record text is garbled OCR; only two references are recoverable: R. D. Levine and M. Tribus (eds.), The Maximum Entropy Formalism, MIT Press, Cambridge, 1978; and M. Klaus and R. T. Smith, "A Hilbert Space Approach to Maximum Entropy...". The remainder is a distribution list of institutional addresses.]

  7. A homotopy algorithm for synthesizing robust controllers for flexible structures via the maximum entropy design equations

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    1990-01-01

    One well-known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.

  8. On the pH Dependence of the Potential of Maximum Entropy of Ir(111) Electrodes.

    PubMed

    Ganassin, Alberto; Sebastián, Paula; Climent, Víctor; Schuhmann, Wolfgang; Bandarenka, Aliaksandr S; Feliu, Juan

    2017-04-28

    Studies of the entropy of the components forming the electrode/electrolyte interface can give fundamental insights into the properties of electrified interphases. In particular, the potential at which the entropy of formation of the double layer is maximal (potential of maximum entropy, PME) is an important parameter for the characterization of electrochemical systems, as it affects the majority of electrode processes. In this work, we determine PMEs for Ir(111) electrodes. The latter currently play an important role in understanding electrocatalysis for energy provision; at the same time, iridium is one of the most stable metals against corrosion. For the experiments, we used a combination of the laser-induced potential transient to determine the PME, and CO charge-displacement to determine the potentials of zero total charge (EPZTC). Both PME and EPZTC were assessed for perchlorate solutions in the pH range from 1 to 4. Surprisingly, we found that these are located in the potential region where the adsorption of hydrogen and hydroxyl species takes place, respectively. The PMEs demonstrated a shift of ~30 mV per pH unit (on the RHE scale). Connections between the PME and electrocatalytic properties of the electrode surface are discussed.

  9. An entropy-assisted musculoskeletal shoulder model.

    PubMed

    Xu, Xu; Lin, Jia-Hua; McGorry, Raymond W

    2017-04-01

    Optimization combined with a musculoskeletal shoulder model has been used to estimate mechanical loading of musculoskeletal elements around the shoulder. Traditionally, the objective function is to minimize the summation of the total activities of the muscles with forces, moments, and stability constraints. Such an objective function, however, tends to neglect the antagonist muscle co-contraction. In this study, an objective function including an entropy term is proposed to address muscle co-contractions. A musculoskeletal shoulder model is developed to apply the proposed objective function. To find the optimal weight for the entropy term, an experiment was conducted. In the experiment, participants generated various 3-D shoulder moments in six shoulder postures. The surface EMG of 8 shoulder muscles was measured and compared with the predicted muscle activities based on the proposed objective function using Bhattacharyya distance and concordance ratio under different weight of the entropy term. The results show that a small weight of the entropy term can improve the predictability of the model in terms of muscle activities. Such a result suggests that the concept of entropy could be helpful for further understanding the mechanism of muscle co-contractions as well as developing a shoulder biomechanical model with greater validity. Copyright © 2017 Elsevier Ltd. All rights reserved.
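The proposed objective, summed muscle activity minus a weighted entropy term, can be illustrated with a two-muscle agonist/antagonist toy problem: without the entropy term the optimizer shuts the antagonist off entirely, while a positive entropy weight produces co-contraction. A sketch with invented moment arms and a grid search in place of the study's optimizer:

```python
import math

def cost(acts, w):
    """Summed squared activation minus w times the Shannon entropy of
    the normalized activation distribution (illustrative form; the
    study's exact objective may differ)."""
    ss = sum(a * a for a in acts)
    tot = sum(acts)
    H = 0.0
    for a in acts:
        if a > 0:
            p = a / tot
            H -= p * math.log(p)
    return ss - w * H

def solve(w, M=1.0, r=(2.0, -1.0), steps=2000):
    """Two-muscle example (agonist moment arm 2, antagonist -1, both
    invented): enforce the joint-moment constraint
    r[0]*a1 + r[1]*a2 = M and grid-search the antagonist activation."""
    best = None
    for k in range(steps + 1):
        a2 = k / steps                  # antagonist activation in [0, 1]
        a1 = (M - r[1] * a2) / r[0]     # agonist fixed by the constraint
        if 0.0 <= a1 <= 1.0:
            c = cost((a1, a2), w)
            if best is None or c < best[0]:
                best = (c, a1, a2)
    return best[1], best[2]
```

Setting w = 0 recovers the traditional minimum-activity solution with zero antagonist activity; a nonzero w spreads activity across both muscles, which is the co-contraction behavior the study aims to capture.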

  10. High resolution VLBI polarization imaging of AGN with the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Coughlan, Colm P.; Gabuzda, Denise C.

    2016-12-01

    Radio polarization images of the jets of Active Galactic Nuclei (AGN) can provide a deep insight into the launching and collimation mechanisms of relativistic jets. However, even at VLBI scales, resolution is often a limiting factor in the conclusions that can be drawn from observations. The maximum entropy method (MEM) is a deconvolution algorithm that can outperform the more common CLEAN algorithm in many cases, particularly when investigating structures present on scales comparable to or smaller than the nominal beam size with `super-resolution'. A new implementation of the MEM suitable for single- or multiple-wavelength VLBI polarization observations has been developed and is described here. Monte Carlo simulations comparing the performances of CLEAN and MEM at reconstructing the properties of model images are presented; these demonstrate the enhanced reliability of MEM over CLEAN when images of the fractional polarization and polarization angle are constructed using convolving beams that are appreciably smaller than the full CLEAN beam. The results of using this new MEM software to image VLBA observations of the AGN 0716+714 at six different wavelengths are presented, and compared to corresponding maps obtained with CLEAN. MEM and CLEAN maps of Stokes I, the polarized flux, the fractional polarization and the polarization angle are compared for convolving beams ranging from the full CLEAN beam down to a beam one-third of this size. Finally, the ability of MEM to provide more trustworthy polarization imaging than standard CLEAN-based deconvolution when the convolving beam used is appreciably smaller than the full CLEAN beam is discussed.

  11. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  12. Maximum entropy reconstruction method for moment-based solution of the BGK equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, D. I.

    2016-11-01

    We describe a method for a moment-based solution of the BGK equation. The starting point is a set of equations for a moment representation which must have even-ordered highest moments. The partial-differential equations for these moments are unclosed, containing higher-order moments in the flux terms. These are evaluated using a maximum-entropy reconstruction of the one-particle velocity distribution function f (x , t) , using the known moments. An analytic, asymptotic solution describing the singular behavior of the maximum-entropy construction near to the local equilibrium velocity distribution is presented, and is used to construct a complete hybrid closure scheme for the case of fourth-order and lower moments. For the steady-flow normal shock wave, this produces a set of 9 ordinary differential equations describing the shock structure. For a variable hard-sphere gas these can be solved numerically. Comparisons with results using the direct-simulation Monte-Carlo method will be presented. Supported partially by NSF award DMS 1418903.

  13. Multifrequency synthesis algorithm based on the generalized maximum entropy method: application to 0954+658

    NASA Astrophysics Data System (ADS)

    Bajkova, Anisa T.; Pushkarev, Alexander B.

    2011-10-01

    We propose the multifrequency synthesis (MFS) algorithm with the spectral correction of frequency-dependent source brightness distribution based on the maximum entropy method. In order to take into account the spectral terms of nth order in the Taylor expansion for the frequency-dependent brightness distribution, we use a generalized form of the maximum entropy method. This is suitable for the reconstruction of not only positive-definite functions, but also sign-variable functions. With the proposed algorithm, we aim to produce both an improved total intensity image and a two-dimensional spectral index distribution over the source. We also consider the problem of the frequency-dependent variation of the radio-core positions of self-absorbed active galactic nuclei, which should be taken into account in a correct MFS. The proposed MFS algorithm has first been tested on simulated data and then applied to the four-frequency synthesis imaging of the radio source 0954+658 using Very Long Baseline Array observational data obtained quasi-simultaneously at 5, 8, 15 and 22 GHz.

  14. Inversion of emissivity spectrum and temperature in the TIR waveband based on the Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Liu, Junchi; Li, Hongwen; Wang, Jianli; Li, Hongzhuang; Yin, Limei; Zhang, Zhenduo

    2015-09-01

    In the TIR (Thermal Infrared) waveband, solving for the target emissivity spectrum and temperature leads to an ill-posed problem in which the number of unknown parameters exceeds the number of available measurements. The approaches developed for solving this kind of problem are known collectively as TES (Temperature and Emissivity Separation) algorithms. Regarded as an extension of the MaxEnTES (Maximum Entropy TES) algorithm proposed by A. Barducci, a novel method called the New MaxEnTES algorithm is presented in this paper. Maximum entropy estimation forms the basic framework of both algorithms. What makes the two algorithms different is that the Alpha Spectrum derived by ADE (Alpha Derived Emissivity method) or the Beta Spectrum derived by NEM (Normalized Emissivity Method) is added as a priori information in the New MaxEnTES algorithm. As a result, the New MaxEnTES algorithm retains a simpler mathematical formalism and provides faster computation for large volumes of data (i.e. hyperspectral images of the Earth). Numerical simulations have been conducted to assess the performance of the New MaxEnTES algorithm. The results show that the New MaxEnTES algorithm accurately reconstructs the target emissivity spectrum and temperature, achieving the same order of magnitude of accuracy as the MaxEnTES algorithm. It also remains robustly stable in the presence of noise.

  15. Non-destructive depth profiling using variable kinetic energy- x-ray photoelectron spectroscopy with maximum entropy regularization

    NASA Astrophysics Data System (ADS)

    Krajewski, James J.

    This study describes a nondestructive method to determine compositional depth profiles of thicker films from Variable Kinetic Energy X-ray Photoelectron Spectroscopy (VKE-XPS) data by applying proven regularization methods used successfully in Angle-Resolved X-ray Photoelectron Spectroscopy (AR-XPS). To demonstrate the applicability of various regularization procedures to experimental VKE-XPS data, simulated TiO2/Si film structures of two different thicknesses and known compositional profiles were "created" and then analyzed. It is found that superior results are attained when using a maximum entropy-like method whose initial model (prior knowledge of thickness) is similar to the simulated film thickness. Other regularization functions, Slopes, Curvature and Total Variance Analysis (TVA), give acceptable results when there is no prior knowledge, since they do not depend on an accurate initial model. The maximum entropy algorithm is then applied to two actual films of TiO2 deposited on silicon substrates. These results show the applicability of generating compositional depth profiles from experimental VKE-XPS data. The accuracy of the profiles is confirmed by subjecting the actual films to a variety of alternate analytical thin-film techniques, including Sputtered Angle-Resolved Photoelectron Spectroscopy, Auger Electron Spectroscopy, Rutherford Backscattering Spectroscopy, Focused Ion Beam Spectroscopy, Transmission and Scanning Electron Microscopy, and Variable Angle Spectroscopic Ellipsometry. Future work will include applying regularization functions other than those described in this study to better fit the MaxEnt composition depth profile.

  16. Maximum entropy production, carbon assimilation, and the spatial organization of vegetation in river basins

    PubMed Central

    del Jesus, Manuel; Foti, Romano; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2012-01-01

    The spatial organization of functional vegetation types in river basins is a major determinant of their runoff production, biodiversity, and ecosystem services. The optimization of different objective functions has been suggested to control the adaptive behavior of plants and ecosystems, often without a compelling justification. Maximum entropy production (MEP), rooted in thermodynamics principles, provides a tool to justify the choice of the objective function controlling vegetation organization. The application of MEP at the ecosystem scale results in maximum productivity (i.e., maximum canopy photosynthesis) as the thermodynamic limit toward which the organization of vegetation appears to evolve. Maximum productivity, which incorporates complex hydrologic feedbacks, allows us to reproduce the spatial macroscopic organization of functional types of vegetation in a thoroughly monitored river basin, without the need for a reductionist description of the underlying microscopic dynamics. The methodology incorporates the stochastic characteristics of precipitation and the associated soil moisture on a spatially disaggregated framework. Our results suggest that the spatial organization of functional vegetation types in river basins naturally evolves toward configurations corresponding to dynamically accessible local maxima of the maximum productivity of the ecosystem. PMID:23213227

  17. Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in Korean Peninsular

    NASA Astrophysics Data System (ADS)

    Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.

    2015-12-01

    The number of forest fires, and the accompanying human injuries and physical damage, has increased with more frequent droughts. In this study, the forest fire danger zones of Korea are estimated in order to predict and prepare for future forest fire hazards. The MaxEnt (Maximum Entropy) model, which estimates the probability distribution of occurrence, is used to delineate the forest fire hazard region. The MaxEnt model was developed primarily for the analysis of species distributions, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS over the past 5 years (2010-2014) are used as occurrence data for the model, and meteorological, topographic and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, dry-season precipitation, annual effective humidity, dry-season effective humidity, and the aridity index. The result was valid based on the AUC (Area Under the Curve) value of 0.805, which is used to assess prediction accuracy in the MaxEnt model, and the predicted forest fire locations corresponded closely with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as TWI (Topographic Wetness Index) and slope also contributed to forest fire occurrence. The east coast and the southern part of the Korean peninsula were predicted to have high forest fire risk, whereas high-altitude mountain areas and the west coast appeared to be safe. These results are consistent with former studies, indicating high fire risk in accessible areas and reflecting the dry-season climatic characteristics of the east and south. In summary, we estimated the forest fire hazard zone from existing forest fire locations and environmental variables.
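The core of the MaxEnt model used above can be sketched in a few lines: among all distributions over grid cells, choose the one whose feature expectations match those observed at occurrence locations, which has the Gibbs form p(cell) ∝ exp(w·f(cell)). A minimal illustration on synthetic data (all features and values here are hypothetical, not the authors' MODIS pipeline):

```python
import numpy as np

# Toy MaxEnt sketch: fit weights w so that the Gibbs distribution
# p(cell) ∝ exp(w·f(cell)) reproduces the mean features of occurrence cells.
rng = np.random.default_rng(0)
n_cells, n_feat = 500, 3
F = rng.normal(size=(n_cells, n_feat))           # environmental features per cell
w_true = np.array([1.5, -0.8, 0.0])
p_true = np.exp(F @ w_true)
p_true /= p_true.sum()
occ = rng.choice(n_cells, size=2000, p=p_true)   # simulated fire occurrences
target = F[occ].mean(axis=0)                     # empirical feature means

w = np.zeros(n_feat)
for _ in range(2000):                            # gradient ascent on log-likelihood
    p = np.exp(F @ w)
    p /= p.sum()
    w += 0.1 * (target - p @ F)                  # gradient: empirical - model means

p = np.exp(F @ w)
p /= p.sum()
print(np.abs(p @ F - target).max())              # constraint gap, close to 0
```

The fitted p can then be read as a relative suitability (here, fire-risk) surface over the cells.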

  18. Comparison of maximum entropy and quadrature-based moment closures for shock transitions prediction in one-dimensional gaskinetic theory

    NASA Astrophysics Data System (ADS)

    Laplante, Jérémie; Groth, Clinton P. T.

    2016-11-01

    The Navier-Stokes-Fourier (NSF) equations are conventionally used to model continuum flow near local thermodynamic equilibrium. In the presence of more rarefied flows, there exists a transitional regime in which the NSF equations no longer hold and particle-based methods become too expensive for practical problems. To close this gap, moment closure techniques with the potential to be both valid and computationally tractable in these applications are sought. In this study, a number of five-moment closures for a model one-dimensional kinetic equation are assessed and compared. In particular, four different moment closures are applied to the solution of stationary shocks. The first of these is a Grad-type moment closure, which is known to fail for moderate departures from equilibrium. The second is an interpolative closure based on maximization of thermodynamic entropy, which has previously been shown to provide excellent results for 1D gaskinetic theory. Additionally, two quadrature methods of moments (QMOM) are considered: one based on representing the distribution function as a combination of three Dirac delta functions, and an extended QMOM (EQMOM) that assumes a bi-Maxwellian representation of the distribution function. The closing fluxes are analyzed in each case and the region of physical realizability is examined for the closures. Numerical simulations of stationary shock structures as predicted by each moment closure are compared to reference kinetic and corresponding NSF-like equation solutions. It is shown that the bi-Maxwellian and interpolative maximum-entropy-based moment closures closely reproduce the results of the true maximum-entropy distribution closure for this case, whereas the other methods do not. For moderate departures from local thermodynamic equilibrium, the Grad-type and QMOM closures produced unphysical subshocks.

  19. Entropy and equilibrium state of free market models

    NASA Astrophysics Data System (ADS)

    Iglesias, J. R.; de Almeida, R. M. C.

    2012-03-01

    Many recent models of trade dynamics use the simple idea of wealth exchanges among economic agents in order to obtain a stable or equilibrium distribution of wealth among the agents. In particular, a plain analogy compares the wealth in a society with the energy in a physical system, and the trade between agents to the energy exchange between molecules during collisions. In physical systems, the energy exchange among molecules leads to a state of equipartition of the energy and to an equilibrium situation where the entropy is a maximum. On the other hand, in a large class of exchange models, the system converges to a very unequal condensed state, where one or a few agents concentrate all the wealth of the society while the vast majority of agents share zero or almost zero wealth. In those economic systems a minimum entropy state is thus attained. We propose here an analytical model in which we investigate the effects of a particular class of economic exchanges that minimize the entropy. By solving the model we discuss the conditions that can drive the system to a state of minimum entropy, as well as the mechanisms to recover a kind of equipartition of wealth.
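The condensation described above is easy to reproduce numerically. Below is a minimal "yard-sale"-type exchange simulation (an illustrative sketch, not the authors' analytical model): agents repeatedly stake a fraction of the poorer partner's wealth on a fair coin flip, and the entropy of the wealth histogram falls as wealth concentrates:

```python
import numpy as np

# Minimal wealth-exchange simulation: in each trade a fair coin decides who
# wins a stake equal to 10% of the poorer partner's wealth. Total wealth is
# conserved, yet the entropy of the wealth histogram falls as wealth
# condenses onto a few agents.
rng = np.random.default_rng(1)
n = 1000
w = rng.uniform(0.5, 1.5, size=n)   # near-equal initial wealth
total = w.sum()

def hist_entropy(w, bins=50):
    h, _ = np.histogram(w, bins=bins)
    p = h[h > 0] / h.sum()
    return -(p * np.log(p)).sum()

s0 = hist_entropy(w)
for _ in range(200_000):
    i, j = rng.integers(n), rng.integers(n)
    if i == j:
        continue
    stake = 0.1 * min(w[i], w[j])   # stake: fraction of the poorer agent's wealth
    if rng.random() < 0.5:
        w[i] += stake; w[j] -= stake
    else:
        w[i] -= stake; w[j] += stake
s1 = hist_entropy(w)
print(s0, s1)   # entropy decreases toward the condensed state
```

Because the stake is a fraction of the poorer agent's wealth, no agent ever goes negative, yet the multiplicative dynamics still drive the system toward the low-entropy condensed state the abstract describes.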

  20. Learning Gaussian mixture models with entropy-based criteria.

    PubMed

    Penalver Benavent, Antonio; Escolano Ruiz, Francisco; Saez, Juan Manuel

    2009-11-01

    In this paper, we address the problem of estimating the parameters of Gaussian mixture models. Although the expectation-maximization (EM) algorithm yields the maximum-likelihood (ML) solution, its sensitivity to the selection of the starting parameters is well known, and it may converge to the boundary of the parameter space. Furthermore, the resulting mixture depends on the number of selected components, but the optimal number of kernels may be unknown beforehand. We introduce the use of the entropy of the probability density function (pdf) associated with each kernel to measure the quality of a given mixture model with a fixed number of kernels. We propose two methods to approximate the entropy of each kernel and a modification of the classical EM algorithm in order to find the optimum number of components of the mixture. Moreover, we use two stopping criteria: a novel global mixture entropy-based criterion called Gaussianity deficiency (GD) and one based on the minimum description length (MDL) principle. Our algorithm, called entropy-based EM (EBEM), starts with a single kernel and performs only splitting, selecting the worst kernel according to GD. We have successfully tested it in probability density estimation, pattern classification, and color image segmentation. Experimental results improve on those of other state-of-the-art model order selection methods.
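For reference, the entropy of a Gaussian kernel that such criteria build on has a closed form, H = ½ ln((2πe)^d det Σ). A small check (my illustration with hypothetical values, not the EBEM code) comparing it against a Monte Carlo estimate:

```python
import numpy as np

# Closed-form entropy of a d-dimensional Gaussian kernel,
# H = 0.5 * ln((2*pi*e)^d * det(Sigma)), checked against the Monte Carlo
# estimate H ≈ -mean(log pdf) over samples drawn from the Gaussian.
rng = np.random.default_rng(0)
d = 2
mu = np.zeros(d)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

H_closed = 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(Sigma))

X = rng.multivariate_normal(mu, Sigma, size=200_000)
inv = np.linalg.inv(Sigma)
quad = np.einsum('ni,ij,nj->n', X - mu, inv, X - mu)   # Mahalanobis terms
logpdf = -0.5 * (quad + d * np.log(2 * np.pi) + np.log(np.linalg.det(Sigma)))
H_mc = -logpdf.mean()
print(H_closed, H_mc)   # the two values agree closely
```

Because the entropy depends only on the covariance, a kernel whose empirical entropy deviates from this closed form signals non-Gaussian structure, which is the intuition behind a "Gaussianity deficiency"-style split criterion.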

  1. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  2. Computation of short periodical variations of pole coordinates using maximum entropy spectral analysis and an ormsby filter

    NASA Astrophysics Data System (ADS)

    Kosek, W.

    1987-06-01

    The main purpose of this paper is the search for the optimum method of detecting weak short-periodical variations of pole coordinates determined by different techniques in the MERIT Campaign. The optimum filter length of Maximum Entropy Spectral Analysis (MESA) for these analyses was investigated on the basis of the Rovelli-Vulpiani formula. The unbiased autocovariance estimation multiplied by different lag windows was introduced into this formula for a better estimation of the optimum filter length. The optimum filter length in the MESA is discussed on the basis of model data similar to the observed data. The model data were disturbed by white and red noises with standard deviations greater than the average amplitude of an oscillation in the model. Each short-periodical oscillation in pole coordinates was calculated by a properly defined Ormsby band-pass filter. Their sum creates a short-periodical signal part which, subtracted from smoothed pole coordinates, diminishes their standard deviations and their autocovariance estimations.

  3. Process-conditioned investing with incomplete information using maximum causal entropy

    NASA Astrophysics Data System (ADS)

    Ziebart, Brian D.

    2012-05-01

    Investing to optimally maximize the growth rate of wealth based on sequences of event outcomes has many information-theoretic interpretations. Namely, the mutual information characterizes the benefit of additional side information being available when making investment decisions [1] in settings where the probabilistic relationships between side information and event outcomes are known. Additionally, the relative variant of the principle of maximum entropy [2] provides the optimal investment allocation in the more general setting where the relationships between side information and event outcomes are only partially known [3]. In this paper, we build upon recent work characterizing the growth rates of investment in settings with inter-dependent side information and event outcome sequences [4]. We consider the extension to settings with inter-dependent event outcomes and side information where the probabilistic relationships between side information and event outcomes are only partially known. We introduce the principle of minimum relative causal entropy to obtain the optimal worst-case investment allocations for this setting. We present efficient algorithms for obtaining these investment allocations using convex optimization techniques and dynamic programming, illustrating a close connection to optimal control theory.
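The classical special case behind such growth-rate results is log-optimal (Kelly) betting: for a horse race with outcome probabilities p and odds o, maximizing the expected log-growth Σᵢ pᵢ log(bᵢoᵢ) over allocations b on the simplex gives proportional betting b* = p, independent of the odds. A small numerical check (illustrative numbers, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Log-optimal (Kelly) allocation for a horse race: maximize the expected
# log-growth sum_i p_i * log(b_i * o_i) over the probability simplex.
# The classical result is b* = p, independent of the odds o.
p = np.array([0.5, 0.3, 0.2])   # outcome probabilities
o = np.array([2.0, 4.0, 5.0])   # odds (payoff per unit bet)

neg_growth = lambda b: -(p * np.log(b * o)).sum()
res = minimize(neg_growth, x0=np.full(3, 1 / 3), method='SLSQP',
               bounds=[(1e-9, 1.0)] * 3,
               constraints=[{'type': 'eq', 'fun': lambda b: b.sum() - 1.0}])
print(res.x)   # proportional betting: approximately equal to p
```

The odds cancel in the optimality condition, which is why only the (possibly partially known) outcome probabilities matter; the paper's causal-entropy machinery addresses exactly the case where those probabilities are not fully specified.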

  4. A maximum (non-extensive) entropy approach to equity options bid-ask spread

    NASA Astrophysics Data System (ADS)

    Tapiero, Oren J.

    2013-07-01

    The cross-section of options bid-ask spreads against their strikes is modelled by maximising the Kaniadakis entropy. A theoretical model results in which the bid-ask spread depends explicitly on the implied volatility, the probability of expiring at-the-money, and an asymmetric information parameter (κ). Considering AIG as a test case for the period between January 2006 and October 2008, we find that information flows uniquely from the trading activity in the underlying asset to its derivatives, suggesting that κ is possibly an option-implied measure of the current state of trading liquidity in the underlying asset.

  5. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts on multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient whose functional form is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  6. Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach

    SciTech Connect

    Rodman, Leiba; Spitkovsky, Ilya M. E-mail: ilya@math.wm.edu; Szkoła, Arleta Weis, Stephan

    2016-01-15

    We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.

  7. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
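The structure exploited by the duality formulation can be seen in a toy version of the problem: maximizing the Boltzmann-Shannon entropy −Σⱼ xⱼ ln xⱼ subject to projection constraints Ax = b gives, via the Lagrangian, the exponential form xⱼ = exp(−1 − (Aᵀλ)ⱼ), so only the m dual variables λ need to be found. A minimal sketch on synthetic data (illustrative; not the paper's multilevel scheme):

```python
import numpy as np

# Toy maximum-entropy reconstruction: maximize -sum_j x_j ln x_j subject to
# projection constraints A x = b. The Lagrangian gives
# x_j = exp(-1 - (A^T lam)_j), and the convex dual is solved for lam.
rng = np.random.default_rng(0)
m, n = 4, 20                        # far fewer projections than pixels
A = rng.uniform(0.0, 1.0, size=(m, n))
x_true = rng.uniform(0.1, 1.0, size=n)
b = A @ x_true                      # consistent projection data

lam = np.zeros(m)
for _ in range(20_000):             # gradient descent on the convex dual
    x = np.exp(-1.0 - A.T @ lam)
    lam -= 0.05 * (b - A @ x)
x = np.exp(-1.0 - A.T @ lam)
print(np.abs(A @ x - b).max())      # projection constraints met, close to 0
```

Among all nonnegative images consistent with the data, this x has the largest Boltzmann-Shannon entropy; in particular its entropy is at least that of x_true, which is merely one feasible point.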

  8. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    NASA Astrophysics Data System (ADS)

    Fowlie, Andrew

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.

  9. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicate that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we point out the constraint conditions for this distribution form and discuss how the constraints affect the distribution function. It is speculated that, for the bursts and heavy tails observed in human dynamics, a fitted power exponent of less than 1.0 cannot correspond to a pure power-law distribution but must include an exponential cutoff, which may have been ignored in previous studies.
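A quick numerical illustration of why a fitted exponent below 1.0 forces a cutoff (my sketch, not the paper's data analysis): a pure power law x^(−γ) with γ ≤ 1 is not normalizable on x ≥ 1, whereas multiplying by an exponential cutoff e^(−λx) makes the normalization converge.

```python
import numpy as np

# A pure power law x^(-gamma) with gamma <= 1 has a divergent normalization
# on x >= 1, while an exponential cutoff e^(-lam*x) makes it summable.
gamma, lam = 0.8, 0.01   # hypothetical values with gamma < 1

def partial_sum(n, cutoff):
    x = np.arange(1.0, n + 1.0)
    w = x ** (-gamma)
    if cutoff:
        w = w * np.exp(-lam * x)
    return w.sum()

ns = (10**3, 10**4, 10**5)
pure = [partial_sum(n, cutoff=False) for n in ns]
cut = [partial_sum(n, cutoff=True) for n in ns]
print(pure)   # partial sums keep growing: no normalizable pure power law
print(cut)    # partial sums converge once the cutoff is included
```

This is consistent with the maximum-entropy derivation: constraining both ⟨x⟩ and ⟨ln x⟩ yields the exponential-family form p(x) ∝ x^(−α)e^(−λx), i.e. precisely a power law with an exponential cutoff.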

  10. Background adjustment of cDNA microarray images by Maximum Entropy distributions.

    PubMed

    Argyropoulos, Christos; Daskalakis, Antonis; Nikiforidis, George C; Sakellaropoulos, George C

    2010-08-01

    Many empirical studies have demonstrated the exquisite sensitivity of both traditional and novel statistical and machine intelligence algorithms to the method of background adjustment used to analyze microarray datasets. In this paper we develop a statistical framework that approaches background adjustment as a classic stochastic inverse problem whose noise characteristics are given in terms of Maximum Entropy distributions. We derive analytic closed-form approximations to the combined problem of estimating the magnitude of the background in microarray images and adjusting for its presence. The proposed method reduces standardized measures of log expression variability across replicates in situations of known differential and non-differential gene expression without increasing the bias. Additionally, it results in computationally efficient procedures for estimation and learning based on sufficient statistics, and can filter out spot measures with intensities numerically close to the background level, yielding a noise reduction of about 7%.

  11. Sampling properties of the maximum entropy estimators for the extreme-value type-1 distribution

    NASA Astrophysics Data System (ADS)

    Phien, Huynh Ngoc

    1986-10-01

    The extreme-value type-1 (EV1) distribution can be viewed as the distribution that satisfies two specified expected values. These expected values give rise to a method of parameter estimation referred to as the method of maximum entropy (MME). The main purpose of this note is to provide a scheme to estimate the variances and covariance of the MME estimators. As a by-product of the simulation runs used, some useful sampling properties of the MME estimators are obtained. These clearly show that the MME is a good method for fitting the EV1 distribution, and the approximations obtained analytically for the variance of estimates of the T-year event are of sufficient accuracy.
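The two expected values in question make the Gumbel law the maximum-entropy distribution fixing E[(x−u)/a] = γ (the Euler-Mascheroni constant) and E[exp(−(x−u)/a)] = 1. Replacing expectations with sample means and eliminating u leaves a single equation in the scale a. A sketch of such an MME fit (my reading of the method, not the note's exact scheme; the root bracket is an assumption):

```python
import numpy as np
from scipy.optimize import brentq

EULER_GAMMA = 0.5772156649015329

def mme_fit_gumbel(x):
    """MME for EV1: the scale a solves ln(mean(exp(-(x - xbar)/a))) = gamma,
    after which the location is u = xbar - gamma * a."""
    xbar = x.mean()
    g = lambda a: np.log(np.mean(np.exp(-(x - xbar) / a))) - EULER_GAMMA
    a = brentq(g, 0.1, 100.0)      # bracket assumed to contain the root
    return xbar - EULER_GAMMA * a, a

rng = np.random.default_rng(0)
x = rng.gumbel(loc=10.0, scale=2.0, size=50000)
u, a = mme_fit_gumbel(x)
print(u, a)   # close to (10.0, 2.0)
```

Centering the exponent on the sample mean keeps the computation numerically stable; repeating the fit over many synthetic samples, as the note does, would give the sampling variances and covariance of the estimators.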

  12. On the stability of the moments of the maximum entropy wind wave spectrum

    SciTech Connect

    Pena, H.G.

    1983-03-01

    The stability of some current wind wave parameters as a function of high-frequency cut-off and degrees of freedom of the spectrum has been numerically investigated when computed in terms of the moments of the wave energy spectrum. Starting from a Pierson-Moskowitz wave spectrum, a sea surface profile is simulated and its wave energy spectrum is estimated by the Maximum Entropy Method (MEM). As the degrees of freedom of the MEM spectral estimation are varied, the results show a much better stability of the wave parameters as compared to the classical periodogram and correlogram spectral approaches. The stability of wave parameters as a function of high-frequency cut-off is the same as that obtained by the classical techniques.

  13. Maximum entropy image reconstruction - A practical non-information-theoretic approach

    NASA Astrophysics Data System (ADS)

    Nityananda, R.; Narayan, R.

    1982-12-01

    An alternative motivation for the maximum entropy method (MEM) is given and its practical implementation discussed. The need for nonlinear restoration methods in general is considered, arguing in favor of nonclassical techniques such as MEM. Earlier work on MEM is summarized and the present approach is introduced. The whole family of restoration methods based on maximizing the integral of some function of the brightness is addressed. Criteria for the choice of the function are given and their properties are discussed. A parameter for measuring the resolution of the restored map is identified, and a scheme for controlling it by adding a constant to the zero-spacing correlation is introduced. Numerical schemes for implementing MEM are discussed and restorations obtained with various choices of the brightness function are compared. Data noise is discussed, showing that the standard least squares approach leads to a bias in the restoration.

  14. Optical Spectrum Analysis of Real-Time TDDFT Using the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Toogoshi, M.; Kato, M.; Kano, S. S.; Zempo, Y.

    2014-05-01

    In the calculation of time-dependent density-functional theory in real time, we apply an external field to perturb the optimized electronic structure and follow the time evolution of the dipole moment to calculate the oscillator strength distribution. We solve the time-dependent equation of motion, keeping track of the dipole moment as time-series data. We adopt Burg's maximum entropy method (MEM) to compute the spectrum of the oscillator strength, and apply this technique to several molecules. We find that MEM provides the oscillator strength distribution at high resolution even with half the evolution time required by a simple FFT of the dynamic dipole moment. In this paper we show the effectiveness and efficiency of MEM in comparison with that of FFT. Not only the total number of time steps but also the length of the autocorrelation, the lag, plays an important role in improving the resolution of the spectrum.
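Burg's MEM amounts to fitting an autoregressive model by the Burg recursion and reading the spectrum off the AR transfer function, P(f) = E/|Σₖ aₖe^(−2πifk)|². A minimal sketch on a short noisy sinusoid (illustrative, not the authors' TDDFT code):

```python
import numpy as np

def burg(x, order):
    """Burg recursion: AR coefficients a (a[0] = 1) and prediction-error power E."""
    f, b = x[1:].astype(float), x[:-1].astype(float)
    a = np.array([1.0])
    E = np.mean(x.astype(float) ** 2)
    for _ in range(order):
        k = -2.0 * (f @ b) / (f @ f + b @ b)      # reflection coefficient, |k| <= 1
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        E *= 1.0 - k * k                          # prediction-error power update
        f, b = (f + k * b)[1:], (b + k * f)[:-1]  # update and re-align error series
    return a, E

rng = np.random.default_rng(0)
t = np.arange(128)                                # deliberately short record
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=t.size)

a, E = burg(x, order=8)
freqs = np.linspace(0.0, 0.5, 1024)
A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(a.size))) @ a
psd = E / np.abs(A) ** 2                          # MEM power spectral density
peak = freqs[np.argmax(psd)]
print(peak)                                       # close to the true 0.1
```

Because the AR model extrapolates the autocorrelation beyond the measured lags, the MEM peak is far sharper than the FFT resolution 1/N of the same short record, which is the advantage the abstract exploits.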

  15. Deconvolution of complex echo signals by the maximum entropy method in ultrasonic nondestructive inspection

    NASA Astrophysics Data System (ADS)

    Bazulin, A. E.; Bazulin, E. G.

    2009-11-01

    The problem of inverting the convolution with the point-source echo signal function is considered using regularization and the maximum entropy method, with subsequent reconstruction of two-dimensional images by projection in the spectral domain. The inverse convolution problem is solved for the complex-valued signal obtained from the real-valued signal through the Hilbert transform. Numerical and experimental simulations are performed. A possibility of enhancing the echo signal resolution along the ray and of lowering the spectrum's noise level with the use of complex signals (pseudo-random sequences) is demonstrated. The results are compared with those obtained using the autoregression method and the reference hologram method.

  16. Comparison of two views of maximum entropy in biodiversity: Frank (2011) and Pueyo et al. (2007)

    PubMed Central

    Pueyo, Salvador

    2012-01-01

    An increasing number of authors agree in that the maximum entropy principle (MaxEnt) is essential for the understanding of macroecological patterns. However, there are subtle but crucial differences among the approaches by several of these authors. This poses a major obstacle for anyone interested in applying the methodology of MaxEnt in this context. In a recent publication, Frank (2011) gives some arguments why his own approach would represent an improvement as compared to the earlier paper by Pueyo et al. (2007) and also to the views by Edwin T. Jaynes, who first formulated MaxEnt in the context of statistical physics. Here I show that his criticisms are flawed and that there are fundamental reasons to prefer the original approach. PMID:22837843

  17. Block entropy and quantum phase transition in the anisotropic Kondo necklace model

    NASA Astrophysics Data System (ADS)

    Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.

    2010-06-01

    We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins using the density matrix renormalization group method. It was found that the block entropy presents a maximum for each η considered, and, comparing it with the results of the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for giving information about where a quantum phase transition occurs in this model. We observe that the block entropy also presents a maximum at the quantum critical points that are obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; when Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.
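The block entropy itself is straightforward to compute for a small chain by exact diagonalization (a toy sketch: the paper uses DMRG on the Kondo necklace model, whereas the 4-site anisotropic spin chain and the anisotropy value below are hypothetical stand-ins):

```python
import numpy as np

# Von Neumann entropy of a half-chain block for a small anisotropic spin
# chain, via exact diagonalization (illustrative; not DMRG, not the Kondo
# necklace Hamiltonian).
sx = np.array([[0, 1], [1, 0]]) * 0.5
sy = np.array([[0, -1j], [1j, 0]]) * 0.5
sz = np.array([[1, 0], [0, -1]]) * 0.5

def op(o, site, n):
    """Embed the single-site operator o at position `site` in an n-site chain."""
    m = np.array([[1.0 + 0j]])
    for i in range(n):
        m = np.kron(m, o if i == site else np.eye(2))
    return m

n = 4
eta = 0.5   # XY anisotropy (hypothetical value)
H = sum(op(sx, i, n) @ op(sx, i + 1, n)
        + eta * op(sy, i, n) @ op(sy, i + 1, n)
        + op(sz, i, n) @ op(sz, i + 1, n)
        for i in range(n - 1))

_, v = np.linalg.eigh(H)
psi = v[:, 0]                                    # ground state
rho = np.outer(psi, psi.conj())
# reduced density matrix of the first two sites: trace out the last two
rho_A = np.trace(rho.reshape(4, 4, 4, 4), axis1=1, axis2=3)
evals = np.linalg.eigvalsh(rho_A).real
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(S)   # von Neumann block entropy, bounded above by 2 ln 2
```

Sweeping a model parameter and locating the maximum of S computed this way is the small-system analogue of the entanglement-based detection of the quantum critical point described in the abstract.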

  18. Forest Tree Species Distribution Mapping Using Landsat Satellite Imagery and Topographic Variables with the Maximum Entropy Method in Mongolia

    NASA Astrophysics Data System (ADS)

    Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn

    2016-06-01

    Forests are a very important ecosystem and natural resource for living things. Based on forest inventories, governments are able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming because it requires intensive physical labor and its costs are high, especially when surveying remote mountainous regions. A reliable forest inventory provides more accurate and timely information for developing new and efficient approaches to forest management. Remote sensing technology has recently been used for forest investigation at large scales. To produce an informative forest inventory, forest attributes, including tree species, must be considered. The aim of this study is to classify forest tree species in Erdenebulgan County, Huwsgul province, Mongolia, using the Maximum Entropy method. The study area is covered by dense forest comprising almost 70% of the total territorial extent of Erdenebulgan County, located in a high mountain region of northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was obtained from the Forest Division of the Mongolian Ministry of Nature and Environment as training data, and was also used as ground truth for the accuracy assessment of the tree species classification. Landsat images and the DEM were processed for maximum entropy modeling in two experiments: the first uses Landsat surface reflectance alone for tree species classification; the second incorporates terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess classification accuracy. Results show that the second experiment, which couples Landsat surface reflectance with terrain variables, achieves the better classification accuracy.

  19. Application of maximum-entropy spectral estimation to deconvolution of XPS data. [X-ray Photoelectron Spectroscopy

    NASA Technical Reports Server (NTRS)

    Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.

    1981-01-01

    A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.
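The prediction-filter picture above can be made concrete with Burg's recursion, the standard route to a maximum-entropy (autoregressive) spectrum. The sketch below is a generic illustration, not the authors' code; the returned prediction-error power sequence E_0, ..., E_p is the quantity on which an order-selection criterion of the kind described can be based.

```python
import numpy as np

def burg_mem(x, order):
    """Burg recursion: AR coefficients a_1..a_p and the
    prediction-error power sequence E_0..E_p."""
    x = np.asarray(x, dtype=float)
    a = np.zeros(0)
    E = [np.dot(x, x) / len(x)]            # zeroth-order error power
    f, b = x[1:].copy(), x[:-1].copy()     # forward/backward prediction errors
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate((a + k * a[::-1], [k]))   # Levinson-style update
        E.append(E[-1] * (1.0 - k * k))
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, np.array(E)

def mem_psd(a, E_p, freqs):
    """Maximum-entropy PSD at normalized frequencies (cycles/sample)."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, len(a) + 1)))
    return E_p / np.abs(1.0 + z @ a) ** 2

# Noisy sinusoid at 0.2 cycles/sample; an order-4 filter resolves it sharply
rng = np.random.default_rng(0)
t = np.arange(400)
x = np.cos(2 * np.pi * 0.2 * t) + 0.05 * rng.standard_normal(400)
a, E = burg_mem(x, order=4)
freqs = np.linspace(0.0, 0.5, 1001)
peak = freqs[np.argmax(mem_psd(a, E[-1], freqs))]   # lands near 0.2
```

The sequence E is non-increasing with order; watching where it flattens is the spirit of the prediction-error-power criterion mentioned in the abstract.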

  20. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    NASA Astrophysics Data System (ADS)

    Toogoshi, M.; Kano, S. S.; Zempo, Y.

    2015-09-01

    The maximum entropy method (MEM) is one of the key techniques for spectral analysis. The major feature is that spectra in the low frequency part can be described by the short time-series data. Thus, we applied MEM to analyse the spectrum from the time dependent dipole moment obtained from the time-dependent density functional theory (TDDFT) calculation in real time. It is intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data. We proposed that, as an improved MEM analysis, we use the concatenated data set made from the several-times repeated raw data. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, the higher resolution can be obtained, which is closer to that of FT with practically time-evoluted data as the same total number of time steps. The efficiency and the characteristic feature of this technique are presented in this paper.

  1. Spectral analysis of the Chandler wobble: comparison of the discrete Fourier analysis and the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Brzezinski, A.

    2014-12-01

    The methods of spectral analysis are applied to solve the following two problems concerning the free Chandler wobble (CW): 1) to estimate the CW resonance parameters, the period T and the quality factor Q, and 2) to perform the excitation balance of the observed free wobble. It appears, however, that the results depend on the algorithm of spectral analysis applied. Here we compare the following two algorithms which are frequently applied for analysis of the polar motion data, the classical discrete Fourier analysis and the maximum entropy method corresponding to the autoregressive modeling of the input time series. We start from general description of both methods and of their application to the analysis of the Earth orientation observations. Then we compare results of the analysis of the polar motion and the related excitation data.

  2. The effect of the shape function on small-angle scattering analysis by the maximum entropy method

    SciTech Connect

    Jemian, P.R.; Allen, A.J.

    1992-09-15

    Analysis of small-angle scattering data to obtain a particle size distribution is dependent upon the shape function used to model the scattering. Using a maximum entropy analysis of small-angle scattering data, the effect of shape-function selection on the obtained size distribution is demonstrated using three different shape functions to describe the same scattering data from each of two steels. Electron microscopy has revealed that the alloys contain a distribution of randomly oriented, mainly non-interacting, irregular, ellipsoidal precipitates. Comparison is made between the different forms of the shape function. The effect of an incident wavelength distribution is also shown. The importance of testing appropriate shape functions and validating them against other microstructural studies is discussed.

  3. Entropy Based Modelling for Estimating Demographic Trends.

    PubMed

    Li, Guoqi; Zhao, Daxuan; Xu, Yi; Kuo, Shyh-Hao; Xu, Hai-Yan; Hu, Nan; Zhao, Guangshe; Monterola, Christopher

    2015-01-01

    In this paper, an entropy-based method is proposed to forecast the demographic changes of countries. We formulate the estimation of future demographic profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases in time. The proposed method involves three stages, namely: 1) prediction of the age distribution of a country's population based on an "age-structured population model"; 2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and 3) estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated on real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables.
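As a toy illustration of the entropy-based formulation in stage 2, one can maximize the Shannon entropy of a discrete age distribution subject to normalization and a known mean age. The age bands and mean-age value below are hypothetical, and this primal formulation via SciPy's SLSQP solver is only a sketch of the general idea, not the authors' three-stage procedure.

```python
import numpy as np
from scipy.optimize import minimize

ages = np.arange(0, 100, 10) + 5.0   # age-band midpoints (hypothetical)
mean_age = 38.0                      # assumed known mean-age constraint

def neg_entropy(p):
    # minimize negative Shannon entropy = maximize entropy
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},        # normalization
    {"type": "eq", "fun": lambda p: p @ ages - mean_age},  # mean-age constraint
]
p0 = np.full(len(ages), 1.0 / len(ages))
res = minimize(neg_entropy, p0, constraints=cons,
               bounds=[(0.0, 1.0)] * len(ages), method="SLSQP")
p = res.x   # maximum-entropy age distribution under the constraints
```

Because the constrained mean (38) is below the uniform mean (50), the solution is a Gibbs-type distribution that decays with age.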

  4. Non-equilibrium thermodynamics, maximum entropy production and Earth-system evolution.

    PubMed

    Kleidon, Axel

    2010-01-13

    The present-day atmosphere is in a unique state far from thermodynamic equilibrium. This uniqueness is for instance reflected in the high concentration of molecular oxygen and the low relative humidity in the atmosphere. Given that the concentration of atmospheric oxygen has likely increased throughout Earth-system history, we can ask whether this trend can be generalized to a trend of Earth-system evolution that is directed away from thermodynamic equilibrium, why we would expect such a trend to take place and what it would imply for Earth-system evolution as a whole. The justification for such a trend could be found in the proposed general principle of maximum entropy production (MEP), which states that non-equilibrium thermodynamic systems maintain steady states at which entropy production is maximized. Here, I justify and demonstrate this application of MEP to the Earth at the planetary scale. I first describe the non-equilibrium thermodynamic nature of Earth-system processes and distinguish processes that drive the system's state away from equilibrium from those that are directed towards equilibrium. I formulate the interactions among these processes from a thermodynamic perspective and then connect them to a holistic view of the planetary thermodynamic state of the Earth system. In conclusion, non-equilibrium thermodynamics and MEP have the potential to provide a simple and holistic theory of Earth-system functioning. This theory can be used to derive overall evolutionary trends of the Earth's past, identify the role that life plays in driving thermodynamic states far from equilibrium, identify habitability in other planetary environments and evaluate human impacts on Earth-system functioning.

  5. An Instructive Model of Entropy

    ERIC Educational Resources Information Center

    Zimmerman, Seth

    2010-01-01

    This article first notes the misinterpretation of a common thought experiment, and the misleading comment that "systems tend to flow from less probable to more probable macrostates". It analyses the experiment, generalizes it and introduces a new tool of investigation, the simplectic structure. A time-symmetric model is built upon this structure,…

  7. Maximum-entropy calculation of free energy distributions for two forms of myoglobin.

    PubMed

    Poland, Douglas

    2002-03-01

    The temperature dependence of the heat capacity of myoglobin depends dramatically on pH. At low pH (near 4.5), there are two weak maxima in the heat capacity at low and intermediate temperatures, respectively, whereas at high pH (near 10.7), there is one strong maximum at high temperature. Using literature data for the low-pH form (Hallerbach and Hinz, 1999) and for the high-pH form (Makhatadze and Privalov, 1995), we applied a recently developed technique (Poland, 2001d) to calculate the free energy distributions for the two forms of the protein. In this method, the temperature dependence of the heat capacity is used to calculate moments of the protein enthalpy distribution function, which in turn, using the maximum-entropy method, are used to construct the actual distribution function. The enthalpy distribution function for a protein gives the fraction of protein molecules in solution having a given value of the enthalpy, which can be interpreted as the probability that a molecule picked at random has a given enthalpy value. Given the enthalpy distribution functions at several temperatures, one can then construct a master free energy function from which the probability distributions at all temperatures can be calculated. For the high-pH form of myoglobin, the enthalpy distribution function that is obtained exhibits bimodal behavior at the temperature corresponding to the maximum in the heat capacity (Poland, 2001a), reflecting the presence of two populations of molecules (native and unfolded). For this form of myoglobin, the temperature evolution of the relative probabilities of the two populations can be obtained in detail from the master free energy function. In contrast, the enthalpy distribution function for the low-pH form of myoglobin does not show any special structure at any temperature. 
In this form of myoglobin the enthalpy distribution function simply exhibits a single maximum at all temperatures, with the position of the maximum increasing to higher

  8. Recovery of lifetime distributions from frequency-domain fluorometry data by means of the quantified maximum entropy method.

    PubMed

    Brochon, J C; Pouget, J; Valeur, B

    1995-06-01

    The new quantified version of the maximum entropy method allows one to recover lifetime distributions with a precise statement of the accuracy of position, surface, and broadness of peaks in the distribution. Applications to real data (2,6-ANS in aqueous solutions of sodium dodecyl sulfate micelles or β-cyclodextrin) are presented.

  9. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy.

    PubMed

    Jiang, Yulin; Li, Bin; Chen, Jie

    2016-01-01

    The flow velocity distribution in a partially filled circular pipe was investigated in this paper. The velocity profile differs from full-filled pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum velocity is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by wall friction and the pipe bottom slope, and the velocity variation is similar to that in full-filled pipe flow. But near the free water surface, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature survey shows that relatively little research has addressed a practical expression for the velocity distribution in partially filled circular pipes. An expression for the two-dimensional (2D) velocity distribution in partially filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared according to fluid knowledge, and non-extensive (Tsallis) entropy was chosen. A new cumulative distribution function (CDF) of the velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity was analyzed. The experimental results show that velocity values estimated from the principle of maximum Tsallis wavelet entropy are in good agreement with measured values.
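For the simpler Shannon-entropy case, the POME velocity profile reduces to Chiu's classical formula u = (u_max / M) ln[1 + (e^M - 1) F], where M is an entropy parameter and F is the assumed CDF of velocity. The sketch below uses a hypothetical linear CDF F = y/D and illustrative values of u_max and M; the paper's Tsallis-entropy CDF for partially filled pipes is more elaborate.

```python
import numpy as np

def chiu_velocity(F, u_max, M):
    """Shannon-entropy (POME) velocity profile, Chiu's formula.
    F is the cumulative distribution of velocity, in [0, 1]."""
    return (u_max / M) * np.log1p((np.exp(M) - 1.0) * F)

# Hypothetical flow: u_max = 1.2 m/s, entropy parameter M = 4,
# and the simplest CDF assumption F = y/D (linear in depth y)
D = 0.5
y = np.linspace(0.0, D, 101)
u = chiu_velocity(y / D, u_max=1.2, M=4.0)
```

The profile is monotonic from u = 0 at the bed to u = u_max at F = 1; larger M concentrates the velocity toward u_max, which is how the entropy parameter encodes flow uniformity.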

  10. Maximum entropy theory and the rapid relaxation of three-dimensional quasi-geostrophic turbulence.

    PubMed

    Schecter, David A

    2003-12-01

    Turbulent flow in a rapidly rotating stably stratified fluid (quasi-geostrophic turbulence) commonly decays toward a stable pattern of large-scale jets or vortices. A formula for the most probable three-dimensional end state, the maximum entropy state (MES), is derived using a form of Lynden-Bell statistical mechanics. The MES is determined by a set of integral invariants, including energy, as opposed to a complete description of the initial condition. A computed MES qualitatively resembles the quasistationary end state of a numerical simulation that is initialized with red noise, and relaxes for a time on the order of 100 (initial) eddy turnovers. However, the potential enstrophy of the end state, obtained from a coarsened potential vorticity distribution, exceeds that of the MES by nearly a factor of 2. The estimated errors for both theory and simulation do not account for the discrepancy. This suggests that the MES, if ever realized, requires a much longer time scale to fully develop.

  11. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparisons with the DSMC method for relaxation and shock-wave problems will be presented. Partially supported by NSF grant DMS-1418903.

  12. Maximum-entropy reconstruction method for moment-based solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2013-11-01

    We describe a method for a moment-based solution of the Boltzmann equation. This starts with moment equations for a (10 + 9N)-moment representation, N = 0, 1, 2, .... The partial-differential equations (PDEs) for these moments are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy construction of the velocity distribution function f(c, x, t), using the known moments, within a finite-box domain of single-particle-velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using a Monte-Carlo method. This allows integration of the moment PDEs in time. Illustrative examples will include zero-space-dimensional relaxation of f(c, t) from a Mott-Smith-like initial condition toward equilibrium and one-space-dimensional, finite-Knudsen-number, planar Couette flow. Comparison with results using the direct-simulation Monte-Carlo method will be presented.
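The finite-box maximum-entropy reconstruction can be illustrated in one dimension: given target moments, minimize the convex dual (log-partition minus λ·μ) over the Lagrange multipliers; the gradient of the dual vanishes exactly when the reconstructed moments match the targets. The grid, moment set, and target values below are illustrative assumptions, not the authors' 10 + 9N system.

```python
import numpy as np
from scipy.optimize import minimize

# Finite velocity box (1D illustration of the finite-domain reconstruction)
c = np.linspace(-5.0, 5.0, 2001)
dc = c[1] - c[0]
phi = np.vstack([c, c**2])      # moment functions: mean and energy
mu = np.array([0.5, 1.25])      # target moments (a drifting Maxwellian:
                                # mean 0.5, variance 1.0 => <c^2> = 1.25)

def dual(lam):
    # Convex dual: log Z(lam) - lam . mu, evaluated by quadrature on the box
    expo = lam @ phi
    m = expo.max()                                   # guard against overflow
    logZ = m + np.log(np.sum(np.exp(expo - m)) * dc)
    return logZ - lam @ mu

res = minimize(dual, np.zeros(2), method="BFGS")
lam = res.x
w = np.exp(lam @ phi)
f = w / (w.sum() * dc)          # maximum-entropy reconstruction on the box
moments = phi @ f * dc          # should reproduce mu at the optimum
```

Because the dual is convex, any local minimizer is the global one, which is the practical appeal of working in multiplier space rather than constraining the density directly.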

  13. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation

    SciTech Connect

    Liu, Jian; Miller, William H.

    2008-08-01

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 K and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  14. Computational Amide I Spectroscopy for Refinement of Disordered Peptide Ensembles: Maximum Entropy and Related Approaches

    NASA Astrophysics Data System (ADS)

    Reppert, Michael; Tokmakoff, Andrei

    The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly reports on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.

  15. Iteration free vector orientation using maximum relative entropy with observational priors

    NASA Astrophysics Data System (ADS)

    Urniezius, Renaldas; Giffin, Adom

    2012-05-01

    The amount of data to be measured and processed becomes extremely large in modern digital systems. As a result, the performance of the method used to process the data is of prime importance. This paper presents an iteration-free method based on maximum relative entropy, in which observed data are introduced as a Gaussian prior. It is shown that the posterior is a Gaussian function, but with updated means. Such an approach is capable of performing optimization for a single sample, which makes it applicable to real-time applications. A practical example is provided in the paper where updating is performed on vector coordinate variables, with an average vector length as the constraint. This produces a general vector normalization solution that takes into account uncertainty in the measurements. A special case of this updating is also shown in the limit as the standard deviation of the priors goes to zero (essentially making them Dirac delta functions). This reproduces the vector normalization formula. In addition, comparisons with some Bayesian filtering methods are discussed.

  16. Initial system-bath state via the maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Dai, Jibo; Len, Yink Loong; Ng, Hui Khoon

    2016-11-01

    The initial state of a system-bath composite is needed as the input for prediction from any quantum evolution equation to describe subsequent system-only reduced dynamics or the noise on the system from joint evolution of the system and the bath. The conventional wisdom is to write down an uncorrelated state as if the system and the bath were prepared in the absence of each other; yet, such a factorized state cannot be the exact description in the presence of system-bath interactions. Here, we show how to go beyond the simplistic factorized-state prescription using ideas from quantum tomography: We employ the maximum-entropy principle to deduce an initial system-bath state consistent with the available information. For the generic case of weak interactions, we obtain an explicit formula for the correction to the factorized state. Such a state turns out to have little correlation between the system and the bath, which we can quantify using our formula. This has implications, in particular, on the subject of subsequent non-completely positive dynamics of the system. Deviation from predictions based on such an almost uncorrelated state is indicative of accidental control of hidden degrees of freedom in the bath.

  17. Maximum entropy estimation of glutamate and glutamine in MR spectroscopic imaging.

    PubMed

    Rathi, Yogesh; Ning, Lipeng; Michailovich, Oleg; Liao, HuiJun; Gagoski, Borjan; Grant, P Ellen; Shenton, Martha E; Stern, Robert; Westin, Carl-Fredrik; Lin, Alexander

    2014-01-01

    Magnetic resonance spectroscopic imaging (MRSI) is often used to estimate the concentration of several brain metabolites. Abnormalities in these concentrations can indicate specific pathology, which can be quite useful in understanding the disease mechanism underlying those changes. Due to higher concentration, metabolites such as N-acetylaspartate (NAA), Creatine (Cr) and Choline (Cho) can be readily estimated using standard Fourier transform techniques. However, metabolites such as Glutamate (Glu) and Glutamine (Gln) occur in significantly lower concentrations and their resonance peaks are very close to each other making it difficult to accurately estimate their concentrations (separately). In this work, we propose to use the theory of 'Spectral Zooming' or high-resolution spectral analysis to separate the Glutamate and Glutamine peaks and accurately estimate their concentrations. The method works by estimating a unique power spectral density, which corresponds to the maximum entropy solution of a zero-mean stationary Gaussian process. We demonstrate our estimation technique on several physical phantom data sets as well as on in-vivo brain spectroscopic imaging data. The proposed technique is quite general and can be used to estimate the concentration of any other metabolite of interest.

  18. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data.

    PubMed

    Das, Jayajit; Mukherjee, Sayak; Hodge, Susan E

    2015-07-01

    A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
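The paper's introductory example is easy to check numerically: for Y1, Y2 independent and uniform on [0, 1], the density of X = Y1 + Y2 is triangular, q(x) = x on [0, 1] and 2 - x on [1, 2]. A minimal Monte-Carlo sketch (a generic check, not the authors' MaxEnt construction):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(0.0, 1.0, size=(2, 200_000))
x = y.sum(axis=0)                    # X = Y1 + Y2, so 0 <= X <= 2

# Exact triangular density: q(x) = x on [0, 1], 2 - x on [1, 2]
hist, edges = np.histogram(x, bins=40, range=(0.0, 2.0), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
exact = np.where(mids <= 1.0, mids, 2.0 - mids)
```

Here m = 2 collapses to n = 1 uniquely; the paper's MaxEnt machinery is needed for the opposite, under-determined direction (n > m), where no such direct check exists.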

  19. On the relevance of the maximum entropy principle in non-equilibrium statistical mechanics

    NASA Astrophysics Data System (ADS)

    Auletta, Gennaro; Rondoni, Lamberto; Vulpiani, Angelo

    2017-07-01

    At first glance, the maximum entropy principle (MEP) allows us to derive, or justify in a simple way, fundamental results of equilibrium statistical mechanics. Because of this, a school of thought considers the MEP as a powerful and elegant way to make predictions in physics and other disciplines, rather than a useful technical tool like others in statistical physics. From this point of view the MEP appears as an alternative and more general predictive method than the traditional ones of statistical physics. Actually, careful inspection shows that such success is due to a series of fortunate facts that characterize the physics of equilibrium systems, but which are absent in situations not described by Hamiltonian dynamics, or generically in nonequilibrium phenomena. Here we discuss several important examples in nonequilibrium statistical mechanics, in which the MEP leads to incorrect predictions, proving that it does not have a predictive nature. We conclude that, in these paradigmatic examples, an approach that uses a detailed analysis of the relevant aspects of the dynamics cannot be avoided.

  20. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    PubMed

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.

  1. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a Benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with Benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close. This suggests that MaxEnt is a valuable method for linking quantitative data (e.g. contaminant concentrations) with more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.
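A minimal sketch of rebuilding a bivariate probability table consistent with known marginals is iterative proportional fitting, which converges to the minimum-discrimination-information (MaxEnt relative to a seed) table; all numbers below are hypothetical, and the authors' MaxEnt formulation is not identical to this.

```python
import numpy as np

# Hypothetical seed table: rough co-occurrence of 3 toxicity classes
# (rows) and 4 Benzene concentration classes (columns)
seed = np.array([[4.0, 2.0, 1.0, 0.5],
                 [1.0, 3.0, 3.0, 1.0],
                 [0.5, 1.0, 2.0, 4.0]])
row_m = np.array([0.5, 0.3, 0.2])        # assumed known toxicity marginal
col_m = np.array([0.4, 0.3, 0.2, 0.1])   # assumed known concentration marginal

P = seed / seed.sum()
for _ in range(200):                     # iterative proportional fitting
    P *= (row_m / P.sum(axis=1))[:, None]   # match row marginal
    P *= col_m / P.sum(axis=0)              # match column marginal
```

With a uniform seed the result would reduce to the product of the marginals; the seed is what carries the dependence information between toxicity and concentration.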

  2. Nonuniform sampling and maximum entropy reconstruction applied to the accurate measurement of residual dipolar couplings.

    PubMed

    Kubat, Jayne A; Chou, James J; Rovnyak, David

    2007-06-01

    Residual dipolar couplings (RDC) provide important global restraints for accurate structure determination by NMR. We show that nonuniform sampling in combination with maximum entropy reconstruction (MaxEnt) is a promising strategy for accelerating and potentially enhancing the acquisition of RDC spectra. Using MaxEnt-processed spectra of nonuniformly sampled data sets reduced to as little as one fifth of the uniformly sampled size, accurate 13C′-13Cα RDCs can be obtained that agree, with an RMS deviation of 0.67 Hz, with those derived from uniformly sampled, Fourier-transformed spectra. While confirming that frequency errors in MaxEnt spectra are very slight, an unexpected class of systematic errors was found to occur in the 6th significant figure of 13C′ chemical shifts of doublets obtained by MaxEnt reconstruction. We show that this error stems from slight line-shape perturbations and predict that it should be encountered in other nonlinear spectral estimation algorithms. In the case of MaxEnt reconstruction, the error can easily be rendered systematic by straightforward optimization of MaxEnt reconstruction parameters, and self-cancels in obtaining RDCs from nonuniformly sampled, MaxEnt-reconstructed spectra.

  3. Reconstruction of an atmospheric tracer source using the principle of maximum entropy. I: Theory

    NASA Astrophysics Data System (ADS)

    Bocquet, Marc

    2005-07-01

    Over recent years, tracing back sources of chemical species dispersed through the atmosphere has been of considerable importance, with an emphasis on increasing the precision of the source resolution. This need stems from many problems: being able to estimate the emissions of pollutants; spotting the source of radionuclides; evaluating diffuse gas fluxes; etc. We study the high-resolution retrieval on a continental scale of the source of a passive atmospheric tracer, given a set of concentration measurements. In this first of a two-part paper, we lay out and develop theoretical grounds for the reconstruction. Our approach is based on the principle of maximum entropy on the mean. It offers a general framework in which the information input prior to the inversion is used in a flexible and controlled way. The inversion is shown to be equivalent to the minimization of an optimal cost function, expressed in the dual space of observations. Examples of such cost functions are given for different priors of interest to the retrieval of an atmospheric tracer. In this respect, variational assimilation (4D-Var), as well as projection techniques, are obtained as by-products of the method. The framework is enlarged to incorporate noisy data in the inversion scheme. Part II of this paper is devoted to the application and testing of these methods.

  4. Time-dependent radiative transfer through thin films: Chapman Enskog-maximum entropy method

    NASA Astrophysics Data System (ADS)

    Abulwafa, E. M.; Hassan, T.; El-Wakil, S. A.; Razi Naqvi, K.

    2005-09-01

    Approximate solutions to the time-dependent radiative transfer equation, also called the phonon radiative transfer equation, for a plane-parallel system have been obtained by combining the flux-limited Chapman-Enskog approximation with the maximum entropy method. For problems involving heat transfer at small scales (short times and/or thin films), the results found by this combined approach are closer to the outcome of the more labour-intensive Laguerre-Galerkin technique (a moment method described recently by the authors) than the results obtained by using the diffusion equation (Fourier's law) or the telegraph equation (Cattaneo's law). The results for heat flux and temperature are presented in graphical form for xL = 0.01, 0.1, 1 and 10, and at τ = 0.01, 0.1, 1.0 and 10, where xL is the film thickness in mean free paths, and τ is the value of time in mean free times.

  5. The quantum dynamics of interfacial hydrogen: Path integral maximum entropy calculation of adsorbate vibrational line shapes for the H/Ni(111) system

    NASA Astrophysics Data System (ADS)

    Kim, Dongsup; Doll, J. D.; Gubernatis, J. E.

    1997-01-01

    Vibrational line shapes for a hydrogen atom on an embedded atom model (EAM) of the Ni(111) surface are extracted from path integral Monte Carlo data. Maximum entropy methods are utilized to stabilize this inversion. Our results indicate that anharmonic effects are significant, particularly for vibrational motion parallel to the surface. Unlike their normal mode analogs, calculated quantum line shapes for the EAM potential predict the correct ordering of vibrational features corresponding to parallel and perpendicular adsorbate motion.

  6. Entropy-based consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław Jerzy

    2016-09-01

    A description of software architecture is a plan of the IT system construction, therefore any architecture gaps affect the overall success of the entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent. Software architecture completeness is also often described in an ambiguous way. As a result, most methods of building IT systems contain many gaps and ambiguities, which present obstacles to the automation of software building. In this article the consistency and completeness of software architecture are mathematically defined based on the calculated entropy of the architecture description. Following this approach, we also propose a method for the automatic verification of consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of ongoing modelling work for the start of IT system building, and even assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of this approach is that it facilitates the preparation of complete and consistent software architecture more effectively, and enables assessment and monitoring of the ongoing modelling status. We demonstrate this with a few industrial examples of IT system designs.

  7. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a disaggregation approach based on a soil evaporation efficiency index. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (Land Information System (LIS) and an agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data-scarce regions.

  8. Simulation and Modeling in High Entropy Alloys

    NASA Astrophysics Data System (ADS)

    Toda-Caraballo, I.; Wróbel, J. S.; Nguyen-Manh, D.; Pérez, P.; Rivera-Díaz-del-Castillo, P. E. J.

    2017-08-01

    High entropy alloys (HEAs) are a fascinating field of research, with an increasing number of new alloys discovered. This would hardly be conceivable without the aid of materials modeling and computational alloy design to investigate the immense compositional space. The simplicity of the microstructure achieved contrasts with the enormous complexity of the composition, which, in turn, increases the variety of property behavior observed. Simulation and modeling techniques are of paramount importance in understanding such material performance. There are numerous examples of how different models have explained observed experimental results; yet there are theories and approaches developed for conventional alloys, where one element predominates, that need to be adapted or re-developed. In this paper, we review the current state of the art of the modeling techniques applied to explain HEA properties, identifying potential new areas of research to improve the predictability of these techniques.

  9. A maximum entropy approach to detect close-in giant planets around active stars

    NASA Astrophysics Data System (ADS)

    Petit, P.; Donati, J.-F.; Hébrard, E.; Morin, J.; Folsom, C. P.; Böhm, T.; Boisse, I.; Borgniet, S.; Bouvier, J.; Delfosse, X.; Hussain, G.; Jeffers, S. V.; Marsden, S. C.; Barnes, J. R.

    2015-12-01

    Context. The high spot coverage of young active stars is responsible for distortions of spectral lines that hamper the detection of close-in planets through radial velocity methods. Aims: We aim to progress towards more efficient exoplanet detection around active stars by optimizing the use of Doppler imaging in radial velocity measurements. Methods: We propose a simple method to simultaneously extract a brightness map and a set of orbital parameters through a tomographic inversion technique derived from classical Doppler mapping. Based on the maximum entropy principle, the underlying idea is to determine the set of orbital parameters that minimizes the information content of the resulting Doppler map. We carry out a set of numerical simulations to perform a preliminary assessment of the robustness of our method, using an actual Doppler map of the very active star HR 1099 to produce a realistic synthetic data set for various sets of orbital parameters of a single planet in a circular orbit. Results: Using a simulated time series of 50 line profiles affected by a peak-to-peak activity jitter of 2.5 km s-1, in most cases we are able to recover the radial velocity amplitude, orbital phase, and orbital period of an artificial planet down to a radial velocity semi-amplitude of the order of the radial velocity scatter due to the photon noise alone (about 50 m s-1 in our case). One noticeable exception occurs when the planetary orbit is close to co-rotation, in which case significant biases are observed in the reconstructed radial velocity amplitude, while the orbital period and phase remain robustly recovered. Conclusions: The present method constitutes a very simple way to extract orbital parameters from heavily distorted line profiles of active stars, when more classical radial velocity detection methods generally fail. 
It is easily adaptable to most existing Doppler imaging codes, paving the way towards a systematic search for close-in planets orbiting young, rapidly

  10. Improved constraints on transit time distributions from argon 39: A maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Holzer, Mark; Primeau, François W.

    2010-12-01

    We use 39Ar in conjunction with CFCs, natural radiocarbon, and the cyclostationary tracers PO4*, temperature, and salinity to estimate the ocean's transit time distributions (TTDs). A maximum entropy method is employed to deconvolve the tracer data for the TTDs. The constraint provided by the 39Ar data allows us to estimate TTDs even in the deep Pacific where CFCs have not yet penetrated. From the TTDs, we calculate the ideal mean age, Γ, the TTD width, Δ, and the mass fraction of water with transit times less than a century, f1. We also quantify the entropic uncertainties due to the nonuniqueness of the deconvolutions. In the Atlantic, the patterns of Γ and f1 reflect the distribution of the major water masses. At the deepest locations in the North Atlantic Γ ≃ 300 (+300/−100) a, while at the deepest locations in the South Atlantic Γ ≃ 500 (+200/−100) a. The Pacific is nearly homogeneous below 2000 m with Γ ≃ 1300 (+200/−50) a in the North Pacific and Γ ≃ 900 (+200/−100) a in the deep South Pacific. The Southern Ocean locations have little vertical structure, with Γ ranging from 300 to 450 a with an uncertainty of about (+150/−40) a. The importance of diffusion compared to advection as quantified by Δ/Γ has most probable values ranging from 0.2 to 3 but with large entropic uncertainty bounds ranging from 0.2 to 9. For the majority of locations analyzed, the effect of 39Ar is to reduce f1 and to correspondingly increase Γ by about a century. The additional constraint provided by 39Ar reduces the entropic uncertainties of f1 by roughly 50% on average.

  11. Cluster Prototypes and Fuzzy Memberships Jointly Leveraged Cross-Domain Maximum Entropy Clustering

    PubMed Central

    Qian, Pengjiang; Jiang, Yizhang; Deng, Zhaohong; Hu, Lingzhi; Sun, Shouwei; Wang, Shitong; Muzic, Raymond F.

    2016-01-01

    The classical maximum entropy clustering (MEC) algorithm usually cannot achieve satisfactory results when the data are insufficient, incomplete, or distorted. To address this problem, inspired by transfer learning, the specific cluster prototypes and fuzzy memberships jointly leveraged (CPM-JL) framework for cross-domain MEC (CDMEC) is first devised in this paper, and then the corresponding algorithm, referred to as CPM-JL-CDMEC, and the dedicated validity index, named fuzzy memberships-based cross-domain difference measurement (FM-CDDM), are proposed. In general, the contributions of this paper are fourfold: 1) benefiting from the delicate CPM-JL framework, CPM-JL-CDMEC features high clustering effectiveness and robustness even in some complex data situations; 2) the reliability of FM-CDDM has been demonstrated to be close to that of well-established external criteria, e.g., normalized mutual information and the Rand index, and it does not require additional label information; hence, using FM-CDDM as a dedicated validity index significantly enhances the applicability of CPM-JL-CDMEC in realistic scenarios; 3) the performance of CPM-JL-CDMEC is generally better than, or at least equal to, that of MEC, because CPM-JL-CDMEC can degenerate into the standard MEC algorithm with appropriate parameters, which avoids the issue of negative transfer; and 4) to maximize privacy protection, CPM-JL-CDMEC employs the known cluster prototypes and their associated fuzzy memberships, rather than the raw data in the source domain, as prior knowledge. Experimental studies thoroughly evaluated and demonstrated these advantages on both synthetic and real-life transfer datasets. PMID:26684257
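
The classical MEC baseline that this paper extends can be sketched in a few lines. This is a minimal illustration of one common form of the entropy-regularized iteration (the `mec` helper, the 1-D data, the initial prototypes, and the temperature-like parameter `beta` are all invented for illustration, not the CPM-JL-CDMEC algorithm): memberships are a softmax of negative squared distances, and prototypes are membership-weighted means.

```python
import numpy as np

# Classical maximum entropy clustering (soft k-means with an entropy
# regularizer): alternate membership and prototype updates until convergence.

def mec(x, v, beta=0.5, iters=50):
    for _ in range(iters):
        d2 = (x[:, None] - v[None, :]) ** 2      # squared distances, shape (n, c)
        u = np.exp(-d2 / beta)                   # entropy-regularized memberships
        u /= u.sum(axis=1, keepdims=True)        # each point's memberships sum to 1
        v = (u * x[:, None]).sum(axis=0) / u.sum(axis=0)  # weighted-mean prototypes
    return v, u

x = np.array([-0.1, 0.0, 0.1, 4.9, 5.0, 5.1])    # two well-separated 1-D groups
v, u = mec(x, v=np.array([1.0, 4.0]))
print(np.round(v, 3))                            # prototypes converge near 0 and 5
```

Smaller `beta` makes the memberships harder (closer to k-means); the transfer-learning extension in the paper additionally biases these updates with source-domain prototypes and memberships.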

  12. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. An answer is in part that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas for underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.

  13. Application of the maximum entropy technique in tomographic reconstruction from laser diffraction data to determine local spray drop size distribution

    NASA Astrophysics Data System (ADS)

    Yongyingsakthavorn, Pisit; Vallikul, Pumyos; Fungtammasan, Bundit; Dumouchel, Christophe

    2007-03-01

    This work proposes a new deconvolution technique to obtain local drop size distributions from line-of-sight intensity data measured by laser diffraction technique. The tomographic reconstruction, based on the maximum entropy (ME) technique, is applied to forward scattered light signal from a laser beam scanning horizontally through the spray on each plane from the center to the edge of spray, resulting in the reconstructed scattered light intensities at particular points in the spray. These reconstructed intensities are in turn converted to local drop size distributions. Unlike the classical method of the onion peeling technique or other mathematical transformation techniques that yield unrealistic negative scattered light intensity solutions, the maximum entropy constraints ensure positive light intensity. Experimental validations to the reconstructed results are achieved by using phase Doppler particle analyzer (PDPA). The results from the PDPA measurements agree very well with the proposed ME tomographic reconstruction.

  14. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  15. Using the Maximum Entropy Principle as a Unifying Theory for Characterization and Sampling of Multi-scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2012-01-25

    Scientific progress (excerpted from the report documentation page): the project during the first year focused on (1) deriving the maximum entropy ( MaxEnt ) distributions of... 1. Derivation and validation of MaxEnt distributions of Type I multi-scaling processes. Following the MaxEnt formalism, the probability distribution... corresponding to the given multi-scaling moments. The MaxEnt distributions have been validated against empirical histograms of soil moisture
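
The core step of deriving a MaxEnt distribution from moment constraints can be shown in miniature. This sketch is not the report's Type I multi-scaling derivation; the support, the target mean, and the `mean_of` helper are invented for illustration. Constraining only the mean yields the exponential-family form p_i ∝ exp(−λ·x_i), and the multiplier λ can be solved by bisection because the achieved mean decreases monotonically in λ.

```python
import numpy as np

# MaxEnt distribution on x = 0..10 subject to a single mean constraint.

x = np.arange(11.0)
target_mean = 3.0

def mean_of(lam):
    w = np.exp(-lam * x)          # unnormalized exponential-family weights
    return (w * x).sum() / w.sum()

lo, hi = -5.0, 5.0                # bracket: mean_of(-5) ~ 10, mean_of(5) ~ 0
for _ in range(200):              # bisection on the monotone mean
    mid = 0.5 * (lo + hi)
    if mean_of(mid) > target_mean:
        lo = mid                  # mean too large -> need a larger multiplier
    else:
        hi = mid
lam = 0.5 * (lo + hi)

p = np.exp(-lam * x)
p /= p.sum()
print(round(lam, 4), round((p * x).sum(), 6))   # achieved mean matches 3.0
```

With additional moment constraints (as for multi-scaling moments), the same recipe produces more multipliers, one per constraint, solved jointly instead of by scalar bisection.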

  16. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy

    PubMed Central

    2016-01-01

    The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile differs from that of fully filled pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum velocity is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by the friction of the wall and the slope of the pipe bottom, and the variation of velocity is similar to that in a fully filled pipe. But near the free water surface, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature survey shows that relatively little research has addressed a practical expression for the velocity distribution of a partially-filled circular pipe. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared in light of fluid knowledge, and a non-extensive entropy was chosen. A new cumulative distribution function (CDF) of partially-filled circular pipe velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity was analyzed. The experimental results show that the estimated velocity values based on the principle of maximum Tsallis wavelet entropy are in good agreement with measured values. PMID:26986064
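
The general shape of a POME velocity profile can be illustrated with the classical Shannon-entropy form due to Chiu, u/u_max = (1/M)·ln[1 + (e^M − 1)·ξ], rather than the Tsallis variant derived in this paper; the `chiu_velocity` helper and the parameter values are invented for illustration, with ξ ∈ [0, 1] the normalized coordinate of the hypothesized cumulative distribution and M controlling how uniform the profile is.

```python
import numpy as np

# Classical Chiu (Shannon-entropy) velocity profile: velocity grows from zero
# at the boundary (xi = 0) to u_max where the CDF coordinate reaches xi = 1.

def chiu_velocity(xi, u_max=1.0, M=2.0):
    return (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * xi)

xi = np.linspace(0.0, 1.0, 5)
u = chiu_velocity(xi)
print(np.round(u, 3))   # monotone rise from 0 at the boundary to u_max
```

Larger M concentrates the velocity variation near the boundary, which is the same qualitative behaviour the paper captures with its depth-dependent CDF hypothesis.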

  17. Using the Maximum Entropy Principle as a Unifying Theory for Characterization and Sampling of Multi-scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2010-02-25

    Report: Using the Maximum Entropy Principle as a Unifying Theory for Characterization and Sampling of Multi-scaling Processes in Hydrometeorology. Rafael L. Bras and Jingfeng Wang, 25 February 2010.

  18. Practical aspects of the maximum entropy inversion of the laplace transform for the quantitative analysis of multi-exponential data.

    PubMed

    Lórenz-Fonfría, Víctor A; Kandori, Hideki

    2007-01-01

    The number, position, area, and width of the bands in a lifetime distribution give the number of exponentials present in time-resolved data and their time constants, amplitudes, and heterogeneities. The maximum entropy inversion of the Laplace transform (MaxEnt-iLT) provides a lifetime distribution from time-resolved data, which is very helpful in the analysis of the relaxation of complex systems. In some applications both positive and negative values for the lifetime distribution amplitudes are physical, but most studies to date have focused on positive-constrained solutions. In this work, we first discuss optimal conditions to obtain a sign-unrestricted maximum entropy lifetime distribution, i.e., the selection of the entropy function and the regularization value. For the selection of the regularization value we compared four methods: the chi2 criterion and Bayesian inference (already used in sign-restricted MaxEnt-iLT), and the L-curve and the generalized cross-validation methods (not yet used in MaxEnt-iLT to our knowledge). Except for the frequently used chi2 criterion, these methods recommended similar regularization values, providing close to optimum solutions. However, even when an optimal entropy function and regularization value are used, a MaxEnt lifetime distribution will contain noise-induced errors, as well as systematic distortions induced by the entropy maximization (regularization-induced errors). We introduce the concept of the apparent resolution function in MaxEnt, which allows both the noise and regularization-induced errors to be estimated. We show the capability of this newly introduced concept in both synthetic and experimental time-resolved Fourier transform infrared (FT-IR) data from the bacteriorhodopsin photocycle.
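
The regularization-selection methods compared in this abstract can be sketched generically. The following is a minimal generalized cross-validation (GCV) example for linear Tikhonov (ridge) smoothing, not the authors' MaxEnt-iLT functional (where the influence operator is nonlinear); the basis, the synthetic data, and the `gcv_score` helper are invented for illustration.

```python
import numpy as np

# GCV for a linear smoother: GCV(a) = n * ||y - A(a) y||^2 / tr(I - A(a))^2,
# where A(a) is the ridge influence (hat) matrix; pick the a minimizing GCV.

def gcv_score(X, y, alpha):
    n = len(y)
    A = X @ np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T)
    resid = y - A @ y
    return n * (resid @ resid) / (n - np.trace(A)) ** 2

# Small multi-exponential-flavored problem: smooth decaying basis columns plus
# a deterministic high-frequency perturbation standing in for noise.
t = np.linspace(0.0, 1.0, 40)
X = np.column_stack([np.exp(-t / tau) for tau in (0.1, 0.3, 1.0)])
y = X @ np.array([1.0, -0.5, 2.0]) + 0.01 * np.sin(37.0 * t)

alphas = np.logspace(-8, 2, 41)
best = min(alphas, key=lambda a: gcv_score(X, y, a))
print(best)   # the regularization value minimizing the GCV score
```

The L-curve method would instead plot the residual norm against the solution (or entropy) norm over the same grid and pick the corner; the abstract reports that GCV and the L-curve recommend similar values, unlike the χ2 criterion.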

  19. Location of Cu2+ in CHA zeolite investigated by X-ray diffraction using the Rietveld/maximum entropy method

    PubMed Central

    Andersen, Casper Welzel; Bremholm, Martin; Vennestrøm, Peter Nicolai Ravnborg; Blichfeld, Anders Bank; Lundegaard, Lars Fahl; Iversen, Bo Brummerstedt

    2014-01-01

    Accurate structural models of reaction centres in zeolite catalysts are a prerequisite for mechanistic studies and further improvements to the catalytic performance. The Rietveld/maximum entropy method is applied to synchrotron powder X-ray diffraction data on fully dehydrated CHA-type zeolites with and without loading of catalytically active Cu2+ for the selective catalytic reduction of NOx with NH3. The method identifies the known Cu2+ sites in the six-membered ring and a not previously observed site in the eight-membered ring. The sum of the refined Cu occupancies for these two sites matches the chemical analysis and thus all the Cu is accounted for. It is furthermore shown that approximately 80% of the Cu2+ is located in the new 8-ring site for an industrially relevant CHA zeolite with Si/Al = 15.5 and Cu/Al = 0.45. Density functional theory calculations are used to corroborate the positions and identity of the two Cu sites, leading to the most complete structural description of dehydrated silicoaluminate CHA loaded with catalytically active Cu2+ cations. PMID:25485118

  20. Using maximum entropy to predict suitable habitat for the endangered dwarf wedgemussel in the Maryland Coastal Plain

    USGS Publications Warehouse

    Campbell, Cara; Hilderbrand, Robert H.

    2017-01-01

    Species distribution modelling can be useful for the conservation of rare and endangered species. Freshwater mussel declines have thinned species ranges producing spatially fragmented distributions across large areas. Spatial fragmentation in combination with a complex life history and heterogeneous environment makes predictive modelling difficult. A machine learning approach (maximum entropy) was used to model occurrences and suitable habitat for the federally endangered dwarf wedgemussel, Alasmidonta heterodon, in Maryland's Coastal Plain catchments. Landscape-scale predictors (e.g. land cover, land use, soil characteristics, geology, flow characteristics, and climate) were used to predict the suitability of individual stream segments for A. heterodon. The best model contained variables at three scales: minimum elevation (segment scale); percentage Tertiary deposits, low intensity development, and woody wetlands (sub-catchment); and percentage low intensity development, pasture/hay agriculture, and average depth to the water table (catchment). Despite a very small sample size owing to the rarity of A. heterodon, cross-validated prediction accuracy was 91%. Most predicted suitable segments occur in catchments not known to contain A. heterodon, which provides opportunities for new discoveries or population restoration. These model predictions can guide surveys toward the streams with the best chance of containing the species or, alternatively, away from those streams with little chance of containing A. heterodon. Developed reaches had low predicted suitability for A. heterodon in the Coastal Plain. Urban and exurban sprawl continues to modify stream ecosystems in the region, underscoring the need to preserve existing populations and to discover and protect new populations.

  1. THE LICK AGN MONITORING PROJECT: VELOCITY-DELAY MAPS FROM THE MAXIMUM-ENTROPY METHOD FOR Arp 151

    SciTech Connect

    Bentz, Misty C.; Barth, Aaron J.; Walsh, Jonelle L.; Horne, Keith; Bennert, Vardha Nicola; Treu, Tommaso; Canalizo, Gabriela; Filippenko, Alexei V.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Woo, Jong-Hak

    2010-09-01

    We present velocity-delay maps for optical H I, He I, and He II recombination lines in Arp 151, recovered by fitting a reverberation model to spectrophotometric monitoring data using the maximum-entropy method. H I response is detected over the range 0-15 days, with the response confined within the virial envelope. The Balmer-line maps have similar morphologies but exhibit radial stratification, with progressively longer delays for Hγ to Hβ to Hα. The He I and He II response is confined within 1-2 days. There is a deficit of prompt response in the Balmer-line cores but strong prompt response in the red wings. Comparison with simple models identifies two classes that reproduce these features: free-falling gas and a half-illuminated disk with a hot spot at small radius on the receding lune. Symmetrically illuminated models with gas orbiting in an inclined disk or an isotropic distribution of randomly inclined circular orbits can reproduce the virial structure but not the observed asymmetry. Radial outflows are also largely ruled out by the observed asymmetry. A warped-disk geometry provides a physically plausible mechanism for the asymmetric illumination and hot spot features. Simple estimates show that a disk in the broad-line region of Arp 151 could be unstable to warping induced by radiation pressure. Our results demonstrate the potential power of detailed modeling combined with monitoring campaigns at higher cadence to characterize the gas kinematics and physical processes that give rise to the broad emission lines in active galactic nuclei.

  2. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Liebler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
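
The pipeline described here can be sketched on a hypothetical transition matrix (the 3-site graph and its probabilities are invented for illustration, not the autopsy-derived matrices in the paper): rows of the Markov matrix hold transition probabilities between anatomical sites, the steady state is its left eigenvector, and the Shannon entropy of that steady state summarizes how spread out the predicted metastatic distribution is.

```python
import numpy as np

# Hypothetical 3-site metastasis chain: each row gives the probabilities of
# progressing from one site to {site 0, site 1, site 2}; rows sum to 1.
T = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

pi = np.full(3, 1.0 / 3.0)
for _ in range(200):               # power iteration on the left eigenproblem
    pi = pi @ T
pi /= pi.sum()                     # steady state (matches autopsy-style data)

entropy = -(pi * np.log(pi)).sum() # Shannon entropy of the steady state
print(np.round(pi, 4), round(entropy, 4))
```

In the paper's framework, high-entropy cancers have steady states spread over many sites, and entropy is then compared against graph-level complexity measures such as conductance.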

  3. Entropy, complexity, and Markov diagrams for random walk cancer models

    PubMed Central

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-01-01

    PMID:25523357

  4. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19


  5. Oseen vortex as a maximum entropy state of a two dimensional fluid

    NASA Astrophysics Data System (ADS)

    Montgomery, D. C.; Matthaeus, W. H.

    2011-07-01

    During the last four decades, a considerable number of investigations have been carried out into the evolution of turbulence in two dimensional Navier-Stokes flows. Much of the information has come from numerical solution of the (otherwise insoluble) dynamical equations and thus has necessarily required some kind of boundary conditions: spatially periodic, no-slip, stress-free, or free-slip. The theoretical framework that has proved to be of the most predictive value has been one employing an entropy functional (sometimes called the Boltzmann entropy) whose maximization has correlated well in several cases with the late-time configurations into which the computed turbulence has relaxed. More recently, flow in the unbounded domain has been addressed by Gallay and Wayne, who have shown a late-time relaxation to the classical Oseen vortex (also sometimes called the Lamb-Oseen vortex) for situations involving a finite net circulation or non-zero total integrated vorticity. Their proof involves powerful but difficult mathematics that might be thought to be beyond the preparation of many practicing fluid dynamicists. The purpose of the present paper is to remark that relaxation to the Oseen vortex can also be predicted in the more intuitive framework that has previously proved useful in predicting computational results with boundary conditions: that of an appropriate entropy maximization. The results make no assumption about the size of the Reynolds numbers, as long as they are finite, and the viscosity is treated as finite throughout.

  6. Influence of nonlinearity of the phonon dispersion relation on wave velocities in the four-moment maximum entropy phonon hydrodynamics

    NASA Astrophysics Data System (ADS)

    Larecki, Wieslaw; Banach, Zbigniew

    2014-01-01

    This paper analyzes the propagation of the waves of weak discontinuity in a phonon gas described by the four-moment maximum entropy phonon hydrodynamics involving a nonlinear isotropic phonon dispersion relation. For the considered hyperbolic equations of phonon gas hydrodynamics, the eigenvalue problem is analyzed and the condition of genuine nonlinearity is discussed. The speed of the wave front propagating into the region in thermal equilibrium is first determined in terms of the integral formula dependent on the phonon dispersion relation and subsequently explicitly calculated for the Dubey dispersion-relation model: |k| = ωc⁻¹(1 + bω²). The specification of the parameters c and b for sodium fluoride (NaF) and semimetallic bismuth (Bi) then makes it possible to compare the calculated dependence of the wave-front speed on the sample's temperature with the empirical relations of Coleman and Newman (1988) describing for NaF and Bi the variation of the second-sound speed with temperature. It is demonstrated that the calculated temperature dependence of the wave-front speed resembles the empirical relation and that the parameters c and b obtained from fitting respectively the empirical relation and the original material parameters of Dubey (1973) are of the same order of magnitude, the difference being in the values of the numerical factors. It is also shown that the calculated temperature dependence is in good agreement with the predictions of Hardy and Jaswal's theory (Hardy and Jaswal, 1971) on second-sound propagation. This suggests that the nonlinearity of a phonon dispersion relation should be taken into account in the theories aiming at the description of the wave-type phonon heat transport and that the Dubey nonlinear isotropic dispersion-relation model can be very useful for this purpose.

  7. The Study on Business Growth Process Management Entropy Model

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Enterprise growth is a dynamic process: the factors of enterprise development are changing all the time, which makes it difficult to study the management entropy of growth-oriented enterprises from a static viewpoint. This paper characterizes the growth stages of a business enterprise and puts forward a measuring and calculating model, based on enterprise management entropy, for business scale, enterprise capability, and development speed. According to the entropy measured by the model, an enterprise can adopt reform measures at the critical moment, allowing it to avoid crisis and take the road of sustainable development.

  8. Entropy of biogeochemical compartment models: complexity and information content as a tool for model development

    NASA Astrophysics Data System (ADS)

    Metzler, Holger; Sierra, Carlos A.

    2017-04-01

    Most soil organic matter decomposition models consist of a number of compartments describing the dynamics of substrate and microbial biomass pools. The fluxes of mass between the compartments are usually described by a system of ordinary differential equations, in which the number of compartments and the connections among them define the complexity of the model and the number of biological processes that need to be described. With this approach, it is difficult to determine the level of detail that is required to describe a given system, and it is also difficult to compare models against each other due to large differences in their level of complexity. Here, we propose entropy as a tool to determine the level of complexity required to describe a biogeochemical system and to compare the information content of different models. Instead of entire masses on bulk soil level, we look at such models from the point of view of a single particle on the molecular level. This particle enters the system, cycles through it, and leaves it at some point later in time, thereby following a path through the system. We think of this path as a particular stochastic process, a Markov renewal process. If we consider this path as a random variable in a path space, its Shannon information entropy describes its information content, i.e. how much we learn when we observe the entire path of a particle traveling through the system. In other words, it tells us how hard it is to predict this path and thus how much we do not know about what is going to happen to one single particle. The entropy as a measure of model complexity can help us to decide whether a model is not complex enough to represent the information that we have about a system or whether it is too complex. The concept of maximum entropy provides a powerful tool to develop unbiased models, i.e. models that contain the exact amount of information that we have about the system. In addition, differences between a soil organic matter

  9. Coupling diffusion and maximum entropy models to estimate thermal inertia

    USDA-ARS?s Scientific Manuscript database

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  10. Adaptive Statistical Language Modeling; A Maximum Entropy Approach

    DTIC Science & Technology

    1994-04-19


  11. Quantitative LC-MS of polymers: determining accurate molecular weight distributions by combined size exclusion chromatography and electrospray mass spectrometry with maximum entropy data processing.

    PubMed

    Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher

    2008-09-15

    We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.

  12. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
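    The maximum entropy (all-poles) power spectrum at the heart of MEPS is conventionally computed with Burg's recursion. Below is a minimal, generic Burg sketch applied to a synthetic sinusoid; the paper's windowing, spectrum linearization, and least-squares depth fitting are omitted:

```python
import math

def burg(x, order):
    """Burg's recursion: AR coefficients for the maximum entropy spectrum."""
    n = len(x)
    a = [1.0]
    f = list(x)   # forward prediction errors
    b = list(x)   # backward prediction errors
    for m in range(1, order + 1):
        num = sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = -2.0 * num / den                       # reflection coefficient
        ap = a + [0.0]
        a = [ap[i] + k * ap[m - i] for i in range(m + 1)]
        for i in range(n - 1, m - 1, -1):          # update errors in place
            fo = f[i]
            f[i] = fo + k * b[i - 1]
            b[i] = b[i - 1] + k * fo
    return a

def me_spectrum(a, w):
    """Unnormalized maximum entropy power spectrum at angular frequency w."""
    re = sum(c * math.cos(j * w) for j, c in enumerate(a))
    im = sum(-c * math.sin(j * w) for j, c in enumerate(a))
    return 1.0 / (re * re + im * im)

# A noiseless sinusoid: an AR(2) model captures it almost exactly,
# with coefficients near [1, -2cos(0.6), 1].
x = [math.sin(0.6 * t) for t in range(400)]
a = burg(x, 2)
```

    The short-window resolution advantage cited in the abstract comes from this parametric form: unlike the periodogram, the all-poles spectrum can localize a peak from far fewer samples.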

  13. Maximum entropy approach for batch-arrival queue under N policy with an un-reliable server and single vacation

    NASA Astrophysics Data System (ADS)

    Ke, Jau-Chuan; Lin, Chuen-Horng

    2008-11-01

    We consider the M[x]/G/1 queueing system, in which the server operates under the N policy with a single vacation. As soon as the system becomes empty, the server leaves for a vacation of random length V. When he returns from the vacation and the system size is greater than or equal to a threshold value N, he starts to serve the waiting customers. If he finds fewer customers than N, he waits in the system until the system size reaches or exceeds N. The server is subject to breakdowns according to a Poisson process and his repair time obeys an arbitrary distribution. We use the maximum entropy principle to derive approximate formulas for the steady-state probability distributions of the queue length. We perform a comparative analysis between the approximate results and established exact results for various batch size, vacation time, service time and repair time distributions. We demonstrate that the maximum entropy approach is efficient enough for practical purposes and is a feasible method for approximating the solution of complex queueing systems.
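    As a baseline for the maximum entropy approach used here: when the only constraint is the mean queue length, entropy maximization over {0, 1, 2, ...} yields a geometric distribution. The sketch below shows that simplest case (the paper's actual constraint set is richer, and the mean value is hypothetical):

```python
def maxent_queue_pmf(mean_len, n_max):
    """Max-entropy pmf on {0, 1, 2, ...} given only the mean queue length:
    geometric with q = mean/(1 + mean), truncated at n_max for display."""
    q = mean_len / (1.0 + mean_len)
    return [(1.0 - q) * q ** n for n in range(n_max + 1)]

# Hypothetical mean queue length of 3 customers.
pmf = maxent_queue_pmf(3.0, 200)
```

    Adding further constraints (server state, vacation indicators) changes the exponential family but not the overall recipe.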

  14. Estimating Prior Model Probabilities Using an Entropy Principle

    NASA Astrophysics Data System (ADS)

    Ye, M.; Meyer, P. D.; Neuman, S. P.; Pohlmann, K.

    2004-12-01

    Considering conceptual model uncertainty is an important process in environmental uncertainty/risk analyses. Bayesian Model Averaging (BMA) (Hoeting et al., 1999) and its Maximum Likelihood version MLBMA (Neuman, 2003) jointly assess predictive uncertainty of competing alternative models to avoid bias and underestimation of uncertainty caused by relying on one single model. These methods provide the posterior distribution (or, equivalently, leading moments) of quantities of interest for decision-making. One important step of these methods is to specify prior probabilities of alternative models for the calculation of posterior model probabilities. This problem, however, has not been satisfactorily resolved, and equally likely prior model probabilities are usually accepted as a neutral choice. Ye et al. (2004) have shown that whereas using equally likely prior model probabilities has led to acceptable geostatistical estimates of log air permeability data from fractured unsaturated tuff at the Apache Leap Research Site (ALRS) in Arizona, identifying more accurate prior probabilities can improve these estimates. In this paper we present a new methodology to evaluate prior model probabilities by maximizing Shannon's entropy with restrictions postulated a priori based on model plausibility relationships. It yields optimum prior model probabilities conditional on prior information used to postulate the restrictions. The restrictions and corresponding prior probabilities can be modified as more information becomes available. The proposed method is relatively easy to use in practice as it is generally less difficult for experts to postulate relationships between models than to specify numerical prior model probability values. Log score, mean square prediction error (MSPE) and mean absolute predictive error (MAPE) criteria consistently show that applying our new method to the ALRS data reduces geostatistical estimation errors provided relationships between models are
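    A toy version of the idea — maximize Shannon entropy over a small model set subject to a postulated plausibility relationship — can be written as a brute-force search over the simplex. The three-model setup and the "at least twice as plausible" restriction below are illustrative assumptions, not the ALRS configuration:

```python
import math

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

def maxent_prior(constraint, step=0.005):
    """Grid-search the 3-model simplex for the maximum entropy prior
    satisfying a postulated plausibility relationship."""
    best, best_h = None, -1.0
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            p = (i * step, j * step, 1.0 - (i + j) * step)
            if constraint(p):
                h = entropy(p)
                if h > best_h:
                    best, best_h = p, h
    return best

# Postulated relationship: model 1 is at least twice as plausible as model 2.
prior = maxent_prior(lambda p: p[0] >= 2.0 * p[1])
```

    The uniform prior violates the restriction, so the optimum sits on the boundary p1 = 2·p2; analytically p1 = 2/(3 + 4^(1/3)) ≈ 0.436, which the grid search recovers.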

  15. Entropy model of dissipative structure on corporate social responsibility

    NASA Astrophysics Data System (ADS)

    Li, Zuozhi; Jiang, Jie

    2017-06-01

    Enterprises are prompted to fulfill social responsibility requirements by their internal and external environments. In this complex system, some studies suggest that firms exhibit orderly or chaotic entropy exchange behavior. Based on the theory of dissipative structure, this paper constructs an entropy index system for corporate social responsibility (CSR) and explores the dissipative structure of CSR through the Brusselator model criterion. Drawing on listed equipment-manufacturing companies, the research shows that CSR provides a positive incentive for negative entropy and promotes the stability of the dissipative structure. In short, the dissipative structure of CSR has a positive impact on the interests of stakeholders and on corporate social image.

  16. Maximum Entropy and the Inference of Pattern and Dynamics in Ecology

    NASA Astrophysics Data System (ADS)

    Harte, John

    Constrained maximization of information entropy yields least biased probability distributions. From physics to economics, from forensics to medicine, this powerful inference method has enriched science. Here I apply this method to ecology, using constraints derived from ratios of ecological state variables, and infer functional forms for the ecological metrics describing patterns in the abundance, distribution, and energetics of species. I show that a static version of the theory describes remarkably well observed patterns in quasi-steady-state ecosystems across a wide range of habitats, spatial scales, and taxonomic groups. A systematic pattern of failure is observed, however, for ecosystems either losing species following disturbance or diversifying in evolutionary time; I show that this problem may be remedied with a stochastic-dynamic extension of the theory.
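    The basic inference step behind such constrained maximization — find the least-biased distribution with a prescribed mean by solving for the Lagrange multiplier — can be sketched generically with bisection. The state space and mean value below are hypothetical, not taken from any particular ecosystem:

```python
import math

def maxent_abundance(n_max, mean_n):
    """Least-biased distribution over n = 1..n_max given only the mean:
    p(n) ∝ exp(-lam*n), with lam found by bisection."""
    ns = list(range(1, n_max + 1))
    def pmf(lam):
        e = [-lam * n for n in ns]
        top = max(e)                      # shift exponents to avoid overflow
        w = [math.exp(v - top) for v in e]
        z = sum(w)
        return [wi / z for wi in w]
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mean = sum(n * p for n, p in zip(ns, pmf(mid)))
        if mean > mean_n:
            lo = mid                      # mean too large: steepen the decay
        else:
            hi = mid
    return pmf(0.5 * (lo + hi))

# Hypothetical community: up to 50 individuals per species, mean abundance 3.
p = maxent_abundance(50, 3.0)
```

    In the ecological theory the constraints come from ratios of state variables (individuals per species, energy per individual), but each reduces to this one-multiplier pattern.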

  17. Maximum caliber inference and the stochastic Ising model

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo; Ali, Sean Alan

    2016-11-01

    We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.
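    The Glauber rule that the analysis recovers in the high-temperature limit is easy to state in code: a randomly chosen spin flips with probability (1/2)(1 − s_i tanh(β h_i)), where h_i is the local field. A minimal 1D-ring sketch (lattice size and temperature hypothetical):

```python
import math, random

def glauber_flip_prob(s_i, h_i, beta):
    """Glauber rule: probability that spin s_i flips in local field h_i."""
    return 0.5 * (1.0 - s_i * math.tanh(beta * h_i))

def glauber_step(s, beta, J=1.0, rng=random):
    """One single-site Glauber update on a 1D Ising ring."""
    n = len(s)
    i = rng.randrange(n)
    h = J * (s[(i - 1) % n] + s[(i + 1) % n])
    if rng.random() < glauber_flip_prob(s[i], h, beta):
        s[i] = -s[i]

# Hypothetical short run on a 10-spin ring at moderate temperature.
rng = random.Random(0)
spins = [1] * 10
for _ in range(200):
    glauber_step(spins, beta=0.5, rng=rng)
```

    At β = 0 the rule flips with probability 1/2 regardless of the field, which is the infinite-temperature limit where the maximum caliber expansion makes contact with it.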

  18. Maximum caliber inference and the stochastic Ising model.

    PubMed

    Cafaro, Carlo; Ali, Sean Alan

    2016-11-01

    We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.

  19. Entropy Based Modelling for Estimating Demographic Trends

    PubMed Central

    Kuo, Shyh-Hao; Xu, Hai-Yan; Hu, Nan; Zhao, Guangshe; Monterola, Christopher

    2015-01-01

    In this paper, an entropy-based method is proposed to forecast the demographical changes of countries. We formulate the estimation of future demographical profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution is increasing in time. The procedure of the proposed method involves three stages, namely: 1) Prediction of the age distribution of a country’s population based on an “age-structured population model”; 2) Estimation of the age distribution of each individual household size with an entropy-based formulation based on an “individual household size model”; and 3) Estimation of the number of each household size based on a “total household size model”. The last stage is achieved by projecting the age distribution of the country’s population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated by feeding in real-world data, and it is general and versatile enough to be extended to other time dependent demographic variables. PMID:26382594
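    The anchoring assumption — that the entropy of the age distribution increases in time — is cheap to check on data. A minimal sketch with hypothetical age pyramids:

```python
import math

def dist_entropy(counts):
    """Shannon entropy (nats) of a histogram after normalization."""
    tot = float(sum(counts))
    return -sum(c / tot * math.log(c / tot) for c in counts if c > 0)

# Hypothetical age pyramids (counts per age band): a young population
# versus the older, flatter pyramid typical of a later census.
young = [40, 30, 20, 8, 2]
older = [22, 21, 20, 19, 18]
```

    An aging population flattens its pyramid toward uniformity, which is exactly the direction of increasing entropy the constrained optimization exploits.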

  20. Irreversible entropy model for damage diagnosis in resistors

    SciTech Connect

    Cuadras, Angel; Crisóstomo, Javier; Ovejas, Victoria J.; Quilez, Marcos

    2015-10-28

    We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage because they are essentially positive by virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4–8 W, which is well above the 0.25 W nominal power to initiate failure. Entropy was inferred from the added power and temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
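    The inference step described — entropy from added power and temperature evolution — amounts to accumulating dS = P dt / T over the measured traces. A minimal discretization (sample values hypothetical):

```python
def irreversible_entropy(power_w, temp_k, dt):
    """Accumulate dS = P*dt/T (in J/K) over sampled power/temperature traces."""
    return sum(p * dt / t for p, t in zip(power_w, temp_k))

# Hypothetical trace: a constant 4 W surge for 10 s at a steady 350 K.
dS = irreversible_entropy([4.0] * 100, [350.0] * 100, dt=0.1)
```

    The second law guarantees the accumulated value is non-negative, which is what makes it a monotone damage indicator where resistance is not.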

  1. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  2. Poisson-gap sampling and forward maximum entropy reconstruction for enhancing the resolution and sensitivity of protein NMR data.

    PubMed

    Hyberts, Sven G; Takeuchi, Koh; Wagner, Gerhard

    2010-02-24

    The Fourier transform has been the gold standard for transforming data from the time domain to the frequency domain in many spectroscopic methods, including NMR spectroscopy. While reliable, it has the drawback that it requires a grid of uniformly sampled data points, which is not efficient for decaying signals, and it also suffers from artifacts when dealing with nondecaying signals. Over several decades, many alternative sampling and transformation schemes have been proposed. Their common problem is that relative signal amplitudes are not well-preserved. Here we demonstrate the superior performance of a sine-weighted Poisson-gap distribution sparse-sampling scheme combined with forward maximum entropy (FM) reconstruction. While the relative signal amplitudes are well-preserved, we also find that the signal-to-noise ratio is enhanced up to 4-fold per unit of data acquisition time relative to traditional linear sampling.
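    A generic sine-weighted Poisson-gap schedule generator can be sketched as follows. This is an illustrative reconstruction of the idea (Poisson-distributed gaps, sine-shaped weighting so gaps grow toward the decayed tail, rescaling until the target point count is hit), not the authors' published implementation:

```python
import math, random

def _poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_gap(n_total, n_keep, seed=1):
    """Sine-weighted Poisson-gap schedule: gaps between kept points are
    Poisson-distributed with mean growing toward the tail; the scale 'adj'
    is retuned until exactly n_keep of n_total points are selected."""
    rng = random.Random(seed)
    adj = float(n_total) / n_keep
    for _ in range(10000):
        trial = random.Random(rng.randrange(1 << 30))
        picks, i, k = [], 0, 0
        while i < n_total:
            picks.append(i)
            i += 1
            lam = (adj - 1.0) * math.sin((k + 0.5) / n_keep * math.pi / 2.0)
            i += _poisson(max(lam, 0.0), trial)
            k += 1
        if len(picks) == n_keep:
            return picks
        adj *= len(picks) / float(n_keep)   # too many picks -> widen gaps
    raise RuntimeError("schedule did not converge")

schedule = poisson_gap(256, 64, seed=7)
```

    Dense sampling early (where the FID is strong) and sparse sampling late is what buys the reported sensitivity gain per unit acquisition time.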

  3. Maximum entropy principle for predicting response to multiple-drug exposure in bacteria and human cancer cells

    NASA Astrophysics Data System (ADS)

    Wood, Kevin; Nishida, Satoshi; Sontag, Eduardo; Cluzel, Philippe

    2012-02-01

    Drugs are commonly used in combinations larger than two for treating infectious disease. However, it is generally impossible to infer the net effect of a multi-drug combination on cell growth directly from the effects of individual drugs. We combined experiments with maximum entropy methods to develop a mechanism-independent framework for calculating the response of both bacteria and human cancer cells to a large variety of drug combinations composed of anti-microbial or anti-cancer drugs. We experimentally show that the cellular responses to drug pairs are sufficient to infer the effects of larger drug combinations in the gram-negative bacterium Escherichia coli, the gram-positive bacterium Staphylococcus aureus, and also human breast cancer and melanoma cell lines. Remarkably, the accurate predictions of this framework suggest that the multi-drug response obeys statistical rather than chemical laws for combinations larger than two. Consequently, these findings offer a new strategy for the rational design of therapies using large drug combinations.

  4. Morphology from the maximum entropy principle: domains in a phase ordering system and a crack pattern in broken glass.

    PubMed

    Fiałkowski, Marcin; Hołyst, Robert

    2002-05-01

    The maximum entropy principle is applied to study the morphology of a phase ordering two-dimensional system below the critical point. The distribution of domain area A is a function of the ratio of area to contour length L, R = A/L(A), and is given by exp(−λR^μ) with exponent μ = 2, which follows from the Lifshitz-Cahn-Allen theory. A and L are linked through the relation L ~ A^ν. We find two types of domain in the system: large domains of elongated shape (ν = 0.88) and small domains of circular shape (ν = 0.5). A crack pattern in broken glass belongs to the same morphology class, with μ = 1 and ν = 0.72.

  5. A generalized model on the evaluation of entropy and entropy of mixing of liquid Na-Sn alloys

    NASA Astrophysics Data System (ADS)

    Satpathy, Alok; Sengupta, Saumendu

    2017-01-01

    The recently proposed theory of the entropy of mixing of structurally inhomogeneous binary liquid alloys of alkali metals and group-IV elements is applied successfully to the liquid Na-Sn alloy. This alloy exhibits chemical short-range ordering (CSRO), i.e. partially salt-like characteristics, due to strong compound-forming tendencies in the solid as well as in the liquid state. The generalized model for the entropy of a charged-hard-spheres mixture of arbitrary charge and size is therefore employed to evaluate the entropies of mixing, treating the sample as a partial charge-transfer system. The computed entropies of mixing are in excellent agreement with the experimental data.

  6. Entanglement entropy of Wilson loops: Holography and matrix models

    NASA Astrophysics Data System (ADS)

    Gentle, Simon A.; Gutperle, Michael

    2014-09-01

    A half-Bogomol'nyi-Prasad-Sommerfeld circular Wilson loop in N=4 SU(N) supersymmetric Yang-Mills theory in an arbitrary representation is described by a Gaussian matrix model with a particular insertion. The additional entanglement entropy of a spherical region in the presence of such a loop was recently computed by Lewkowycz and Maldacena using exact matrix model results. In this paper we utilize the supergravity solutions that are dual to such Wilson loops in a representation with order N² boxes to calculate this entropy holographically. Employing the matrix model results of Gomis, Matsuura, Okuda and Trancanelli we express this holographic entanglement entropy in a form that can be compared with the calculation of Lewkowycz and Maldacena. We find complete agreement between the matrix model and holographic calculations.

  7. Coupling entropy of co-processing model on social networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhanli

    2015-08-01

    Coupling entropy of co-processing model on social networks is investigated in this paper. As one crucial factor to determine the processing ability of nodes, the information flow with potential time lag is modeled by co-processing diffusion which couples the continuous time processing and the discrete diffusing dynamics. Exact results on master equation and stationary state are achieved to disclose the formation. In order to understand the evolution of the co-processing and design the optimal routing strategy according to the maximal entropic diffusion on networks, we propose the coupling entropy comprehending the structural characteristics and information propagation on social network. Based on the analysis of the co-processing model, we analyze the coupling impact of the structural factor and information propagating factor on the coupling entropy, where the analytical results fit well with the numerical ones on scale-free social networks.

  8. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys.

    PubMed

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A W; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely applicable deformation mechanism is useful for deformation control and alloy design.

  9. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    NASA Astrophysics Data System (ADS)

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A. W.; Leblanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K.; Dahmen, Karin A.

    2015-11-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely applicable deformation mechanism is useful for deformation control and alloy design.

  10. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  11. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  12. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
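    The entropy growth in such random-exchange markets is easy to observe numerically. The sketch below is our own illustration of an Ulam-type binary-collision dynamics (not the paper's proof): pairs of agents repeatedly split their combined "energy" at a uniformly random fraction, and the Shannon entropy of the empirical distribution grows as the system relaxes from a delta distribution toward its exponential fixed point. All function names and parameter values are ours.

    ```python
    import math
    import random

    def shannon_entropy(values, bins=30, vmax=5.0):
        """Shannon entropy of the empirical distribution over fixed bins."""
        counts = [0] * bins
        for v in values:
            idx = min(int(v / vmax * bins), bins - 1)
            counts[idx] += 1
        n = len(values)
        return -sum(c / n * math.log(c / n) for c in counts if c > 0)

    def random_exchange_step(energies, rng):
        """One sweep of binary 'collisions': each random pair splits its
        total energy at a uniformly random fraction (Ulam redistribution)."""
        idx = list(range(len(energies)))
        rng.shuffle(idx)
        for a, b in zip(idx[::2], idx[1::2]):
            total = energies[a] + energies[b]
            eps = rng.random()
            energies[a], energies[b] = eps * total, (1 - eps) * total

    rng = random.Random(42)
    energies = [1.0] * 10000   # delta distribution: minimal entropy
    h0 = shannon_entropy(energies)
    for _ in range(50):
        random_exchange_step(energies, rng)
    h1 = shannon_entropy(energies)
    # Total energy is conserved while the entropy of the distribution grows
    # monotonically toward the exponential (Boltzmann-like) fixed point.
    print(h0, h1)
    ```

    The coarse graining the abstract describes corresponds here to tracking only the one-particle histogram after each collision, which is exactly what `shannon_entropy` measures.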

  13. An improved model for the transit entropy of monatomic liquids

    SciTech Connect

    Wallace, Duane C; Chisolm, Eric D; Bock, Nicolas

    2009-01-01

    In the original formulation of V-T theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This model suffers two deficiencies: (a) it does not account for experimental entropy differences of ±2% among elemental liquids, and (b) it implies a value of zero for the transit contribution to internal energy. The purpose of this paper is to correct these deficiencies. To this end, the V-T equation for entropy is fitted to an overall accuracy of ±0.1% to the available experimental high-temperature entropy data for elemental liquids. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution S_vib(T/θ_0), where T is temperature and θ_0 is the vibrational characteristic temperature, and (b) the transit contribution S_tr(T/θ_tr), where θ_tr is a scaling temperature for each liquid. The appearance of a common functional form of S_tr for all the liquids studied is a property of the experimental data, when analyzed via the V-T formula. The resulting S_tr implies the correct transit contribution to internal energy. The theoretical entropy of melting is derived, in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ_0, based on density functional theory, is reported for liquid Na and Cu. Comparison of these calculations with the above analysis of experimental entropy data provides verification of V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  14. Entanglement entropy of fermionic quadratic band touching model

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Cho, Gil Young; Fradkin, Eduardo

    2014-03-01

    The entanglement entropy has been proven to be a useful tool to diagnose and characterize strongly correlated systems such as topologically ordered phases and some critical points. Motivated by these successes, we study the entanglement entropy (EE) of a fermionic quadratic band touching model in (2+1) dimensions. This is a fermionic "spinor" model with a finite DOS at k=0 and infinitesimal instabilities. The calculation of two-point correlation functions shows that a Dirac fermion model and the quadratic band touching model both have asymptotically identical behavior in the long distance limit. This implies that the EE for the quadratic band touching model also obeys an area law, as for the Dirac fermion. This is in contradiction with the expectation that dense Fermi systems with a finite DOS should exhibit L log L violations of the area law of entanglement entropy (L is the length of the boundary of the sub-region), by analogy with the Fermi surface. We performed numerical calculations of entanglement entropies on a torus of the lattice models for the quadratic band touching point and the Dirac fermion to confirm this. The numerical calculation shows that the EE in both cases satisfies the area law. We further verify this result by an analytic calculation on the torus geometry. This work was supported in part by NSF grant DMR-1064319.

  15. Thermospheric density model biases at sunspot maximum

    NASA Astrophysics Data System (ADS)

    Pardini, Carmen; Moe, Kenneth; Anselmo, Luciano

    A previous study (Pardini C., Anselmo L., Moe K., Moe M.M., Drag and energy accommodation coefficients during sunspot maximum, Adv. Space Res., 2009, doi:10.1016/j.asr.2009.08.034), including ten satellites with altitudes between 200 and 630 km, has yielded values for the energy accommodation coefficient as well as for the physical drag coefficient as a function of height during solar maximum conditions. The results are consistent with the altitude and solar cycle variation of atomic oxygen, which is known to be adsorbed on satellite surfaces, affecting both the energy accommodation and angular distribution of the reemitted molecules. Taking advantage of these results, an investigation of thermospheric density model biases at sunspot maximum became possible using the recently upgraded CDFIT software code. Specifically developed at ISTI/CNR, CDFIT is used to fit the observed satellite semi-major axis decay. All the relevant orbital perturbations are considered and several atmospheric density models have been implemented over the years, including JR-71, MSISE-90, NRLMSISE-00, GOST2004 and JB2006. For this analysis we reused the satellites Cosmos 2265 and Cosmos 2332 (altitude: 275 km), SNOE (altitude: 480 km), and Clementine (altitude: 630 km), spanning the last solar cycle maximum (October 1999 - January 2003). For each satellite, and for each of the above mentioned atmospheric density models, the fitted drag coefficient was obtained with CDFIT, using the observed orbital decay, and then compared with the corresponding physical drag coefficient estimated in the previous study (Pardini et al., 2009). It was consequently possible to derive the average density biases of the thermospheric models during the considered time span. The average results obtained for the last sunspot maximum can be summarized as follows (the sign "+" means that the atmospheric density is overestimated by the model, while the sign "-" means that the atmospheric density is underestimated

  16. Existence of the Entropy Solution for a Viscoelastic Model

    NASA Astrophysics Data System (ADS)

    Zhu, Changjiang

    1998-06-01

    In this paper, we consider the Cauchy problem for a viscoelastic model with relaxation u_t + σ_x = 0, (σ − f(u))_t + (1/δ)(σ − μf(u)) = 0, with discontinuous, large initial data, where 0 < μ < 1 and δ > 0 are constants. When the system is nonstrictly hyperbolic, under the additional assumption v_{0x} ∈ L^∞, the system is reduced to an inhomogeneous scalar balance law by employing the special form of the system itself. After introducing a definition of entropy solutions to the system, we prove the existence, uniqueness, and continuous dependence of the global entropy solution for the system. When the system is strictly hyperbolic, some special entropy pairs of the Lax type are constructed, in which the progression terms are functions of a single variable, and the necessary estimates for the major terms are obtained by using the theory of singular perturbation of ordinary differential equations. The special entropy pairs are used to prove the existence of global entropy solutions for the corresponding Cauchy problem by applying the method of compensated compactness.

  17. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.

  18. Entanglement entropies of the quarter filled Hubbard model

    NASA Astrophysics Data System (ADS)

    Calabrese, Pasquale; Essler, Fabian H. L.; Läuchli, Andreas M.

    2014-09-01

    We study Rényi and von Neumann entanglement entropies in the ground state of the one dimensional quarter-filled Hubbard model with periodic boundary conditions. We show that they exhibit an unexpected dependence on system size: for L = 4mod 8 the results are in agreement with expectations based on conformal field theory, while for L = 0mod 8 additional contributions arise. We show that these can be understood in terms of a ‘shell-filling’ effect and we develop a conformal field theory approach to calculate the additional contributions to the entropies. These analytic results are found to be in excellent agreement with density matrix renormalization group computations for weak Hubbard interactions. We argue that for larger interactions the presence of a marginal irrelevant operator in the spin sector strongly affects the entropies at the finite sizes accessible numerically and we present an effective way to take them into account.

  19. Fluctuations and entropy in models of quantum optical resonance

    NASA Astrophysics Data System (ADS)

    Phoenix, S. J. D.; Knight, P. L.

    1988-09-01

    We use variances, entropy, and the Shannon entropy to analyse the fluctuations and quantum evolution of various simple models of quantum optical resonance. We discuss at length the properties of the single-mode radiation field coupled to a single two-level atom, and then extend our analysis to describe the micromaser in which a cavity mode is repeatedly pumped by a succession of atoms passing through the cavity. We also discuss the fluctuations in the single-mode laser theory of Scully and Lamb.

  20. Relative entropy and covariance type constraints yielding ARMA models

    NASA Astrophysics Data System (ADS)

    Girardin, Valérie

    2001-05-01

    We consider zero mean weakly stationary multidimensional scalar time series. We determine the form of the spectral density which minimizes a relative entropy under trigonometric moment constraints, such as covariance, impulse response or cepstral coefficients. It often yields autoregressive moving average models, giving one more justification for their widespread use.
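    A classical special case of this result: maximizing entropy subject only to a finite set of autocovariance constraints yields a pure AR model whose coefficients solve the Yule-Walker equations. The sketch below is our own illustration (not the paper's method), recovering the maximum-entropy AR spectrum parameters from sample autocovariances via the Levinson-Durbin recursion.

    ```python
    import random

    def sample_autocov(x, maxlag):
        """Biased sample autocovariances r[0..maxlag]."""
        n, mean = len(x), sum(x) / len(x)
        xc = [v - mean for v in x]
        return [sum(xc[t] * xc[t + k] for t in range(n - k)) / n
                for k in range(maxlag + 1)]

    def levinson_durbin(r, order):
        """Solve the Yule-Walker equations for AR coefficients.
        The resulting AR spectrum is the maximum-entropy spectrum
        consistent with the autocovariances r[0..order]."""
        a = [0.0] * (order + 1)
        e = r[0]
        for k in range(1, order + 1):
            acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
            kappa = acc / e
            new_a = a[:]
            new_a[k] = kappa
            for j in range(1, k):
                new_a[j] = a[j] - kappa * a[k - j]
            a, e = new_a, e * (1 - kappa * kappa)
        return a[1:], e   # AR coefficients and innovation variance

    # Simulate an AR(1) process x_t = 0.6 x_{t-1} + noise, then recover
    # the coefficient from the first two sample autocovariances.
    rng = random.Random(0)
    x, prev = [], 0.0
    for _ in range(20000):
        prev = 0.6 * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    coeffs, var = levinson_durbin(sample_autocov(x, 1), 1)
    print(coeffs)   # close to [0.6]
    ```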

  1. Maximum-entropy large-scale structures of Boolean networks optimized for criticality

    NASA Astrophysics Data System (ADS)

    Möller, Marco; Peixoto, Tiago P.

    2015-04-01

    We construct statistical ensembles of modular Boolean networks that are constrained to lie at the critical line between frozen and chaotic dynamic regimes. The ensembles are maximally random given the imposed constraints, and thus represent null models of critical networks. By varying the network density and the entropic cost associated with biased Boolean functions, the ensembles undergo several phase transitions. The observed structures range from fully random to several ordered ones, including a prominent core-periphery-like structure, and an 'attenuated' two-group structure, where the network is divided into two groups of nodes, and one of them has Boolean functions with very low sensitivity. This shows that such simple large-scale structures are the most likely to occur when optimizing for criticality, in the absence of any other constraint or competing optimization criteria.

  2. Factor Analysis of Wildfire and Risk Area Estimation in Korean Peninsula Using Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Kim, Teayeon; Lim, Chul-Hee; Lee, Woo-Kyun; Kim, YouSeung; Heo, Seongbong; Cha, Sung Eun; Kim, Seajin

    2016-04-01

    The number of wildfires, and the accompanying human injuries and physical damage, has been increased by frequent drought. In particular, Korea experienced severe drought this year, and a number of wildfires resulted. We used the MaxEnt model to identify the major environmental factors for wildfire and used RCP scenarios to predict future wildfire risk areas. In this study, environmental variables including topographic, anthropogenic, and meteorological data were used to identify the contributing variables of wildfire in South and North Korea, and the two were compared. As for occurrence data, we used MODIS fire data after verification. In North Korea, the AUC (Area Under the ROC Curve) value was 0.890, which was high enough to explain the distribution of wildfires. South Korea had a lower AUC value than North Korea and a high mean standard deviation, which means there is little prospect of predicting fire with the same environmental variables. The AUC value in South Korea is expected to improve with environmental variables such as distance from trails and wildfire management systems. For instance, fire occurring within the DMZ (demilitarized zone, a 4-km boundary along the 38th parallel) has a decisive influence on fire risk areas in South Korea, but not in North Korea. The contribution of each environmental variable was more evenly distributed in North Korea than in South Korea. This means South Korea is dependent on a few particular variables, while North Korea can be explained by a number of variables with evenly distributed contributions. Although the AUC value and standard deviation for South Korea were not high enough to predict wildfire, the result carries a significant meaning: by examining the response curves, it identifies the scientific and social factors in which certain environmental variables carry great weight. We also made a future wildfire risk area map for the whole Korean peninsula using the same model. In all four RCP scenarios, it was found that severe climate change would lead wildfire risk areas to move north. Especially North
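    The AUC used above to evaluate the MaxEnt model can be computed directly from presence and background scores as a rank (Mann-Whitney) statistic: the probability that a randomly chosen presence point outscores a randomly chosen background point. A minimal sketch with made-up scores (the function name and numbers are ours):

    ```python
    def auc(presence_scores, background_scores):
        """Area under the ROC curve via the rank statistic:
        P(presence score > background score), ties counting one half."""
        wins = 0.0
        for p in presence_scores:
            for b in background_scores:
                if p > b:
                    wins += 1.0
                elif p == b:
                    wins += 0.5
        return wins / (len(presence_scores) * len(background_scores))

    # Perfect separation gives AUC = 1.0; indistinguishable scores give 0.5,
    # which is why an AUC of 0.890 indicates good discrimination.
    print(auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))   # 1.0
    print(auc([0.5, 0.5], [0.5, 0.5]))             # 0.5 (all ties)
    ```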

  3. Density maximum and polarizable models of water.

    PubMed

    Kiss, Péter T; Baranyai, András

    2012-08-28

    To estimate accurately the density of water over a wide range of temperatures, with a density maximum at 4 °C, is one of the most stringent tests of molecular models. The shape of the curve influences the ability to describe critical properties and to predict the freezing temperature. While it was demonstrated that with a proper parameter fit nonpolarizable models can approximate this behavior accurately, it is much more difficult to do this for polarizable models. We provide a short overview of ρ-T diagrams for existing models, then we give an explanation of this difficulty. We present a version of the BK model [A. Baranyai and P. T. Kiss, J. Chem. Phys. 133, 144109 (2010); and ibid. 135, 234110 (2011)] which is capable of predicting the density of water over a wide range of temperatures. The BK model uses the charge-on-spring method with three Gaussian charges. Since the experimental dipole moment and the geometry are fixed, and the quadrupole moment is approximated by a least-mean-square procedure, the parameters of the repulsive and dispersive attraction forces remain as free tools to match experimental properties. Relying on a simplified but plausible justification, the new version of the model uses repulsion and attraction as functions of the induced dipole moment of the molecule. The repulsive force increases, while the attractive force decreases, with the size of the molecular dipole moment. At the same time, dipole-moment-dependent dispersion forces take part in the polarization of the molecule. This scheme iterates well and, in addition to a reasonable density-temperature function, creates dipole distributions with an accurate estimate of the dielectric constant of the liquid.

  4. A Unified Theory of Turbulence: Maximum Entropy Increase Due To Turbulent Dissipation In Fluid Systems From Laboratory-scale Turbulence To Global-scale Circulations

    NASA Astrophysics Data System (ADS)

    Ozawa, Hisashi; Shimokawa, Shinya; Sakuma, Hirofumi

    Turbulence is ubiquitous in nature, yet remains an enigma in many respects. Here we investigate dissipative properties of turbulence so as to find out a statistical "law" of turbulence. Two general expressions are derived for a rate of entropy increase due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is found with these equations that phenomenological properties of turbulence such as Malkus's suggestion on maximum heat transport in thermal convection as well as Busse's suggestion on maximum momentum transport in shear turbulence can rigorously be explained by a unique state in which the rate of entropy increase due to the turbulent dissipation is at a maximum (dS/dt = Max.). It is also shown that the same state corresponds to the maximum entropy climate suggested by Paltridge. The tendency to increase the rate of entropy increase has also been confirmed by our recent GCM experiments. These results suggest the existence of a universal law that manifests itself in the long-term statistics of turbulent fluid systems from laboratory-scale turbulence to planetary-scale circulations. Ref.) Ozawa, H., Shimokawa, S., and Sakuma, H., Phys. Rev. E 64, 026303, 2001.

  5. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    PubMed

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series. Copyright © 2015 Elsevier Ltd. All rights reserved.
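    The similarity (sample) entropy underlying these descriptors counts template matches of length m and m+1 within a tolerance r. The sketch below is a minimal single-scale illustration of our own (the paper's n-order fuzzy variant additionally replaces the hard threshold with a fuzzy membership function and varies the pattern size m): a predictable signal scores lower than an irregular one.

    ```python
    import math
    import random

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy: -ln(A/B), where B counts pairs of m-point
        templates within tolerance r (Chebyshev distance) and A counts
        the same for (m+1)-point templates."""
        def count_matches(length):
            templates = [x[i:i + length] for i in range(len(x) - length + 1)]
            hits = 0
            for i in range(len(templates)):
                for j in range(i + 1, len(templates)):
                    if max(abs(a - b) for a, b in
                           zip(templates[i], templates[j])) <= r:
                        hits += 1
            return hits
        b, a = count_matches(m), count_matches(m + 1)
        return float('inf') if a == 0 else -math.log(a / b)

    rng = random.Random(1)
    regular = [math.sin(0.5 * i) for i in range(200)]    # predictable signal
    noise = [rng.uniform(-1, 1) for _ in range(200)]     # irregular signal
    print(sample_entropy(regular), sample_entropy(noise))  # regular < noise
    ```

    Discrimination of normal from abnormal fetal heart rate signals in the paper rests on exactly this kind of complexity gap, made more stable by the fuzzy n-order formulation.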

  6. Stable discretization of the Boltzmann equation based on spherical harmonics, box integration, and a maximum entropy dissipation principle

    NASA Astrophysics Data System (ADS)

    Jungemann, C.; Pham, A. T.; Meinerzhagen, B.; Ringhofer, C.; Bollhöfer, M.

    2006-07-01

    The Boltzmann equation for transport in semiconductors is projected onto spherical harmonics in such a way that the resultant balance equations for the coefficients of the distribution function times the generalized density of states can be discretized over energy and real spaces by box integration. This ensures exact current continuity for the discrete equations. Spurious oscillations of the distribution function are suppressed by stabilization based on a maximum entropy dissipation principle avoiding the H transformation. The derived formulation can be used on arbitrary grids as long as box integration is possible. The approach works not only with analytical bands but also with full band structures in the case of holes. Results are presented for holes in bulk silicon based on a full band structure and electrons in a Si NPN bipolar junction transistor. The convergence of the spherical harmonics expansion is shown for a device, and it is found that the quasiballistic transport in nanoscale devices requires an expansion of considerably higher order than the usual first one. The stability of the discretization is demonstrated for a range of grid spacings in the real space and bias points which produce huge gradients in the electron density and electric field. It is shown that the resultant large linear system of equations can be solved in a memory efficient way by the numerically robust package ILUPACK.

  7. Electron density topology of high-pressure Ba8Si46 from a combined Rietveld and maximum-entropy analysis

    NASA Astrophysics Data System (ADS)

    Tse, John S.; Flacau, Roxana; Desgreniers, Serge; Iitaka, Toshiaki; Jiang, J. Z.

    2007-11-01

    Under pressure, Ba8Si46 is found to undergo an isostructural transition, as observed by Raman spectroscopy, extended x-ray-absorption fine structure, and x-ray diffraction. Rietveld analysis of the x-ray diffraction data shows a homothetic contraction of the host lattice after the structural transition at 17 GPa. Using the Rietveld and maximum-entropy methods, we have performed an analysis of high resolution x-ray diffraction patterns collected from ambient pressure to 30 GPa, obtained in a diamond anvil cell using He as a quasihydrostatic pressure transmitting medium. The results indicate unambiguously that the homothetic phase transition at about 17 GPa is due to an extensive rehybridization of the Si atoms leading to a transfer of valence electrons from the bonding to the interstitial region. Consequently, the Si-Si bonds are weakened substantially at high density, leading to an abrupt collapse of the unit cell volume without a change in crystalline structure. The transition pressure and the change in the chemical bonding are remarkably similar to those observed in elemental Si-V.

  8. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun-Gui-Guang region, Mainland China

    NASA Astrophysics Data System (ADS)

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi

    2016-11-01

    Precipitation interpolation has been a hot area of research for many years. It is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous region), Mainland China, was considered for spatial interpolation. A multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method was precise, incorporating both hard and soft data in the form of probability density functions. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.
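    Variogram fitting of the kind mentioned above starts from the empirical semivariogram, γ(h) = ½ · mean[(z(s_i) − z(s_j))²] over station pairs separated by roughly lag h. A minimal 1-bin-per-lag sketch of our own, with synthetic "stations" (all coordinates, values, and lags are made up):

    ```python
    import math
    import random

    def empirical_semivariogram(coords, values, lags, tol):
        """gamma(h) = 0.5 * mean squared value difference over station
        pairs whose separation distance lies within tol of lag h."""
        gamma = []
        for h in lags:
            sq, npairs = 0.0, 0
            for i in range(len(coords)):
                for j in range(i + 1, len(coords)):
                    d = math.dist(coords[i], coords[j])
                    if abs(d - h) <= tol:
                        sq += (values[i] - values[j]) ** 2
                        npairs += 1
            gamma.append(0.5 * sq / npairs if npairs else float('nan'))
        return gamma

    # Synthetic "precipitation" field with smooth spatial structure:
    # nearby stations are similar, so the semivariogram rises with lag.
    rng = random.Random(7)
    coords = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
    values = [math.sin(x) + math.cos(y) + rng.gauss(0, 0.1) for x, y in coords]
    gamma = empirical_semivariogram(coords, values, lags=[0.5, 1.5, 3.0], tol=0.5)
    print(gamma)   # increases with lag for this smooth field
    ```

    A parametric model (spherical, exponential, etc.) is then fitted to these empirical points; the cross-variogram uses products of differences of two different variables in place of the squared difference.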

  9. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    SciTech Connect

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A.W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly-equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  10. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    PubMed Central

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A. W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K.; Dahmen, Karin A.

    2015-01-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly-equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design. PMID:26593056

  11. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    DOE PAGES

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; ...

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  12. Probabilistic modelling of flood events using the entropy copula

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zheng, Qian

    2016-11-01

    The estimation of flood frequency is vital for flood control strategies and hydraulic structure design. Generating synthetic flood events according to the statistical properties of observations is one plausible method of analyzing flood frequency. Due to the statistical dependence among the flood event variables (i.e. the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, the copula method has been widely used for constructing multivariable dependence structures; however, the copula family must be chosen before application, and the choice process is sometimes rather subjective. The entropy copula, a new copula family employed in this research, offers a way to avoid this relatively subjective process by combining the theories of copula and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling of the flood events of two hydrological gauges, and a comparison of accuracy with the popular copulas was made. The Gibbs sampling technique was applied for trivariate flood event simulation in order to mitigate the calculation difficulties of extending directly to three dimensions. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.
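    For intuition on copula-based flood simulation in general (not the entropy copula specifically), a Gaussian-copula sketch can generate dependent flood peak and volume with chosen marginals. This is our own illustration; the marginal distributions, scales, and correlation value are all made up.

    ```python
    import math
    import random

    def std_normal_cdf(z):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def gaussian_copula_pair(rho, rng):
        """Draw (u, v): uniform marginals with Gaussian-copula dependence rho."""
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        w = rho * z1 + math.sqrt(1 - rho * rho) * z2
        return std_normal_cdf(z1), std_normal_cdf(w)

    def sample_flood_event(rho, rng, peak_scale=100.0, volume_scale=500.0):
        """Dependent exponential marginals for peak discharge and volume
        (hypothetical units and scales, for illustration only)."""
        u, v = gaussian_copula_pair(rho, rng)
        peak = -peak_scale * math.log(1.0 - u)      # inverse exponential CDF
        volume = -volume_scale * math.log(1.0 - v)
        return peak, volume

    rng = random.Random(3)
    events = [sample_flood_event(0.8, rng) for _ in range(5000)]
    peaks, volumes = zip(*events)
    # The dependence survives the marginal transforms: high peaks tend to
    # come with high volumes, as in observed flood records.
    ```

    The entropy copula of the paper replaces the Gaussian dependence structure with one derived from entropy maximization, and Gibbs sampling extends the draw to the trivariate peak-volume-duration case.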

  13. Does the soil's effective hydraulic conductivity adapt in order to obey the Maximum Entropy Production principle? A lab experiment

    NASA Astrophysics Data System (ADS)

    Westhoff, Martijn; Zehe, Erwin; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Dewals, Benjamin

    2015-04-01

    The Maximum Entropy Production (MEP) principle is a conjecture that a medium organizes itself such that maximum power is extracted from the gradient driving a flux (power being a flux times its driving gradient). This maximum power is also known as the Carnot limit. It has already been shown that the atmosphere operates close to this Carnot limit for heat transport from the Equator to the poles, and vertically, from the surface to the atmospheric boundary layer. To reach this state, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells (e.g. wind). The aim of this study is to test whether the soil's effective hydraulic conductivity likewise adapts itself so that it operates close to the Carnot limit. The key difference between atmosphere and soil is how each adapts its resistance. The soil's hydraulic conductivity changes either through weathering, which is a very slow process, or through the creation of preferential flow paths. In this study the latter process is simulated in a lab experiment focusing on preferential flow paths created by piping: the backward erosion of sand particles subject to a large pressure gradient. Since this is a relatively fast process, it is suitable for testing in the lab. In the lab setup a horizontal sand bed connects two reservoirs that both drain freely at a level high enough to keep the sand bed saturated. By adding water to only one reservoir, a horizontal pressure gradient is maintained. If the flow resistance is small, a large gradient develops, leading to piping. As pipes form, the effective flow resistance decreases; the flow through the sand bed increases and the pressure gradient decreases. At a certain point, the flow velocity is small enough to stop the pipes from growing any further. In this steady state, the effective flow resistance of
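The "maximum power from a gradient" idea can be illustrated with a matched-load toy model (our own sketch, not the authors' experiment): a flux driven by a fixed gradient through a fixed boundary resistance in series with an adaptable medium resistance extracts maximum power when the two resistances match.

```python
# Illustrative toy model (not from the paper): a flux J driven by a fixed
# gradient X0 through a fixed boundary resistance R_fixed in series with an
# adaptable medium resistance R. The power extracted by the medium,
# P = J^2 * R with J = X0 / (R_fixed + R), peaks at R == R_fixed -- the
# matched-load analogue of a medium operating at its power maximum.
X0, R_fixed = 1.0, 2.0
candidates = [0.01 * k for k in range(1, 1001)]   # R from 0.01 to 10.0
power = {R: (X0 / (R_fixed + R)) ** 2 * R for R in candidates}
R_opt = round(max(power, key=power.get), 2)
print(R_opt)   # -> 2.0
```

In the piping experiment the analogous question is whether the self-organized flow resistance of the sand bed settles near this power-maximizing value.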

  14. Inflation via logarithmic entropy-corrected holographic dark energy model

    NASA Astrophysics Data System (ADS)

    Darabi, F.; Felegary, F.; Setare, M. R.

    2016-12-01

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation with the holographic dark energy (HDE) model. In this comparison, the primordial scalar power spectrum in the LECHDE model is redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and Hubble parameter through one parameter of holographic dark energy and two new parameters of the logarithmic corrections.

  15. A thermodynamic interpretation of Budyko and L'vovich formulations of annual water balance: Proportionality Hypothesis and maximum entropy production

    NASA Astrophysics Data System (ADS)

    Wang, Dingbao; Zhao, Jianshi; Tang, Yin; Sivapalan, Murugesu

    2015-04-01

    The paper forms part of the search for a thermodynamic explanation for the empirical Budyko Curve, addressing a long-standing research question in hydrology. Here this issue is pursued by invoking the Proportionality Hypothesis underpinning the Soil Conservation Service (SCS) curve number method widely used for estimating direct runoff at the event scale. In this case, the Proportionality Hypothesis posits that the ratio of continuing abstraction to its potential value is equal to the ratio of direct runoff to its potential value. Recently, the validity of the Proportionality Hypothesis has been extended to the partitioning of precipitation into runoff and evaporation at the annual time scale as well. In this case, the Proportionality Hypothesis dictates that the ratio of continuing evaporation to its potential value is equal to the ratio of runoff to its potential value. The Budyko Curve could then be seen as the straightforward outcome of the application of the Proportionality Hypothesis to estimate mean annual water balance. In this paper, we go further and demonstrate that the Proportionality Hypothesis itself can be seen as a result of the application of the thermodynamic principle of Maximum Entropy Production (MEP). In this way, we demonstrate a possible thermodynamic basis for the Proportionality Hypothesis, and consequently for the Budyko Curve. As a further extension, the L'vovich formulation for the two-stage partitioning of annual precipitation is also demonstrated to be a result of MEP: one for the competition between soil wetting and fast flow during the first stage; another for the competition between evaporation and base flow during the second stage.
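The event-scale Proportionality Hypothesis mentioned above underlies the SCS curve number method: continuing abstraction over its potential equals direct runoff over its potential, F/S = Q/(P − Ia), which with F = P − Ia − Q rearranges to the familiar runoff equation. A minimal sketch with illustrative numbers:

```python
# Sketch of the SCS curve number runoff equation implied by the
# Proportionality Hypothesis: F/S = Q/(P - Ia) with F = P - Ia - Q gives
# Q = (P - Ia)^2 / (P - Ia + S). The rainfall and curve number here are
# illustrative, not values from the paper.
def scs_runoff(P, CN, ia_ratio=0.2):
    """Direct runoff depth (mm) for event rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
    Ia = ia_ratio * S             # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(round(scs_runoff(P=100.0, CN=80), 1))   # -> 50.5
```

The paper's contribution is to show that this proportional partitioning, applied at the annual scale, can itself be derived from Maximum Entropy Production.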

  16. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  17. Non-uniformly under-sampled multi-dimensional spectroscopic imaging in vivo: maximum entropy versus compressed sensing reconstruction.

    PubMed

    Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert

    2014-02-01

    The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by applying non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets, then reconstructing the missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that MaxEnt and CS successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method.

  18. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  19. Entropy corrected holographic dark energy models in modified gravity

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Azhar, Nadeem; Rani, Shamaila

    We consider the power-law and the entropy-corrected holographic dark energy (HDE) models with the Hubble horizon in dynamical Chern-Simons modified gravity. We explore various cosmological parameters and planes in this framework. The Hubble parameter lies within the consistent range at the present and later epochs for both entropy-corrected models. The deceleration parameter explains the accelerated expansion of the universe. The equation of state (EoS) parameter corresponds to the quintessence and cold dark matter (ΛCDM) limit. The ωΛ–ωΛ′ plane approaches the ΛCDM limit and the freezing region in both entropy-corrected models. The statefinder parameters are consistent with the ΛCDM limit and dark energy (DE) models. The generalized second law of thermodynamics remains valid for all values of the interacting parameter. It is interesting to mention that our results for the Hubble parameter, EoS parameter, and ωΛ–ωΛ′ plane show consistency with present observations such as Planck, WP, BAO, H0, SNLS, and the nine-year WMAP data.

  20. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  1. Holography and entropy bounds in the plane wave matrix model

    SciTech Connect

    Bousso, Raphael; Mints, Aleksey L.

    2006-06-15

    As a quantum theory of gravity, matrix theory should provide a realization of the holographic principle, in the sense that a holographic theory should contain one binary degree of freedom per Planck area. We present evidence that Bekenstein's entropy bound, which is related to area differences, is manifest in the plane wave matrix model. If holography is implemented in this way, we predict crossover behavior at strong coupling when the energy exceeds N² in units of the mass scale.

  2. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
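A time-varying coefficient regression of the kind described can be sketched by expanding the intercept and slope in a low-order Fourier basis over day-of-year and fitting by ordinary least squares (an illustrative stand-in, not the authors' estimator; the synthetic temperatures below are invented):

```python
import numpy as np

# Sketch of a time-varying coefficient model: water_t = a(t) + b(t) * air_t,
# with a(t) and b(t) expanded in a first-order Fourier basis over day-of-year
# and fit jointly by least squares. Data are synthetic, for illustration only.
rng = np.random.default_rng(0)
t = np.arange(365)
phase = 2 * np.pi * t / 365
air = 15 + 10 * np.sin(phase) + rng.normal(0, 2, t.size)
true_b = 0.6 + 0.2 * np.sin(phase)            # slope varies through the year
water = 4 + true_b * air + rng.normal(0, 0.5, t.size)

basis = np.column_stack([np.ones_like(phase), np.sin(phase), np.cos(phase)])
X = np.hstack([basis, basis * air[:, None]])  # columns for a(t), then b(t)
coef, *_ = np.linalg.lstsq(X, water, rcond=None)
b_hat = basis @ coef[3:]                      # recovered time-varying slope
print(float(np.mean(np.abs(b_hat - true_b))) < 0.1)
```

Letting the slope vary smoothly with season is what allows a single fitted model to track the changing air-water temperature relationship over the year.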

  3. Modeling maximum daily temperature using a varying coefficient regression model

    Treesearch

    Han Li; Xinwei Deng; Dong-Yun Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  4. Modeling of high entropy alloys of refractory elements

    NASA Astrophysics Data System (ADS)

    del Grosso, M. F.; Bozzolo, G.; Mosca, H. O.

    2012-08-01

    Reversing the traditional process of developing new alloys based on one or two principal elements with minority additions, the study of high-entropy alloys (HEAs) (equimolar combinations of many elements) has become a relevant and interesting new field of research due to their tendency to form solid solutions with particular properties in the absence of intermetallic phases. Theoretical and modeling studies at the atomic level on specific HEAs, describing the formation, structure, and properties of these alloys, are limited due to the large number of constituents involved. In this work we focus on HEAs with refractory elements, showing atomistic modeling results for W-Nb-Mo-Ta and W-Nb-Mo-Ta-V HEAs, for which experimental background exists. An atomistic modeling approach is applied to determine the role of each element and to identify the interactions and features responsible for the transition to the high-entropy regime. Results for equimolar alloys of four and five refractory elements are shown. A straightforward algorithm is introduced to interpret the transition to the high-entropy regime.
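The "high entropy" in HEA refers to the ideal configurational entropy of mixing, ΔS_mix = −R Σ xᵢ ln xᵢ, which for an equimolar N-element alloy reduces to R ln N. A small sketch (the ≥ 1.5R threshold quoted in the comment is a commonly used criterion in the HEA literature, not this paper's algorithm):

```python
import math

# Ideal configurational entropy of mixing: dS_mix = -R * sum(x_i * ln x_i).
# For an equimolar N-element alloy this is R * ln(N); a value >= 1.5R is a
# commonly quoted (not universal) criterion for the high-entropy regime.
R = 8.314  # gas constant, J/(mol K)

def mixing_entropy(fractions):
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

for n in (2, 4, 5):
    dS = mixing_entropy([1.0 / n] * n)
    print(n, round(dS / R, 2))   # in units of R: 0.69, 1.39, 1.61
```

By this measure the 4-element (W-Nb-Mo-Ta) and 5-element (W-Nb-Mo-Ta-V) equimolar alloys studied here sit near or inside the high-entropy regime.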

  5. Stability of ecological industry chain: an entropy model approach.

    PubMed

    Wang, Qingsong; Qiu, Shishou; Yuan, Xueliang; Zuo, Jian; Cao, Dayong; Hong, Jinglan; Zhang, Jian; Dong, Yong; Zheng, Ying

    2016-07-01

    A novel methodology is proposed in this study to examine the stability of an ecological industry chain network based on entropy theory. The methodology is developed according to the characteristics of dissipative structures, i.e., complexity, openness, and nonlinearity. The network organization is the object of analysis, and the main focus is the identification of core enterprises and core industry chains. It is proposed that the chain network should be established around the core enterprise, while supplementation of the core industry chain helps to improve system stability, which is verified quantitatively. A relational entropy model can be used to identify the core enterprise and the core eco-industry chain: it determines the core of the network organization and the core eco-industry chain through the link form and direction of node enterprises. Similarly, the conductive mechanism of different node enterprises can be examined quantitatively despite the absence of key data. A structural entropy model can be employed to quantify the degree of order of the network organization. Results showed that the stability of the entire system could be enhanced by supplementing the chain around the core enterprise in the eco-industry chain network organization. As a result, the sustainability of the entire system could be further improved.
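One crude stand-in for the structural-entropy idea (our illustration; the node names, adjacency list, and the use of a degree distribution are all invented, not the paper's model) is the Shannon entropy of a network's normalized degree distribution, with the highest-degree node as a candidate core enterprise:

```python
import math

# Hedged stand-in for the structural entropy idea: Shannon entropy of the
# normalized degree distribution of a small chain network. All node names and
# links are hypothetical; the paper's relational/structural entropy models
# are more elaborate than this degree-based sketch.
chain = {
    "core_plant":     ["supplier_a", "supplier_b", "recycler", "byproduct_user"],
    "supplier_a":     ["core_plant"],
    "supplier_b":     ["core_plant"],
    "recycler":       ["core_plant", "byproduct_user"],
    "byproduct_user": ["core_plant", "recycler"],
}
degrees = {n: len(nbrs) for n, nbrs in chain.items()}
total = sum(degrees.values())
H = -sum((d / total) * math.log(d / total) for d in degrees.values())
core = max(degrees, key=degrees.get)   # highest-degree node as candidate core
print(core, round(H, 2))   # -> core_plant 1.47
```

A lower entropy indicates a more ordered (core-dominated) organization; supplementing links around the core changes this measure, in the spirit of the stability analysis above.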

  6. On the possibility of obtaining non-diffused proximity functions from cloud-chamber data: II. Maximum entropy and Bayesian methods.

    PubMed

    Zaider, M; Minerbo, G N

    1988-11-01

    Maximum entropy and Bayesian methods are applied to an inversion problem which consists of unfolding diffusion from proximity functions calculated from cloud-chamber data. The solution appears to be relatively insensitive to statistical errors in the data (an important feature) given the limited number of tracks normally available from cloud-chamber measurements. It is the first time, to our knowledge, that such methods are applied to microdosimetry.

  7. A Bayesian Maximum Entropy approach to address the change of support problem in the spatial analysis of childhood asthma prevalence across North Carolina

    PubMed Central

    LEE, SEUNG-JAE; YEATTS, KARIN; SERRE, MARC L.

    2009-01-01

    The spatial analysis of data observed at different spatial observation scales leads to the change of support problem (COSP). A solution to the COSP widely used in linear spatial statistics consists in explicitly modeling the spatial autocorrelation of the variable observed at different spatial scales. We present a novel approach that takes advantage of the non-linear Bayesian Maximum Entropy (BME) extension of linear spatial statistics to address the COSP directly without relying on the classical linear approach. Our procedure consists in modeling data observed over large areas as soft data for the process at the local scale. We demonstrate the application of our approach to obtain spatially detailed maps of childhood asthma prevalence across North Carolina (NC). Because of the high prevalence of childhood asthma in NC, the small number problem is not an issue, so we can focus our attention solely to the COSP of integrating prevalence data observed at the county-level together with data observed at a targeted local scale equivalent to the scale of school-districts. Our spatially detailed maps can be used for different applications ranging from exploratory and hypothesis generating analyses to targeting intervention and exposure mitigation efforts. PMID:20300553

  8. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
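The core maximum entropy step in such an evaluation (leaving aside the paper's fractal interpolation and probabilistic scaling machinery) is: among all distributions on a fixed support with a prescribed moment, pick the one of maximal entropy, which has exponential-family form pᵢ ∝ exp(−λxᵢ). A minimal sketch solving for λ by bisection on the mean constraint:

```python
import math

# Core maximum-entropy step only (the paper's fractal/IFS scaling is omitted):
# among pmfs on a support with prescribed mean m, the entropy maximizer has
# the exponential-family form p_i ~ exp(-lam * x_i). We find lam by bisection,
# using the fact that the constrained mean is decreasing in lam.
def maxent_pmf(support, target_mean, lo=-50.0, hi=50.0, iters=200):
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in support]
        return sum(x * wi for x, wi in zip(support, w)) / sum(w)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid            # mean too large -> need larger lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in support]
    Z = sum(w)
    return [wi / Z for wi in w]

support = [0, 1, 2, 3, 4]
p = maxent_pmf(support, target_mean=1.0)
print([round(pi, 3) for pi in p])
```

With more constraints (higher moments, or the fine-scale scaling information the paper adds), the same construction yields richer exponential-family forms.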

  9. Singularities and Entropy in Bulk Viscosity Dark Energy Model

    NASA Astrophysics Data System (ADS)

    Meng, Xin-He; Dou, Xu

    2011-11-01

    In this paper bulk viscosity is introduced to describe the effects of a cosmic non-perfect fluid on the evolution of the cosmos and to build unified dark energy (DE) and (dark) matter models. We also derive a general relation between the bulk viscosity form and the Hubble parameter that provides a procedure for viscosity DE model building. In particular, a redshift-dependent viscosity parameter ζ ∝ λ0 + λ1(1 + z)^n, proposed in previous work [X.H. Meng and X. Dou, Commun. Theor. Phys. 52 (2009) 377], is investigated extensively here. Furthermore, we use the recently released supernova dataset (the Constitution dataset) to constrain the model parameters. In order to differentiate the proposed dark energy models from the well-known ΛCDM model, the statefinder diagnostic method is applied to this bulk viscosity model, complementing the Om parameter diagnostic and the deceleration parameter analysis we performed before. The DE model evolution behavior and tendency are shown in the plane of the statefinder parameter pair {r, s}, where a fixed point represents the ΛCDM model. The possible singularity of this bulk viscosity cosmology is also discussed, and we conclude that in properly chosen parameter regions this viscosity DE model can exhibit various late-time evolution behaviors and the late-time singularity can be avoided. We also calculate the cosmic entropy in the bulk viscosity dark energy framework, and find that the total entropy in the viscosity DE model increases monotonically with the scale factor; this monotonic increase can indicate an arrow of time in the evolution of the universe, though the quantum version of the arrow of time remains very puzzling.

  10. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

    In economics and the social sciences, inequality measures such as the Gini index, the Pietra index, etc., are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income; the most widely known of these are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their performance relative to the generalized beta of the second kind family is compared.
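The ordinary Gini index that the generalized index extends can be computed from a sorted sample via G = 2·Σᵢ i·x₍ᵢ₎ / (n·Σx) − (n+1)/n. A minimal sketch with invented values (not the paper's generalized index or the US income data):

```python
# Ordinary Gini index from the sorted-sample formula
#   G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n,  i = 1..n.
# The special case the paper generalizes; sample values are illustrative.
def gini(values):
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(round(gini([1, 1, 1, 1]), 3))    # perfectly equal incomes -> 0.0
print(round(gini([0, 0, 0, 10]), 3))   # fully concentrated -> 0.75
```

Constraining such an index (in its generalized form) together with the mean, and then maximizing entropy, is exactly the construction the paper uses to derive candidate income distributions.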

  11. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    ERIC Educational Resources Information Center

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…

  12. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    NASA Astrophysics Data System (ADS)

    Furbish, David Jon; Schmeeckle, Mark W.; Schumer, Rina; Fathel, Siobhan L.

    2016-07-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
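Jaynes's argument in miniature (our illustration, not the paper's derivation): among nonnegative distributions with a fixed mean, the exponential maximizes differential entropy, which is why fixing only the mean velocity or travel time singles out exponential forms. The closed-form entropies below compare three mean-1 candidates:

```python
import math

# Among distributions on [0, inf) with fixed mean m, the exponential maximizes
# differential entropy. Closed-form entropies (nats) for three mean-1 cases:
#   exponential: h = 1 + ln(m)
#   uniform U(0, 2m): h = ln(2m)
#   half-normal with mean m (sigma = m * sqrt(pi/2)): h = 0.5*ln(pi*s^2/2) + 0.5
m = 1.0
h_exponential = 1.0 + math.log(m)
h_uniform = math.log(2.0 * m)
sigma = m * math.sqrt(math.pi / 2.0)
h_halfnormal = 0.5 * math.log(math.pi * sigma**2 / 2.0) + 0.5
print(round(h_exponential, 3), round(h_uniform, 3), round(h_halfnormal, 3))
# -> 1.0 0.693 0.952
```

Adding a covariance constraint between hop distance and travel time changes the maximizer, which is how the analysis arrives at the Weibull form for hop distances.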

  13. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.

  14. Maximum Entropy Method and Charge Flipping, a Powerful Combination to Visualize the True Nature of Structural Disorder from in situ X-ray Powder Diffraction Data

    SciTech Connect

    Samy, A.; Dinnebier, R; van Smaalen, S; Jansen, M

    2010-01-01

    In a systematic approach, the ability of the Maximum Entropy Method (MEM) to reconstruct the most probable electron density of highly disordered crystal structures from X-ray powder diffraction data was evaluated. As a case study, the ambient-temperature crystal structures of disordered α-Rb2[C2O4] and α-Rb2[CO3] and ordered δ-K2[C2O4] were investigated in detail with the aim of revealing the 'true' nature of the apparent disorder. Different combinations of F constraints (based on phased structure factors) and G constraints (based on structure-factor amplitudes) from different sources were applied in the MEM calculations. In particular, a new combination of the MEM with the recently developed charge-flipping algorithm with histogram matching for powder diffraction data (pCF) was successfully introduced to avoid the inevitable bias of the phases of the structure-factor amplitudes by the Rietveld model. Completely ab initio electron-density distributions were obtained with the MEM applied to a combination of structure-factor amplitudes from Le Bail fits with phases derived from pCF. All features of the crystal structures, in particular the disorder of the oxalate and carbonate anions and the displacements of the cations, are clearly obtained. This approach bears the potential of a fast method of electron-density determination, even for highly disordered materials. All MEM maps obtained in this work were compared with the MEM map derived from the best Rietveld-refined model. In general, the phased observed structure factors obtained from Rietveld refinement (applying F and G constraints) were found to give the closest description of the experimental data and thus lead to the most accurate image of the actual disorder.

  15. Improved lesion detection in MR mammography: three-dimensional segmentation, moving voxel sampling, and normalized maximum intensity-time ratio entropy.

    PubMed

    Ertaş, Gökhan; Gülçür, H Ozcan; Tunaci, Mehtap

    2007-02-01

    The objective of this work was to develop a quantitative method for improving lesion detection in dynamic contrast-enhanced magnetic resonance mammography (DCEMRM). For this purpose, we segmented and analyzed suspicious regions according to their contrast enhancement dynamics, generated a normalized maximum intensity-time ratio (nMITR) projection, and explored it to extract important features, to improve accuracy and reproducibility of detection. A novel automated method is introduced to segment and analyze lesions in three dimensions. It consists of four consecutive stages: volume of interest selection, nMITR projection generation using a voxel sampling method based on a moving 3 x 3 mask, three-dimensional lesion segmentation, and feature extraction. The nMITR projection of the detected lesion is used to extract six features: mean, maximum, standard deviation, kurtosis, skewness, and entropy, and their diagnostic significance is studied in detail. High-resolution MR images of 52 breast masses from 46 women are analyzed using the technique developed. Entropy, standard deviation, and the maximum and mean value features were found to have high significance (P < 0.001) and diagnostic accuracy (0.86-0.97). The kurtosis and skewness were not significant. Automated analysis of DCEMRM using nMITR was shown to be feasible. The lesion detection method described is efficient and leads to improved, accurate, reproducible diagnoses. It is reliable in terms of observer variability and may allow for a better standardization of clinical evaluations. The findings demonstrate the usefulness of nMITR based features; nMITR-entropy shows the best performance for quantitative diagnosis.
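The six nMITR features named in the abstract (mean, maximum, standard deviation, kurtosis, skewness, entropy) are all standard summary statistics. A minimal sketch of such a feature extractor is below; the function name, the histogram-based entropy estimate, and the bin count are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def nmitr_features(nmitr, bins=32):
    """Summarize an nMITR projection (array of normalized maximum
    intensity-time ratios) by six features (sketch, not the paper's code)."""
    v = np.asarray(nmitr, dtype=float).ravel()
    # Shannon entropy (base 2) of a histogram of the projection values.
    counts, _ = np.histogram(v, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    return {
        "mean": v.mean(),
        "max": v.max(),
        "std": v.std(ddof=1),
        "kurtosis": kurtosis(v),   # excess (Fisher) kurtosis
        "skewness": skew(v),
        "entropy": entropy,
    }
```

For a 2 × 2 projection with four distinct values, each value falls in its own histogram bin, so the entropy is exactly 2 bits.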

  16. Emergence of spacetime dynamics in entropy corrected and braneworld models

    SciTech Connect

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E. E-mail: mhd@shirazu.ac.ir

    2013-04-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p²) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.
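For orientation, the uncorrected flat-FRW version of this argument can be sketched as follows (a standard reconstruction of Padmanabhan's reasoning, not the entropy-corrected derivation of this paper; note the Planck-area factor L_p², with which the relation is usually written):

$$
V=\frac{4\pi}{3H^{3}},\qquad
N_{\rm sur}=\frac{4\pi}{L_p^{2}H^{2}},\qquad
T=\frac{H}{2\pi},\qquad
N_{\rm bulk}=-\frac{2(\rho+3p)V}{T},
$$

so that

$$
\frac{dV}{dt}=L_p^{2}\left(N_{\rm sur}-N_{\rm bulk}\right)
\;\Longrightarrow\;
\frac{\ddot a}{a}=-\frac{4\pi L_p^{2}}{3}\,(\rho+3p),
$$

which, combined with the continuity equation $\dot\rho+3H(\rho+p)=0$, integrates to the standard Friedmann equation $H^{2}=\frac{8\pi L_p^{2}}{3}\rho$.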

  17. Single-particle spectral density of the unitary Fermi gas: Novel approach based on the operator product expansion, sum rules and the maximum entropy method

    SciTech Connect

    Gubler, Philipp; Yamamoto, Naoki; Hatsuda, Tetsuo; Nishida, Yusuke

    2015-05-15

    Making use of the operator product expansion, we derive a general class of sum rules for the imaginary part of the single-particle self-energy of the unitary Fermi gas. The sum rules are analyzed numerically with the help of the maximum entropy method, which allows us to extract the single-particle spectral density as a function of both energy and momentum. These spectral densities contain basic information on the properties of the unitary Fermi gas, such as the dispersion relation and the superfluid pairing gap, for which we obtain reasonable agreement with the available results based on quantum Monte-Carlo simulations.

  18. Reprint of: Connection between wave transport through disordered 1D waveguides and energy density inside the sample: A maximum-entropy approach

    NASA Astrophysics Data System (ADS)

    Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.

    2016-08-01

    We study the average energy - or particle - density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.

  19. The calculation of transport properties in quantum liquids using the maximum entropy numerical analytic continuation method: Application to liquid para-hydrogen

    PubMed Central

    Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.

    2002-01-01

    We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656

  20. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
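Maximum likelihood fitting of a two-component normal mixture is usually done with the EM algorithm. The sketch below is a minimal, self-contained version of that standard algorithm (initialization and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

def em_two_normal(x, iters=200):
    """EM for a two-component univariate normal mixture (minimal sketch)."""
    x = np.asarray(x, dtype=float)
    # Crude initialization from the lower and upper quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates.
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sd
```

On well-separated synthetic data the recovered means converge to the true component means.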

  1. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960's. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  2. Shifting Distributions of Adult Atlantic Sturgeon Amidst Post-Industrialization and Future Impacts in the Delaware River: a Maximum Entropy Approach

    PubMed Central

    Breece, Matthew W.; Oliver, Matthew J.; Cimino, Megan A.; Fox, Dewayne A.

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960’s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570
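At its core, a MaxEnt distribution model of the kind used in these two records picks the distribution of maximum entropy whose feature expectations match the observed ones, which takes the exponential form p_i ∝ exp(λ·f_i). The sketch below solves the convex dual problem for λ; it is the generic MaxEnt machinery, not the MaxEnt software's full pipeline (regularization, background sampling, etc. are omitted):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_distribution(features, targets):
    """Maximum entropy distribution over cells subject to
    feature-expectation constraints: p_i proportional to exp(lam . f_i)."""
    F = np.asarray(features, float)   # shape (n_cells, n_features)
    t = np.asarray(targets, float)    # target feature expectations

    def dual(lam):
        # Convex dual objective: log Z(lam) - lam . t
        z = np.exp(F @ lam)
        return np.log(z.sum()) - lam @ t

    lam = minimize(dual, np.zeros(F.shape[1])).x
    p = np.exp(F @ lam)
    return p / p.sum()
```

For a single binary feature with target expectation 0.7, the fitted distribution puts probability 0.7 on the feature-1 cell.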

  3. Modeling the Overalternating Bias with an Asymmetric Entropy Measure

    PubMed Central

    Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea

    2016-01-01

    Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418
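Rényi's entropy, one of the two measures fitted above, is simple to state and implement; Marcellin's asymmetric entropy is not reproduced here since the abstract does not give its formula. A sketch of the Rényi family, whose α → 1 limit recovers Shannon's entropy:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy H_a(p) = log(sum p_i^a) / (1 - a); a -> 1 gives Shannon."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))   # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```

For the uniform distribution the Rényi entropy equals log(n) for every order α, which gives a quick sanity check.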

  4. Models, Entropy and Information of Temporal Social Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Karsai, Márton; Bianconi, Ginestra

    Temporal social networks are characterized by heterogeneous duration of contacts, which can either follow a power-law distribution, such as in face-to-face interactions, or a Weibull distribution, such as in mobile-phone communication. Here we model the dynamics of face-to-face interaction and mobile phone communication by a reinforcement dynamics, which explains the data observed in these different types of social interactions. We quantify the information encoded in the dynamics of these networks by the entropy of temporal networks. Finally, we show evidence that human dynamics is able to modulate the information present in social network dynamics when it follows circadian rhythms and when it is interfacing with a new technology such as the mobile-phone communication technology.

  5. Maximum entropy analysis of data simulations and practical aspects of time-resolved fluorescence measurements in the study of molecular interactions

    NASA Astrophysics Data System (ADS)

    Henry, Etienne; Deprez, Eric; Brochon, Jean-Claude

    2014-12-01

    Time-resolved fluorescence spectroscopy and microscopy are increasingly used to probe molecular interactions and dynamic events in vitro and in vivo. We point out some pitfalls to avoid in the data acquisition procedure using time-correlated single photon counting. Good accuracy in fluorescence decay measurements is linked not only to the counts in the peak channel but also to the statistics at the end of the curve. A too narrow time interval between successive excitations leads to an overlap of decays, and this should be taken into account in data analysis. The counting rate in the peak channel of the excitation profile should be close to that of the fluorescence decay. Many distributions of lifetimes can fit an incomplete or noisy data set to satisfactory precision, corresponding to close values of χ². The maximum entropy principle is appropriate for distinguishing among these in a consistent way. It is also shown that encoding prior knowledge about the system under study dramatically improves the quality of the recovered distribution, particularly in the case of a set of discrete components. Based on maximum entropy method (MEM) analyses of simulated noisy data, we propose a simple strategy for estimating the quality of the information and inferences we can draw from experimental results.

  6. Scaling of Entanglement Entropy for the Heisenberg Model on Clusters Joined by Point Contacts

    NASA Astrophysics Data System (ADS)

    Friedman, B. A.; Levine, G. C.

    2016-11-01

    The scaling of entanglement entropy for the nearest neighbor antiferromagnetic Heisenberg spin model is studied computationally for clusters joined by a single bond. Bisecting the balanced three-legged Bethe cluster gives a second Renyi entropy and a valence bond entropy that scale as the number of sites in the cluster. For the analogous situation with square clusters, i.e. two L × L clusters joined by a single bond, numerical results suggest that the second Renyi entropy and the valence bond entropy scale as L. For both systems, the environment and the system are connected by the single bond and the interaction is short range. The entropy is not constant with system size as suggested by the area law.

  7. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  9. Assessing Bayesian model averaging uncertainty of groundwater modeling based on information entropy method

    NASA Astrophysics Data System (ADS)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong; Zhu, Xiaobin; Long, Yuqiao

    2016-07-01

    Because of groundwater conceptualization uncertainty, multi-model methods are usually used and the corresponding uncertainties are estimated by integrating Markov chain Monte Carlo (MCMC) and Bayesian model averaging (BMA) methods. Generally, the variance method is used to measure the uncertainties of the BMA prediction. The total variance of the ensemble prediction is decomposed into within-model and between-model variances, which represent the uncertainties derived from the parameters and the conceptual model, respectively. However, the uncertainty of a probability distribution cannot be comprehensively quantified by variance alone. A new measuring method based on information entropy theory is proposed in this study. Because the actual BMA process can hardly meet the ideal mutually exclusive, collectively exhaustive condition, BMA predictive uncertainty can be decomposed into parameter, conceptual model, and overlapped uncertainties. Overlapped uncertainty is induced by the combination of predictions from correlated model structures. In this paper, five simple analytical functions are first used to illustrate the feasibility of the variance and information entropy methods. A discrete distribution example shows that information entropy can be more appropriate than variance for describing between-model uncertainty. Two continuous distribution examples show that the two methods are consistent in measuring a normal distribution, and that information entropy is more appropriate than variance for describing a bimodal distribution. The two examples of BMA uncertainty decomposition demonstrate that the two methods are relatively consistent in assessing the uncertainty of a unimodal BMA prediction, while information entropy is more informative in describing the uncertainty decomposition of a bimodal BMA prediction. Then, based on a synthetic groundwater model, the variance and information entropy methods are used to assess the BMA uncertainty of groundwater modeling. The uncertainty assessments of
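The contrast the abstract draws between variance and entropy for bimodal predictions can be seen with a toy discrete example (the numbers below are illustrative, not from the paper): a bimodal distribution can have much larger variance yet lower entropy than a unimodal one on the same support.

```python
import numpy as np

support = np.arange(5)
unimodal = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
bimodal = np.array([0.45, 0.05, 0.0, 0.05, 0.45])

def shannon_entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def variance(p):
    """Variance of the distribution over the common support."""
    m = np.sum(support * p)
    return np.sum(p * (support - m) ** 2)

# Variance flags the bimodal prediction as far more uncertain (3.7 vs 0.8),
# while entropy reflects that its mass concentrates on just two values.
```

This is the sense in which variance alone can misrepresent the between-model spread of a bimodal BMA prediction.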

  10. Relevance Data for Language Models Using Maximum Likelihood.

    ERIC Educational Resources Information Center

    Bodoff, David; Wu, Bin; Wong, K. Y. Michael

    2003-01-01

    Presents a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval parameters. Discusses similarities to language models; the unification of document-oriented and query-oriented views; tests on data sets; algorithms and scalability; and the effectiveness of maximum likelihood…

  11. Forecasting flood-prone areas using Shannon's entropy model

    NASA Astrophysics Data System (ADS)

    Haghizadeh, Ali; Siahkamari, Safoura; Haghiabi, Amir Hamzeh; Rahmati, Omid

    2017-04-01

    With regard to the lack of quality information and data in watersheds, it is highly important to present a new method for evaluating flood potential. Shannon's entropy model is a new model for evaluating hazards that has not yet been used to assess flood potential; being new to flood potential mapping, it requires evaluation and investigation in different regions, which is the aim of this study. For this purpose, 70 flooded areas were identified in the study area and their distribution map was produced with the ArcGIS 10.2 software. Information layers of altitude, slope angle, slope aspect, plan curvature, drainage density, distance from the river, topographic wetness index (TWI), lithology, soil type, and land use were identified as factors affecting flooding, and the corresponding maps were prepared and digitized in a GIS environment. A flood susceptibility forecasting map was then produced, and the model accuracy was evaluated using an ROC curve together with the 30% of flooded areas withheld for validation, indicating good precision of the model (73.5%) for the study area.
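The usual Shannon's-entropy weighting step in such susceptibility studies assigns each conditioning factor a weight from the dispersion of its values: a factor that is nearly constant across samples carries almost no information and gets weight near zero. The abstract does not spell out its formulas, so the sketch below is the standard entropy-weight construction, not necessarily the paper's exact variant:

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy weights for conditioning factors.
    X: (n_samples, n_factors) matrix of non-negative factor values."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)                      # column-normalize to proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)    # normalized entropy in [0, 1]
    d = 1.0 - E                                # degree of diversification
    return d / d.sum()                         # weights summing to 1
```

A constant factor column yields normalized entropy 1 and therefore weight 0, while all weight flows to the varying factor.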

  12. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion is adopted. Then, the Johnson transformation functions are applied for data normalization. Finally, a fractionally autoregressive integrated moving average model is used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified against four other gauges in southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10⁵ years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied to estimate the return periods of long sequences of days with maximum temperature above prefixed thresholds.
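The first step of the pipeline, deseasonalization by a truncated Fourier expansion, amounts to a least-squares fit of a few annual sin/cos harmonics and subtraction of the fitted cycle. A minimal sketch of just that step (function name, harmonic count, and period value are illustrative assumptions):

```python
import numpy as np

def deseasonalize(t, y, n_harmonics=2, period=365.25):
    """Remove a seasonal cycle fitted as a truncated Fourier expansion
    (ordinary least squares on sin/cos pairs); returns the residual series."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef
```

A purely sinusoidal annual cycle is removed essentially exactly, leaving residuals at machine precision.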

  13. Two aspects of black hole entropy in Lanczos-Lovelock models of gravity

    NASA Astrophysics Data System (ADS)

    Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.

    2012-03-01

    We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.

  14. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight-assignment model is established, founded on the compatibility-matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: when the compatibility-matrix analysis achieves the consistency requirement and differences remain between the subjective and objective weights, both proportions are moderately adjusted; on this basis, a fuzzy evaluation matrix is then constructed for the performance evaluation. Simulation experiments show that, compared with the traditional entropy value method and compatibility-matrix analysis, the proposed performance evaluation model of mining projects founded on the improved entropy value method has higher assessment accuracy.

  15. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes important contribution to the stability and folding of RNA molecule, but it is challenging to either measure or compute conformational entropy associated with long loops. We develop optimized discrete k-state models of RNA backbone based on known RNA structures for computing entropy of loops, which are modeled as self-avoiding walks. To estimate entropy of hairpin, bulge, internal loop, and multibranch loop of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on estimated entropy, we have developed empirical formulae for accurate calculation of entropy of long loops in different secondary structures. Our study on the effect of asymmetric size of loops suggests that loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.
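The Jacobson-Stockmayer extrapolation that the abstract benchmarks against extends a reference loop entropy logarithmically in loop length. A sketch of that formula is below; the reference entropy, reference length, and exponent c = 1.75 are illustrative textbook-style values, not the parameters fitted in this paper:

```python
import numpy as np

R = 1.987e-3  # gas constant in kcal/(mol*K)

def js_loop_entropy(n, s_ref, n_ref=6, c=1.75):
    """Jacobson-Stockmayer extrapolation of loop entropy:
    dS(n) = dS(n_ref) - c * R * ln(n / n_ref)."""
    return s_ref - c * R * np.log(n / n_ref)
```

At the reference length the formula returns the reference entropy unchanged, and the entropy penalty grows (entropy decreases) logarithmically for longer loops, which is exactly the behavior the abstract reports breaking down for bulge, internal, and multibranch loops.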

  16. Entropy-Based Model for Interpreting Life Systems in Traditional Chinese Medicine

    PubMed Central

    Kang, Guo-lian; Zhang, Ji-feng

    2008-01-01

    Traditional Chinese medicine (TCM) treats qi as the core of the human life systems. Starting with a hypothetical correlation between TCM qi and the entropy theory, we address in this article a holistic model for evaluating and unveiling the rule of TCM life systems. Several new concepts such as acquired life entropy (ALE), acquired life entropy flow (ALEF) and acquired life entropy production (ALEP) are propounded to interpret TCM life systems. Using the entropy theory, mathematical models are established for ALE, ALEF and ALEP, which reflect the evolution of life systems. Some criteria are given on physiological activities and pathological changes of the body in different stages of life. Moreover, a real data-based simulation shows that the life entropies of the human body at different ages, with Cold and Hot constitutions, and in different seasons in North China coincide with the manifestations of qi as well as the life evolution in TCM descriptions. Especially, based on the comparative and quantitative analysis, the entropy-based model can nicely describe the evolution of life entropies in Cold and Hot individuals, thereby fitting the Yin–Yang theory in TCM. Thus, this work establishes a novel approach to interpreting the fundamental principles in TCM, and provides an alternative understanding of the complex life systems. PMID:18830452

  17. Cosmological dynamics of interacting logarithmic entropy corrected holographic dark energy model

    NASA Astrophysics Data System (ADS)

    Darabi, F.; Felegary, F.; Setare, M. R.

    We investigate the cosmological dynamics of interacting Logarithmic Entropy Corrected Holographic Dark Energy model with Cold Dark Matter. Fixed points are determined and their corresponding cosmological models are presented. Moreover, the dynamical properties of these fixed points are derived.

  18. Develop and test a solvent accessible surface area-based model in conformational entropy calculations.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2012-05-25

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests; T was always set to 298.15 K throughout the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-processing entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS
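A per-atom surface-area sum of this kind is trivial to evaluate once the weights are known. The functional form below, TS ≈ Σ w_i (SAS_i + k·BSAS_i), is an assumption consistent with (but not quoted from) the abstract, and the weight values are placeholders, not the paper's fitted parameters:

```python
def wsas_entropy(atoms, k=0.5):
    """Weighted solvent accessible surface area estimate of TS (sketch).
    atoms: iterable of (weight, sas, bsas) triples; the weight is shared
    by all atoms of the same atom type. k balances SAS against BSAS."""
    return sum(w * (sas + k * bsas) for w, sas, bsas in atoms)
```

The key design point is speed: unlike NMA, this is a single pass over atoms with precomputed surface areas, so it adds essentially nothing to the cost of an MM-PB/GBSA calculation.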

  19. Improved maximum entropy method for the analysis of fluorescence spectroscopy data: evaluating zero-time shift and assessing its effect on the determination of fluorescence lifetimes.

    PubMed

    Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio

    2015-12-21

A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The algorithm analyses complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays of both multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy better than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of flavin adenine dinucleotide (FAD).
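The outer loop of the algorithm (a one-dimensional Brent search for the zero-time shift that minimizes the reduced chi-squared) can be sketched as follows; the inner entropy-maximizing reconstruction is replaced here by a fixed mono-exponential model on synthetic, noise-free data, so this illustrates only the shift search, not the full MEM method:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic decay with a hidden zero-time shift; in the real algorithm the
# model below would be a full MEM reconstruction, not a known exponential.
t = np.linspace(0.0, 2000.0, 400)        # time axis (ps)
true_shift, tau = 35.0, 450.0            # hidden zero-time shift and lifetime

def decay(shift):
    return np.exp(-np.clip(t - shift, 0.0, None) / tau)

data = decay(true_shift)

def reduced_chi2(shift):
    residual = data - decay(shift)
    return float(np.sum(residual ** 2) / (t.size - 1))

# Brent's method minimizes chi^2 as a function of the zero-time shift
res = minimize_scalar(reduced_chi2, bracket=(0.0, 50.0, 100.0), method="brent")
```

On this noise-free toy problem the search recovers the shift essentially exactly; the paper's accuracy figures refer to the full iterative MEM scheme.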

  20. A maximum entropy approach to the study of residue-specific backbone angle distributions in α-synuclein, an intrinsically disordered protein

    PubMed Central

    Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad

    2014-01-01

α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN–Hα and HN–HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield the best agreement with the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although considerable deviations from the coil-library distribution are seen at the individual-residue level, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20–30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20–40%) than seen in the database. A generally lower population of the αR region (10–20%) is found. Analysis of 1H–1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein. PMID:24976112

  1. A robust channel-calibration algorithm for multi-channel in azimuth HRWS SAR imaging based on local maximum-likelihood weighted minimum entropy.

    PubMed

    Zhang, Shuang-Xi; Xing, Meng-Dao; Xia, Xiang-Gen; Liu, Yan-Yang; Guo, Rui; Bao, Zheng

    2013-12-01

High-resolution and wide-swath (HRWS) synthetic aperture radar (SAR) is an essential tool for modern remote sensing. To resolve the conflict between high resolution and low pulse repetition frequency and obtain an HRWS SAR image, multi-channel-in-azimuth SAR systems have been adopted in the literature. However, the performance of Doppler ambiguity suppression via digital beamforming suffers from channel mismatch. In this paper, a robust channel-calibration algorithm based on weighted minimum entropy is proposed for multi-channel-in-azimuth HRWS SAR imaging. The proposed algorithm is implemented as a two-step process. 1) The timing uncertainty in each channel and most of the range-invariant channel mismatches in amplitude and phase are corrected in a coarse-compensation pre-processing step. 2) After the pre-processing, only a residual range-dependent phase mismatch remains, which is retrieved by a local maximum-likelihood weighted minimum entropy algorithm. A simulated multi-channel-in-azimuth HRWS SAR data experiment is used to evaluate the performance of the proposed algorithm, and real measured airborne multi-channel-in-azimuth HRWS ScanSAR data demonstrate the effectiveness of the proposed approach.

  2. Evaluation of the reliability of the maximum entropy method for reconstructing 3D and 4D NOESY-type NMR spectra of proteins.

    PubMed

    Shigemitsu, Yoshiki; Ikeya, Teppei; Yamamoto, Akihiro; Tsuchie, Yuusuke; Mishima, Masaki; Smith, Brian O; Güntert, Peter; Ito, Yutaka

    2015-02-06

Despite their advantages for analysis, 4D NMR experiments are still infrequently used as a routine tool in protein NMR projects due to the long duration of the measurement and limited digital resolution. Recently, new acquisition techniques for speeding up multidimensional NMR experiments, such as nonlinear sampling in combination with non-Fourier-transform data processing methods, have been proposed as beneficial for 4D NMR experiments. Maximum entropy (MaxEnt) methods have been utilised for reconstructing nonlinearly sampled multi-dimensional NMR data. However, the artefacts arising from MaxEnt processing, particularly in NOESY spectra, have not yet been clearly assessed in comparison with other methods, such as quantitative maximum entropy, multidimensional decomposition, and compressed sensing. We compared MaxEnt with other methods in reconstructing 3D NOESY data acquired with variously reduced sparse sampling schedules and found that MaxEnt is robust, quick and competitive with other methods. Next, nonlinear sampling and MaxEnt processing were applied to 4D NOESY experiments, and the effect of the artefacts of MaxEnt was evaluated by calculating 3D structures from the NOE-derived distance restraints. Our results demonstrated that sufficiently converged and accurate structures (RMSD of 0.91 Å to the mean and 1.36 Å to the reference structures) were obtained even with NOESY spectra reconstructed from 1.6% randomly selected sampling points for the indirect dimensions. This suggests that 3D MaxEnt processing in combination with nonlinear sampling schedules is still a useful and advantageous option for rapid acquisition of high-resolution 4D NOESY spectra of proteins. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Entanglement entropy production in gravitational collapse: covariant regularization and solvable models

    NASA Astrophysics Data System (ADS)

    Bianchi, Eugenio; De Lorenzo, Tommaso; Smerlak, Matteo

    2015-06-01

We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole "exterior entropy" and "radiation entropy." For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the "black hole fireworks" model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that (i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, (ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the "purifying" phase, (iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero, leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.

  4. A new assessment method for urbanization environmental impact: urban environment entropy model and its application.

    PubMed

    Ouyang, Tingping; Fu, Shuqing; Zhu, Zhaoyu; Kuang, Yaoqiu; Huang, Ningsheng; Wu, Zhifeng

    2008-11-01

The laws of thermodynamics are among the most widely used scientific principles. The comparability between the environmental impact of urbanization and thermodynamic entropy was systematically analyzed. Consequently, the concept of "Urban Environment Entropy" was put forward and an "Urban Environment Entropy" model was established for assessing the environmental impact of urbanization in this study. The model was then utilized in a case study assessing river water quality in the Pearl River Delta Economic Zone. The results indicated that the assessment results of the model are consistent with those of the equalized synthetic pollution index method. It can therefore be concluded that the Urban Environment Entropy model has high reliability and can be applied widely in research on the environmental assessment of urbanization using many different environmental parameters.

  5. The viscosity of planetary tholeiitic melts: A configurational entropy model

    NASA Astrophysics Data System (ADS)

    Sehlke, A.; Whittington, A. G.

    2016-12-01

The viscosity (η) of silicate melts is a fundamental physical property controlling mass transfer in magmatic systems. Viscosity can span many orders of magnitude, depending strongly on temperature and composition. Several models describe this dependency for terrestrial melts quite well. Planetary basaltic lavas, however, are distinctly different in composition, being dominantly alkali-poor, iron-rich and/or highly magnesian. We measured the viscosity of 20 anhydrous tholeiitic melts, of which 15 represent known or estimated surface compositions of Mars, Mercury, the Moon, Io and Vesta, by concentric cylinder and parallel plate viscometry. The planetary basalts span a viscosity range of 2 orders of magnitude at liquidus temperatures and 4 orders of magnitude near the glass transition, and can be more or less viscous than terrestrial lavas (Figure A). We find that current models under- and overestimate superliquidus viscosities by up to 2 orders of magnitude for these compositions, and deviate even more strongly from measured viscosities toward the glass transition. We used the Adam-Gibbs (A-G) theory to relate viscosity (η) to absolute temperature (T) and the configurational entropy of the system at that temperature (Sconf), in the form log η = Ae + Be/(T·Sconf). Heat capacities (CP) for glasses and liquids of our investigated compositions were calculated via available literature models. We show that the A-G theory models the viscosity of individual complex tholeiitic melts containing 10 or more major oxides as well as or better than the commonly used empirical equations. We successfully modeled the global viscosity data set using a constant Ae of -3.34±0.22 log units and 12 adjustable sub-parameters, which capture the compositional and temperature dependence of melt viscosity. Seven sub-parameters account for the compositional dependence of Be and 5 for Sconf. Our model reproduces the 496 measured viscosity data points with a

  6. The viscosity of planetary tholeiitic melts: A configurational entropy model

    NASA Astrophysics Data System (ADS)

    Sehlke, Alexander; Whittington, Alan G.

    2016-10-01

The viscosity (η) of silicate melts is a fundamental physical property controlling mass transfer in magmatic systems. Viscosity can span many orders of magnitude, depending strongly on temperature and composition. Several models describe this dependency for terrestrial melts quite well. Planetary basaltic lavas, however, are distinctly different in composition, being dominantly alkali-poor, iron-rich and/or highly magnesian. We measured the viscosity of 20 anhydrous tholeiitic melts, of which 15 represent known or estimated surface compositions of Mars, Mercury, the Moon, Io and Vesta, by concentric cylinder and parallel plate viscometry. The planetary basalts span a viscosity range of 2 orders of magnitude at liquidus temperatures and 4 orders of magnitude near the glass transition, and can be more or less viscous than terrestrial lavas. We find that current models under- and overestimate superliquidus viscosities by up to 2 orders of magnitude for these compositions, and deviate even more strongly from measured viscosities toward the glass transition. We used the Adam-Gibbs (A-G) theory to relate viscosity (η) to absolute temperature (T) and the configurational entropy of the system at that temperature (Sconf), in the form log η = Ae + Be/(T·Sconf). Heat capacities (CP) for glasses and liquids of our investigated compositions were calculated via available literature models. We show that the A-G theory models the viscosity of individual complex tholeiitic melts containing 10 or more major oxides as well as or better than the commonly used empirical equations. We successfully modeled the global viscosity data set using a constant Ae of -3.34 ± 0.22 log units and 12 adjustable sub-parameters, which capture the compositional and temperature dependence of melt viscosity. Seven sub-parameters account for the compositional dependence of Be and 5 for Sconf. Our model reproduces the 496 measured viscosity data points with a 1
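The Adam-Gibbs relation at the heart of the two records above can be evaluated directly. In this minimal sketch Ae is the fitted constant quoted in the abstract, while Be, the glass-transition temperature, and the configurational-entropy parameters are invented values for illustration only:

```python
import math

# Adam-Gibbs relation: log10(eta) = Ae + Be / (T * Sconf(T)), with a
# configurational entropy built from a constant configurational heat
# capacity: Sconf(T) = Sconf(Tg) + Cp_conf * ln(T / Tg).

AE = -3.34  # log10(Pa s), constant pre-exponential term from the abstract

def s_conf(T, s_tg=8.0, cp_conf=10.0, tg=950.0):
    """Configurational entropy (J mol^-1 K^-1) at temperature T (K); toy values."""
    return s_tg + cp_conf * math.log(T / tg)

def log_viscosity(T, be=1.5e5):
    """log10 of melt viscosity from the Adam-Gibbs equation (toy Be)."""
    return AE + be / (T * s_conf(T))
```

Even with these toy parameters the model reproduces the qualitative behavior described above: viscosity rises by many orders of magnitude as T falls toward the glass transition.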

  7. Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.

    2011-10-01

We study Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.

  8. Cluster-size entropy in the Axelrod model of social influence: small-world networks and mass media.

    PubMed

    Gandica, Y; Charmell, A; Villegas-Febres, J; Bonalde, I

    2011-10-01

We study Axelrod's cultural adaptation model using the concept of cluster-size entropy S(c), which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the S(c)(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait q(c) and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.
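One plausible reading of the cluster-size entropy used in the two records above is the Shannon entropy of the distribution of cultural cluster sizes present in a configuration; the authors' exact definition and normalization may differ, so treat this as an illustrative sketch:

```python
import math
from collections import Counter

# Shannon entropy of the cluster-size distribution: given the sizes of the
# cultural clusters in a configuration, form the probability that a randomly
# chosen cluster has size s and take the entropy of that distribution.

def cluster_size_entropy(cluster_sizes):
    counts = Counter(cluster_sizes)      # how many clusters have each size
    n = len(cluster_sizes)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# A monocultural state (one giant cluster) gives zero entropy, while many
# distinct cluster sizes give a large value, so this quantity tracks the
# order-disorder transition discussed above.
```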

  9. Intuitionistic Fuzzy Weighted Linear Regression Model with Fuzzy Entropy under Linear Restrictions.

    PubMed

    Kumar, Gaurav; Bajaj, Rakesh Kumar

    2014-01-01

In fuzzy set theory, it is well known that a triangular fuzzy number can be uniquely determined through its position and entropies. In the present communication, we extend this concept to the triangular intuitionistic fuzzy number and its one-to-one correspondence with its position and entropies. Using the concept of fuzzy entropy, the estimators of the intuitionistic fuzzy regression coefficients have been obtained for the unrestricted regression model. An intuitionistic fuzzy weighted linear regression (IFWLR) model with some restrictions in the form of prior information has been considered. Further, the estimators of regression coefficients have been obtained with the help of fuzzy entropy for the restricted/unrestricted IFWLR model by assigning some weights in the distance function.

  10. Entropy, chaos, and excited-state quantum phase transitions in the Dicke model.

    PubMed

    Lóbez, C M; Relaño, A

    2016-07-01

We study nonequilibrium processes in an isolated quantum system, the Dicke model, focusing on the role played by the transition from integrability to chaos and the presence of excited-state quantum phase transitions. We show that both diagonal and entanglement entropies are abruptly increased by the onset of chaos. Also, this increase ends in both cases just after the system crosses the critical energy of the excited-state quantum phase transition. The link between entropy production, the development of chaos, and the excited-state quantum phase transition is clearer for the entanglement entropy.

  11. Where and how long ago was water in the western North Atlantic ventilated? Maximum entropy inversions of bottle data from WOCE line A20

    NASA Astrophysics Data System (ADS)

Holzer, Mark; Primeau, François W.; Smethie, William M.; Khatiwala, Samar

    2010-07-01

A maximum entropy (ME) method is used to deconvolve tracer data for the joint distribution of locations and times since last ventilation. The deconvolutions utilize World Ocean Circulation Experiment line A20 repeat hydrography for CFC-11, potential temperature, salinity, oxygen, and phosphate, as well as Global Ocean Data Analysis Project (GLODAP) radiocarbon data, combined with surface boundary conditions derived from the atmospheric history of CFC-11 and the World Ocean Atlas 2005 and GLODAP databases. Because of the limited number of available tracers, the deconvolutions are highly underdetermined, leading to large entropic uncertainties, which are quantified using the information entropy of the deconvolved distribution relative to a prior distribution. Additional uncertainties resulting from data sparsity are estimated using a Monte Carlo approach and found to be of secondary importance. The ME deconvolutions objectively identify key water mass formation regions and quantify the local fraction of water of age τ or older last ventilated in each region. Ideal mean age and radiocarbon age are also estimated but found to have large entropic uncertainties that can be attributed to uncertainties in the partitioning of a given water parcel according to where it was last ventilated. Labrador/Irminger seawater (L water) is determined to be mostly less than ~40 a old in the vicinity of the deep western boundary current (DWBC) at the northern end of A20 but several decades older where the DWBC recrosses the section further south, pointing to the importance of mixing via a multitude of eddy-diffusive paths. Overflow water lies primarily below L water, with young waters (τ ≲ 40 a) at middepth in the northern part of A20 and waters as old as ~600 a below ~3500 m.
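The kind of underdetermined maximum-entropy inversion described above can be illustrated on a toy problem: among all distributions over a few bins that reproduce a handful of linear "tracer" constraints, pick the one closest in relative entropy to a prior. Everything below is synthetic and far smaller than the real oceanographic inversion:

```python
import numpy as np
from scipy.optimize import minimize

# Toy maximum-entropy inversion: find the distribution p over nbins
# "last-ventilation" bins that satisfies A @ p = d while minimizing the
# relative entropy KL(p || q) to a uniform prior q.

rng = np.random.default_rng(1)
nbins = 12
q = np.full(nbins, 1.0 / nbins)          # uniform prior distribution
A = rng.random((3, nbins))               # three synthetic "tracer" functionals
p_true = rng.dirichlet(np.ones(nbins))   # hidden true distribution
d = A @ p_true                           # observed tracer values

def relative_entropy(p):
    # KL(p || q); the small floor avoids log(0) at the boundary
    return float(np.sum(p * np.log(np.maximum(p, 1e-12) / q)))

constraints = [{"type": "eq", "fun": lambda p: A @ p - d},
               {"type": "eq", "fun": lambda p: p.sum() - 1.0}]
res = minimize(relative_entropy, q, method="SLSQP",
               bounds=[(0.0, 1.0)] * nbins, constraints=constraints)
p_me = res.x                              # maximum-entropy estimate
```

Because three tracers cannot pin down twelve bins, p_me generally differs from p_true; the entropic uncertainty discussed above is precisely the spread of distributions compatible with the same constraints.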

  12. Dynamic approximate entropy electroanatomic maps detect rotors in a simulated atrial fibrillation model.

    PubMed

    Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
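The fractionation measure underlying the maps above is the approximate entropy; a textbook (Pincus-style) implementation is sketched below. The electrogram-specific parameter optimization reported in the record is not reproduced, and m and r are common defaults with r used as an absolute tolerance:

```python
import math
import random

# Approximate entropy ApEn(m, r): the log-likelihood that sequences of
# length m that match within tolerance r still match at length m + 1.

def approximate_entropy(signal, m=2, r=0.2):
    n = len(signal)

    def phi(mm):
        templates = [signal[i:i + mm] for i in range(n - mm + 1)]
        log_sum = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            log_sum += math.log(matches / len(templates))
        return log_sum / len(templates)

    return phi(m) - phi(m + 1)

regular = [0, 1] * 50                                # perfectly periodic signal
random.seed(0)
irregular = [random.random() for _ in range(100)]    # white noise
```

A periodic signal scores near zero while noise scores high, mirroring the low and high fractionation levels that the record correlates with approximate entropy.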

  13. Backward transfer entropy: Informational measure for detecting hidden Markov models and its interpretations in thermodynamics, gambling and causality

    NASA Astrophysics Data System (ADS)

    Ito, Sosuke

    2016-11-01

The transfer entropy is a well-established measure of information flow, which quantifies the directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that the backward transfer entropy quantifies how far the dynamics are from being a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in the completely different settings of thermodynamics for information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of some benefit, where the conventional transfer entropy characterizes a possible benefit. Our result implies a deep connection between thermodynamics and gambling in the presence of information flow, and suggests that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, biochemical sciences, economics and statistics.
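For discrete time series with one-step histories, the transfer entropy and its backward variant can be estimated by simple counting; the backward transfer entropy is just the transfer entropy of the time-reversed series. This is a generic plug-in sketch, not the author's formulation:

```python
import math
from collections import Counter

# Plug-in estimator of the (one-step-history, discrete) transfer entropy
# TE_{x->y} = sum p(y1, y0, x0) log[ p(y1|y0, x0) / p(y1|y0) ], in nats.

def transfer_entropy(x, y):
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]             # p(y1 | y0, x0)
        p_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y1 | y0)
        te += (c / n) * math.log(p_full / p_self)
    return te

def backward_transfer_entropy(x, y):
    # transfer entropy evaluated on the time-reversed pair of series
    return transfer_entropy(x[::-1], y[::-1])

# y copies x with a one-step delay, so information flows mainly x -> y
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0] * 10
y = [0] + x[:-1]
```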

  14. A new one-dimensional radiative equilibrium model for investigating atmospheric radiation entropy flux

    PubMed Central

    Wu, Wei; Liu, Yangang

    2010-01-01

A new one-dimensional radiative equilibrium model is built to analytically evaluate the vertical profile of the Earth's atmospheric radiation entropy flux under the assumption that atmospheric longwave radiation emission behaves as a greybody and shortwave radiation as a diluted blackbody. Results show that both the atmospheric shortwave and net longwave radiation entropy fluxes increase with altitude, and the latter is about one order of magnitude greater than the former. The vertical profile of the atmospheric net radiation entropy flux follows approximately that of the atmospheric net longwave radiation entropy flux. A sensitivity study further reveals that a ‘darker’ atmosphere with a larger overall atmospheric longwave optical depth exhibits a smaller net radiation entropy flux at all altitudes, suggesting an intrinsic connection between the atmospheric net radiation entropy flux and the overall atmospheric longwave optical depth. These results indicate that the overall strength of the atmospheric irreversible processes at all altitudes as determined by the corresponding atmospheric net entropy flux is closely related to the amount of greenhouse gases in the atmosphere. PMID:20368255

  15. Entropy analysis on non-equilibrium two-phase flow models

    SciTech Connect

    Karwat, H.; Ruan, Y.Q.

    1995-09-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships.

  16. Midnight Temperature Maximum (MTM) in Whole Atmosphere Model (WAM) Simulations

    DTIC Science & Technology

    2016-04-14

Midnight temperature maximum (MTM) in Whole Atmosphere Model (WAM) simulations. R. A. Akmaev, F. Wu, T. J. Fuller-Rowell, and H. Wang. Received 13... been unsuccessful. First long-term simulations with the Whole Atmosphere Model (WAM) reveal the presence of a realistically prominent MTM and reproduce... involve nonlinear interactions between other tidal harmonics originating in the middle and lower atmosphere. Our results thus suggest that the MTM is

  17. Using the Maximum Entropy Principle as a Unifying Theory Characterization and Sampling of Multi-Scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2015-08-20

flux (H21C-1055), American Geophysical Union Fall Meeting, San Francisco, CA, December 2013. 3. Sharif, H. E., J. Wang, A. Georgakakos, and R... Huang, S.-Y., and J. Wang, Force-Restore Model of surface temperature using a new parameterization of ground heat flux (H21C-1055), American

  18. An entropy model to measure heterogeneity of pedestrian crowds using self-propelled agents

    NASA Astrophysics Data System (ADS)

    Rangel-Huerta, A.; Ballinas-Hernández, A. L.; Muñoz-Meléndez, A.

    2017-05-01

An entropy model to characterize the heterogeneity of a pedestrian crowd in a counter-flow corridor is presented. Pedestrians are modeled as self-propelled autonomous agents that are able to perform maneuvers to avoid collisions based on a set of simple rules of perception and action. An observer can determine a probability distribution function of the displayed behavior of pedestrians based only on external information. Three types of pedestrians are modeled, relaxed, standard and hurried, depending on their preference for turning or not turning when walking. Using these types of pedestrians, two kinds of crowds can be simulated: homogeneous and heterogeneous. Heterogeneity is measured in this research through the entropy as a function of time; the entropy of a homogeneous crowd comprising standard pedestrians is used as reference. A number of simulations to measure the entropy of pedestrian crowds were conducted by varying the combination of types of pedestrians, the initial macroscopic-flow conditions of the simulation, and the density of the crowd. Results from these simulations show that our entropy model is sensitive enough to capture the effect of both the initial spatial distribution of pedestrians in the corridor and the composition of the crowd. A relevant finding is that the entropy as a function of density presents a phase transition in the critical region.

  19. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

Markov chains are a natural and well-understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
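A minimal sketch of the Bayesian ingredients above, assuming a first-order chain and a symmetric Dirichlet prior on each row of the transition matrix: posterior-mean transition probabilities and a plug-in entropy-rate estimate. The paper's evidence/partition-function machinery and model-order comparison are not reproduced here:

```python
import math
from collections import Counter

# Posterior-mean transition probabilities under a Dirichlet(alpha) prior:
# p(s -> s2) = (count(s, s2) + alpha) / (count(s, *) + alpha * |states|).

def posterior_mean_chain(seq, states, alpha=1.0):
    counts = Counter(zip(seq[:-1], seq[1:]))
    chain = {}
    for s in states:
        row = sum(counts[(s, s2)] for s2 in states)
        chain[s] = {s2: (counts[(s, s2)] + alpha) / (row + alpha * len(states))
                    for s2 in states}
    return chain

def entropy_rate(seq, states, alpha=1.0):
    """Entropy rate (nats/symbol), weighting rows by empirical state frequency."""
    chain = posterior_mean_chain(seq, states, alpha)
    freq = Counter(seq)
    n = len(seq)
    return -sum((freq[s] / n) * sum(p * math.log(p) for p in chain[s].values())
                for s in states)

alternating = [0, 1] * 100    # nearly deterministic: entropy rate close to 0
blocks = [0, 0, 1, 1] * 50    # one fair binary choice per step: rate near ln 2
```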

  20. Towards a proxy model for maximum latewood density

    NASA Astrophysics Data System (ADS)

    Trachsel, M.; Evans, M. N.; Duke, G.; Ide, K.

    2016-12-01

Tree-ring maximum latewood density (MXD) chronologies inform extra-tropical temperature reconstructions covering the last two millennia. We present a simple, process-based model for MXD formation, requiring inputs of latitude, monthly temperature and, optionally, monthly precipitation. The MXD proxy model is derived from the VS-Lite model for tree-ring growth (Tolwinski-Ward et al. 2011) and includes literature-based non-linearities and interactions between the effects of temperature and soil moisture on latewood density formation. We validate the process-based MXD model on three Northern Hemisphere extra-tropical maximum latewood density networks. The skill found in calibration/validation exercises is similar to the skill of linear regressions relating temperature to MXD. The model suggests a strong influence of growing-season temperatures and only a minimal influence of precipitation/soil moisture on MXD formation, consistent with the interpretation of MXD in the literature. In a paleoclimate context, our MXD model, like the full Vaganov-Shashkin model (e.g. Evans et al. 2006) and the VS-Lite model, offers the advantage of relaxing the assumption of constant growing-season length implicitly made when relating temperature to MXD using ordinary least squares regression. The MXD model therefore seems promising as a data-level model in climate reconstructions based on data assimilation. References: Evans et al. (2006). Journal of Geophysical Research, 111, G03008. Tolwinski-Ward et al. (2011). Climate Dynamics, 36, 2419-2439.

  1. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
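
The entropy and mutual-information quantities involved can be computed directly from allele counts; the two-subpopulation, two-allele table below is purely illustrative:

```python
import numpy as np

def shannon(p):
    """Shannon entropy (nats) of a probability vector."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# allele counts: rows = subpopulations, columns = alleles (illustrative numbers)
counts = np.array([[40.0, 10.0],
                   [10.0, 40.0]])
joint = counts / counts.sum()          # joint distribution over (subpop, allele)
h_total = shannon(joint.sum(axis=0))   # entropy of pooled allele frequencies
h_within = sum(w * shannon(row / row.sum())
               for w, row in zip(joint.sum(axis=1), counts))
mutual_info = h_total - h_within       # mutual information between subpop and allele
```

The mutual information is the raw "Shannon differentiation" quantity; the paper's normalized version divides it by an appropriate maximum.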

  3. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.

  4. Constant Entropy Properties for an Approximate Model of Equilibrium Air

    NASA Technical Reports Server (NTRS)

    Hansen, C. Frederick; Hodge, Marion E.

    1961-01-01

    Approximate analytic solutions for properties of equilibrium air up to 15,000 K have been programmed for machine computation. Temperature, compressibility, enthalpy, specific heats, and speed of sound are tabulated as constant entropy functions of temperature. The reciprocal of acoustic impedance and its integral with respect to pressure are also given for the purpose of evaluating the Riemann constants for one-dimensional, isentropic flow.
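
For comparison with the tabulated equilibrium-air solutions, the calorically perfect (constant-γ) isentrope admits closed-form expressions; the sketch below uses that idealization, which the equilibrium-air model of the report departs from at high temperature, with a sea-level reference state chosen for illustration:

```python
import math

def isentropic_state(T, T0=288.15, p0=101325.0, gamma=1.4, R=287.05):
    """Pressure, density, and speed of sound along an isentrope of a
    calorically perfect gas, given temperature T (K). Equilibrium air
    departs from these relations well below 15,000 K."""
    p = p0 * (T / T0) ** (gamma / (gamma - 1.0))  # p/p0 = (T/T0)^(gamma/(gamma-1))
    rho = p / (R * T)                              # ideal-gas law
    a = math.sqrt(gamma * R * T)                   # speed of sound
    return p, rho, a
```

The reciprocal acoustic impedance 1/(ρa) tabulated in the report would follow from the returned rho and a.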

  5. An Integrated Modeling Framework for Probable Maximum Precipitation and Flood

    NASA Astrophysics Data System (ADS)

    Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.

    2015-12-01

    With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects associated with PMP and PMF.
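
Setting the WRF machinery aside, the core of classical moisture maximization is a simple scaling of an observed storm by a precipitable-water ratio; the numbers below are illustrative, not from the study:

```python
def moisture_maximized_precip(storm_precip_mm, storm_pw_mm, max_pw_mm):
    """Classical moisture-maximization estimate used in PMP studies:
    scale an observed storm depth by the ratio of the climatologically
    maximum precipitable water to the storm's actual precipitable water."""
    return storm_precip_mm * (max_pw_mm / storm_pw_mm)

# e.g. a 250 mm storm observed with 40 mm precipitable water,
# maximized against a 60 mm climatological maximum
pmp_estimate = moisture_maximized_precip(250.0, 40.0, 60.0)
```

In the study this scaling is applied through the model's initial and lateral boundary moisture fields rather than as a single depth ratio.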

  6. A Theoretical Study of Ene Reactions in Solution: A Solution-Phase Translational Entropy Model.

    PubMed

    Zhao, Liu; Li, Shi-Jun; Fang, De-Cai

    2015-12-01

    Several density functional theory (DFT) methods, such as CAM-B3LYP, M06, ωB97x, and ωB97xD, are used to characterize a range of ene reactions. The Gibbs free energy, activation enthalpy, and entropy are calculated with both the gas- and solution-phase translational entropy; the results obtained from the solution-phase translational entropies are quite close to the experimental measurements, whereas the gas-phase translational entropies do not perform well. For ene reactions between the enophile propanedioic acid (2-oxo-1,3-dimethyl ester) and π donors, the two-solvent-involved explicit+implicit model can be employed to obtain accurate activation entropies and free-energy barriers, because the interaction between the carbonyl oxygen atom and the solvent in the transition state is strengthened with the formation of C-C and O-H bonds. In contrast, an implicit solvent model is adequate to calculate activation entropies and free-energy barriers for the corresponding reactions of the enophile 4-phenyl-1,2,4-triazoline-3,5-dione.
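
The gas-phase translational entropy that the paper argues is too large for solution-phase species is given by the Sackur-Tetrode equation; a minimal sketch follows, with argon as a standard textbook check value (solution-phase models effectively replace the free volume kT/p with a much smaller value):

```python
import math

def s_trans_gas(M_kg_per_mol, T=298.15, p=101325.0):
    """Gas-phase translational entropy in J/(mol K) from the
    Sackur-Tetrode equation for an ideal gas at temperature T and
    pressure p."""
    k = 1.380649e-23       # Boltzmann constant, J/K
    h = 6.62607015e-34     # Planck constant, J s
    R = 8.314462618        # gas constant, J/(mol K)
    NA = 6.02214076e23     # Avogadro constant, 1/mol
    m = M_kg_per_mol / NA  # mass of one particle, kg
    # translational partition function per unit cell volume kT/p
    q = (2 * math.pi * m * k * T / h ** 2) ** 1.5 * (k * T / p)
    return R * (math.log(q) + 2.5)

s_argon = s_trans_gas(0.039948)  # argon, ~155 J/(mol K) at 298 K, 1 atm
```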

  7. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time; all the unobservable substitutions that really occurred in the evolutionary history are omitted. To take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
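
The substitution count that classical MP optimizes can be computed per site with Fitch's algorithm; a minimal sketch on a hypothetical four-taxon tree (this is the classical criterion, not the authors' probability-tree extension):

```python
def fitch_length(tree, leaf_states):
    """Minimum number of substitutions at one site on a rooted binary
    tree (Fitch parsimony). tree: nested 2-tuples of leaf names;
    leaf_states: leaf name -> nucleotide."""
    def walk(node):
        if isinstance(node, str):
            return {leaf_states[node]}, 0
        (sl, cl), (sr, cr) = walk(node[0]), walk(node[1])
        inter = sl & sr
        if inter:                      # children agree: no substitution
            return inter, cl + cr
        return sl | sr, cl + cr + 1    # disagreement: one substitution

    return walk(tree)[1]

# hypothetical tree ((a,b),c),d with states A,A,G,G at one site
tree = ((("a", "b"), "c"), "d")
states = {"a": "A", "b": "A", "c": "G", "d": "G"}
site_cost = fitch_length(tree, states)
```

The total MP score of a tree is this count summed over sites; the criticism in the abstract is precisely that this minimum can undercount the substitutions that actually occurred.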

  8. Maximum entropy mobility spectrum analysis of LPE-grown and anodic oxidated Hg1-xCdxTe(x=0.237)

    NASA Astrophysics Data System (ADS)

    Song, Z. Y.; Shang, L. Y.; Lin, T.; Wei, Y. F.; Chu, J. H.

    2017-06-01

    In this paper, magneto-transport properties of the LPE-grown and anodic oxidated p-type Hg1-xCdxTe(x=0.237) films have been studied by using the maximum entropy mobility spectrum analysis (ME-MSA) technique. It is found that high-mobility electrons (μe ∼2 × 10⁴ cm²/(V s)) make considerable contributions to the conduction of the anodic oxidated Hg1-xCdxTe(x=0.237) film, but not of the LPE-grown Hg1-xCdxTe(x=0.237) film. The high-mobility electrons maintain dominant contributions from 11 K to 150 K, which can be attributed to the two-dimensional electron gas in the inversion layer of the anodic oxidated p-type Hg1-xCdxTe(x=0.237) film. In addition, we also observe nonphysical contributions of low-mobility electrons (μe ∼0.08 × 10⁴ cm²/(V s)) in the mobility spectra of both LPE-grown and anodic oxidated p-type HgCdTe films. These low-mobility electrons, so-called mirror peaks, can be interpreted as a consequence of magnetic freeze-out of holes in vacancy-doped HgCdTe, and they disappear at T = 150 K.
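
Mobility spectrum techniques such as ME-MSA invert a multi-carrier magnetoconductivity model; a sketch of that forward model under the standard single-relaxation-time assumption (the carrier density and mobility values are illustrative, not from the paper):

```python
def conductivity(B, carriers):
    """Multi-carrier magnetoconductivity tensor components sigma_xx,
    sigma_xy at field B (tesla). carriers: list of
    (density_per_m3, mobility_m2_per_Vs, sign) with sign = -1 for
    electrons and +1 for holes."""
    e = 1.602176634e-19  # elementary charge, C
    sxx = sum(n * e * mu / (1.0 + (mu * B) ** 2)
              for n, mu, s in carriers)
    sxy = sum(s * n * e * mu ** 2 * B / (1.0 + (mu * B) ** 2)
              for n, mu, s in carriers)
    return sxx, sxy

# one electron species, illustrative values
sxx0, sxy0 = conductivity(0.0, [(1e21, 2.0, -1)])
```

Mobility spectrum analysis measures sigma_xx(B) and sigma_xy(B) and solves the ill-posed inverse problem for a continuous distribution of carriers; the maximum entropy criterion regularizes that inversion.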

  9. Rényi entropy perspective on topological order in classical toric code models

    NASA Astrophysics Data System (ADS)

    Helmes, Johannes; Stéphan, Jean-Marie; Trebst, Simon

    2015-09-01

    Concepts of information theory are increasingly used to characterize collective phenomena in condensed matter systems, such as the use of entanglement entropies to identify emergent topological order in interacting quantum many-body systems. Here, we employ classical variants of these concepts, in particular Rényi entropies and their associated mutual information, to identify topological order in classical systems. Like for their quantum counterparts, the presence of topological order can be identified in such classical systems via a universal, subleading contribution to the prevalent volume and boundary laws of the classical Rényi entropies. We demonstrate that an additional subleading O(1) contribution generically arises for all Rényi entropies S(n) with n ≥ 2 when driving the system towards a phase transition, e.g., into a conventionally ordered phase. This additional subleading term, which we dub connectivity contribution, tracks back to partial subsystem ordering and is proportional to the number of connected parts in a given bipartition. Notably, the Levin-Wen summation scheme, typically used to extract the topological contribution to the Rényi entropies, does not fully eliminate this additional connectivity contribution in this classical context. This indicates that the distillation of topological order from Rényi entropies requires an additional level of scrutiny to distinguish topological from nontopological O(1) contributions. This is also the case for quantum systems, for which we discuss which entropies are sensitive to these connectivity contributions. We showcase these findings by extensive numerical simulations of a classical variant of the toric code model, for which we study the stability of topological order in the presence of a magnetic field and at finite temperatures from a Rényi entropy perspective.
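
The Rényi entropies in question generalize Shannon entropy by an order parameter n; a minimal sketch of the definition (the distribution is illustrative; extracting the topological and connectivity terms requires the subsystem constructions described above, which are not shown):

```python
import numpy as np

def renyi(p, n):
    """Order-n Renyi entropy of a probability vector; n -> 1 recovers
    the Shannon entropy as a limit, handled here as a special case."""
    p = np.asarray(p, float)
    if n == 1:
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** n)) / (1.0 - n)

# for a uniform distribution all Renyi entropies coincide at log(#states)
s2 = renyi([0.25] * 4, 2)
s1 = renyi([0.25] * 4, 1)
```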

  10. Tracking instantaneous entropy in heartbeat dynamics through inhomogeneous point-process nonlinear models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-01-01

    Measures of entropy have proved to be powerful quantifiers of complex nonlinear systems, particularly when applied to stochastic series of heartbeat dynamics. Despite the remarkable achievements obtained through standard definitions of approximate and sample entropy, a time-varying definition of entropy characterizing the physiological dynamics at each moment in time is still missing. To this extent, we propose two novel measures of entropy based on the inhomogeneous point-process theory. The RR interval series is modeled through probability density functions (pdfs) which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through such probability functions, the proposed indices are able to provide instantaneous tracking of autonomic nervous system complexity. Of note, the distance between the time-varying phase-space vectors is calculated through the Kolmogorov-Smirnov distance of two pdfs. Experimental results, obtained from the analysis of RR interval series extracted from ten healthy subjects during stand-up tasks, suggest that the proposed entropy indices provide instantaneous tracking of heartbeat complexity, also allowing for the definition of complexity variability indices.
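
As a baseline for the time-varying indices proposed here, the standard sample entropy of an interval series can be sketched as follows; the parameters m and r follow common convention, and this is the classical estimator, not the authors' point-process one:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Classical sample entropy of a series: negative log of the
    conditional probability that sequences matching for m points
    (within tolerance r) also match for m+1 points."""
    x = np.asarray(x, float)
    r = r_frac * x.std()

    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2.0  # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# a regular (periodic) series should score lower than white noise
rng = np.random.default_rng(0)
t = np.arange(200)
regular = np.sin(2 * np.pi * t / 20)
noisy = rng.standard_normal(200)
```

The abstract's contribution is to replace this static, window-based quantity with entropies defined instantaneously from the point-process pdfs.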

  11. Neuronal Entropy-Rate Feature of Entopeduncular Nucleus in Rat Model of Parkinson's Disease.

    PubMed

    Darbin, Olivier; Jin, Xingxing; Von Wrangel, Christof; Schwabe, Kerstin; Nambu, Atsushi; Naritoku, Dean K; Krauss, Joachim K; Alam, Mesbah

    2016-03-01

    The influence of the nigro-striatal pathway on neuronal entropy in the basal ganglia (BG) output nucleus, i.e. the entopeduncular nucleus (EPN), was investigated in the unilaterally 6-hydroxydopamine (6-OHDA)-lesioned rat model of Parkinson's disease (PD). In both control subjects and subjects with a 6-OHDA lesion of the dopaminergic (DA) nigro-striatal pathway, a histological hallmark of parkinsonism, neuronal entropy in the EPN was maximal in neurons with firing rates ranging between 15 and 25 Hz. In 6-OHDA lesioned rats, neuronal entropy in the EPN was specifically higher in neurons with firing rates above 25 Hz. Our data establish that the nigro-striatal pathway controls neuronal entropy in motor circuitry and that the parkinsonian condition is associated with an abnormal relationship between firing rate and neuronal entropy in BG output nuclei. The relationship between neuronal firing rates and entropy provides putatively relevant electrophysiological information for investigating sensory-motor processing in the normal condition and in conditions such as movement disorders.

  12. Interface tension and interface entropy in the 2+1 flavor Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Ke, Wei-yao; Liu, Yu-xin

    2014-04-01

    We study the QCD phases and their transitions in the 2+1 flavor Nambu-Jona-Lasinio model, with a focus on interface effects such as the interface tension, the interface entropy, and the critical bubble size in the coexistence region of the first-order phase transitions. Our results show that under the thin-wall approximation, the interface contribution to the total entropy density changes its discontinuity scale in the first-order phase transition. However, the entropy density of the dynamical chiral symmetry (DCS) phase is always greater than that of the dynamical chiral symmetry broken (DCSB) phase in both the heating and hadronization processes. To address this entropy puzzle, the thin-wall approximation is evaluated in the present work. We find that the puzzle can be attributed to an overestimate of the critical bubble size at low temperature in the hadronization process. With an improvement on the thin-wall approximation, the entropy puzzle is well resolved, with the total entropy density of the hadron-DCSB phase clearly exceeding that of the DCS-quark phase at low temperature.

  13. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    NASA Astrophysics Data System (ADS)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    Meridian system is not only the basis of traditional Chinese medicine (TCM) method (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper has introduced a new informational perspective to understand the reality and the holographic field of meridian. Based on maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of meridian. By using self-organizing artificial neural network as algorithm, the evolutionary dynamic equation of the holographic field can be resolved to assess properties of meridians and clinically diagnose the health characteristics of patients. Finally, through some cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is proved that this model not only has significant implications in revealing the essence of meridian in TCM, but also may play a guiding role in clinical assessment of patients based on the holographic field of meridians.

  14. Query construction, entropy, and generalization in neural-network models

    NASA Astrophysics Data System (ADS)

    Sollich, Peter

    1994-05-01

    We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and ``noninvertible'' versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.

  15. Modeling of groundwater productivity in northeastern Wasit Governorate, Iraq using frequency ratio and Shannon's entropy models

    NASA Astrophysics Data System (ADS)

    Al-Abadi, Alaa M.

    2017-05-01

    In recent years, delineation of groundwater productivity zones has played an increasingly important role in sustainable management of groundwater resources throughout the world. In this study, the groundwater productivity index (GWPI) of northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in the framework of GIS. Eight factors believed to influence groundwater occurrence in the study area were selected and used as the input data. These factors were elevation (m), slope angle (degree), geology, soil, aquifer transmissivity (m2/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/sec) was prepared. 47 boreholes (70 %) were used as training data and the remaining 21 (30 %) were used for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results of the analysis indicate that the FR model, with a success rate of 87.4 % and prediction rate of 86.9 %, performed slightly better than Shannon's entropy model, with a success rate of 84.4 % and prediction rate of 82.4 %. The resultant groundwater productivity index was classified into five classes using the natural break classification scheme: very low, low, moderate, high, and very high. The high-very high classes for the FR and Shannon's entropy models occupied 30 % (217 km2) and 31 % (220 km2) of the study area, respectively, indicating low productivity conditions of the aquifer system. Overall, both models were capable of prospecting the GWPI with very good results, but FR was better in terms of success and prediction rates. Results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.
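
The frequency-ratio statistic itself is a simple ratio of shares per class of a conditioning factor; a sketch with a hypothetical one-factor raster (class labels and counts are illustrative):

```python
import numpy as np

def frequency_ratio(pixel_classes, borehole_classes, classes):
    """FR per class of one conditioning factor: the share of productive
    boreholes falling in the class divided by the share of the study
    area occupied by it. FR > 1 indicates a favorable class."""
    fr = {}
    for c in classes:
        area_share = np.mean(pixel_classes == c)
        bh_share = np.mean(borehole_classes == c)
        fr[c] = bh_share / area_share if area_share > 0 else 0.0
    return fr

pixels = np.array([0] * 80 + [1] * 20)   # factor class of every raster cell
boreholes = np.array([0] * 5 + [1] * 5)  # factor class at borehole locations
fr = frequency_ratio(pixels, boreholes, [0, 1])
```

The productivity index sums such FR values (or entropy-derived weights, in the Shannon's entropy variant) over all eight conditioning factors.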

  17. Bayesian and maximum likelihood estimation of hierarchical response time models

    PubMed Central

    Farrell, Simon; Ludwig, Casimir

    2008-01-01

    Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. We consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. Single-level estimation is compared with hierarchical estimation of parameters of the ex-Gaussian distribution. Additionally, for each approach, maximum likelihood (ML) estimation is compared with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian by reducing the variability in recovered parameters. At each level, little overall difference was observed between the ML and Bayesian methods. PMID:19001592
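
A single-level ML fit of the ex-Gaussian can be sketched with scipy, whose exponnorm distribution is the ex-Gaussian under the reparameterization K = τ/σ; the simulated parameter values below are illustrative, not those used in the paper's simulations:

```python
import numpy as np
from scipy.stats import exponnorm

# simulate one subject's response times (ms) from an ex-Gaussian:
# Gaussian(mu, sigma) plus an independent Exponential(tau) component
rng = np.random.default_rng(1)
mu, sigma, tau = 400.0, 50.0, 150.0
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# single-level maximum likelihood fit; scipy returns (K, loc, scale)
K, loc, scale = exponnorm.fit(rt)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
```

Hierarchical estimation, as compared in the paper, would instead place group-level distributions over (mu, sigma, tau) across subjects rather than fitting each subject independently.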

  18. Maximum sustainable yields from a spatially-explicit harvest model.

    PubMed

    Takashina, Nao; Mougi, Akihiko

    2015-10-21

    Spatial heterogeneity plays an important role in complex ecosystem dynamics, and therefore is also an important consideration in sustainable resource management. However, little is known about how spatial effects can influence management targets derived from a non-spatial harvest model. Here, we extended the Schaefer model, a conventional non-spatial harvest model that is widely used in resource management, to a spatially-explicit harvest model by integrating environmental heterogeneities as well as species exchange between patches. By comparing the maximum sustainable yield (MSY), one of the central management targets in resource management, obtained from the spatially extended model with that of the conventional model, we examined the effect of spatial heterogeneity. When spatial heterogeneity exists, we found that the Schaefer model tends to overestimate the MSY, implying a potential for overharvesting. In addition, by assuming a well-mixed population in the heterogeneous environment, we showed analytically that the Schaefer model always overestimates the MSY, regardless of the number of patches. The degree of overestimation becomes significant when spatial heterogeneity is marked. Collectively, these results highlight the importance of integrating spatial structure into sustainable resource management.
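
The flavor of the result can be reproduced with a toy two-patch extension of the logistic Schaefer model; the dispersal rate, carrying capacities, and harvest-rate grid below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def equilibrium_yield(h, r, K, d=0.1, steps=10000, dt=0.02):
    """Long-run yield of a two-patch logistic population under a
    proportional harvest rate h, with symmetric dispersal d between
    the patches (forward-Euler integration to equilibrium)."""
    n = np.array(K, float) / 2.0
    for _ in range(steps):
        growth = r * n * (1.0 - n / K)
        dispersal = d * (n[::-1] - n)   # flow from the denser patch
        n = np.maximum(n + dt * (growth + dispersal - h * n), 0.0)
    return h * n.sum()

r = np.array([1.0, 1.0])
K = np.array([100.0, 25.0])          # heterogeneous carrying capacities
msy_spatial = max(equilibrium_yield(h, r, K)
                  for h in np.linspace(0.05, 0.9, 18))
msy_schaefer = r[0] * K.sum() / 4.0  # non-spatial Schaefer MSY, rK/4 with pooled K
```

Consistent with the abstract, dispersal across heterogeneous patches lowers the achievable yield below the pooled Schaefer estimate rK/4.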

  19. Modelling and Design of Magnesium and High Entropy Alloys Through Combining Statistical and Physical Models

    NASA Astrophysics Data System (ADS)

    Toda-Caraballo, Isaac; Rivera-Díaz-del-Castillo, Pedro E. J.

    2015-01-01

    Physical and statistical models are combined to describe and design magnesium and high entropy alloys. A principal component analysis is applied to merge material datasets, and it is shown that limits in properties can be envisaged. Extrapolation techniques can be employed to devise properties of non-existing alloys, such as specific heat capacity, melting point and Young's modulus. These in turn can be input to physical models to predict, for example, yield strength and modulus of toughness. The tools described herein can readily be used for materials discovery, and are being implemented in the Accelerated Metallurgy project.

  20. High flow-resolution for mobility estimation in 2D-ENMR of proteins using maximum entropy method (MEM-ENMR).

    PubMed

    Thakur, Sunitha B; He, Qiuhong

    2006-11-01

    Multidimensional electrophoretic NMR (nD-ENMR) is a potentially powerful tool for structural characterization of co-existing proteins and protein conformations. By applying a DC electric field pulse, the electrophoretic migration rates of different proteins were detected experimentally in a new dimension of electrophoretic flow. The electrophoretic mobilities were employed to differentiate protein signals. In U-shaped ENMR sample chambers, individual protein components in a solution mixture followed a cosinusoidal electrophoretic interferogram as a function of their unique electrophoretic migration rates. After Fourier transformation in the electrophoretic flow dimension, the protein signals were resolved at different resonant frequencies proportional to their electrophoretic mobilities. Currently, the mobility resolution of the proteins in the electrophoretic flow dimension is limited by severe truncation of the electrophoretic interferograms due to the finite electric field strength available before the onset of heat-induced convection. In this article, we present a successful signal processing method, Burg's maximum entropy method (MEM), to analyze the truncated ENMR signals (MEM-ENMR). Significant enhancement in flow resolution was demonstrated using two-dimensional ENMR of two protein samples: a lysozyme solution and a solution mixture of bovine serum albumin (BSA) and ubiquitin. The electrophoretic mobilities of lysozyme, BSA and ubiquitin were measured from the MEM analysis as 7.5 × 10⁻⁵, 1.9 × 10⁻⁴ and 8.7 × 10⁻⁵ cm² V⁻¹ s⁻¹, respectively. Results from computer simulations confirmed a complete removal of truncation artifacts in the MEM-ENMR spectra, with 3- to 6-fold resolution enhancement.
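
Burg's method fits an autoregressive model by minimizing the summed forward and backward prediction error, which is what makes it well suited to short, truncated records; a minimal sketch on simulated data (the AR(1) example is illustrative and unrelated to the ENMR signals):

```python
import numpy as np

def burg_ar(x, order):
    """Burg (maximum entropy) estimation of AR coefficients.
    Returns the AR polynomial a (a[0] = 1) and the residual noise
    variance; the MEM spectrum follows from evaluating 1/|A(f)|^2."""
    x = np.asarray(x, float)
    a = np.array([1.0])
    E = x @ x / len(x)            # zeroth-order prediction error power
    f, b = x.copy(), x.copy()     # forward / backward prediction errors
    for _ in range(order):
        f, b = f[1:], b[:-1]      # drop edge samples at each stage
        k = -2.0 * (f @ b) / (f @ f + b @ b)   # reflection coefficient
        f, b = f + k * b, b + k * f            # lattice error update
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        E *= 1.0 - k * k
    return a, E

# recover an AR(1) coefficient of 0.9 from simulated data
rng = np.random.default_rng(2)
x = np.zeros(5000)
for i in range(1, 5000):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()
coeffs, noise_var = burg_ar(x, 1)
```

Because the AR model extrapolates the autocorrelation beyond the record length, the resulting spectrum avoids the truncation sidelobes of a windowed Fourier transform, which is the resolution gain exploited in MEM-ENMR.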