Science.gov

Sample records for maximum entropy models

  1. Duality in a maximum generalized entropy model

    NASA Astrophysics Data System (ADS)

    Eguchi, Shinto; Komori, Osamu; Ohara, Atsumi

    2015-01-01

    This paper discusses a possible generalization of the maximum entropy principle. A class of generalized entropies is introduced via generator functions, from which the maximum generalized entropy distribution model is explicitly derived, including the q-Gaussian, Wigner semicircle and Pareto distributions. We define a totally geodesic subspace in the total space of all probability density functions in a framework of information geometry, and show that the model of maximum generalized entropy distributions is totally geodesic. The duality between the model and estimation under the maximum generalized entropy principle is elucidated, giving an intrinsic understanding from the viewpoint of information geometry.
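
    As a concrete instance of the construction (sketched here from the standard Tsallis case, not taken from the paper itself): choosing the generator that yields the Tsallis entropy and maximizing it under suitable (escort) moment constraints produces the q-Gaussian family named in the abstract,

```latex
S_q[p] \;=\; \frac{1}{q-1}\Big(1 - \int p(x)^q\,dx\Big)
\qquad\Longrightarrow\qquad
p^{*}(x) \;\propto\; \big[\,1-(1-q)\,\beta\,(x-\mu)^2\,\big]_{+}^{\frac{1}{1-q}},
```

    with β a Lagrange multiplier fixed by the variance constraint. The limit q → 1 recovers the ordinary Gaussian, q = -1 gives the Wigner semicircle, and q > 1 gives heavy-tailed members with Pareto-type power-law tails.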

  2. Maximum entropy model for business cycle synchronization

    NASA Astrophysics Data System (ADS)

    Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui

    2014-11-01

    The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network that exhibits a clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
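
    The G7 system is small enough (2^7 = 128 joint states) that a pairwise maximum entropy model of the kind used above can be fit exactly by enumeration. Below is a minimal sketch under assumed conventions (±1 expansion/recession indicators, gradient ascent on the exact likelihood); the data shape, learning rate and iteration count are illustrative, not taken from the paper.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(data, n_iter=2000, lr=0.1):
    """Fit h, J of p(s) ~ exp(h.s + s.J.s/2), s in {-1,+1}^n, by moment matching."""
    n = data.shape[1]
    states = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n patterns
    m_emp = data.mean(axis=0)                 # empirical magnetizations <s_i>
    C_emp = (data.T @ data) / len(data)       # empirical correlations <s_i s_j>
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()                          # exact model distribution
        h += lr * (m_emp - p @ states)        # likelihood gradient = moment mismatch
        J += lr * (C_emp - np.einsum('k,ki,kj->ij', p, states, states))
        np.fill_diagonal(J, 0.0)
    return h, J

# usage with synthetic +/-1 "recession/expansion" data for 7 economies:
# h, J = fit_pairwise_maxent(np.sign(np.random.randn(200, 7)))
```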

  3. Maximum entropy models of ecosystem functioning

    SciTech Connect

    Bertram, Jason

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  4. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339

  5. A maximum entropy model for opinions in social groups

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo

    2014-04-01

    We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consistency between personal beliefs and expressed opinions, and with the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first- and second-order phase transitions, depending on the ratio between this internal consistency and the interaction with others. We find a critical value for the level of internal consistency, below which the personal beliefs of the agents seem to be irrelevant.
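
    A Metropolis run of the kind mentioned above takes only a few lines. The Hamiltonian below is an illustrative mean-field stand-in (binary opinions coupled to fixed internal beliefs and to the average opinion), not the paper's Hamiltonian, which also lets agents move in continuous space.

```python
import numpy as np

def metropolis_opinions(n_agents=100, n_steps=50_000, beta=1.0, J=1.0, H=0.5):
    """Metropolis sampling of E = -(J/n) * sum_{i<j} s_i s_j - H * sum_i s_i b_i."""
    rng = np.random.default_rng(0)
    s = rng.choice([-1, 1], n_agents)     # expressed opinions
    b = rng.choice([-1, 1], n_agents)     # fixed internal beliefs
    for _ in range(n_steps):
        i = rng.integers(n_agents)
        # energy change from flipping opinion i
        dE = 2 * s[i] * (J * (s.sum() - s[i]) / n_agents + H * b[i])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
    return s, b
```

    Sweeping beta and the ratio H/J traces out phase boundaries analogous to the internal-consistency transition described in the abstract.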

  6. Reducing Degeneracy in Maximum Entropy Models of Networks

    NASA Astrophysics Data System (ADS)

    Horvát, Szabolcs; Czabarka, Éva; Toroczkai, Zoltán

    2015-04-01

    Based on Jaynes's maximum entropy principle, exponential random graphs provide a family of principled models that allow the prediction of network properties as constrained by empirical data (observables). However, their use is often hindered by the degeneracy problem, characterized by spontaneous symmetry breaking, where predictions fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave, which is typically the consequence of nonlinear relationships between the constraining observables. Exploiting these nonlinear relationships, we propose a solution to the degeneracy problem for a large class of systems via transformations that render the density of states function log-concave. The effectiveness of the method is demonstrated on examples.

  7. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions, and one can exploit it to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique for finding the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transform of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks, and show how to evaluate this impact on any higher-order spatio-temporal correlation. RC is supported by ERC advanced grant "Bridges"; BC by KEOPS ANR-CONICYT and Renvision; CM by CONICYT-FONDECYT No. 3140572.

  8. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (the z domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series models the series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
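
    The chain from data to AR coefficients to z-plane poles to ME/AR spectrum can be made concrete with a Yule-Walker fit; this is a generic sketch of the standard construction, not code from the paper.

```python
import numpy as np

def ar_maxent_spectrum(x, order, nfreq=512):
    """Yule-Walker AR fit, its z-plane poles, and the ME/AR spectrum."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])             # x_t = sum_k a_k x_{t-k} + e_t
    sigma2 = r[0] - np.dot(a, r[1:])          # prediction-error variance
    # poles: roots of z^p - a_1 z^(p-1) - ... - a_p = 0
    poles = np.roots(np.concatenate(([1.0], -a)))
    # ME/AR spectrum on the unit circle (frequencies in cycles per sample)
    f = np.linspace(0.0, 0.5, nfreq)
    ephase = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
    spectrum = sigma2 / np.abs(1.0 - ephase @ a) ** 2
    return f, spectrum, poles
```

    Each conjugate pole pair contributes one spectral peak: its angle sets the peak frequency and its radius the peak width, which is the pole-configuration reading of the spectrum advocated in the abstract.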

  9. Measurement scale in maximum entropy models of species abundance

    PubMed Central

    Frank, Steven A.

    2010-01-01

    The consistency of the species abundance distribution across diverse communities has attracted widespread attention. In this paper, I argue that the consistency of pattern arises because diverse ecological mechanisms share a common symmetry with regard to measurement scale. By symmetry, I mean that different ecological processes preserve the same measure of information and lose all other information in the aggregation of various perturbations. I frame these explanations of symmetry, measurement, and aggregation in terms of a recently developed extension to the theory of maximum entropy. I show that the natural measurement scale for the species abundance distribution is log-linear: the information in observations at small population sizes scales logarithmically and, as population size increases, the scaling of information grades from logarithmic to linear. Such log-linear scaling leads naturally to a gamma distribution for species abundance, which matches well with the observed patterns. Much of the variation between samples can be explained by the magnitude at which the measurement scale grades from logarithmic to linear. This measurement approach can be applied to the similar problem of allelic diversity in population genetics and to a wide variety of other patterns in biology. PMID:21265915
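
    The log-linear measurement scale maps onto a textbook maximum entropy calculation: constraining both the linear and the logarithmic moments of abundance yields the gamma form cited above. A sketch of that standard derivation:

```latex
\max_{p}\; -\int_0^\infty p(x)\ln p(x)\,dx
\quad\text{s.t.}\quad
\int_0^\infty p\,dx = 1,\;\;
\langle x\rangle = \mu,\;\;
\langle \ln x\rangle = \nu
\;\;\Longrightarrow\;\;
p^{*}(x) \;=\; \frac{\lambda_1^{\lambda_2+1}}{\Gamma(\lambda_2+1)}\, x^{\lambda_2} e^{-\lambda_1 x},
```

    a gamma density with shape λ₂ + 1 and rate λ₁: the ⟨ln x⟩ constraint governs the logarithmic (small-abundance) regime and ⟨x⟩ the linear (large-abundance) regime.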

  10. The Maximum Entropy Principle for Generalized Entropies

    NASA Astrophysics Data System (ADS)

    Tsukada, Makoto

    2008-03-01

    It is well known that Gibbs states and the Gaussian distribution are characterized by the maximum entropy principle. In this paper we discuss probability distributions which maximize generalized entropies, including Rényi's and Tsallis's.

  11. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  12. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
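
    A minimal numeric illustration of the idea (a hypothetical six-sided-die example, not from the paper): treat the constrained mean as Gaussian-uncertain and push that uncertainty through the classic MaxEnt solution.

```python
import numpy as np
from scipy.optimize import brentq

vals = np.arange(1, 7)  # die faces

def maxent_die(mean):
    """Classic MaxEnt: p_i ~ exp(lam * i), with lam chosen to hit the given mean."""
    def mean_gap(lam):
        w = np.exp(lam * vals)
        return np.dot(vals, w) / w.sum() - mean
    lam = brentq(mean_gap, -10.0, 10.0)       # valid for 1 < mean < 6
    w = np.exp(lam * vals)
    return w / w.sum()

# Generalized MaxEnt: the constraint value is itself uncertain (Gaussian here),
# so the point solution becomes a distribution over MaxEnt distributions.
rng = np.random.default_rng(0)
solutions = np.array([maxent_die(m) for m in rng.normal(4.5, 0.1, 1000)])
p_mean, p_sd = solutions.mean(axis=0), solutions.std(axis=0)
```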

  13. Maximum Entropies Copulas

    NASA Astrophysics Data System (ADS)

    Pougaza, Doriano-Boris; Mohammad-Djafari, Ali

    2011-03-01

    New families of copulas are obtained in a two-step process: first, the inverse problem of finding a joint distribution with given marginals is posed as the constrained maximization of some entropy (Shannon, Rényi, Burg, Tsallis-Havrda-Charvát); then Sklar's theorem is used to define the corresponding copula.

  14. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models

    PubMed Central

    Stein, Richard R.; Marks, Debora S.; Sander, Chris

    2015-01-01

    Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
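
    For the Gaussian (continuous) case reviewed above, the pairwise couplings have a closed form: they are the off-diagonal entries of the negative inverse covariance. A sketch of this mean-field baseline (the regularization strength is illustrative; full inference schemes such as pseudolikelihood go beyond it):

```python
import numpy as np

def mean_field_couplings(X, eps=1e-3):
    """Naive mean-field inversion: interaction matrix from the inverse covariance.

    X: (samples, variables) array. Exact for multivariate Gaussian variables;
    a first-order approximation for categorical data such as sequence alignments."""
    C = np.cov(X, rowvar=False)
    C += eps * np.eye(C.shape[0])   # regularize undersampled covariances
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)        # self-terms are fields, not couplings
    return J
```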

  15. The Research on Chinese Coreference Resolution Based on Maximum Entropy Model and Rules

    NASA Astrophysics Data System (ADS)

    Zhang, Yihao; Guo, Jianyi; Yu, Zhengtao; Zhang, Zhikun; Yao, Xianming

    Coreference resolution is an important research topic in natural language processing, covering the resolution of proper nouns, common nouns and pronouns. In this paper, a coreference resolution algorithm for Chinese noun phrases and pronouns is proposed, based on a maximum entropy model and rules. The maximum entropy model effectively integrates a variety of separate features; rule-based methods are then applied to improve the recall of resolution, and filtering rules remove "noise" to further improve its precision. Experiments show that the F value of the algorithm reaches 85.2% in a closed test and 76.2% in an open test, improvements of about 12.9 and 7.8 percentage points, respectively, over the purely rule-based method.
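
    A "maximum entropy model" in this setting is a conditional log-linear classifier, equivalent to logistic regression over mention-pair features. The sketch below, with invented features and toy data, shows only the model-plus-rule-filter shape of such a pipeline; it is not the paper's feature set or rule inventory.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical mention-pair features:
# [head-string match, sentence distance, gender agreement, number agreement]
X = np.array([[1, 0, 1, 1], [0, 3, 1, 0], [1, 1, 1, 1], [0, 5, 0, 0], [0, 1, 1, 1]])
y = np.array([1, 0, 1, 0, 1])  # 1 = coreferent pair
maxent = LogisticRegression().fit(X, y)

def coreferent(pair, threshold=0.5):
    """MaxEnt score, then a hard rule filter (number disagreement vetoes)."""
    if pair[3] == 0:
        return False
    return maxent.predict_proba(np.array([pair]))[0, 1] >= threshold
```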

  16. The simplest maximum entropy model for collective behavior in a neural network

    NASA Astrophysics Data System (ADS)

    Tkačik, Gašper; Marre, Olivier; Mora, Thierry; Amodei, Dario; Berry, Michael J., II; Bialek, William

    2013-03-01

    Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural ‘thermodynamics’ for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy.
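
    The model constrained only by the spike-count distribution is simple enough to write down in closed form: all C(N, K) patterns with K active cells share probability P(K)/C(N, K). A short sketch of the implied per-pattern energies and the exact entropy (units with k_B = 1 assumed):

```python
import numpy as np
from scipy.special import gammaln

def count_maxent(P_K):
    """MaxEnt model matching only P(K), the probability of K of N cells spiking."""
    N = len(P_K) - 1
    K = np.arange(N + 1)
    log_binom = gammaln(N + 1) - gammaln(K + 1) - gammaln(N - K + 1)
    with np.errstate(divide='ignore'):
        E = log_binom - np.log(P_K)     # energy of any single pattern with K spikes
    # exact entropy of the 2^N-state distribution, computed from P(K) alone
    m = P_K > 0
    S = -np.sum(P_K[m] * (np.log(P_K[m]) - log_binom[m]))
    return E, S
```

    Plotting S against the mean energy as N grows is the "natural thermodynamics" the abstract refers to.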

  17. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.

  18. Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium.

  1. Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed

    NASA Astrophysics Data System (ADS)

    Pitchford, J.; Strager, M.; Riley, A.; Lin, L.; Anderson, J.

    2015-03-01

    We used maximum entropy to model streambank erosion potential (SEP) in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs), and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.

  2. Maximum entropy production in daisyworld

    NASA Astrophysics Data System (ADS)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  3. Alternative Multiview Maximum Entropy Discrimination.

    PubMed

    Chao, Guoqing; Sun, Shiliang

    2016-07-01

    Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions: p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED: whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first solves the optimization problem without the constraint of equal margin posteriors from the two views, and the second then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported. PMID:26111403

  4. Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett

    The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.

  5. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: A National Application.

    PubMed

    Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William

    2016-04-19

    To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside a validation radius rv was performed, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario (ozone observations only) in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the percentage increase in R² is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations. PMID:26998937

  6. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    SciTech Connect

    Urniezius, Renaldas

    2011-03-14

    The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body from the observation data of two attached accelerometers. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time-series data. Data from an earlier autocalibration experiment were then revisited, removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach can be used for online dead-reckoning localization.

  7. On the sufficiency of pairwise interactions in maximum entropy models of networks

    NASA Astrophysics Data System (ADS)

    Nemenman, Ilya; Merchan, Lina

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of some densely interacting networks in certain regimes, and not necessarily as a special property of living systems. This work was supported in part by James S. McDonnell Foundation Grant No. 220020321.

  8. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks

    NASA Astrophysics Data System (ADS)

    Merchan, Lina; Nemenman, Ilya

    2016-03-01

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p>2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of densely interacting networks in certain regimes, and not necessarily as a special property of living systems. By connecting our analysis to the theory of random constraint satisfaction problems, we suggest a reason for why some biological systems may operate in this regime.

  9. Tissue Radiation Response with Maximum Tsallis Entropy

    SciTech Connect

    Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J. C.; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation-damaged cells is currently based on probabilistic assumptions and fitted experimentally for each tumor, radiation type, and set of conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle applied to the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  10. Maximum Entropy Principle for Transportation

    NASA Astrophysics Data System (ADS)

    Bilich, F.; DaSilva, R.

    2008-11-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
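
    For comparison, the constrained formulation referenced above (the doubly constrained, entropy-maximizing trip matrix) can be computed by iterative proportional balancing. The sketch shows that classical construction, which the paper's dependence coefficients are designed to replace; the exponential deterrence function and parameter names are the usual textbook assumptions.

```python
import numpy as np

def entropy_trip_matrix(O, D, cost, beta, n_iter=200):
    """Doubly constrained gravity model T_ij = a_i b_j exp(-beta c_ij).

    O, D: origin and destination trip totals (their sums must be equal);
    cost: (origins, destinations) travel-cost matrix; beta: deterrence parameter."""
    F = np.exp(-beta * np.asarray(cost, dtype=float))
    a = np.ones_like(O, dtype=float)
    for _ in range(n_iter):
        b = D / (F.T @ a)    # enforce column (destination) totals
        a = O / (F @ b)      # enforce row (origin) totals
    return a[:, None] * b[None, :] * F
```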

  11. Economics and Maximum Entropy Production

    NASA Astrophysics Data System (ADS)

    Lorenz, R. D.

    2003-04-01

    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and self-organized criticality with maximum entropy production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  12. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    PubMed

    Asti, Lorenzo; Uguzzoni, Guido; Marcatili, Paolo; Pagnani, Andrea

    2016-04-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models. PMID:27074145

  13. Computational design of hepatitis C vaccines using maximum entropy models and population dynamics

    NASA Astrophysics Data System (ADS)

    Hart, Gregory; Ferguson, Andrew

    Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.

  14. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    PubMed

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapir's native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species indeed has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia. PMID:23253371
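
    At its core, the MaxEnt used in these distribution models fits a Gibbs distribution over grid cells whose feature expectations match those of the presence records. A stripped-down sketch (linear features only, plain L2 shrinkage rather than Maxent's tuned L1 regularization, synthetic inputs assumed):

```python
import numpy as np

def fit_maxent_sdm(features, presence_idx, n_iter=500, lr=0.05, reg=1e-3):
    """Minimal presence-only MaxEnt sketch.

    features: (n_cells, n_features) standardized environmental covariates;
    presence_idx: indices of cells holding occurrence records.
    Returns relative habitat suitability per cell (sums to 1)."""
    target = features[presence_idx].mean(axis=0)  # feature means at presences
    lam = np.zeros(features.shape[1])
    for _ in range(n_iter):
        logits = features @ lam
        p = np.exp(logits - logits.max())
        p /= p.sum()                              # Gibbs distribution over cells
        lam += lr * ((target - p @ features) - reg * lam)
    return p
```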

  15. Predictive Modeling and Mapping of Malayan Sun Bear (Helarctos malayanus) Distribution Using Maximum Entropy

    PubMed Central

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping geographical distributions and potential suitable habitats is species distribution modeling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied to tropical evergreen forests, the species lives within marginal thresholds of bio-climatic variables. On the other hand, the current protected area network within Peninsular Malaysia does not cover most of the sun bear's potentially suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear's population. PMID:23110182

  16. Zipf's law, power laws and maximum entropy

    NASA Astrophysics Data System (ADS)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
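
    The single-constraint argument is short enough to state in full: maximizing the Shannon entropy of a distribution on the positive integers subject only to a fixed mean logarithm gives

```latex
\mathcal{L} \;=\; -\sum_{n\ge 1} p_n \ln p_n \;-\; \mu\Big(\sum_n p_n - 1\Big) \;-\; \lambda\Big(\sum_n p_n \ln n - \chi\Big),
\qquad
\frac{\partial\mathcal{L}}{\partial p_n} = 0
\;\;\Longrightarrow\;\;
p_n \;=\; \frac{n^{-\lambda}}{\zeta(\lambda)},
```

    a pure power law normalized by the Riemann zeta function (convergent for λ > 1), with λ fixed by the constraint ⟨ln n⟩ = χ.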

  17. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  1. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    NASA Astrophysics Data System (ADS)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction, because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has the unique property of attaining its maximum at a fixed point, regardless of the shape of the demand curves, for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes the profit of a trader who negotiates prices with the Rest of the World (a collective opponent) possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World, and in extreme cases questions the very idea of a demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market beyond his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci's classical work and with the search for the quickest algorithm for finding the extremum of a function of one variable.

  2. Maximum Entropy Production Modeling of Evapotranspiration Partitioning on Heterogeneous Terrain and Canopy Cover: advantages and limitations.

    NASA Astrophysics Data System (ADS)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.

    2015-12-01

    Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge to traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from Bowen ratio energy balance (BREB) measurements. MEP-ET transpiration shows remarkable agreement with the sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. MEP-ET soil evaporation, on the other hand, is in good agreement with the BREB estimates regardless of moisture conditions. The experimental design allows plot-scale quantification of evaporation and tree-scale quantification of transpiration. This study confirms for the first time that MEP-ET, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating a plant's ability to regulate transpiration, based on the same measurements of temperature and humidity, the method produces reliable estimates of ET during both wet and dry conditions without compromising its parsimony.

  3. Maximum entropy and drug absorption.

    PubMed

    Charter, M K; Gull, S F

    1991-10-01

    The application of maximum entropy to the calculation of drug absorption rates was introduced in an earlier paper. Here it is developed further, and the whole procedure is presented as a problem in scientific inference to be solved using Bayes' theorem. Blood samples do not need to be taken at equally spaced intervals, and no smoothing, interpolation, extrapolation, or other preprocessing of the data is necessary. The resulting input rate estimates are smooth and physiologically realistic, even with noisy data, and their accuracy is quantified. Derived quantities such as the proportion of the dose absorbed, and the mean and median absorption times, are also obtained, together with their error estimates. There are no arbitrarily valued parameters in the analysis, and no specific functional form, such as an exponential or polynomial, is assumed for the input rate functions. PMID:1783989
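
    A rough numeric analogue of this approach (not the authors' full Bayesian machinery, in which the regularization weight is itself inferred rather than fixed): recover a nonnegative input-rate profile by trading a chi-square misfit against the Skilling entropy, assuming simple one-compartment elimination.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_input_rate(t, conc, sd, kel, alpha=1.0, m=1e-3):
    """Entropy-regularized deconvolution sketch of a drug input-rate profile f(t).

    Model: conc(t_i) = sum_{t_j <= t_i} f(t_j) exp(-kel (t_i - t_j)) dt_j.
    alpha weighs the Skilling entropy S = sum(f - m - f ln(f/m)) against chi^2."""
    t = np.asarray(t, dtype=float)
    dt = np.diff(t, prepend=0.0)
    lag = t[:, None] - t[None, :]
    K = np.exp(-kel * np.clip(lag, 0.0, None)) * (lag >= 0) * dt[None, :]
    def objective(logf):
        f = np.exp(logf)                          # positivity by construction
        S = np.sum(f - m - f * np.log(f / m))
        chi2 = np.sum(((K @ f - conc) / sd) ** 2)
        return 0.5 * chi2 - alpha * S
    res = minimize(objective, np.full(len(t), np.log(m)), method='L-BFGS-B')
    return np.exp(res.x)
```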

  4. Maximum-entropy description of animal movement.

    PubMed

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic. PMID:25871054
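
    A minimal sketch (not the authors' code) of the simplest member of this hierarchy: exact discrete-time simulation of an Ornstein-Uhlenbeck position process, with illustrative parameters tau and sigma.

        import numpy as np

        def simulate_ou(n_steps, dt, tau, sigma, x0=0.0, seed=0):
            """Exact sampling of dx = -(x / tau) dt + sigma dW at spacing dt."""
            rng = np.random.default_rng(seed)
            phi = np.exp(-dt / tau)                           # one-step autocorrelation
            sd = sigma * np.sqrt(tau / 2.0 * (1.0 - phi**2))  # innovation standard deviation
            x = np.empty(n_steps)
            x[0] = x0
            for t in range(1, n_steps):
                x[t] = phi * x[t - 1] + sd * rng.standard_normal()
            return x

        track = simulate_ou(n_steps=1000, dt=1.0, tau=10.0, sigma=1.0)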

  5. Maximum-entropy description of animal movement

    NASA Astrophysics Data System (ADS)

    Fleming, Chris H.; Subaşı, Yiǧit; Calabrese, Justin M.

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  6. Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models

    NASA Astrophysics Data System (ADS)

    Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.

    2015-11-01

    Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.

  7. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    PubMed

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b. PMID:20368253
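
    The two-box MEP calculation is compact enough to reproduce directly. The sketch below uses assumed absorbed solar fluxes (not the paper's planetary values) and finds the inter-box heat flux that maximizes entropy production under a Stefan-Boltzmann energy balance.

        import numpy as np
        from scipy.optimize import minimize_scalar

        SIGMA_SB = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
        S_HOT, S_COLD = 300.0, 150.0     # assumed absorbed fluxes, W m^-2

        def entropy_production(F):
            """Entropy production rate for heat flux F (W m^-2), hot box to cold box."""
            T_hot = ((S_HOT - F) / SIGMA_SB) ** 0.25    # energy balance, hot box
            T_cold = ((S_COLD + F) / SIGMA_SB) ** 0.25  # energy balance, cold box
            return F * (1.0 / T_cold - 1.0 / T_hot)

        res = minimize_scalar(lambda F: -entropy_production(F),
                              bounds=(0.0, S_HOT - 1.0), method='bounded')
        print(f"MEP heat flux: {res.x:.1f} W m^-2")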

  8. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  9. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies

    PubMed Central

    Lorenz, Ralph D.

    2010-01-01

    The ‘two-box model’ of planetary climate is discussed. This model has been used to demonstrate consistency of the equator–pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b. PMID:20368253

  10. Maximum entropy production - Full steam ahead

    NASA Astrophysics Data System (ADS)

    Lorenz, Ralph D.

    2012-05-01

    The application of a principle of Maximum Entropy Production (MEP, or less ambiguously MaxEP) to planetary climate is discussed. This idea suggests that if sufficiently free of dynamical constraints, the atmospheric and oceanic heat flows across a planet may conspire to maximize the generation of mechanical work, or entropy. Thermodynamic and information-theoretic aspects of this idea are discussed. These issues are also discussed in the context of dust devils, convective vortices found in strongly-heated desert areas.

  11. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
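
    The quoted scaling is easy to sanity-check numerically; assuming T_BBN ≈ 1 MeV, M_pl ≈ 1.22 x 10^19 GeV and y_e ≈ 2.9 x 10^-6 (values supplied here, not taken from the abstract), the estimate indeed lands at a few hundred GeV.

        # All quantities in GeV; the three input values are assumptions.
        T_bbn, M_pl, y_e = 1.0e-3, 1.22e19, 2.9e-6
        v_h = T_bbn**2 / (M_pl * y_e**5)
        print(f"v_h ~ {v_h:.0f} GeV")   # roughly 400 GeV, consistent with O(300 GeV)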

  12. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

    The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand, and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability for T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas. PMID:27270395
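
    At its core, Maxent fits a Gibbs distribution over grid cells whose expected environmental features match the means observed at presence sites. The following is a bare-bones sketch of that fit on toy data, using plain gradient ascent rather than the regularized algorithm in the Maxent program.

        import numpy as np

        def fit_maxent(features, presence_idx, lr=0.1, n_iter=2000):
            """features: (n_cells, n_feat) covariate grid; presence_idx: occupied cells."""
            target = features[presence_idx].mean(axis=0)   # empirical feature means
            lam = np.zeros(features.shape[1])
            for _ in range(n_iter):
                logits = features @ lam
                p = np.exp(logits - logits.max())
                p /= p.sum()                               # p_i proportional to exp(lam . f_i)
                lam += lr * (target - p @ features)        # ascend the constraint residual
            return lam, p

        rng = np.random.default_rng(1)
        env = rng.normal(size=(500, 3))                    # toy environmental layers
        lam, suitability = fit_maxent(env, presence_idx=np.arange(20))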

  13. Maximum entropy analysis of cosmic ray composition

    NASA Astrophysics Data System (ADS)

    Nosek, Dalibor; Ebr, Jan; Vícha, Jakub; Trávníček, Petr; Nosková, Jana

    2016-03-01

    We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the superposition model. We present two examples that demonstrate what consequences can be drawn for energy dependent changes in the primary composition.

  14. Deep-sea benthic megafaunal habitat suitability modelling: A global-scale maximum entropy model for xenophyophores

    NASA Astrophysics Data System (ADS)

    Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.

    2014-12-01

    Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of

  15. Soil Moisture and Vegetation Controls on Surface Energy Balance Using the Maximum Entropy Production Model of Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Wang, J.; Parolari, A.; Huang, S. Y.

    2014-12-01

    The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations, predicting surface energy fluxes from net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve the MEP model's ET predictions under water stress, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress functions also provide a means to infer ecosystem influence on the land surface state. Challenges associated with sampling model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.

  16. Joint Modeling of Multiple Social Networks to Elucidate Primate Social Dynamics: I. Maximum Entropy Principle and Network-Based Interactions

    PubMed Central

    Chan, Stephanie; Fushing, Hsieh; Beisner, Brianne A.; McCowan, Brenda

    2013-01-01

    In a complex behavioral system, such as an animal society, the dynamics of the system as a whole represent the synergistic interaction among multiple aspects of the society. We constructed multiple single-behavior social networks for the purpose of approximating, from multiple aspects, a single complex behavioral system of interest: rhesus macaque society. Instead of analyzing these networks individually, we describe a new method for jointly analyzing them in order to gain comprehensive understanding of the system dynamics as a whole. This method of jointly modeling multiple networks is a valuable analytical tool for studying the complex nature of the interaction among multiple aspects of any system. Here we develop a bottom-up, iterative modeling approach based upon the maximum entropy principle. This principle is applied to a multi-dimensional link-based distributional framework, derived by jointly transforming the multiple directed behavioral social network data, to extract patterns of synergistic inter-behavioral relationships. Using a rhesus macaque group as a model system, we jointly modeled and analyzed four different social behavioral networks at two different time points (one stable and one unstable) from a rhesus macaque group housed at the California National Primate Research Center (CNPRC). We report and discuss the inter-behavioral dynamics uncovered by our joint modeling approach with respect to social stability. PMID:23468833

  17. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: An Application for Attainment Demonstration in North Carolina

    PubMed Central

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L.

    2010-01-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of non-attainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State’s ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the State. PMID:20590110

  18. Pareto versus lognormal: A maximum entropy test

    NASA Astrophysics Data System (ADS)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  19. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Schick, L. H.

    This volume contains a selection of papers presented at the Tenth Annual Workshop on Maximum Entropy and Bayesian Methods. The thirty-six papers included cover a wide range of applications in areas such as economics and econometrics, astronomy and astrophysics, general physics, complex systems, image reconstruction, and probability and mathematics. Together they give an excellent state-of-the-art overview of fundamental methods of data analysis.

  20. Inferring global wind energetics from a simple Earth system model based on the principle of maximum entropy production

    NASA Astrophysics Data System (ADS)

    Karkar, S.; Paillard, D.

    2015-03-01

    The question of the total available wind power in the atmosphere is highly debated, as is the effect large scale wind farms would have on the climate. Bottom-up approaches, such as those proposed by wind turbine engineers, often lead to non-physical results (mostly non-conservation of energy), while top-down approaches have proven to give physically consistent results. This paper proposes an original method for calculating mean annual wind energetics in the atmosphere without resorting to heavy numerical integration of the entire dynamics. The proposed method is derived from a model based on the Maximum Entropy Production (MEP) principle, which has proven to describe the annual mean temperature and energy fluxes efficiently, despite its simplicity. Because the atmosphere is represented with only one vertical layer and there is no vertical wind component, the model fails to represent general circulation patterns such as cells or trade winds. Interestingly, however, global energetic diagnostics are well captured by the mere combination of a simple MEP model and a flux inversion method.

  1. Dynamics of the Anderson model for dilute magnetic alloys: A quantum Monte Carlo and maximum entropy study

    SciTech Connect

    Silver, R.N.; Gubernatis, J.E.; Sivia, D.S.; Jarrell, M.

    1990-01-01

    In this article we describe the results of a new method for calculating the dynamical properties of the Anderson model. Quantum Monte Carlo (QMC) simulations generate data for the Matsubara Green's functions in imaginary time. To obtain dynamical properties, one must analytically continue these data to real time. This is an extremely ill-posed inverse problem, similar to the inversion of a Laplace transform from incomplete and noisy data. Our method is a general one, applicable to the calculation of dynamical properties from a wide variety of quantum simulations. We use Bayesian methods of statistical inference to determine the dynamical properties based on both the QMC data and any prior information we may have, such as sum rules, symmetry, and high-frequency limits. This provides a natural means of combining perturbation theory and numerical simulations in order to understand dynamical many-body problems. Specifically, we use the well-established maximum entropy (ME) method for image reconstruction. We obtain the spectral density and transport coefficients over the entire range of model parameters accessible by QMC, with data having much larger statistical error than required by other proposed analytic continuation methods.
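
    A toy version of this ill-posed inversion illustrates the role of the entropy prior. The sketch below uses synthetic data, a flat default model and a fixed regularization weight alpha (all assumptions; the paper's Bayesian treatment selects the regularization more carefully), and recovers a spectrum by minimizing chi^2/2 - alpha*S.

        import numpy as np
        from scipy.optimize import minimize

        tau = np.linspace(0.05, 3.0, 30)          # "imaginary time" grid
        omega = np.linspace(0.01, 10.0, 60)       # real-frequency grid
        dw = omega[1] - omega[0]
        K = np.exp(-np.outer(tau, omega))         # Laplace-type kernel

        A_true = np.exp(-(omega - 3.0) ** 2)      # toy spectral density
        A_true /= A_true.sum() * dw
        noise = 1e-4
        G = K @ A_true * dw + noise * np.random.default_rng(0).standard_normal(tau.size)

        m = np.full_like(omega, 1.0 / (omega[-1] - omega[0]))   # flat default model

        def objective(logA, alpha=1e-3):
            A = np.exp(logA)                                    # enforce positivity
            chi2 = np.sum((K @ A * dw - G) ** 2) / noise**2
            S = np.sum(A - m - A * np.log(A / m)) * dw          # Skilling entropy
            return 0.5 * chi2 - alpha * S

        A_est = np.exp(minimize(objective, np.log(m), method='L-BFGS-B').x)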

  2. Predicting Changes in Macrophyte Community Structure from Functional Traits in a Freshwater Lake: A Test of Maximum Entropy Model

    PubMed Central

    Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te

    2015-01-01

    Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. The results showed that the maxent model can be highly robust (99.8%) in predicting the relative abundance of macrophyte species with observed community-weighted mean (CWM) traits as the constraints, while relatively low (about 30%) with CWM traits fitted from the water depth gradient as the constraints. The measured traits showed notably distinct importance in predicting species abundances, lowest for perennial growth form and highest for leaf dry mass content. For tuber and leaf nitrogen content, there were significant shifts in their effects on species relative abundance, from positive in shallow water to negative in deep water. This result suggests that macrophyte species with tuber organs and greater leaf nitrogen content would become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology. PMID:26167856

  3. Predicting the Current and Future Potential Distributions of Lymphatic Filariasis in Africa Using Maximum Entropy Ecological Niche Modelling

    PubMed Central

    Slater, Hannah; Michael, Edwin

    2012-01-01

    Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk of LF may range from 543 to 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future, depending on the climate scenario used and the thresholds applied to signify infection presence. PMID:22359670

  4. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging epidemic infectious disease in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study develops a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks on dengue case variation under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  5. Efficient maximum entropy algorithms for electronic structure

    SciTech Connect

    Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.

    1996-04-01

    Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). If limited statistical accuracy and energy resolution are acceptable, they provide linear scaling methods for the calculation of physical properties involving large numbers of eigenstates such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for molecular dynamics. KPM provides a uniform approximation to a DOS, with resolution inversely proportional to the number of Chebyshev moments, while MEM can achieve significantly higher, but non-uniform, resolution at the risk of possible artifacts. This paper emphasizes efficient algorithms.
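
    A compact sketch of the KPM half of this pair, using a dense toy Hamiltonian and simple Lanczos-style damping in place of the standard Jackson kernel: Chebyshev moments are accumulated by stochastic trace estimation and resummed into a density of states.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_mom, n_vec = 200, 64, 10
        A = rng.normal(size=(n, n))
        H = 0.45 * (A + A.T) / np.sqrt(2 * n)     # symmetric, spectrum inside [-1, 1]

        mu = np.zeros(n_mom)
        for _ in range(n_vec):                    # stochastic trace of T_k(H)
            v0 = rng.choice([-1.0, 1.0], size=n)
            vm, vc = v0, H @ v0
            mu[0] += v0 @ v0
            mu[1] += v0 @ vc
            for k in range(2, n_mom):
                vm, vc = vc, 2 * H @ vc - vm      # Chebyshev recursion
                mu[k] += v0 @ vc
        mu /= n_vec * n

        g = np.sinc(np.arange(n_mom) / n_mom)     # damping against Gibbs ringing
        x = np.linspace(-0.99, 0.99, 400)
        T = np.cos(np.outer(np.arange(n_mom), np.arccos(x)))
        coef = np.array([1.0] + [2.0] * (n_mom - 1))
        dos = ((mu * g * coef) @ T) / (np.pi * np.sqrt(1.0 - x**2))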

  6. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.

  7. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  8. Traffic network and distribution of cars: Maximum-entropy approach

    SciTech Connect

    Das, N.C.; Chakrabarti, C.G.; Mazumder, S.K.

    2000-02-01

    An urban transport system plays a vital role in the modeling of the modern cosmopolis. Great emphasis is needed on the proper development of a transport system, particularly the traffic network and flow, to meet possible future demand. There are various mathematical models of traffic network and flow. The role of Shannon entropy in the modeling of traffic network and flow was stressed by Tomlin and Tomlin (1968) and Tomlin (1969). In the present note the authors study the role of the maximum-entropy principle in the solution of an important problem associated with traffic network flow. The maximum-entropy principle, initiated by Jaynes, is a powerful optimization technique for determining the distribution of a random system when only partial or incomplete information or data are available about the system. This principle has since been broadened and extended and has found wide application in different fields of science and technology. In the present note the authors show how Jaynes' maximum-entropy principle, slightly modified, can be successfully applied to determine the flow or distribution of cars in different paths of a traffic network when incomplete information is available about the network.
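
    The core Jaynes calculation referenced here fits in a few lines: maximize entropy subject to normalization and an observed mean travel cost, which yields a Gibbs distribution over routes. The costs and mean below are hypothetical, and the authors' modification of the principle is not reproduced.

        import numpy as np
        from scipy.optimize import brentq

        costs = np.array([10.0, 12.0, 15.0, 20.0])  # hypothetical route travel times
        mean_cost = 13.0                            # assumed observed network average

        def gibbs(beta):
            w = np.exp(-beta * costs)
            return w / w.sum()

        # Choose beta so the maximum-entropy distribution reproduces the mean cost.
        beta = brentq(lambda b: gibbs(b) @ costs - mean_cost, -5.0, 5.0)
        shares = gibbs(beta)                        # fraction of cars on each route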

  9. BIOSMILE: A semantic role labeling system for biomedical verbs using a maximum-entropy model with automatically generated template features

    PubMed Central

    Tsai, Richard Tzong-Han; Chou, Wen-Chi; Su, Ying-Shan; Lin, Yu-Chun; Sung, Cheng-Lung; Dai, Hong-Jie; Yeh, Irene Tzu-Hsuan; Ku, Wei; Sung, Ting-Yi; Hsu, Wen-Lian

    2007-01-01

    Background Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE) systems that incorporate natural language processing (NLP) techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL) is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME) machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. Results To evaluate the performance of BIOSMILE, we conducted two experiments to (1) compare the performance of SRL systems trained on newswire and biomedical corpora; and (2) examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. Conclusion We demonstrate the necessity of using a biomedical proposition bank for training SRL systems in the

  10. Generalization of the k-moment method using the maximum entropy principle. Application to the NBKM and full spectrum SLMB gas radiation models

    NASA Astrophysics Data System (ADS)

    André, Frédéric; Vaillon, Rodolphe

    2012-08-01

    The k-moment method is generalized by applying the maximum entropy principle to obtain several estimates of the k-distribution function on any kind of spectral interval as a function of the first two moments of the absorption coefficient. Corresponding formulations of the blackbody-weighted band-averaged transmission function of a uniform gaseous path are obtained. Different constraints involving the first and second order positive, first order negative and logarithmic moments are introduced, together with a physical interpretation whenever possible. Different sets of these constraints are considered to obtain maximum entropy estimates of the distribution functions: the Dirac, exponential, Gamma, inverse Gaussian and reciprocal inverse Gaussian k-distribution functions. Analytical formulas are provided for each of these distributions and for their associated transmission functions, as functions of the mean and variance of the absorption coefficient. The methodology can be applied to any spectral interval: narrow, wide, the full spectrum, continuous or not. Thus the resulting transmission and cumulative k-distribution functions can be utilized in a large variety of gas radiation models. The k-moment method using the maximum entropy principle is then assessed in the frame of the NBKM and full spectrum SLMB gas radiation models. A series of test cases involving comparisons with reference line-by-line results shows which maximum entropy k-distributions are likely to give the best estimates of narrow band or total emitted intensities, curves-of-growth of the total emission function and full spectrum cumulative k-distribution functions. In particular, the inverse Gaussian and Gamma k-distributions perform very well most of the time.

  11. Maximum entropy distribution of stock price fluctuations

    NASA Astrophysics Data System (ADS)

    Bartiromo, Rosario

    2013-04-01

    In this paper we propose to use the principle of absence of arbitrage opportunities in its entropic interpretation to obtain the distribution of stock price fluctuations by maximizing its information entropy. We show that this approach leads to a physical description of the underlying dynamics as a random walk characterized by a stochastic diffusion coefficient and constrained to a given value of the expected volatility, in this way taking into account the information provided by the existence of an option market. The model is validated by a comprehensive comparison with observed distributions of both price return and diffusion coefficient. Expected volatility is the only parameter in the model and can be obtained by analysing option prices. We give an analytic formulation of the probability density function for price returns which can be used to extract expected volatility from stock option data.

  12. A multiscale maximum entropy moment closure for locally regulated space-time point process models of population dynamics.

    PubMed

    Raghib, Michael; Hill, Nicholas A; Dieckmann, Ulf

    2011-05-01

    The prevalence of structure in biological populations challenges fundamental assumptions at the heart of continuum models of population dynamics based only on mean densities (local or global). Individual-based models (IBMs) were introduced during the last decade in an attempt to overcome this limitation by explicitly following each individual in the population. Although the IBM approach has been quite useful, the capability to follow each individual usually comes at the expense of analytical tractability, which limits the generality of the statements that can be made. For the specific case of spatial structure in populations of sessile (and identical) organisms, space-time point processes with local regulation seem to cover the middle ground between analytical tractability and a higher degree of biological realism. This approach has shown that simplified representations of fecundity, local dispersal and density-dependent mortality weighted by the local competitive environment are sufficient to generate spatial patterns that mimic field observations. Continuum approximations of these stochastic processes try to distill their fundamental properties, and they keep track not only of mean densities, but also of higher order spatial correlations. However, due to the non-linearities involved they result in infinite hierarchies of moment equations. This leads to the problem of finding a 'moment closure'; that is, an appropriate order of (lower order) truncation, together with a method of expressing the highest order density not explicitly modelled in the truncated hierarchy in terms of the lower order densities. We use the principle of constrained maximum entropy to derive a closure relationship for truncation at second order, using normalisation and the product densities of first and second orders as constraints, and apply it to one such hierarchy. The resulting 'maxent' closure is similar to the Kirkwood superposition approximation, or 'power-3' closure, but it is

  13. NOTE FROM THE EDITOR: Bayesian and Maximum Entropy Methods Bayesian and Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Dobrzynski, L.

    2008-10-01

    The Bayesian and Maximum Entropy Methods are now standard routines in various data analyses, irrespective of one's own preference for the more conventional approach based on the so-called frequentist understanding of the notion of probability. It is not the purpose of the Editor to show all achievements of these methods in various branches of science, technology and medicine. In the case of condensed matter physics, most of the oldest examples of Bayesian analysis can be found in the excellent tutorial textbooks by Sivia and Skilling [1] and Bretthorst [2], while applications of the Maximum Entropy Methods were described in 'Maximum Entropy in Action' [3]. On the list of questions addressed one finds such problems as deconvolution and reconstruction of complicated spectra, e.g. counting the number of lines hidden within a spectrum observed with finite resolution, and reconstruction of charge, spin and momentum density distributions from incomplete sets of data. On the theoretical side one finds problems like the estimation of interatomic potentials [4], the application of the MEM to quantum Monte Carlo data [5], the Bayesian approach to inverse quantum statistics [6], and a very general approach to statistical mechanics [7]. Obviously, in spite of the power of the Bayesian and Maximum Entropy Methods, not everything can be solved in a unique way by these particular methods of analysis, and one of the problems often raised concerns not only the uniqueness of a reconstruction of a given distribution (map) but also its accuracy (error maps). In this 'Comments' section we present a few papers showing more recent advances and views, and highlighting some of the aforementioned problems. References [1] Sivia D S and Skilling J 2006 Data Analysis: A Bayesian Tutorial 2nd edn (Oxford: Oxford University Press) [2] Bretthorst G L 1988 Bayesian Spectrum Analysis and Parameter Estimation (Berlin: Springer) [3] Buck B and

  14. Combining experiments and simulations using the maximum entropy principle.

    PubMed

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-02-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124
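
    A common concrete realization of this idea, sketched here with synthetic data rather than any of the papers' specific schemes: reweight simulation frames by exp(lambda * observable) and tune lambda so the reweighted average matches the experimental value. This exponential form is exactly the minimal (maximum-entropy) perturbation of the original ensemble.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(0)
        obs = rng.normal(loc=1.0, scale=0.5, size=5000)  # per-frame observable (toy)
        target = 1.2                                     # "experimental" average

        def weights(lam):
            w = np.exp(lam * (obs - obs.max()))          # shifted for numerical stability
            return w / w.sum()

        lam = brentq(lambda l: weights(l) @ obs - target, -50.0, 50.0)
        w = weights(lam)                                 # maximum-entropy frame weights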

  15. Combining Experiments and Simulations Using the Maximum Entropy Principle

    PubMed Central

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124

  16. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  17. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
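
    The contrast between ground-state (maximum-likelihood) and finite-temperature (maximum-entropy) decoding can be reproduced by exhaustive enumeration on a small instance. The sketch below uses a 10-spin random Ising model in a field with illustrative couplings and beta; a real annealer samples instead of enumerating.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(3)
        n = 10
        J = np.triu(rng.normal(size=(n, n)), 1)   # random couplings, i < j
        h = rng.normal(size=n)                    # random local fields

        states = np.array(list(product([-1, 1], repeat=n)))
        E = -np.einsum('si,ij,sj->s', states, J, states) - states @ h

        ml_bits = states[E.argmin()]              # ground-state (ML) decoding
        beta = 1.0                                # assumed inverse temperature
        p = np.exp(-beta * (E - E.min()))
        p /= p.sum()
        maxent_bits = np.sign(p @ states)         # sign of thermal magnetizations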

  18. Maximum-Entropy Inference with a Programmable Annealer

    NASA Astrophysics Data System (ADS)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  19. A coupled force-restore model of surface temperature and soil moisture using the maximum entropy production model of heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, S.-Y.; Wang, J.

    2016-07-01

    A coupled force-restore model of surface soil temperature and moisture (FRMEP) is formulated by incorporating the maximum entropy production model of surface heat fluxes and including a gravitational drainage term. The FRMEP model, driven by surface net radiation and precipitation, is independent of near-surface atmospheric variables and has reduced sensitivity to uncertainties in model input and parameters compared to the classical force-restore models (FRMs). The FRMEP model was evaluated using observations from two field experiments with contrasting soil moisture conditions. The modeling errors of the FRMEP-predicted surface temperature and soil moisture are lower than those of the classical FRMs forced by observed or bulk-formula-based surface heat fluxes (bias 1 ~ 2°C versus ~4°C, 0.02 m3 m-3 versus 0.05 m3 m-3). The diurnal variations of surface temperature, soil moisture, and surface heat fluxes are well captured by the FRMEP model, as measured by the high correlations between the model predictions and observations (r ≥ 0.84). Our analysis suggests that the drainage term cannot be neglected under wet soil conditions. A 1 year simulation indicates that the FRMEP model captures the seasonal variation of surface temperature and soil moisture with bias less than 2°C and 0.01 m3 m-3 and correlation coefficients of 0.93 and 0.9 with observations, respectively.
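
    For orientation, the classical force-restore update that the FRMEP extends looks like the sketch below; the coefficients and the prescribed ground heat flux are illustrative assumptions, whereas the FRMEP replaces the forcing with MEP-derived fluxes and adds the drainage term for moisture.

        import numpy as np

        OMEGA = 2 * np.pi / 86400.0    # diurnal frequency, s^-1
        C1 = 2.0e-5                    # assumed forcing coefficient, K m^2 J^-1

        def force_restore_step(Ts, Td, G, dt):
            """Surface temperature forced by ground heat flux G, restored toward Td."""
            return Ts + dt * (C1 * G - OMEGA * (Ts - Td))

        Ts, Td = 290.0, 288.0
        for step in range(288):        # one day at 5-minute steps
            G = 100.0 * np.sin(OMEGA * step * 300.0)   # toy diurnal forcing, W m^-2
            Ts = force_restore_step(Ts, Td, G, dt=300.0)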

  20. Twenty-five years of maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  1. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data on trade flows, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares over different types of products to reduce risk. In this paper, we report that export share distribution curves can be derived by maximizing the entropy of shares over different products under a product complexity constraint, once the international market structure (the country-product bipartite network) is given. A maximum entropy model therefore provides a good fit to the empirical data: the data are consistent with maximum entropy subject to a constraint on the expected value of product complexity for each country. One country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052

  2. Maximum entropy criteria applied to signal recovery

    NASA Astrophysics Data System (ADS)

    MacKinnon, Robert F.; Wilmut, Michael J.

    1988-06-01

    A method based on the minimization of cross-entropy is presented for the recovery of signals from noisy data either in the form of time series or images. Finite Fourier transforms are applied to the data and constraints are placed on the magnitude and phase of the Fourier coefficients based on their statistics for noise-only data. The minimization of cross-entropy is achieved through application of well-established functional minimization techniques which allow for further constraints in the spatial, temporal or frequency domain. Derivatives of the entropy function are obtained analytically and the results applied to the cases of correlated noise and of signal perturbations about a mean. Demonstrations of applications to one-dimensional data are presented.

  3. Multi-site, multivariate weather generator using maximum entropy bootstrap

    NASA Astrophysics Data System (ADS)

    Srivastav, Roshan K.; Simonovic, Slobodan P.

    2014-05-01

    Weather generators are increasingly becoming viable alternative models for assessing the effects of future climate change scenarios on water resources systems. In this study, a new multisite, multivariate maximum entropy bootstrap weather generator (MEBWG) is proposed for generating daily weather variables; it is able to mimic both the spatial and the temporal dependence structure, in addition to other historical statistics. The maximum entropy bootstrap (MEB) involves two main steps: (1) random sampling from the empirical cumulative distribution function, with endpoints selected to allow limited extrapolation, and (2) reordering of the random series to respect the rank ordering of the original time series (the temporal dependence structure). To capture the multi-collinear structure between the weather variables and between the sites, we combine orthogonal linear transformation with MEB. Daily weather data, which include precipitation, maximum temperature and minimum temperature from 27 years of record from the Upper Thames River Basin in Ontario, Canada, are used to analyze the ability of the MEBWG-based weather generator. Results indicate that the statistics from the synthetic replicates were not significantly different from the observed data and that the model is able to preserve the 27 CLIMDEX indices very well. The MEBWG model shows better performance in terms of extrapolation and computational efficiency when compared to a multisite, multivariate K-nearest neighbour model.
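
    The two MEB steps named in the abstract translate almost directly into code. Below is a single-series sketch simplified from Vinod's algorithm; the trim fraction and the piecewise-uniform quantile map are assumptions of this sketch.

        import numpy as np

        def meb_replicate(x, rng, trim=0.1):
            """One maximum-entropy bootstrap replicate of the series x."""
            n = len(x)
            order = np.argsort(x)
            xs = x[order]
            z = (xs[:-1] + xs[1:]) / 2.0               # intermediate points
            span = xs[-1] - xs[0]
            grid = np.concatenate([[xs[0] - trim * span], z, [xs[-1] + trim * span]])
            u = np.sort(rng.uniform(size=n))           # step 1: sample smoothed ECDF
            idx = np.minimum((u * n).astype(int), n - 1)
            frac = u * n - idx
            sample = grid[idx] + frac * (grid[idx + 1] - grid[idx])
            out = np.empty(n)
            out[order] = sample                        # step 2: restore original ranks
            return out

        rng = np.random.default_rng(0)
        series = np.cumsum(rng.normal(size=200))
        replicate = meb_replicate(series, rng)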

  4. Maximum entropy production in environmental and ecological systems.

    PubMed

    Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M

    2010-05-12

    The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle. PMID:20368247

  5. Maximum entropy production in environmental and ecological systems

    PubMed Central

    Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M.

    2010-01-01

    The coupled biosphere–atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere–atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle. PMID:20368247

  6. Minimum uncertainty products from the principle of maximum entropy

    NASA Astrophysics Data System (ADS)

    Rajagopal, A. K.; Teitler, S.

    1989-07-01

    The maximum-entropy method is here generalized to obtain many possible extrema of the uncertainty product corresponding to the generalized minimum uncertainty products recently discussed by Lahiri and Menon (LM) [Phys. Rev. A 38, 5412 (1988)]. Unlike the LM work, the present work applies to mixed states and leads to a new annealing algorithm for obtaining the extrema of the entropy functional.

  7. Kirchhoff's loop law and the maximum entropy production principle.

    PubMed

    Zupanović, Pasko; Juretić, Davor; Botrić, Srećko

    2004-11-01

    In contrast to the standard derivation of Kirchhoff's loop law, which invokes the electric potential, we show, for a linear planar electric network in a stationary state at a fixed temperature, that the loop law can be derived from the maximum entropy production principle. This means that the currents in the network branches are distributed in such a way as to achieve the state of maximum entropy production. PMID:15600693

  8. The maximum entropy production principle: two basic questions

    PubMed Central

    Martyushev, Leonid M.

    2010-01-01

    The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today. PMID:20368251

  9. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
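
    The straight-line relationship quoted above can be checked numerically, since the entropy-maximizing density for a fixed pth absolute moment is the generalized Gaussian family exp(-|x/s|^p). A short sketch, assuming SciPy's gennorm parameterization:

    ```python
    import numpy as np
    from scipy.stats import gennorm

    p = 1.5                                        # exponent of the L_p constraint
    for s in (0.5, 1.0, 2.0, 4.0):
        h = gennorm.entropy(p, scale=s)            # differential entropy of exp(-|x/s|^p)
        lp_norm = s * p ** (-1.0 / p)              # (E|X|^p)^(1/p) for this family
        print(f"scale={s:4.1f}  entropy={h:.4f}  entropy - log(Lp) = {h - np.log(lp_norm):.4f}")
    # the last column is constant in s: entropy = log(L_p norm) + const(p)
    ```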

  10. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O2 + O system

    NASA Astrophysics Data System (ADS)

    Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina

    2016-05-01

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV, although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions, while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulations to full nonequilibrium flow calculations.

  11. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O2 + O system.

    PubMed

    Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina

    2016-05-01

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV, although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions, while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulations to full nonequilibrium flow calculations. PMID:27155635

  12. Possible dynamical explanations for Paltridge's principle of maximum entropy production

    SciTech Connect

    Virgo, Nathaniel; Ikegami, Takashi

    2014-12-05

    Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.

  13. A maximum entropy framework for nonexponential distributions

    PubMed Central

    Peterson, Jack; Dixit, Purushottam D.; Dill, Ken A.

    2013-01-01

    Probability distributions having power-law tails are observed in a broad range of social, economic, and biological systems. We describe here a potentially useful common framework. We derive distribution functions for situations in which a "joiner particle" k pays some form of price to enter a community of size N, where costs are subject to economies of scale. Maximizing the Boltzmann–Gibbs–Shannon entropy subject to this energy-like constraint predicts a distribution having a power-law tail; it reduces to the Boltzmann distribution in the absence of economies of scale. We show that the predicted function gives excellent fits to 13 different distribution functions, ranging from friendship links in social networks, to protein–protein interactions, to the severity of terrorist attacks. This approach may give useful insights into when to expect power-law distributions in the natural and social sciences. PMID:24297895
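
    As a hedged illustration of this framework (not the authors' exact cost function), maximizing Shannon entropy subject to a fixed mean of a logarithmic, economies-of-scale cost ln(1 + k/k0) yields a power-law-tailed distribution, which reverts to the exponential (Boltzmann) form when the cost becomes linear:

    ```python
    import numpy as np

    def maxent_powerlaw(kmax, k0, lam):
        """Maxent solution p(k) ∝ (1 + k/k0)^(-lam) for a logarithmic cost."""
        k = np.arange(kmax + 1)
        w = np.exp(-lam * np.log1p(k / k0))   # = (1 + k/k0) ** (-lam)
        return w / w.sum()

    p = maxent_powerlaw(kmax=10_000, k0=10.0, lam=2.5)
    # the tail behaves like k^(-lam): the log-log slope approaches -2.5
    ks = np.array([100, 1000, 10_000])
    print(np.diff(np.log(p[ks])) / np.diff(np.log(ks)))
    ```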

  14. Reconstructing the history of dark energy using maximum entropy

    NASA Astrophysics Data System (ADS)

    Zunckel, Caroline; Trotta, Roberto

    2007-09-01

    We present a Bayesian technique based on a maximum-entropy method to reconstruct the dark energy equation of state (EOS) w(z) in a non-parametric way. This Maximum Entropy (MaxEnt) technique allows one to incorporate relevant prior information while adjusting the degree of smoothing of the reconstruction in response to the structure present in the data. After demonstrating the method on synthetic data, we apply it to current cosmological data, separately analysing Type Ia supernova measurements from the HST/GOODS programme and the first-year Supernova Legacy Survey (SNLS), complemented by cosmic microwave background and baryonic acoustic oscillation data. We find that the SNLS data are compatible with w(z) = -1 at all redshifts 0 <= z <~ 1100, with error bars of the order of 20 per cent for the most constraining choice of priors. The HST/GOODS data exhibit a slight (about 1σ significance) preference for w > -1 at z ~ 0.5 and a drift towards w > -1 at larger redshifts which, however, is not robust with respect to changes in our prior specifications. We employ both a constant EOS prior model and a slowly varying w(z) and find that our conclusions are only mildly dependent on this choice at high redshifts. Our method highlights the danger of employing parametric fits for the unknown EOS, which can potentially miss or underestimate real structure in the data.

  15. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems

    PubMed Central

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-01-01

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann–Gibbs–Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon–Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is, to our knowledge, the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process. PMID:24782541

  16. Quantum collapse rules from the maximum relative entropy principle

    NASA Astrophysics Data System (ADS)

    Hellmann, Frank; Kamiński, Wojciech; Kostecki, Ryszard Paweł

    2016-01-01

    We show that the von Neumann-Lüders collapse rules in quantum mechanics always select the unique state that maximises the quantum relative entropy with respect to the premeasurement state, subject to the constraint that the postmeasurement state has to be compatible with the knowledge gained in the measurement. This way we provide an information theoretic characterisation of quantum collapse rules by means of the maximum relative entropy principle.

  17. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    PubMed Central

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters has quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text into another language. Another consequence of the RGF prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed. PMID:25955175

  18. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    NASA Technical Reports Server (NTRS)

    Hsia, Wei Shen

    1989-01-01

    A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved and tested against a test problem, producing a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed in order to obtain the required projection matrix, the program was successful up to finding the Linear Quadratic Gaussian solution but not beyond. Numerical results, along with computer programs which employed ORACLS, are presented.

  19. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameters' dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and that, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  20. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameters' dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and that, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. PMID:27627406
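
    The two entries above describe fitting pairwise Ising maximum entropy models by log-likelihood ascent, with model expectations estimated by Gibbs sampling. The following is a minimal sketch of that setup on toy binary data: plain steepest ascent, without the paper's rectified metric or posterior sampling. The function names and all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def gibbs_sample(h, J, n_sweeps=200, n_chains=100):
        """Estimate <s_i> and <s_i s_j> under the Ising model by Gibbs sampling."""
        n = len(h)
        s = rng.choice([-1.0, 1.0], size=(n_chains, n))
        for _ in range(n_sweeps):
            for i in range(n):
                field = h[i] + s @ J[i]          # J[i, i] is kept at zero
                p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
                s[:, i] = np.where(rng.uniform(size=n_chains) < p_up, 1.0, -1.0)
        return s.mean(0), (s.T @ s) / n_chains

    def fit(data, n_steps=100, lr=0.1):
        """Steepest ascent on the log-likelihood of a pairwise maxent model."""
        n = data.shape[1]
        m_data, C_data = data.mean(0), (data.T @ data) / len(data)
        h, J = np.zeros(n), np.zeros((n, n))
        for _ in range(n_steps):
            m_model, C_model = gibbs_sample(h, J)
            h += lr * (m_data - m_model)         # gradient: data minus model moments
            dJ = lr * (C_data - C_model)
            np.fill_diagonal(dJ, 0.0)
            J += dJ
        return h, J

    spikes = rng.choice([-1.0, 1.0], size=(500, 5), p=[0.7, 0.3])  # toy "recordings"
    h, J = fit(spikes)
    print(h.round(2))
    ```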

  1. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  2. Chebyshev moment problems: Maximum entropy and kernel polynomial methods

    SciTech Connect

    Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.

    1995-12-31

    Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians: the kernel polynomial method (KPM) and the maximum entropy method (MEM). They are applicable to physical properties involving large numbers of eigenstates, such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for tight-binding molecular dynamics. This paper emphasizes efficient algorithms.
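
    To make the KPM side of this entry concrete, here is a compact sketch (illustrative parameters, standard Jackson damping) that estimates the density of states of a sparse Hamiltonian, rescaled so its spectrum lies in [-1, 1], from Chebyshev moments with a stochastic trace:

    ```python
    import numpy as np
    import scipy.sparse as sp

    def kpm_dos(H, n_moments=200, n_vectors=10, n_energies=400, seed=0):
        """Density of states from Chebyshev moments (spectrum of H in [-1, 1])."""
        rng = np.random.default_rng(seed)
        n = H.shape[0]
        mu = np.zeros(n_moments)
        for _ in range(n_vectors):                # stochastic trace estimate
            r = rng.choice([-1.0, 1.0], size=n)   # random +/-1 probe vector
            t_prev, t_cur = r, H @ r              # T_0(H) r and T_1(H) r
            mu[0] += r @ t_prev
            mu[1] += r @ t_cur
            for m in range(2, n_moments):         # Chebyshev recursion
                t_prev, t_cur = t_cur, 2.0 * (H @ t_cur) - t_prev
                mu[m] += r @ t_cur
        mu /= n_vectors * n
        m_idx = np.arange(n_moments)
        # Jackson kernel damps the Gibbs oscillations of the truncated series
        N = n_moments
        g = ((N - m_idx + 1) * np.cos(np.pi * m_idx / (N + 1))
             + np.sin(np.pi * m_idx / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
        E = np.linspace(-0.99, 0.99, n_energies)
        T = np.cos(np.outer(np.arccos(E), m_idx))   # T_m(E) on the energy grid
        w = np.full(n_moments, 2.0)
        w[0] = 1.0
        return E, (w * g * mu * T).sum(axis=1) / (np.pi * np.sqrt(1.0 - E ** 2))

    # toy example: 1D tight-binding chain, hopping 0.45 so the spectrum fits in (-1, 1)
    n = 2000
    H = sp.diags([0.45, 0.45], [-1, 1], shape=(n, n))
    E, rho = kpm_dos(H)
    print(round(float(rho.sum() * (E[1] - E[0])), 3))  # integrates to ~1 state per site
    ```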

  3. What is the maximum rate at which entropy of a string can increase?

    SciTech Connect

    Ropotenko, Kostyantyn

    2009-03-15

    According to Susskind, a string falling toward a black hole spreads exponentially over the stretched horizon due to repulsive interactions of the string bits. In this paper such a string is modeled as a self-avoiding walk and the string entropy is found. It is shown that the rate at which information/entropy contained in the string spreads is the maximum rate allowed by quantum theory. The maximum rate at which the black hole entropy can increase when a string falls into a black hole is also discussed.

  4. Spatio-spectral Maximum Entropy Method. I. Formulation and Test

    NASA Astrophysics Data System (ADS)

    Bong, Su-Chan; Lee, Jeongwoo; Gary, Dale E.; Yun, Hong Sik

    2006-01-01

    The spatio-spectral maximum entropy method (SSMEM) has been developed by Komm and coworkers in 1997 for use with solar multifrequency interferometric observation. In this paper we further improve the formulation of the SSMEM to establish it as a tool for astronomical imaging spectroscopy. We maintain their original idea that spectral smoothness at neighboring frequencies can be used as an additional a priori assumption in astrophysical problems and that this can be implemented by adding a spectral entropy term to the usual maximum entropy method (MEM) formulation. We, however, address major technical difficulties in introducing the spectral entropy into the imaging problem that are not encountered in the conventional MEM. These include calculation of the spectral entropy in a generally frequency-dependent map grid, simultaneous adjustment of the temperature variables and Lagrangian multipliers in the spatial and spectral domain, and matching the solutions to the observational constraints at a large number of frequencies. We test the performance of the SSMEM in comparison with the conventional MEM.

  5. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In the crowd security research area, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and to supervise crowd behavior. In physics, entropy is used to describe the macro state of a self-organizing system, and a change in entropy indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction). An entropy model was then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd's macro state leads to a change in entropy. The proposed entropy model is more applicable to crowd behavior detection than the order parameter model. By recognizing entropy mutations, it is possible to detect the crowd behavior macro state automatically using cameras. The results provide data support for crowd emergency prevention and manual emergency intervention.
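
    A minimal sketch of the construction described above, with illustrative bin counts: microstates are binned individual velocities (speed and direction), and the macro state is summarized by the Shannon entropy of their empirical distribution, compared against the theoretical maximum.

    ```python
    import numpy as np

    def crowd_entropy(vx, vy, n_speed_bins=8, n_angle_bins=16, max_speed=3.0):
        """Shannon entropy of the binned (speed, direction) microstate distribution."""
        speed = np.hypot(vx, vy)
        angle = np.arctan2(vy, vx)
        hist, _, _ = np.histogram2d(speed, angle, bins=(n_speed_bins, n_angle_bins),
                                    range=[[0.0, max_speed], [-np.pi, np.pi]])
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    h_max = np.log(8 * 16)                  # theoretical maximum over the bins
    rng = np.random.default_rng(0)
    vx_d, vy_d = rng.normal(size=(2, 500))                       # disordered crowd
    vx_o, vy_o = np.full(500, 1.0), rng.normal(0.0, 0.05, 500)   # ordered: common heading
    print(crowd_entropy(vx_d, vy_d) / h_max, crowd_entropy(vx_o, vy_o) / h_max)
    ```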

  6. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
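
    The Burg algorithm itself is compact enough to sketch. The following is a standard textbook implementation of Burg maximum entropy spectral estimation (not the instrument pipeline used in the study), applied to a toy two-tone signal:

    ```python
    import numpy as np

    def burg_ar(x, order):
        """Burg's method: AR coefficients a (a[0] = 1) and residual variance E."""
        x = np.asarray(x, dtype=float)
        ef, eb = x.copy(), x.copy()       # forward and backward prediction errors
        a = np.array([1.0])
        E = x @ x / len(x)
        for _ in range(order):
            efp, ebp = ef[1:], eb[:-1]
            k = -2.0 * (efp @ ebp) / (efp @ efp + ebp @ ebp)   # reflection coefficient
            ef, eb = efp + k * ebp, ebp + k * efp              # error updates
            a = np.concatenate([a, [0.0]])
            a = a + k * a[::-1]                                # Levinson recursion
            E *= 1.0 - k * k
        return a, E

    def burg_psd(x, order, n_freq=512):
        """Maximum entropy (all-pole) power spectral density on [0, pi]."""
        a, E = burg_ar(x, order)
        w = np.linspace(0.0, np.pi, n_freq)
        A = np.polyval(a[::-1], np.exp(-1j * w))   # A(w) = sum_m a_m e^{-i w m}
        return w, E / np.abs(A) ** 2

    rng = np.random.default_rng(0)
    t = np.arange(256)
    sig = np.sin(0.30 * np.pi * t) + np.sin(0.33 * np.pi * t) + 0.1 * rng.normal(size=256)
    w, psd = burg_psd(sig, order=24)
    print(w[np.argmax(psd)] / np.pi)   # dominant peak near 0.30 or 0.33
    ```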

  7. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Inhaled 222Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater contributes to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation such that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different geological spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model results in a leave-one-out cross-validation of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled 222Rn concentrations show variability among Intrusive Felsic geological formations, likely due to average bedrock 238U defined on the basis of overlying stream-sediment 238U concentrations, which constitute widely distributed and consistently analyzed point-source data.

  8. Inverse spin glass and related maximum entropy problems.

    PubMed

    Castellana, Michele; Bialek, William

    2014-09-12

    If we have a system of binary variables and we measure the pairwise correlations among these variables, then the least structured or maximum entropy model for their joint distribution is an Ising model with pairwise interactions among the spins. Here we consider inhomogeneous systems in which we constrain, for example, not the full matrix of correlations, but only the distribution from which these correlations are drawn. In this sense, what we have constructed is an inverse spin glass: rather than choosing coupling constants at random from a distribution and calculating correlations, we choose the correlations from a distribution and infer the coupling constants. We argue that such models generate a block structure in the space of couplings, which provides an explicit solution of the inverse problem. This allows us to generate a phase diagram in the space of (measurable) moments of the distribution of correlations. We expect that these ideas will be most useful in building models for systems that are nonequilibrium statistical mechanics problems, such as networks of real neurons. PMID:25260004

  9. A maximum entropy method for MEG source imaging

    SciTech Connect

    Khosla, D.; Singh, M.

    1996-12-31

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated into the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.

  10. Nonequilibrium thermodynamics and maximum entropy production in the Earth system

    NASA Astrophysics Data System (ADS)

    Kleidon, Axel

    2009-02-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  11. Metabolic networks evolve towards states of maximum entropy production.

    PubMed

    Unrean, Pornkamol; Srienc, Friedrich

    2011-11-01

    A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state, we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased, together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state in which the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. PMID:21903175

  12. Optical and terahertz spectra analysis by the maximum entropy method.

    PubMed

    Vartiainen, Erik M; Peiponen, Kai-Erik

    2013-06-01

    Phase retrieval is one of the classical problems in various fields of physics, including X-ray crystallography, astronomy and spectroscopy. It arises when only an amplitude measurement of the electric field can be made while both the amplitude and the phase of the field are needed for obtaining the desired material properties. In optical and terahertz spectroscopy in particular, phase retrieval is a one-dimensional problem, which is considered unsolvable in general. Nevertheless, an approach utilizing the maximum entropy principle has proven to be a feasible tool in various applications of linear and nonlinear optical spectroscopy, as well as terahertz spectroscopy, where the one-dimensional phase retrieval problem arises. In this review, we focus on phase retrieval using the maximum entropy method in various spectroscopic applications. We review the theory behind the method and illustrate through examples why and how the method works, as well as discuss its limitations. PMID:23660584

  13. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    SciTech Connect

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach, combined with a rich set of features, produced results that are significantly better than the baseline and achieve the highest F-score for the fine-grained English All-Words subtask.
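
    A maximum entropy classifier over lexical context features is equivalent to multinomial logistic regression, so the approach can be sketched in a few lines. The mini-corpus below is invented for illustration, and the paper's Information Gain dimension reduction is omitted:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # toy sense-labeled contexts for the ambiguous word "bank"
    contexts = [
        "deposited money in the bank before noon",
        "the bank approved the small business loan",
        "fishing from the grassy bank of the river",
        "the river bank eroded after the flood",
    ]
    senses = ["bank/finance", "bank/finance", "bank/river", "bank/river"]

    # bag-of-context features fed to a maxent (logistic regression) classifier
    clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(contexts, senses)
    print(clf.predict(["she opened an account at the bank"]))
    ```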

  14. Fast Forward Maximum entropy reconstruction of sparsely sampled data

    NASA Astrophysics Data System (ADS)

    Balsgart, Nicholas M.; Vosegaard, Thomas

    2012-10-01

    We present an analytical algorithm using fast Fourier transformations (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it requires one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, sampling only 75 out of 512 complex points (15% sampling), would lack (512 - 75) × 2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ~450 times faster in this case, since it requires only two FTs. This allows the computational time to be reduced from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where the original algorithm required days of CPU time on high-performance computing clusters, while the new algorithm requires only a few minutes of calculation on a regular laptop computer.
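
    The flavor of the two-FT gradient trick can be sketched under simplifying assumptions (a plain 1D FFT, entropy of raw magnitudes, none of the normalization details of the published FM procedure): one transform produces the spectrum, and a single additional transform of the entropy derivative yields the gradient with respect to every time-domain point at once, from which the missing-point entries would be read off.

    ```python
    import numpy as np

    def entropy_and_gradient(x):
        """Spectral entropy S = -sum |X_k| ln |X_k| and dS/dx via one extra FFT."""
        X = np.fft.fft(x)                          # FT no. 1: the spectrum
        mag = np.abs(X) + 1e-12                    # guard against log(0)
        S = -(mag * np.log(mag)).sum()
        dS_dmag = -(np.log(mag) + 1.0)             # one entropy-derivative evaluation
        # chain rule: d|X_k|/dx_j = Re(conj(X_k)/|X_k| * exp(-2*pi*i*j*k/n)),
        # so the full gradient is the real part of one more FFT
        grad = np.fft.fft(dS_dmag * X.conj() / mag).real   # FT no. 2
        return S, grad

    x = np.random.default_rng(0).normal(size=64)
    S, g = entropy_and_gradient(x)
    eps = 1e-6
    e3 = np.zeros_like(x)
    e3[3] = eps
    print(g[3], (entropy_and_gradient(x + e3)[0] - S) / eps)   # should agree closely
    ```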

  15. Spectrum unfolding in X-ray spectrometry using the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Fernandez, Jorge E.; Scot, Viviana; Di Giulio, Eugenio

    2014-02-01

    The solution of the unfolding problem is an ever-present issue in X-ray spectrometry. The maximum entropy technique solves this problem by taking advantage of some known a priori physical information and by ensuring an outcome with only positive values. This method is implemented in MAXED (MAXimum Entropy Deconvolution), a software code contained in the package UMG (Unfolding with MAXED and GRAVEL) developed at PTB and distributed by the NEA Data Bank. The package also contains the code GRAVEL (used to estimate the precision of the solution). This article introduces the new code UMESTRAT (Unfolding Maximum Entropy STRATegy), which applies a semi-automatic strategy to solve the unfolding problem by using a suitable combination of MAXED and GRAVEL for applications in X-ray spectrometry. Some examples of the use of UMESTRAT are shown, demonstrating its capability to remove detector artifacts from the measured spectrum consistently with the model used for the detector response function (DRF).

  16. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.

  17. Nuclear-weighted X-ray maximum entropy method - NXMEM.

    PubMed

    Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens; Brummerstedt Iversen, Bo

    2015-01-01

    Subtle structural features such as disorder and anharmonic motion may be accurately characterized from nuclear density distributions (NDDs). As a viable alternative to neutron diffraction, this paper introduces a new approach named the nuclear-weighted X-ray maximum entropy method (NXMEM) for reconstructing pseudo NDDs. It calculates an electron-weighted nuclear density distribution (eNDD), exploiting the fact that X-ray diffraction delivers data of superior quality, requires smaller sample volumes and has higher availability. NXMEM is tested on two widely different systems: PbTe and Ba8Ga16Sn30. The first compound, PbTe, possesses a deceptively simple crystal structure on the macroscopic level that is unable to account for its excellent thermoelectric properties. The key mechanism involves local distortions, and the capability of NXMEM to probe this intriguing feature is established with simulated powder diffraction data. In the second compound, Ba8Ga16Sn30, disorder among the Ba guest atoms is analysed with both experimental and simulated single-crystal diffraction data. In all cases, NXMEM outperforms the maximum entropy method by substantially enhancing the nuclear resolution. The induced improvements correlate with the amount of available data, rendering NXMEM especially powerful for powder and low-resolution single-crystal diffraction. The NXMEM procedure can be implemented in existing software and facilitates widespread characterization of disorder in functional materials. PMID:25537384

  18. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing the computational complexity of tomographic PIV. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical comparison of the computational performance of MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.

  19. Test images for the maximum entropy image restoration method

    NASA Technical Reports Server (NTRS)

    Mackey, James E.

    1990-01-01

    One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the Sun in a variety of wavelengths and circumstances. In no case is the collected data free from the influence of the design and operation of the data-gathering instrument, or from the ever-present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of constructing such images is presented.

  20. LIBOR troubles: Anomalous movements detection based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR's integrity, leading surveillance authorities to investigate banks' behavior. These procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  1. Verification and validation of the maximum entropy method of moment reconstruction of energy dependent neutron flux

    NASA Astrophysics Data System (ADS)

    Crawford, Douglas Spencer

    Verification and validation of reconstructed neutron flux based on the maximum entropy method is presented in this paper. The verification is carried out by comparing the neutron flux spectrum from the maximum entropy method with Monte Carlo N-Particle 5 version 1.40 (MCNP5) and Attila-7.1.0-beta (Attila). A spherical 100% 235U critical assembly is modeled as the test case to compare the three methods. The verification error range for the maximum entropy method is 15% to 23%, with MCNP5 taken as the comparison standard. The Attila relative error for the critical assembly is 20% to 35%. Validation is accomplished by comparing against a neutron flux spectrum back-calculated from foil activation measurements performed in the GODIVA experiment. The error range of the reconstructed flux compared to GODIVA is 0%-10%; the error range of the neutron flux spectrum from MCNP5 compared to GODIVA is 0%-20%, and the Attila error range compared to GODIVA is 0%-35%. The maximum entropy method for reconstructing flux is shown to be fast and reliable compared to either Monte Carlo methods (MCNP5) or multi-energy-group (30-group) methods (Attila), with respect to the GODIVA experiment.
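
    The moment-reconstruction idea can be illustrated generically: recover a spectrum of maximum entropy form p(E) ∝ exp(sum_k lam_k E^k) from a few known power moments by minimizing the convex dual. The grid, moment count, and target moments below are invented for illustration and are not the paper's data:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    E = np.linspace(0.0, 1.0, 400)                 # energy grid (arbitrary units)
    dx = E[1] - E[0]
    K = 4                                          # number of moment constraints
    powers = np.vander(E, K + 1, increasing=True)[:, 1:]   # columns E, E^2, ..., E^K

    def dual(lam, moments):
        """Convex dual of the maxent problem: ln Z(lam) - lam . m."""
        logZ = logsumexp(powers @ lam) + np.log(dx)
        return logZ - lam @ moments

    target = np.array([0.5, 0.30, 0.20, 0.145])    # assumed known moments <E^k>
    res = minimize(dual, np.zeros(K), args=(target,), method="BFGS")
    p = np.exp(powers @ res.x)
    p /= p.sum() * dx                              # normalized reconstruction
    print([round(float((E ** k * p).sum() * dx), 3) for k in range(1, K + 1)])
    ```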

  2. Tsallis distribution as a standard maximum entropy solution with ‘tail’ constraint

    NASA Astrophysics Data System (ADS)

    Bercher, J.-F.

    2008-08-01

    We show that Tsallis' distributions can be derived from the standard (Shannon) maximum entropy setting, by incorporating a constraint on the divergence between the distribution and another distribution imagined as its tail. In this setting, we find an underlying entropy which is the Rényi entropy. Furthermore, escort distributions and generalized means appear as a direct consequence of the construction. Finally, the “maximum entropy tail distribution” is identified as a Generalized Pareto Distribution.

  3. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

    The maximum entropy production (MEP) principle holds that nonequilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the rate of potential energy destruction. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and can orchestrate metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage, but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture, constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  4. Local image statistics: maximum-entropy constructions and perceptual salience

    PubMed Central

    Victor, Jonathan D.; Conte, Mary M.

    2012-01-01

    The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics but also of how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for the construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics, including luminance distributions, pair-wise correlations, and higher-order correlations, are explicitly specified and all other statistics are determined implicitly by maximum entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397

  5. Application of the maximum relative entropy method to the physics of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan

    2016-08-01

    It is known that the Maximum relative Entropy (MrE) method can be used both to update and to approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer the magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.

  6. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation, including the effect of parameter uncertainties, are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  7. Entropy maximum in a nonlinear system with the 1/f fluctuation spectrum

    NASA Astrophysics Data System (ADS)

    Koverda, V. P.; Skokov, V. N.

    2011-11-01

    An analysis of control and subordination is carried out for the system of nonlinear stochastic equations describing fluctuations with a 1/f spectrum and interacting nonequilibrium phase transitions. It is shown that the control equation of the system has a distribution function that decreases with increasing argument in the same way as the Gaussian distribution function. Therefore, this function can be used for determining the Gibbs-Shannon informational entropy. The local maximum of this entropy is determined, which corresponds to the tuning of the stochastic equations to criticality and indicates the stability of fluctuations with the 1/f spectrum. The values of the parameter q appearing in the definition of these entropies are determined from the condition that the coordinates of the Gibbs-Shannon entropy maximum coincide with the coordinates of the Tsallis entropy maximum and the Rényi entropy maximum for distribution functions with a power-law dependence.

  8. Maximum entropy principle based estimation of performance distribution in queueing theory.

    PubMed

    He, Dayi; Li, Ran; Huang, Qi; Lei, Ping

    2014-01-01

    In research on queueing systems, it is widespread practice, in order to determine the system state, to assume that the system is stable and that the distributions of the customer arrival and service rates are known. In this study, the queueing system is treated as a black box: no assumptions are made about the distributions of the arrival and service rates, and only the assumption of stability is kept. By applying the principle of maximum entropy, the performance distribution of a queueing system is derived from easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness of fit test, the accuracy and practical generality of the maximum entropy approach are demonstrated. PMID:25207992
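
    A minimal sketch of the computation this implies (not the authors' code): over the states n = 0..capacity, the maximum entropy distribution subject to a known mean number of customers is exponential in form, p(n) ∝ exp(-λn), with the single Lagrange multiplier λ fixed by the mean constraint. Assuming NumPy/SciPy:

      import numpy as np
      from scipy.optimize import brentq

      def maxent_queue_pmf(capacity, mean_n):
          """Maximum entropy pmf over n = 0..capacity given E[n] = mean_n."""
          n = np.arange(capacity + 1)

          def pmf(lam):
              logw = -lam * n
              logw -= logw.max()          # guard against overflow
              w = np.exp(logw)
              return w / w.sum()

          # Solve the mean constraint for the Lagrange multiplier
          lam = brentq(lambda l: pmf(l) @ n - mean_n, -50.0, 50.0)
          return pmf(lam)

      p = maxent_queue_pmf(capacity=20, mean_n=4.0)
      print(p @ np.arange(21))            # recovers the constrained mean, 4.0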

  9. Maximum entropy signal processing in practical NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Sibisi, Sibusiso; Skilling, John; Brereton, Richard G.; Laue, Ernest D.; Staunton, James

    1984-10-01

    NMR spectroscopy is intrinsically insensitive, a frequently serious limitation especially in biochemical applications where sample size is limited and compounds may be too insoluble or unstable for data to be accumulated over long periods. Fourier transform (FT) NMR was developed by Ernst [1] to speed up the accumulation of useful data, dramatically improving the quality of spectra obtained in a given observing time by recording the free induction decay (FID) data directly in time, at the cost of requiring numerical processing. Ernst also proposed that more information could be obtained from the spectrum if the FID was multiplied by a suitable apodizing function before being Fourier transformed. For example (see ref. 2), an increase in sensitivity can result from the use of a matched filter [1], whereas an increase in resolution can be achieved by the use of gaussian multiplication [1,3], application of sine bells [4-8] or convolution difference [9]. These methods are now used routinely in NMR data processing. The maximum entropy method (MEM) [10] is theoretically capable of achieving simultaneous enhancement in both respects [11], and this has been borne out in practice in other fields where it has been applied. However, this technique requires relatively heavy computation. We describe here the first practical application of MEM to NMR, and we analyse ¹³C and ¹H NMR spectra of 2-vinyl pyridine. Compared with conventional spectra, MEM gives considerable suppression of noise, accompanied by significant resolution enhancement. Multiplets in the ¹H spectra are better resolved, leading to improved visual clarity.

  10. 'Maximum' entropy production in self-organized plasma boundary layer: A thermodynamic discussion about turbulent heat transport

    SciTech Connect

    Yoshida, Z.; Mahajan, S. M.

    2008-03-15

    A thermodynamic model of a plasma boundary layer, characterized by enhanced temperature contrasts and “maximum entropy production,” is proposed. The system shows bifurcation if the heat flux entering through the inner boundary exceeds a critical value. The state with a larger temperature contrast (larger entropy production) sustains a self-organized flow. An inverse cascade of energy is proposed as the underlying physical mechanism for the realization of such a heat engine.

  11. Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.

    PubMed

    Ford, Ian J

    2015-11-01

    The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state. PMID:26651681

  12. Approximation of probability density functions by the Multilevel Monte Carlo Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Bierig, Claudio; Chernov, Alexey

    2016-06-01

    We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. New theoretical results are illustrated in numerical examples.
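
    Independently of the multilevel estimator, the moment-matching step itself can be sketched as a convex dual minimization over the Lagrange multipliers. A minimal sketch on a bounded grid, with exact standard normal moments standing in for the Multilevel Monte Carlo estimates:

      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(-4.0, 4.0, 801)
      dx = x[1] - x[0]
      phi = np.vstack([x**k for k in range(1, 5)])   # features x, x^2, x^3, x^4
      mu = np.array([0.0, 1.0, 0.0, 3.0])            # target moments (standard normal)

      def dual(lam):
          # Convex dual of entropy maximization: log Z(lam) - lam . mu
          e = lam @ phi
          m = e.max()
          q = np.exp(e - m)
          Z = q.sum() * dx
          grad = (phi * q).sum(axis=1) * dx / Z - mu  # E_q[phi] - mu, zero at optimum
          return m + np.log(Z) - lam @ mu, grad

      lam = minimize(dual, np.zeros(4), jac=True, method="BFGS").x
      e = lam @ phi
      p = np.exp(e - e.max())
      p /= p.sum() * dx                               # maximum entropy density on the grid
      print((p * x**2).sum() * dx)                    # ~1.0: the second moment is matched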

  13. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  14. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan and leads to significant losses of human lives and property. On average, 3.5 typhoons strike Taiwan every year; Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, characterizing typhoon rainfall types is advantageous when estimating rainfall quantities. This study develops a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) method is used to classify typhoon events, decomposing the standardized rainfall pattern of all stations for each typhoon event into EOFs and principal components (PCs), so that events which vary similarly in time and space can be grouped into similar typhoon types. Next, based on this classification, probability density functions (PDFs) are constructed in space and time by means of the multivariate maximum entropy method, using the first to fourth statistical moments, which gives the probability at each station and each time step. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.

  16. Automatic feature template generation for maximum entropy based intonational phrase break prediction

    NASA Astrophysics Data System (ADS)

    Zhou, You

    2013-03-01

    The prediction of intonational phrase (IP) breaks is important for both the naturalness and intelligibility of Text-to-Speech (TTS) systems. In this paper, we propose a maximum entropy (ME) model to predict IP breaks from unrestricted text, and evaluate various keyword selection approaches in different domains. Furthermore, we design a hierarchical clustering algorithm for automatic generation of feature templates, which minimizes the need for human supervision during ME model training. Results of comparative experiments show that, for the task of IP break prediction, the ME model clearly outperforms classification and regression trees (CART); that the log-likelihood ratio is the best scoring measure for keyword selection; and that, compared with manual templates, the templates automatically generated by our approach greatly improve the F-score of ME-based IP break prediction while significantly reducing the size of the ME model.
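
    For context on the model class: a conditional maximum entropy classifier with indicator feature templates is mathematically equivalent to multinomial logistic regression fit by maximum likelihood. A minimal sketch with hypothetical juncture features (not the paper's templates or data), assuming scikit-learn:

      from sklearn.feature_extraction import DictVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical training data: feature dicts describing each word juncture;
      # label 1 means an intonational phrase break follows the word.
      X = [
          {"pos": "NN", "next_pos": "CC", "punct": ","},
          {"pos": "DT", "next_pos": "NN", "punct": ""},
          {"pos": "VB", "next_pos": "PRP", "punct": ""},
          {"pos": "NN", "next_pos": "IN", "punct": ";"},
      ]
      y = [1, 0, 0, 1]

      # Indicator features + logistic regression = a conditional ME model
      model = make_pipeline(DictVectorizer(), LogisticRegression(C=10.0))
      model.fit(X, y)
      print(model.predict_proba([{"pos": "NN", "next_pos": "CC", "punct": ","}]))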

  17. Maximum Theoretical Efficiency Limit of Photovoltaic Devices: Effect of Band Structure on Excited State Entropy.

    PubMed

    Osterloh, Frank E

    2014-10-01

    The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells, but apart from the bandgap it considers no other semiconductor properties. Here, we show that the maximum conversion efficiency is limited further by the excited state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum confined and molecular structures in the presence of a light harvesting mechanism. PMID:26278444

  18. A maximum entropy kernel density estimator with applications to function interpolation and texture segmentation

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Nikhil; Schonfeld, Dan

    2006-02-01

    In this paper, we develop a new algorithm to estimate an unknown probability density function from a finite data sample using a tree-shaped kernel density estimator. The algorithm formulates an integrated squared error based cost function which minimizes the quadratic divergence between the kernel density and the Parzen density estimate. The cost function reduces to a quadratic programming problem which is minimized within the maximum entropy framework. The maximum entropy principle acts as a regularizer which yields a smooth solution. A smooth density estimate enables better generalization to unseen data and offers distinct advantages in high dimensions and in cases where data are limited. We demonstrate applications of the hierarchical kernel density estimator to function interpolation and texture segmentation problems. When applied to function interpolation, the kernel density estimator improves performance considerably in situations where the posterior conditional density of the dependent variable is multimodal. The kernel density estimator allows flexible nonparametric modeling of textures, which improves performance in texture segmentation algorithms. We also demonstrate the algorithm on a text labeling problem, illustrating its performance in high dimensions. The hierarchical nature of the density estimator enables multiresolution solutions depending on the complexity of the data. The algorithm is fast and has at most quadratic scaling in the number of kernels.

  19. Maximum entropy in a nonlinear system with a 1/f power spectrum

    NASA Astrophysics Data System (ADS)

    Koverda, V. P.; Skokov, V. N.

    2012-01-01

    An analysis of the master-slave hierarchy has been made in a system of nonlinear stochastic equations describing fluctuations with a 1/f spectrum at coupled nonequilibrium phase transitions. It is shown that for this system of stochastic equations there exist different probability distribution functions with power-law (non-Gaussian) and Gaussian tails. The governing equation of the system has a probability distribution function with Gaussian tails, and the distribution functions of governing equations may therefore be used for finding the Gibbs-Shannon entropy. The local maximum of this entropy has been found; it corresponds to the tuning of the parameters of the equations to criticality and points to the stability of fluctuations with a 1/f spectrum. The Tsallis entropy and the Renyi entropy for the probability distribution functions with power-law tails have been calculated. The parameter q, which is included in the determination of these entropies, has been found from the condition that the coordinates of the maximum Gibbs-Shannon entropy coincide with the maxima of the Tsallis and Renyi entropies.

  20. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which encompasses various recognized theoretical distributions, is a suitable curve for estimating the design thickness of sea ice. The method of moments and the empirical curve fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviations introduced by the simplifications made in other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend the particle swarm optimization method for offshore structures mainly influenced by sea ice in winter, but the empirical curve fitting method to reduce cost in the design of temporary and economical buildings.
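
    A bare-bones sketch of the proposed estimation idea: a particle swarm minimizing the squared deviation between the fitted and empirical distribution functions. Since the abstract does not reproduce the four-parameter maximum entropy density, a hypothetical two-parameter Weibull stands in for it here:

      import numpy as np

      rng = np.random.default_rng(2)
      data = np.sort(rng.weibull(2.2, 500) * 1.4)     # hypothetical ice-thickness sample (m)
      ecdf = (np.arange(1, data.size + 1) - 0.44) / (data.size + 0.12)

      def sse(params):                                # sum of squared CDF deviations
          k, lam = params
          if k <= 0 or lam <= 0:
              return np.inf
          return (((1.0 - np.exp(-(data / lam) ** k)) - ecdf) ** 2).sum()

      # Minimal particle swarm over the two parameters
      n, w, c1, c2 = 30, 0.7, 1.5, 1.5
      pos = rng.uniform([0.5, 0.5], [5.0, 5.0], size=(n, 2))
      vel = np.zeros_like(pos)
      pbest, pval = pos.copy(), np.array([sse(p) for p in pos])
      gbest = pbest[pval.argmin()].copy()
      for _ in range(200):
          vel = (w * vel + c1 * rng.random((n, 1)) * (pbest - pos)
                 + c2 * rng.random((n, 1)) * (gbest - pos))
          pos += vel
          val = np.array([sse(p) for p in pos])
          better = val < pval
          pbest[better], pval[better] = pos[better], val[better]
          gbest = pbest[pval.argmin()].copy()
      print(gbest)                                    # should land near the true (2.2, 1.4)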

  1. Maximum Entropy Production and the Evolution of the Biotic Carbon Cycle

    NASA Astrophysics Data System (ADS)

    Kleidon, A.

    2003-12-01

    The MEP hypothesis states that diabatic processes with sufficient degrees of freedom maintain states at which the rate of entropy production is maximized. A common example in climatology is the application of MEP to poleward heat transport, which leads to predicted equator-pole temperature gradients that are consistent with observations. Here the MEP hypothesis is applied to biotic activity as a diabatic process which affects the atmospheric concentration of carbon dioxide (pCO2) and therefore the strength of the Earth's greenhouse effect. It is first shown with a conceptual climate model that there should be a minimum planetary albedo for which entropy production associated with absorption of solar radiation would be at a maximum as a consequence of the competing effects of surface temperature on the extent of snow cover and convective cloud cover. When pCO2 is simulated by a simple carbon cycle model, it is then shown that the application of MEP to biotic activity leads to an insensitivity of simulated surface temperature to long-term changes in solar luminosity. These predicted changes are consistent with the general suggested pattern of Earth system evolution (decreased greenhouse strength and roughly constant surface temperature through time) and share similarity with the Gaia hypothesis.

  2. A Bayes-Maximum Entropy method for multi-sensor data fusion

    SciTech Connect

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  3. Maximum-entropy closures for kinetic theories of neuronal network dynamics.

    PubMed

    Rangan, Aaditya V; Cai, David

    2006-05-01

    We analyze (1 + 1)D kinetic equations for neuronal network dynamics, which are derived via an intuitive closure from a Boltzmann-like equation governing the evolution of a one-particle (i.e., one-neuron) probability density function. We demonstrate that this intuitive closure is a generalization of moment closures based on the maximum-entropy principle. By invoking maximum-entropy closures, we show how to systematically extend this kinetic theory to obtain higher-order kinetic equations and to include coupled networks of both excitatory and inhibitory neurons. PMID:16712338

  4. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting pixels of the same gray level according to the importance of their positions, which is obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The new method not only gives better segmentation results but also has a faster computation time than traditional 2D histogram-based segmentation methods.
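
    For reference, the 1D maximum entropy threshold this builds on (Kapur's criterion) selects the grey level that maximizes the summed entropies of the two histogram segments. A minimal sketch, without the paper's spatial coherence weighting:

      import numpy as np

      def kapur_threshold(hist):
          """1D maximum entropy threshold from a 256-bin grey-level histogram."""
          p = hist / hist.sum()
          c = np.cumsum(p)
          best_t, best_h = 0, -np.inf
          for t in range(1, 255):
              w0, w1 = c[t], 1.0 - c[t]
              if w0 <= 0 or w1 <= 0:
                  continue
              p0 = p[:t + 1][p[:t + 1] > 0] / w0      # background distribution
              p1 = p[t + 1:][p[t + 1:] > 0] / w1      # foreground distribution
              h = -(p0 @ np.log(p0)) - (p1 @ np.log(p1))
              if h > best_h:
                  best_t, best_h = t, h
          return best_t

      rng = np.random.default_rng(1)
      img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(180, 15, 1000)])
      hist, _ = np.histogram(img, bins=256, range=(0, 255))
      print(kapur_threshold(hist))                    # lands between the two modes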

  5. Maximum entropy spectral analysis for circadian rhythms: theory, history and practice

    PubMed Central

    2013-01-01

    There is an array of numerical techniques available to estimate the period of circadian and other biological rhythms. Criteria for choosing a method include accuracy of period measurement, resolution of a signal embedded in noise or of multiple periodicities, sensitivity to the presence of weak rhythms, and robustness in the presence of stochastic noise. Maximum Entropy Spectral Analysis (MESA) has proven itself excellent in all regards. The MESA algorithm fits an autoregressive model to the data and extracts the spectrum from its coefficients. Entropy in this context refers to “ignorance” of the data, and since this is formally maximized, no unwarranted assumptions are made. Computationally, the coefficients are calculated efficiently by solving the Yule-Walker equations in an iterative algorithm. MESA is compared here to other common techniques. It is normal to remove high frequency noise from time series using digital filters before analysis. The Butterworth filter is demonstrated here, and a danger inherent in multiple filtering passes is discussed. PMID:23844660
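
    The computational core described here fits in a few lines: estimate autocovariances, solve the Yule-Walker equations for the autoregressive coefficients, and evaluate the all-poles spectrum. A minimal sketch (a direct solve; production MESA codes typically use Burg's recursion instead):

      import numpy as np

      def mesa_spectrum(x, order, nfreq=512):
          """AR(order) fit via Yule-Walker; returns the all-poles (MESA) spectrum."""
          x = np.asarray(x, float) - np.mean(x)
          n = len(x)
          r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          a = np.linalg.solve(R, r[1:])               # AR coefficients
          sigma2 = r[0] - a @ r[1:]                   # prediction error variance
          f = np.linspace(0.001, 0.5, nfreq)          # frequencies in cycles per sample
          z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
          return f, sigma2 / np.abs(1.0 - z @ a) ** 2

      t = np.arange(240)                              # ten days sampled hourly
      x = np.sin(2 * np.pi * t / 24) + 0.5 * np.random.default_rng(0).normal(size=t.size)
      f, S = mesa_spectrum(x, order=20)
      print(1.0 / f[np.argmax(S)])                    # dominant period, close to 24 h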

  6. Maximum joint entropy and information-based collaboration of automated learning machines

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.; Lary, D. J.

    2012-05-01

    We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention. To accomplish this, an intelligent agent is viewed as a question-asking machine, which is designed by coupling the processes of inference and inquiry to form a model-based learning unit. In order to select maximally-informative queries, the intelligent agent needs to be able to compute the relevance of a question. This is accomplished by employing the inquiry calculus, which is dual to the probability calculus, and extends information theory by explicitly requiring context. Here, we consider the interaction between two question-asking intelligent agents, and note that there is a potential information redundancy with respect to the two questions that the agents may choose to pose. We show that the information redundancy is minimized by maximizing the joint entropy of the questions, which simultaneously maximizes the relevance of each question while minimizing the mutual information between them. Maximum joint entropy is therefore an important principle of information-based collaboration, which enables intelligent agents to efficiently learn together.
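
    The decomposition at work here is H(Q1, Q2) = H(Q1) + H(Q2) − I(Q1; Q2): maximizing the joint entropy of the two questions raises the individual entropies (relevance) while driving down the mutual information (redundancy). A toy computation over a hypothetical joint distribution of answer outcomes:

      import numpy as np

      def entropy(p):
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      # Hypothetical joint distribution over the two agents' answer outcomes
      joint = np.array([[0.25, 0.10],
                        [0.10, 0.55]])
      h1 = entropy(joint.sum(axis=1))                 # H(Q1)
      h2 = entropy(joint.sum(axis=0))                 # H(Q2)
      h12 = entropy(joint.ravel())                    # H(Q1, Q2)
      print(h12, h1 + h2 - h12)                       # joint entropy; redundancy I(Q1;Q2)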

  7. Hierarchical maximum entropy principle for generalized superstatistical systems and Bose-Einstein condensation of light

    NASA Astrophysics Data System (ADS)

    Sob'yanin, Denis Nikolaevich

    2012-06-01

    A principle of hierarchical entropy maximization is proposed for generalized superstatistical systems, which are characterized by the existence of three levels of dynamics. If a generalized superstatistical system comprises a set of superstatistical subsystems, each made up of a set of cells, then the Boltzmann-Gibbs-Shannon entropy should be maximized first for each cell, second for each subsystem, and finally for the whole system. Hierarchical entropy maximization naturally reflects the sufficient time-scale separation between different dynamical levels and allows one to find the distribution of both the intensive parameter and the control parameter for the corresponding superstatistics. The hierarchical maximum entropy principle is applied to fluctuations of the photon Bose-Einstein condensate in a dye microcavity. This principle provides an alternative to the master equation approach recently applied to this problem. The possibility of constructing generalized superstatistics based on a statistics different from the Boltzmann-Gibbs statistics is pointed out.

  8. Determination of zero-coupon and spot rates from treasury data by maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Gzyl, Henryk; Mayoral, Silvia

    2016-08-01

    An interesting and important inverse problem in finance is the determination of spot rates or zero coupon bond prices when the only information available is the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method for treating such problems, which neither impose an analytic form on the rates or bond prices, nor assume a model for the (random) evolution of the yields. The procedure consists of transforming the problem of the determination of the prices of the zero coupon bonds into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.

  9. Maximum entropy decomposition of flux distribution at steady state to elementary modes.

    PubMed

    Zhao, Quanyu; Kurata, Hiroyuki

    2009-01-01

    Enzyme Control Flux (ECF) is a method of correlating enzyme activity and flux distribution. The advantage of ECF is that the measurement integrates proteome data with metabolic flux analysis through Elementary Modes (EMs). However, few methods exist for effectively determining the Elementary Mode Coefficients (EMCs) in cases where no objective biological function is available. We therefore propose a new algorithm implementing the maximum entropy principle (MEP) as an objective function for estimating the EMCs. To demonstrate the feasibility of using the MEP in this way, we compared it with Linear Programming and Quadratic Programming for modeling the metabolic networks of Chinese Hamster Ovary, Escherichia coli, and Saccharomyces cerevisiae cells. The MEP yields the most plausible distribution of EMCs in the absence of any biological hypotheses describing the physiological state of cells, thereby enhancing the prediction accuracy of the flux distribution in various mutants. PMID:19147116
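
    The selection step can be stated compactly: choose nonnegative elementary mode coefficients that reproduce the measured fluxes while maximizing their entropy. A minimal sketch with a hypothetical, underdetermined mode matrix (not the paper's networks), assuming SciPy:

      import numpy as np
      from scipy.optimize import minimize

      # Columns of A are elementary modes; v holds the measured flux constraints
      A = np.array([[1.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])
      v = np.array([1.0, 0.6])

      def neg_entropy(w):                             # objective: maximize -sum w ln w
          w = np.clip(w, 1e-12, None)
          return (w * np.log(w)).sum()

      res = minimize(neg_entropy, np.full(3, 0.3),
                     constraints={"type": "eq", "fun": lambda w: A @ w - v},
                     bounds=[(0, None)] * 3, method="SLSQP")
      print(res.x)                                    # ~[0.3, 0.4, 0.3]: the weight left
                                                      # undetermined is split evenly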

  10. Maximum Entropy of Effective Reaction Theory of Steady Non-ideal Detonation

    NASA Astrophysics Data System (ADS)

    Watt, Simon; Braithwaite, Martin; Byers Brown, William; Falle, Samuel; Sharpe, Gary

    2009-06-01

    According to the theory of Byers Brown, in a steady state detonation the entropy production between the shock and the sonic locus is a maximum in a self-sustaining wave. This has been shown to hold true for all one-dimensional cases. Applied to 2D steady curved detonation waves in a slab or cylindrical stick of explosive, Byers Brown suggested a novel variational approach for maximising the global entropy generation within the detonation driving zone, hence providing the solution of the self-sustaining detonation wave problem. Preliminary applications of such a variational technique, albeit with simplifying assumptions, demonstrate its potential to provide a rapid and accurate solution method for the problem. In this paper, recent progress in the development of the 2D variational technique and validation of the maximum entropy concept are reported. The predictions of the theory are compared with high-resolution numerical simulations and with the predictions of existing Detonation Shock Dynamics theory.

  11. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    NASA Astrophysics Data System (ADS)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of Chinese listed firms’ mergers and acquisitions is provided to demonstrate the feasibility and practicality of the method.

  12. Generalization of the diffusion equation by using the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Jumarie, Guy

    1985-06-01

    By using the so-called maximum entropy principle in information theory, one derives a generalization of the Fokker-Planck-Kolmogorov equation which applies when the first n transition moments of the process are proportional to Δt, while the remaining ones can be neglected.
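
    In form, the resulting equation is presumably the Kramers-Moyal expansion truncated at order n: writing D_k(x) for the coefficient of the k-th transition moment, the generalized equation reads

      \frac{\partial p(x,t)}{\partial t} = \sum_{k=1}^{n} \frac{(-1)^k}{k!}\,\frac{\partial^k}{\partial x^k}\left[ D_k(x)\, p(x,t) \right],

    which reduces to the usual Fokker-Planck-Kolmogorov form for n = 2.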

  13. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.

  14. Maximum entropy, fractal dimension and lacunarity in quantification of cellular rejection in myocardial biopsy of patients submitted to heart transplantation

    NASA Astrophysics Data System (ADS)

    Neves, L. A.; Oliveira, F. R.; Peres, F. A.; Moreira, R. D.; Moriel, A. R.; de Godoy, M. F.; Murta Junior, L. O.

    2011-03-01

    This paper presents a method for the quantification of cellular rejection in endomyocardial biopsies of patients submitted to heart transplant. The model is based on automatic multilevel thresholding, which employs histogram quantification techniques, histogram slope percentage analysis and the calculation of maximum entropy. The structures were quantified with the aid of the multi-scale fractal dimension and lacunarity for the identification of behavior patterns in myocardial cellular rejection in order to determine the most adequate treatment for each case.

  15. Ecosystem functioning and maximum entropy production: a quantitative test of hypotheses

    PubMed Central

    Meysman, Filip J. R.; Bruers, Stijn

    2010-01-01

    The idea that entropy production puts a constraint on ecosystem functioning is quite popular in ecological thermodynamics. Yet, until now, such claims have received little quantitative verification. Here, we examine three ‘entropy production’ hypotheses that have been put forward in the past. The first states that increased entropy production serves as a fingerprint of living systems. The other two hypotheses invoke stronger constraints. The state selection hypothesis states that when a system can attain multiple steady states, the stable state will show the highest entropy production rate. The gradient response principle requires that when the thermodynamic gradient increases, the system's new stable state should always be accompanied by a higher entropy production rate. We test these three hypotheses by applying them to a set of conventional food web models. Each time, we calculate the entropy production rate associated with the stable state of the ecosystem. This analysis shows that the first hypothesis holds for all the food webs tested: the living state always shows increased entropy production over the abiotic state. In contrast, the state selection and gradient response hypotheses break down when the food web incorporates more than one trophic level, indicating that they are not generally valid. PMID:20368259

  16. Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production

    NASA Astrophysics Data System (ADS)

    Belkin, A.; Hubler, A.; Bezryadin, A.

    2015-02-01

    While the behavior of equilibrium systems is well understood, the evolution of nonequilibrium ones is much less clear. Yet, many researchers have suggested that the principle of maximum entropy production is of key importance in complex systems away from equilibrium. Here, we present a quantitative study of large ensembles of carbon nanotubes suspended in a non-conducting non-polar fluid subject to a strong electric field. Being driven out of equilibrium, the suspension spontaneously organizes into an electrically conducting state under a wide range of parameters. Such self-assembly allows the Joule heating and, therefore, the entropy production in the fluid, to be maximized. Curiously, we find that emerging self-assembled structures can start to wiggle. The wiggling takes place only until the entropy production in the suspension reaches its maximum, at which time the wiggling stops and the structure becomes quasi-stable. Thus, we provide strong evidence that the maximum entropy production principle plays an essential role in the evolution of self-organizing systems far from equilibrium.

  17. Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production

    PubMed Central

    Belkin, A.; Hubler, A.; Bezryadin, A.

    2015-01-01

    While the behavior of equilibrium systems is well understood, the evolution of nonequilibrium ones is much less clear. Yet, many researchers have suggested that the principle of maximum entropy production is of key importance in complex systems away from equilibrium. Here, we present a quantitative study of large ensembles of carbon nanotubes suspended in a non-conducting non-polar fluid subject to a strong electric field. Being driven out of equilibrium, the suspension spontaneously organizes into an electrically conducting state under a wide range of parameters. Such self-assembly allows the Joule heating and, therefore, the entropy production in the fluid, to be maximized. Curiously, we find that emerging self-assembled structures can start to wiggle. The wiggling takes place only until the entropy production in the suspension reaches its maximum, at which time the wiggling stops and the structure becomes quasi-stable. Thus, we provide strong evidence that the maximum entropy production principle plays an essential role in the evolution of self-organizing systems far from equilibrium. PMID:25662746

  18. Estimation of Groundwater Radon in North Carolina Using Land Use Regression and Bayesian Maximum Entropy.

    PubMed

    Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L

    2015-08-18

    Radon (²²²Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (²³⁸U), which is ubiquitous in rocks and soils worldwide. Inhaled ²²²Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater also contributes to both the inhalation and ingestion routes. A land use regression (LUR) model for groundwater ²²²Rn with anisotropic geological and ²³⁸U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated ²²²Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater ²²²Rn across North Carolina including prediction uncertainty. The LUR-BME model of groundwater ²²²Rn achieves a leave-one-out cross-validation r² of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled ²²²Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock ²³⁸U defined on the basis of overlying stream-sediment ²³⁸U concentrations, which constitute widely distributed and consistently analyzed point-source data. PMID:26191968

  19. Structural damage assessment using linear approximation with maximum entropy and transmissibility data

    NASA Astrophysics Data System (ADS)

    Meruane, V.; Ortiz-Bernardin, A.

    2015-03-01

    Supervised learning algorithms have been proposed as a suitable alternative to model updating methods in structural damage assessment, with Artificial Neural Networks being the most frequently used. Nevertheless, the slow learning speed and the large number of parameters that need to be tuned in the training stage have been a major bottleneck in their application. This article presents a new algorithm for real-time damage assessment that uses a linear approximation method in conjunction with antiresonant frequencies identified from transmissibility functions. The linear approximation is handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data are processed in a period of time comparable to that of Neural Networks. The performance of the proposed methodology is validated by considering three experimental structures: an eight-degree-of-freedom (DOF) mass-spring system, a beam, and the exhaust system of a car. To demonstrate the potential of the proposed algorithm over existing ones, the obtained results are compared with those of a model updating method based on parallel genetic algorithms and a multilayer feedforward neural network approach.

  20. Maximum entropy and the method of moments in performance evaluation of digital communications systems

    NASA Astrophysics Data System (ADS)

    Kavehrad, Mohsen; Joseph, Myrlene

    1986-12-01

    The maximum entropy criterion for estimating an unknown probability density function from its moments is applied to the evaluation of the average error probability in digital communications. Accurate averages are obtained even when only a few moments are available. The method is stable and its results compare well with those from the powerful and widely used Gauss quadrature rules (GQR) method. For the test cases presented in this work, the maximum entropy method achieved accurate results with typically only a few moments, while the GQR method required many more moments to obtain the same accuracy. The method requires about the same number of moments as techniques based on orthogonal expansions. In addition, it provides an estimate of the probability density function of the target variable in a digital communication application.

  1. Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity

    SciTech Connect

    Ortiz, A; Puso, M A; Sukumar, N

    2009-09-04

    Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
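
    For concreteness, maximum-entropy basis functions in one dimension (in the local max-ent form of Arroyo and Ortiz) can be evaluated with a short Newton iteration on the dual variable. A minimal 1D sketch of the basis evaluation only, not of the meshfree formulation itself:

      import numpy as np

      def maxent_basis(x, nodes, beta):
          """Local maximum-entropy basis functions at point x (1D)."""
          d = nodes - x
          lam = 0.0
          for _ in range(50):                 # Newton iteration on the dual variable
              w = np.exp(-beta * d**2 + lam * d)
              p = w / w.sum()
              g = p @ d                       # constraint residual: sum_a p_a (x_a - x)
              if abs(g) < 1e-12:
                  break
              lam -= g / (p @ d**2 - g**2)    # divide by dg/dlam (a positive variance)
          return p

      nodes = np.linspace(0.0, 1.0, 6)
      p = maxent_basis(0.37, nodes, beta=40.0)
      print(p.sum(), p @ nodes)               # partition of unity; reproduces x = 0.37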

  2. Maximum entropy deconvolution of the optical jet of 3C 273

    NASA Technical Reports Server (NTRS)

    Evans, I. N.; Ford, H. C.; Hui, X.

    1989-01-01

    The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.

  3. Maximum information entropy principle and the interpretation of probabilities in statistical mechanics - a short review

    NASA Astrophysics Data System (ADS)

    Kuić, Domagoj

    2016-05-01

    In this paper an alternative approach to statistical mechanics based on the maximum information entropy principle (MaxEnt) is examined, specifically its close relation with the Gibbs method of ensembles. It is shown that the MaxEnt formalism is the logical extension of the Gibbs formalism of equilibrium statistical mechanics that is entirely independent of the frequentist interpretation of probabilities only as factual (i.e. experimentally verifiable) properties of the real world. Furthermore, we show that, consistently with the law of large numbers, the relative frequencies of the ensemble of systems prepared under identical conditions (i.e. identical constraints) actually correspond to the MaxEnt probabilities in the limit of a large number of systems in the ensemble. This result implies that the probabilities in statistical mechanics can be interpreted, independently of the frequency interpretation, on the basis of the maximum information entropy principle.

  4. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    SciTech Connect

    Barletti, Luigi

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  5. REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.

    SciTech Connect

    UMEDA, T.; MATSUFURU, H.

    2005-07-25

    We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should carry out to ensure the reliability of the results, and also apply them using mock and lattice QCD data.

  6. Maximum entropy inference of seabed attenuation parameters using ship radiated broadband noise.

    PubMed

    Knobles, D P

    2015-12-01

    The received acoustic field generated by a single passage of a research vessel on the New Jersey continental shelf is employed to infer probability distributions for the parameter values representing the frequency dependence of the seabed attenuation and the source levels of the ship. The statistical inference approach employed in the analysis is a maximum entropy methodology. The average value of the error function, needed to uniquely specify a conditional posterior probability distribution, is estimated with data samples from time periods in which the ship-receiver geometry is dominated by either the stern or bow aspect. The existence of ambiguities between the source levels and the environmental parameter values motivates an attempt to partially decouple these parameter values. The main result is the demonstration that parameter values for the attenuation (α and the frequency exponent), the sediment sound speed, and the source levels can be resolved through a model space reduction technique. The multi-step statistical inference approach developed for ship-radiated noise is then tested by processing towed-source data over the same bandwidth and source track to estimate continuous wave source levels that were measured independently with a reference hydrophone on the tow body. PMID:26723313
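
    The inference core can be sketched in a few lines: constraining the average of the error function and maximizing entropy yields a Boltzmann-like conditional posterior, p(θ) ∝ exp(−βE(θ)), with β set by the average error. A toy sketch over a hypothetical one-parameter misfit surface (illustrative values only):

      import numpy as np
      from scipy.optimize import brentq

      alpha = np.linspace(0.1, 1.0, 200)      # candidate attenuation parameter grid
      E = (alpha - 0.45) ** 2 / 0.01 + 1.0    # stand-in for a data-model error function

      def posterior(beta):
          w = np.exp(-beta * (E - E.min()))   # Boltzmann-like weights, overflow-safe
          return w / w.sum()

      E_avg = 1.5                             # assumed estimate of the average error
      beta = brentq(lambda b: posterior(b) @ E - E_avg, 1e-6, 1e3)
      p = posterior(beta)
      print(alpha[np.argmax(p)], p @ E)       # most probable parameter; mean error 1.5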

  7. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    PubMed

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employs the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences, which are insensitive to the time span as well as the lag length used. The empirical results show a unidirectional causality running from energy consumption to carbon emission in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy consumption is a stimulus to carbon emissions. PMID:26282441

  8. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  9. A maximum-entropy method for the planning of conformal radiotherapy.

    PubMed

    Wu, X; Zhu, Y

    2001-11-01

    The maximum entropy method (MEM) is a powerful inverse analysis technique used in many fields of science and engineering to perform tasks such as image reconstruction and processing of nuclear magnetic resonance signals. Unlike other methods, MEM naturally incorporates a priori knowledge of the problem into the optimized cost function. This feature is especially important in radiotherapy planning, because some knowledge is usually available about the stage of tumor development and about the prescription doses, including dose constraints for the surrounding normal organs. Inverse planning is inherently consistent with the ability of MEM to estimate parameters inversely. In this investigation, an entropy function determines the homogeneity of the dose distribution in the planning target volume; a least-squares function is added to the maximum entropy function as a constraint to measure the quality of reconstructed doses in organs at risk; and an iterative Newton-Raphson algorithm searches for the optimal solution. We provide two examples that validate this application of MEM, and the results were compared with manual plans. Although the examples involve conformal radiotherapy, we think MEM can be adapted to optimize intensity-modulated radiation therapy. PMID:11764028
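
    Schematically, the optimized cost presumably combines the two terms described, with d the dose vector, PTV the planning target volume, OAR the organs at risk, and μ a weighting constant:

      F(\mathbf{d}) = \sum_{i \in \mathrm{PTV}} d_i \ln d_i \; + \; \mu \sum_{j \in \mathrm{OAR}} \left( d_j - d_j^{\mathrm{presc}} \right)^2 ,

    where minimizing the first term maximizes the entropy (homogeneity) of the dose in the target and the second penalizes deviations from the prescribed organ-at-risk doses.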

  10. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    SciTech Connect

    Trovato, M.; Reggiani, L.

    2011-12-15

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit ℏ → 0.

  11. Application of a multiscale maximum entropy image restoration algorithm to HXMT observations

    NASA Astrophysics Data System (ADS)

    Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi

    2016-08-01

    This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration with the Hard X-ray Modulation Telescope (HXMT), a collimated scanning X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1–250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work focuses on the application and modification of this method to restore diffuse sources detected in HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is suited to the deconvolution task of HXMT for diffuse source detection, and that the improved method can suppress noise and improve the correlation and signal-to-noise ratio, proving itself the better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014)

  12. A Maximum-Entropy approach for accurate document annotation in the biomedical domain

    PubMed Central

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and moves one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm’s performance is resilient to terms’ ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) on the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with Naive Bayes and Decision Tree classification approaches, and we show that the Maximum Entropy based approach achieved a higher F-measure on both ambiguous and monosemous MeSH terms. PMID:22541593

  13. A homotopy algorithm for synthesizing robust controllers for flexible structures via the maximum entropy design equations

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    1990-01-01

    One well-known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.

  14. Learning probability distributions from smooth observables and the maximum entropy principle: some remarks

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Monasson, Rémi

    2015-09-01

    The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons of the success of MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measures of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.

  15. High resolution VLBI polarisation imaging of AGN with the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Coughlan, Colm P.; Gabuzda, Denise C.

    2016-08-01

    Radio polarisation images of the jets of Active Galactic Nuclei (AGN) can provide a deep insight into the launching and collimation mechanisms of relativistic jets. However, even at VLBI scales, resolution is often a limiting factor in the conclusions that can be drawn from observations. The Maximum Entropy Method (MEM) is a deconvolution algorithm that can outperform the more common CLEAN algorithm in many cases, particularly when investigating structures present on scales comparable to or smaller than the nominal beam size with "super-resolution". A new implementation of the MEM suitable for single- or multiple-wavelength VLBI polarisation observations has been developed and is described here. Monte Carlo simulations comparing the performances of CLEAN and MEM at reconstructing the properties of model images are presented; these demonstrate the enhanced reliability of MEM over CLEAN when images of the fractional polarisation and polarisation angle are constructed using convolving beams that are appreciably smaller than the full CLEAN beam. The results of using this new MEM software to image VLBA observations of the AGN 0716+714 at six different wavelengths are presented, and compared to corresponding maps obtained with CLEAN. MEM and CLEAN maps of Stokes I, the polarised flux, the fractional polarisation and the polarisation angle are compared for convolving beams ranging from the full CLEAN beam down to a beam one-third of this size. MEM's ability to provide more trustworthy polarisation imaging than a standard CLEAN-based deconvolution when convolving beams appreciably smaller than the full CLEAN beam are used is discussed.
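
    As a rough illustration of the MEM objective involved (a toy sketch, not the authors' VLBI polarisation code), one can minimise chi-squared minus an entropy term against a flat default image, using a multiplicative update that preserves positivity; all sizes, fluxes, and weights below are invented:

        # Hedged toy MEM deconvolution: minimise chi^2/2 - alpha*S, where
        # S = -sum I*ln(I/M) is entropy relative to a flat model image M.
        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(0)
        n = 64
        true = np.zeros((n, n)); true[30, 30] = 10.0; true[34, 38] = 5.0
        g = np.arange(n) - n // 2
        beam = np.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / (2 * 3.0 ** 2))
        beam /= beam.sum()                        # symmetric Gaussian "dirty beam"
        dirty = fftconvolve(true, beam, mode="same") \
                + 1e-3 * rng.standard_normal((n, n))

        alpha, eta = 1e-3, 2.0                    # entropy weight, step size
        M = np.full((n, n), true.sum() / n ** 2)  # flat default image
        I = M.copy()
        for _ in range(300):
            resid = fftconvolve(I, beam, mode="same") - dirty
            grad = fftconvolve(resid, beam, mode="same") \
                   + alpha * (np.log(I / M) + 1)  # beam symmetric, so B^T = B
            I *= np.exp(-eta * grad)              # multiplicative step keeps I > 0
        print("MEM peak:", I.max(), "at", np.unravel_index(I.argmax(), I.shape))

    The multiplicative (mirror-descent) update is the natural choice for the entropy geometry, since it enforces the positivity that MEM assumes.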

  16. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  17. Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo

    2015-11-01

    In every X-ray spectroscopy measurement the influence of the detection system causes loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the energy resolution, and, in solid-state detectors (SSD), the charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of the known a priori information and preserving the positivity of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. UMESTRAT has previously demonstrated the capability to resolve characteristic peaks that appeared overlapped in measurements with a Si SSD, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons of the spectrum, which can be easily determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding for three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered a software improvement of the detector resolution.

  18. Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in Korean Peninsular

    NASA Astrophysics Data System (ADS)

    Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.

    2015-12-01

    The number of forest fires, and the accompanying human injuries and physical damage, has increased with more frequent droughts. In this study, forest fire danger zones in Korea are estimated in order to predict and prepare for future forest fire hazard regions. The MaxEnt (Maximum Entropy) model, which estimates the probability distribution of occurrence, is used to estimate the forest fire hazard regions. The MaxEnt model was developed primarily for the analysis of species distributions, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS for the past 5 years (2010-2014) are used as occurrence data for the model, and meteorological, topographic, and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, precipitation of the dry season, annual effective humidity, effective humidity of the dry season, and the aridity index. The result was valid based on the AUC (Area Under the Curve) value (0.805), which is used to assess predictive accuracy in the MaxEnt model. Predicted forest fire locations also corresponded closely with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as the TWI (Topographic Wetness Index) and slope also contributed to forest fire occurrence. As a result, the east coast and the southern part of the Korean peninsula were predicted to have high forest fire risk, whereas high-altitude mountain areas and the west coast appeared to be safe from forest fire. The results of this study are similar to those of former studies, indicating high risks of forest fire in accessible areas and reflecting the climatic characteristics of the east and south in the dry season. To sum up, we estimated the forest fire hazard zone with existing forest fire locations and environment variables and had
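
    Presence/background MaxEnt modelling of this kind is closely related to logistic regression on presence points versus randomly sampled background cells. The toy sketch below (a logistic-regression surrogate, not the MaxEnt package used in the study; all grids and coefficients are synthetic placeholders) shows the basic workflow:

        # Hedged sketch: presence/background fire-risk model via logistic
        # regression; environmental layers below are invented.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n_cells = 5000
        humidity = rng.uniform(20, 90, n_cells)   # e.g. effective humidity (%)
        slope = rng.uniform(0, 45, n_cells)       # terrain slope (degrees)
        X = np.column_stack([humidity, slope])

        # hypothetical fire occurrences: more likely when humidity is low
        p_true = 1 / (1 + np.exp(0.15 * (humidity - 40)))
        presence = rng.random(n_cells) < 0.2 * p_true

        bg = rng.choice(n_cells, 1000, replace=False)   # background sample
        X_fit = np.vstack([X[presence], X[bg]])
        y_fit = np.concatenate([np.ones(presence.sum()), np.zeros(len(bg))])

        model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
        suitability = model.predict_proba(X)[:, 1]      # relative risk index
        print("top-risk cell humidity:", humidity[suitability.argmax()])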

  19. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-07-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  20. Learning Gaussian mixture models with entropy-based criteria.

    PubMed

    Penalver Benavent, Antonio; Escolano Ruiz, Francisco; Saez, Juan Manuel

    2009-11-01

    In this paper, we address the problem of estimating the parameters of Gaussian mixture models. Although the expectation-maximization (EM) algorithm yields the maximum-likelihood (ML) solution, its sensitivity to the selection of the starting parameters is well known and it may converge to the boundary of the parameter space. Furthermore, the resulting mixture depends on the number of selected components, but the optimal number of kernels may be unknown beforehand. We introduce the use of the entropy of the probability density function (pdf) associated with each kernel to measure the quality of a given mixture model with a fixed number of kernels. We propose two methods to approximate the entropy of each kernel and a modification of the classical EM algorithm in order to find the optimum number of components of the mixture. Moreover, we use two stopping criteria: a novel global criterion based on mixture entropy, called the Gaussianity deficiency (GD), and one based on the minimum description length (MDL) principle. Our algorithm, called entropy-based EM (EBEM), starts with a single kernel and performs only splitting, selecting the worst kernel according to the GD. We have successfully tested it in probability density estimation, pattern classification, and color image segmentation. Experimental results improve on those of other state-of-the-art model-order selection methods. PMID:19770090
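
    The entropy quantity underlying this idea is easy to compute: a Gaussian is the maximum entropy density for a fixed covariance, so a kernel whose points have empirical entropy well below that bound is a poor Gaussian fit and a splitting candidate. A small sketch of the bound (illustrative, not the authors' exact GD estimator):

        # Hedged sketch: max-entropy bound H = 0.5*log((2*pi*e)^d * det(cov))
        # for a d-dimensional Gaussian with the given covariance.
        import numpy as np

        def gaussian_entropy(cov):
            d = cov.shape[0]
            sign, logdet = np.linalg.slogdet(cov)
            return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

        rng = np.random.default_rng(0)
        pts = rng.standard_normal((2000, 2)) ** 3   # deliberately non-Gaussian
        cov = np.cov(pts.T)
        print("max-entropy bound for this covariance:", gaussian_entropy(cov))
        # an empirical entropy estimate far below this bound flags the kernel
        # as non-Gaussian, i.e. a candidate for splitting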

  1. A practical computational framework for the multidimensional moment-constrained maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Abramov, Rafail

    2006-01-01

    The maximum entropy principle is a versatile tool for evaluating smooth approximations of probability density functions with the least bias beyond given constraints. In particular, moment-based constraints are a common form of prior information about a statistical state in various areas of science, including that of a forecast ensemble or a climate in atmospheric science. With that in mind, here we present a unified computational framework for an arbitrary number of phase space dimensions and moment constraints for both Shannon and relative entropies, together with a practical, usable convex optimization algorithm based on the Newton method with additional preconditioning and a robust numerical integration routine. This optimization algorithm has already been used in three studies of predictability, and so far has been found capable of producing reliable results in one- and two-dimensional phase spaces with moment constraints of up to order 4. The current work extensively references those earlier studies as practical examples of the applicability of the algorithm developed below.
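
    The core computation is a convex dual problem: with moment constraints, the maximum entropy density has the form p(x) ∝ exp(Σ_k λ_k x^k), and the multipliers minimise log Z(λ) − λ·m. A minimal one-dimensional sketch on a bounded domain (illustrative only; the paper's framework is multidimensional and preconditioned, and the target moments below are invented):

        # Hedged sketch: solve the moment-constrained MaxEnt dual by
        # unconstrained convex minimisation on a finite interval.
        import numpy as np
        from scipy.optimize import minimize

        x = np.linspace(-1.0, 1.0, 2001)             # finite phase-space domain
        powers = np.arange(1, 5)                     # constrain moments 1..4
        m_target = np.array([0.0, 0.3, 0.0, 0.2])    # prescribed moments

        def dual(lam):
            u = (x[:, None] ** powers) @ lam         # sum_k lam_k x^k
            c = u.max()                              # numerical stabilisation
            logZ = c + np.log(np.trapz(np.exp(u - c), x))
            return logZ - lam @ m_target             # convex in lam

        lam = minimize(dual, np.zeros(4), method="BFGS").x
        u = (x[:, None] ** powers) @ lam
        p = np.exp(u) / np.trapz(np.exp(u), x)       # recovered MaxEnt density
        print("recovered moments:", [np.trapz(p * x ** k, x) for k in powers])

    At the minimiser the gradient of the dual vanishes, which is exactly the statement that the model moments match the prescribed ones.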

  2. Use of maximum entropy method with parallel processing machine. [for x-ray object image reconstruction

    NASA Technical Reports Server (NTRS)

    Yin, Lo I.; Bielefeld, Michael J.

    1987-01-01

    The maximum entropy method (MEM) and balanced correlation method were used to reconstruct the images of low-intensity X-ray objects obtained experimentally by means of a uniformly redundant array coded aperture system. The reconstructed images from MEM are clearly superior. However, the MEM algorithm is computationally more time-consuming because of its iterative nature. On the other hand, both the inherently two-dimensional character of images and the iterative computations of MEM suggest the use of parallel processing machines. Accordingly, computations were carried out on the massively parallel processor at Goddard Space Flight Center as well as on the serial processing machine VAX 8600, and the results are compared.

  3. A maximum-entropy approach to the adiabatic freezing of a supercooled liquid.

    PubMed

    Prestipino, Santi

    2013-04-28

    I employ the van der Waals theory of Baus and co-workers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath. PMID:23635151

  4. Optical Spectrum Analysis of Real-Time TDDFT Using the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Toogoshi, M.; Kato, M.; Kano, S. S.; Zempo, Y.

    2014-05-01

    In the calculation of time-dependent density-functional theory in real time, we apply an external field to perturb the optimized electronic structure and follow the time evolution of the dipole moment to calculate the oscillator strength distribution. We solve the time-dependent equation of motion, keeping track of the dipole moment as time-series data. We adopt Burg's maximum entropy method (MEM) to compute the spectrum of the oscillator strength, and apply this technique to several molecules. We find that MEM provides the oscillator strength distribution at high resolution even with half the evolution time required by a simple FFT of the dynamic dipole moment. In this paper we show the effectiveness and efficiency of MEM in comparison with FFT. Not only the total number of time steps, but also the length of the autocorrelation, the lag, plays an important role in improving the resolution of the spectrum.
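
    Burg's MEM fits an autoregressive (all-pole) model directly to the time series and evaluates its power spectrum, which is why it can out-resolve an FFT on short records. A generic textbook implementation of the recursion (our sketch, not the authors' TDDFT code; the two-tone test signal is invented):

        # Hedged sketch: Burg maximum entropy spectral estimation.
        import numpy as np
        from scipy.signal import find_peaks

        def burg_ar(x, order):
            x = np.asarray(x, dtype=float)
            a = np.array([])                    # AR coefficients built up in order
            E = np.dot(x, x) / len(x)           # prediction error power
            f, b = x[1:].copy(), x[:-1].copy()  # forward / backward errors
            for _ in range(order):
                k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
                a = np.concatenate([a + k * a[::-1], [k]])  # Levinson update
                E *= 1.0 - k * k
                f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
            return a, E

        t = np.arange(256)                      # short record of two close tones
        sig = (np.sin(0.30 * t) + 0.8 * np.sin(0.34 * t)
               + 0.1 * np.random.default_rng(2).standard_normal(256))
        a, E = burg_ar(sig, order=30)
        w = np.linspace(0, np.pi, 1000)
        denom = np.abs(1.0 + np.exp(-1j * np.outer(w, np.arange(1, a.size + 1))) @ a) ** 2
        psd = E / denom                         # all-pole (maximum entropy) PSD
        peaks, _ = find_peaks(psd)
        print("detected lines (rad/sample):", np.sort(w[peaks[np.argsort(psd[peaks])[-2:]]]))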

  5. A maximum-entropy approach to the adiabatic freezing of a supercooled liquid

    NASA Astrophysics Data System (ADS)

    Prestipino, Santi

    2013-04-01

    I employ the van der Waals theory of Baus and co-workers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath.

  6. Quantum state tomography with incomplete data: Maximum entropy and variational quantum tomography

    NASA Astrophysics Data System (ADS)

    Gonçalves, D. S.; Lavor, C.; Gomes-Ruggiero, M. A.; Cesário, A. T.; Vianna, R. O.; Maciel, T. O.

    2013-05-01

    Whenever we do not have an informationally complete set of measurements, the estimate of a quantum state cannot be uniquely determined. In this case, among the density matrices compatible with the available data, the one commonly preferred is the one most uncommitted to the missing information. This is the purpose of maximum entropy estimation (MaxEnt) and variational quantum tomography (VQT). Here, we propose a variant of VQT and show its relationship with MaxEnt methods in quantum tomography with an incomplete set of measurements. We prove their equivalence in the case of eigenbasis measurements, and through numerical simulations we demonstrate their similar behavior. Hence, in the modified VQT formulation we have an estimate of a quantum state as unbiased as in MaxEnt, with the benefit that VQT can be solved more efficiently by means of linear semidefinite programs.
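
    The MaxEnt side of this comparison has a simple closed structure: the state takes the Gibbs form ρ(λ) = exp(Σ λ_i O_i)/Tr exp(Σ λ_i O_i), with λ fixed by the measured expectations. A single-qubit sketch where only ⟨σ_z⟩ has been measured (illustrative, not the VQT formulation; the data value is invented):

        # Hedged sketch: MaxEnt state from one measured expectation value.
        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize_scalar

        Z = np.diag([1.0, -1.0])
        m_z = 0.6                                  # hypothetical measured <Z>

        def dual(lam):
            # convex dual: log Tr exp(lam*Z) - lam*m_z; minimiser matches m_z
            return np.log(np.trace(expm(lam * Z)).real) - lam * m_z

        lam = minimize_scalar(dual).x
        rho = expm(lam * Z); rho /= np.trace(rho)
        print("reconstructed <Z>:", np.trace(rho @ Z).real)  # ~0.6
        # off-diagonals vanish: the state stays maximally uncommitted
        # about the unmeasured <X> and <Y>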

  7. Thermodynamic Basis of Budyko Curve for Annual Water Balance: Proportionality Hypothesis and Maximum Entropy Production

    NASA Astrophysics Data System (ADS)

    Wang, Dingbao; Zhao, Jianshi; Tang, Yin; Sivapalan, Murugesu

    2015-04-01

    Recently, Wang and Tang [2014] demonstrated that the validity of the Proportionality Hypothesis extends to the partitioning of precipitation into runoff and evaporation at the annual time scale as well, and that the Budyko Curve could then be seen as the straightforward outcome of the application of the Proportionality Hypothesis to estimate mean annual water balance. In this talk, we go further and demonstrate that the Proportionality Hypothesis itself can be seen as a result of the application of the thermodynamic principle of Maximum Entropy Production (MEP), provided that the conductance coefficients assumed for evaporation and runoff are linearly proportional to their corresponding potential values. In this way, on the basis of this common hydrological assumption, we demonstrate a possible physical (thermodynamic) basis for the Proportionality Hypothesis, and consequently for the Budyko Curve.

  8. Bayesian and maximum entropy methods for fusion diagnostic measurements with compact neutron spectrometers

    NASA Astrophysics Data System (ADS)

    Reginatto, Marcel; Zimbal, Andreas

    2008-02-01

    In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements.

  9. On the stability of the moments of the maximum entropy wind wave spectrum

    SciTech Connect

    Pena, H.G.

    1983-03-01

    The stability of some current wind wave parameters as a function of high-frequency cut-off and degrees of freedom of the spectrum has been numerically investigated when computed in terms of the moments of the wave energy spectrum. A sea surface profile is simulated from a Pierson-Moskowitz wave spectrum and its wave energy spectrum is estimated by the Maximum Entropy Method (MEM). As the degrees of freedom of the MEM spectral estimation are varied, the results show much better stability of the wave parameters as compared to the classical periodogram and correlogram spectral approaches. The stability of wave parameters as a function of high-frequency cut-off is the same as that obtained by the classical techniques.

  10. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions that determine the distribution form and discussed how the constraints affect the distribution function. We speculate that, for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0 the distribution cannot be a pure power law but must carry an exponential cutoff, a feature that may have been ignored in previous studies.
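
    The quoted functional form follows from a standard variational step (our sketch, not reproduced from the paper): maximizing the Shannon entropy subject to normalization and to constraints on the mean and the logarithmic mean of the interval x gives

        \delta\Big[-\int p\ln p\,dx \;-\; \lambda_0\!\int p\,dx \;-\; \lambda_1\!\int x\,p\,dx \;-\; \lambda_2\!\int \ln x\,p\,dx\Big] = 0
        \;\Longrightarrow\; p(x) \propto x^{-\lambda_2}\, e^{-\lambda_1 x},

    i.e. exactly a power law with an exponential cutoff; dropping the constraint on the mean (λ₁ → 0) recovers a pure power law.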

  11. Analysis of a quasi-elastic laser scattering spectrum using the maximum entropy method.

    PubMed

    Tsuyumoto, Isao

    2007-12-01

    We have applied the maximum entropy method (MEM) to the analysis of quasi-elastic laser scattering (QELS) spectra and have established a technique for determining capillary wave frequencies with a higher time resolution than that of the conventional procedure. Although the QELS method has an advantage in time resolution over mechanical methods, it requires the averaging of at least 20-100 power spectra for determining capillary wave frequencies. We find that the MEM analysis markedly improves the S/N ratio of the power spectra, and that averaging the spectra is not necessary for determining the capillary wave frequency, i.e., it can be estimated from one power spectrum. The time resolution of the QELS attains the theoretical limit by using MEM analysis. PMID:18071233

  12. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
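
    A classical concrete instance of entropy-maximizing reconstruction from projections is the multiplicative ART (MART) iteration, whose fixed point maximizes Boltzmann-Shannon entropy among nonnegative images consistent with the data. The sketch below is illustrative only (the paper works with a Fenchel dual and multilevel acceleration instead); the phantom and the two-view geometry are invented:

        # Hedged sketch: MART on a tiny phantom with row-sum and column-sum rays.
        import numpy as np

        n = 16
        truth = np.ones((n, n)); truth[4:9, 5:12] = 4.0

        A = np.zeros((2 * n, n * n))       # projection matrix: two orthogonal views
        for i in range(n):
            A[i, i * n:(i + 1) * n] = 1.0  # horizontal ray i (sum of image row i)
            A[n + i, i::n] = 1.0           # vertical ray i (sum of image column i)
        d = A @ truth.ravel()              # consistent, noise-free projection data

        x = np.ones(n * n)                 # strictly positive starting image
        gamma = 0.5                        # relaxation parameter
        for _ in range(200):
            for j in range(A.shape[0]):
                x *= (d[j] / (A[j] @ x)) ** (gamma * A[j])  # multiplicative update
        print("max projection mismatch:", np.abs(A @ x - d).max())

    Because only 2n ray sums constrain n² pixels, the limit is not the phantom itself but the maximum entropy image consistent with the measured projections.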

  13. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts on multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient, whose functional form is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined through maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  14. Verification and validation of the maximum entropy method for reconstructing neutron flux, with MCNP5, Attila-7.1.0 and the GODIVA experiment

    SciTech Connect

    Douglas S. Crawford; Tony Saad; Terry A. Ring

    2013-03-01

    Verification and validation of reconstructed neutron flux based on the maximum entropy method is presented in this paper. The verification is carried out by comparing the neutron flux spectrum from the maximum entropy method with Monte Carlo N Particle 5 version 1.40 (MCNP5) and Attila-7.1.0-beta (Attila). A spherical 100% 235U critical assembly is modeled as the test case to compare the three methods. The verification error range for the maximum entropy method is 15–21%, where MCNP5 is taken to be the comparison standard. The Attila relative error for the critical assembly is 20–35%. Validation is accomplished by comparison with a neutron flux spectrum back-calculated from foil activation measurements performed in the GODIVA experiment (GODIVA). The error range of the reconstructed flux compared to GODIVA is 0–10%. The error range of the neutron flux spectrum from MCNP5 compared to GODIVA is 0–20%, and the Attila error range compared to GODIVA is 0–35%. The maximum entropy method is shown to be a fast, reliable method compared to either Monte Carlo methods (MCNP5) or multi-energy-group (30-group) methods (Attila), and with respect to the GODIVA experiment.

  15. Yttrium-90 attenuation measurements before and after maximum entropy image restoration

    SciTech Connect

    Kallergi, M.; Abernathy, M.J.; Li, H.D.

    1994-05-01

    Quantitative measurement of Yttrium-90, a pure beta emitter, is important for the in-vivo management of antibody therapy. Gamma camera imaging of bremsstrahlung radiation poses significant problems compared to single photon detection, due to enhanced scattering and photon penetration effects and a poor conversion efficiency, which results in a poor signal-to-noise ratio. For the first time, a maximum entropy neural network is proposed for image resolution restoration. It is based on the system MTF, but it avoids the common inverse problem associated with both the Wiener and Metz filters. The critical requirement for quantitative measurements is the calculation of an effective attenuation coefficient that is reasonably stable over various source sizes and regions-of-interest (ROIs) that fully enclose the images of the sources. We have demonstrated that stable image restoration can be obtained for Yttrium-90 with various source sizes and at varying depths in water tanks of 20 cm in depth. Measurements were made using a single-head gamma camera equipped with a high-energy collimator. This yielded effective attenuation coefficients of 0.124 cm⁻¹ and 0.129 cm⁻¹ for the raw images and restored images, respectively. The results for two spherical sources of 3.5 cm I.D. and 6.0 cm I.D. were within ±5.07% and ±8.84% for the raw and enhanced images, respectively. These results were obtained for ROIs twice the physical size of each source. In conclusion, the results suggest that sources of varying size, at varying depths in an idealized water phantom, can be imaged with improved resolution using the maximum entropy filter. This filter should permit improved identification of tumors during therapeutic treatment protocols and allow an estimation of the total radiation accumulation in the tumor or in critical organ systems.

  16. Adaptive meshless local maximum-entropy finite element method for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Young, D. L.; Hong, H. K.

    2014-01-01

    In this paper, a meshless local maximum-entropy finite element method (LME-FEM) is proposed to solve the 1D Poisson equation and steady-state convection-diffusion problems at various Peclet numbers in both 1D and 2D. By using the local maximum-entropy (LME) approximation scheme to construct the element shape functions in the finite element method (FEM) formulation, additional nodes can be introduced within an element without any mesh refinement to increase the accuracy of the numerical approximation of the unknown function, a procedure similar to conventional p-refinement but without increasing the element connectivity, thereby avoiding ill-conditioned matrices. The resulting LME-FEM preserves several significant characteristics of conventional FEM, such as the Kronecker-delta property on element vertices, partition of unity of the shape functions, and exact reproduction of constant and linear functions. Furthermore, owing to the essential properties of the LME approximation scheme, nodes can be introduced in an arbitrary way while the continuity of the shape function along element edges is maintained. No transition element is needed to connect elements of different orders. The property of arbitrary local refinement makes LME-FEM a numerical method that can adaptively solve various problems for which troublesome local mesh refinement is in general necessary to obtain reasonable solutions. Several numerical examples with dramatically varying solutions are presented to test the capability of the current method. The numerical results show that LME-FEM can obtain much better and more stable solutions than conventional FEM with linear elements.

  17. A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts

    PubMed Central

    Onken, Arno; Dragoi, Valentin; Obermayer, Klaus

    2012-01-01

    Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required in order to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows one to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to the rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of an anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392

  18. Forest Tree Species Distribution Mapping Using Landsat Satellite Imagery and Topographic Variables with the Maximum Entropy Method in Mongolia

    NASA Astrophysics Data System (ADS)

    Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn

    2016-06-01

    Forests are a very important ecosystem and natural resource for living things. Based on forest inventories, governments are able to make decisions to conserve, improve, and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it requires intensive physical labor and the costs are high, especially when surveying remote mountainous regions. A reliable forest inventory can give us more accurate and timely information with which to develop new and efficient approaches to forest management. Remote sensing technology has recently been used for forest investigation at large scales. To produce an informative forest inventory, forest attributes, including tree species, must unavoidably be considered. The aim of this study is to classify forest tree species in Erdenebulgan County, Huwsgul Province, Mongolia, using the Maximum Entropy method. The study area is covered by dense forest, comprising almost 70% of the total territorial extent of Erdenebulgan County, and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was obtained from the Forest Division of the Mongolian Ministry of Nature and Environment as training data, and was also used as ground truth for the accuracy assessment of the tree species classification. Landsat images and the DEM were processed for maximum entropy modeling, and the model was applied in two experiments: the first used Landsat surface reflectance alone for tree species classification, while the second incorporated terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess classification accuracy. Results show that the second one which uses Landsat surface reflectance coupled

  19. Block entropy and quantum phase transition in the anisotropic Kondo necklace model

    SciTech Connect

    Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.

    2010-06-15

    We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins using the density matrix renormalization group method. It was found that the block entropy presents a maximum for each η considered, and, comparing it with the results for the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for giving information about where a quantum phase transition occurs in this model. We observe that the block entropy also presents a maximum at the quantum critical points that are obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; when Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.

  20. Application of maximum-entropy spectral estimation to deconvolution of XPS data. [X-ray Photoelectron Spectroscopy

    NASA Technical Reports Server (NTRS)

    Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.

    1981-01-01

    A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.

  1. Spectral analysis of the Chandler wobble: comparison of the discrete Fourier analysis and the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Brzezinski, A.

    2014-12-01

    The methods of spectral analysis are applied to solve the following two problems concerning the free Chandler wobble (CW): 1) to estimate the CW resonance parameters, the period T and the quality factor Q, and 2) to perform the excitation balance of the observed free wobble. It appears, however, that the results depend on the algorithm of spectral analysis applied. Here we compare the following two algorithms, which are frequently applied for analysis of polar motion data: the classical discrete Fourier analysis and the maximum entropy method corresponding to autoregressive modeling of the input time series. We start from a general description of both methods and of their application to the analysis of Earth orientation observations. Then we compare results of the analysis of the polar motion and the related excitation data.

  2. Entropy Based Modelling for Estimating Demographic Trends.

    PubMed

    Li, Guoqi; Zhao, Daxuan; Xu, Yi; Kuo, Shyh-Hao; Xu, Hai-Yan; Hu, Nan; Zhao, Guangshe; Monterola, Christopher

    2015-01-01

    In this paper, an entropy-based method is proposed to forecast the demographical changes of countries. We formulate the estimation of future demographical profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases in time. The procedure of the proposed method involves three stages, namely: 1) prediction of the age distribution of a country's population based on an "age-structured population model"; 2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and 3) estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated by feeding in real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables. PMID:26382594

  3. An Instructive Model of Entropy

    ERIC Educational Resources Information Center

    Zimmerman, Seth

    2010-01-01

    This article first notes the misinterpretation of a common thought experiment, and the misleading comment that "systems tend to flow from less probable to more probable macrostates". It analyses the experiment, generalizes it and introduces a new tool of investigation, the simplectic structure. A time-symmetric model is built upon this structure,…

  4. Maximum Likelihood Estimation in Generalized Rasch Models.

    ERIC Educational Resources Information Center

    de Leeuw, Jan; Verhelst, Norman

    1986-01-01

    Maximum likelihood procedures are presented for a general model to unify the various models and techniques that have been proposed for item analysis. Unconditional maximum likelihood estimation, proposed by Wright and Haberman, and conditional maximum likelihood estimation, proposed by Rasch and Andersen, are shown as important special cases. (JAZ)

  5. Applying Bayesian Maximum Entropy to extrapolating local-scale water consumption in Maricopa County, Arizona

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Jae; Wentz, Elizabeth A.

    2008-01-01

    Understanding water use in the context of urban growth and climate variability requires an accurate representation of regional water use. This is challenging, however, because water use data are often unavailable, and when they are available, they are geographically aggregated to protect the identity of individuals. The present paper aims to map local-scale estimates of water use in Maricopa County, Arizona, on the basis of data aggregated to census tracts and measured only in the City of Phoenix. To complete our research goals we describe two types of data uncertainty sources (i.e., extrapolation and downscaling processes) and then generate data that account for the uncertainty sources (i.e., soft data). Our results confirm that the Bayesian Maximum Entropy (BME) mapping method of modern geostatistics is a theoretically sound approach for assimilating the soft data into mapping processes. Our results show increased mapping accuracy over classical geostatistics, which does not account for the soft data. The resulting BME maps therefore provide useful knowledge of local water use variability in the whole county, which is further applied to the understanding of the causal factors of urban water demand.

  6. Computational Amide I Spectroscopy for Refinement of Disordered Peptide Ensembles: Maximum Entropy and Related Approaches

    NASA Astrophysics Data System (ADS)

    Reppert, Michael; Tokmakoff, Andrei

    The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions (particularly hydrogen bonds), spectroscopic data on isotope labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
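
    The generic maximum entropy reweighting step behind such ensemble refinement is short: perturb the simulation weights as little as possible (in relative entropy) while matching the experimental average, which gives exponential weights w_i ∝ exp(−λO_i). A hedged sketch (the Jaynes-style step in general form, not the authors' full amide I pipeline; observables and the experimental value are invented):

        # Hedged sketch: MaxEnt reweighting of simulation frames to match
        # one experimental ensemble average.
        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(4)
        O = rng.normal(1650.0, 12.0, size=2000)  # per-frame predicted frequency (cm^-1)
        O_exp = 1655.0                           # hypothetical experimental mean

        def mismatch(lam):
            w = np.exp(-lam * (O - O.mean()))    # shift exponent for stability
            w /= w.sum()
            return w @ O - O_exp

        lam = brentq(mismatch, -1.0, 1.0)        # solve <O>_w = O_exp
        w = np.exp(-lam * (O - O.mean())); w /= w.sum()
        print("reweighted mean:", w @ O, " effective frames:", 1.0 / np.sum(w ** 2))

    The effective-sample-size diagnostic flags over-aggressive reweighting: if it collapses to a few frames, the force field and the data are in serious tension.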

  7. Maximum-entropy reconstruction method for moment-based solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2013-11-01

    We describe a method for a moment-based solution of the Boltzmann equation. This starts with moment equations for a (10 + 9N)-moment representation, N = 0, 1, 2, .... The partial-differential equations (PDEs) for these moments are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy construction of the velocity distribution function f(c, x, t), using the known moments, within a finite-box domain of single-particle-velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using a Monte Carlo method. This allows integration of the moment PDEs in time. Illustrative examples will include zero-space-dimensional relaxation of f(c, t) from a Mott-Smith-like initial condition toward equilibrium and one-space-dimensional, finite Knudsen number, planar Couette flow. Comparison with results using the direct-simulation Monte Carlo method will be presented.

  8. Bayesian and maximum entropy methods for fusion diagnostic measurements with compact neutron spectrometers.

    PubMed

    Reginatto, Marcel; Zimbal, Andreas

    2008-02-01

    In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements. PMID:18315297

  9. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation

    SciTech Connect

    Liu, Jian; Miller, William H.

    2008-08-01

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 K and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made showing how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when they are used as priors.

  10. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparison with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.

  11. Time-dependent radiative transfer through thin films: Chapman Enskog-maximum entropy method

    NASA Astrophysics Data System (ADS)

    Abulwafa, E. M.; Hassan, T.; El-Wakil, S. A.; Razi Naqvi, K.

    2005-09-01

    Approximate solutions to the time-dependent radiative transfer equation, also called the phonon radiative transfer equation, for a plane-parallel system have been obtained by combining the flux-limited Chapman-Enskog approximation with the maximum entropy method. For problems involving heat transfer at small scales (short times and/or thin films), the results found by this combined approach are closer to the outcome of the more labour-intensive Laguerre-Galerkin technique (a moment method described recently by the authors) than the results obtained by using the diffusion equation (Fourier's law) or the telegraph equation (Cattaneo's law). The results for heat flux and temperature are presented in graphical form for xL = 0.01, 0.1, 1 and 10, and at τ = 0.01, 0.1, 1.0 and 10, where xL is the film thickness in mean free paths, and τ is the value of time in mean free times.

  12. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil-dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated by land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data-scarce regions.

  13. Cluster Prototypes and Fuzzy Memberships Jointly Leveraged Cross-Domain Maximum Entropy Clustering.

    PubMed

    Qian, Pengjiang; Jiang, Yizhang; Deng, Zhaohong; Hu, Lingzhi; Sun, Shouwei; Wang, Shitong; Muzic, Raymond F

    2016-01-01

    The classical maximum entropy clustering (MEC) algorithm usually cannot achieve satisfactory results in situations where the data are insufficient, incomplete, or distorted. To address this problem, inspired by transfer learning, the specific cluster prototypes and fuzzy memberships jointly leveraged (CPM-JL) framework for cross-domain MEC (CDMEC) is first devised in this paper, and then the corresponding algorithm, referred to as CPM-JL-CDMEC, and the dedicated validity index named fuzzy memberships-based cross-domain difference measurement (FM-CDDM) are concurrently proposed. In general, the contributions of this paper are fourfold: 1) benefiting from the delicate CPM-JL framework, CPM-JL-CDMEC features high clustering effectiveness and robustness even in some complex data situations; 2) the reliability of FM-CDDM has been demonstrated to be close to well-established external criteria, e.g., normalized mutual information and the Rand index, and it does not require additional label information; hence, using FM-CDDM as a dedicated validity index significantly enhances the applicability of CPM-JL-CDMEC under realistic scenarios; 3) the performance of CPM-JL-CDMEC is generally better than, or at least equal to, that of MEC, because CPM-JL-CDMEC can degenerate into the standard MEC algorithm by adopting the proper parameters, which avoids the issue of negative transfer; and 4) in order to maximize privacy protection, CPM-JL-CDMEC employs the known cluster prototypes and their associated fuzzy memberships rather than the raw data in the source domain as prior knowledge. The experimental studies thoroughly evaluated and demonstrated these advantages on both synthetic and real-life transfer datasets. PMID:26684257

  14. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). By comparing the merged AOD to the Aerosol Robotic Network AOD records, the results show that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those (the correlation coefficient (0.82), root-mean-square error (0.19), and mean bias (0.059)) of the MODIS AOD. In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is also close to the accuracy of the regions where both MODIS and SeaWiFS have valid observations.

  15. A maximum entropy approach to detect close-in giant planets around active stars

    NASA Astrophysics Data System (ADS)

    Petit, P.; Donati, J.-F.; Hébrard, E.; Morin, J.; Folsom, C. P.; Böhm, T.; Boisse, I.; Borgniet, S.; Bouvier, J.; Delfosse, X.; Hussain, G.; Jeffers, S. V.; Marsden, S. C.; Barnes, J. R.

    2015-12-01

    Context. The high spot coverage of young active stars is responsible for distortions of spectral lines that hamper the detection of close-in planets through radial velocity methods. Aims: We aim to progress towards more efficient exoplanet detection around active stars by optimizing the use of Doppler imaging in radial velocity measurements. Methods: We propose a simple method to simultaneously extract a brightness map and a set of orbital parameters through a tomographic inversion technique derived from classical Doppler mapping. Based on the maximum entropy principle, the underlying idea is to determine the set of orbital parameters that minimizes the information content of the resulting Doppler map. We carry out a set of numerical simulations to perform a preliminary assessment of the robustness of our method, using an actual Doppler map of the very active star HR 1099 to produce a realistic synthetic data set for various sets of orbital parameters of a single planet in a circular orbit. Results: Using a simulated time series of 50 line profiles affected by a peak-to-peak activity jitter of 2.5 km s⁻¹, in most cases we are able to recover the radial velocity amplitude, orbital phase, and orbital period of an artificial planet down to a radial velocity semi-amplitude of the order of the radial velocity scatter due to the photon noise alone (about 50 m s⁻¹ in our case). One noticeable exception occurs when the planetary orbit is close to co-rotation, in which case significant biases are observed in the reconstructed radial velocity amplitude, while the orbital period and phase remain robustly recovered. Conclusions: The present method constitutes a very simple way to extract orbital parameters from heavily distorted line profiles of active stars, when more classical radial velocity detection methods generally fail. It is easily adaptable to most existing Doppler imaging codes, paving the way towards a systematic search for close-in planets orbiting young, rapidly

  16. The Earth's entropy production budget as simulated by a climate system model of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Kleidon, A.; Fraedrich, K.; Lunkeit, F.; Jansen, H.

    2003-04-01

    The Earth is an open thermodynamic system far from equilibrium. It has been suggested that processes within such systems evolve to states of maximum entropy production. Here we report on the entropy production budget of the climate system as simulated by the intermediate complexity climate model PUMA, which consists of an atmospheric general circulation model of coarse resolution, a land surface representation, and a mixed-layer ocean model. We expanded the model to explicitly calculate entropy production for absorption of solar and terrestrial radiation, turbulent fluxes of sensible and latent heat, atmospheric and oceanic heat transport, and entropy production associated with biotic productivity. We present the general methodology, the entropy production budget for the present-day climatic mean, and the sensitivity to vegetation-related land surface characteristics.

  17. Application of the maximum entropy technique in tomographic reconstruction from laser diffraction data to determine local spray drop size distribution

    NASA Astrophysics Data System (ADS)

    Yongyingsakthavorn, Pisit; Vallikul, Pumyos; Fungtammasan, Bundit; Dumouchel, Christophe

    2007-03-01

    This work proposes a new deconvolution technique to obtain local drop size distributions from line-of-sight intensity data measured by the laser diffraction technique. The tomographic reconstruction, based on the maximum entropy (ME) technique, is applied to the forward-scattered light signal from a laser beam scanning horizontally through the spray on each plane from the center to the edge of the spray, resulting in reconstructed scattered light intensities at particular points in the spray. These reconstructed intensities are in turn converted to local drop size distributions. Unlike the classical onion peeling technique or other mathematical transformation techniques that can yield unrealistic negative scattered light intensity solutions, the maximum entropy constraints ensure positive light intensity. Experimental validation of the reconstructed results is achieved using a phase Doppler particle analyzer (PDPA). The results from the PDPA measurements agree very well with the proposed ME tomographic reconstruction.
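
    A minimal sketch of a maximum-entropy inversion of a linear projection system, with positivity guaranteed by construction (this is not the paper's spray code; the projection matrix, noise level, and entropy weight are toy assumptions):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 16                                    # unknown local intensities
      x_true = np.exp(-0.5 * ((np.arange(n) - 8) / 3.0) ** 2)
      A = np.triu(np.ones((n, n)))              # toy line-of-sight paths
      b = A @ x_true + 0.01 * rng.standard_normal(n)

      def objective(u):
          x = np.exp(u)                         # log parametrization keeps x > 0
          p = x / x.sum()
          entropy = -np.sum(p * np.log(p))
          misfit = np.sum((A @ x - b) ** 2) / 0.01 ** 2
          return misfit - 5.0 * entropy         # trade-off weight is an assumption

      res = minimize(objective, np.zeros(n), method="L-BFGS-B")
      x_hat = np.exp(res.x)                     # strictly positive by construction
      print(np.round(x_hat, 3))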

  18. Maximum-entropy Monte Carlo method for the inversion of the structure factor in simple classical systems

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Marco

    2011-10-01

    We present a method for the evaluation of the interaction potential of an equilibrium classical system starting from the (partial) knowledge of its structure factor. The procedure is divided into two phases, both of which are based on the maximum entropy principle of information theory. First we determine the maximum entropy estimate of the radial distribution function constrained by the information contained in the structure factor. Next we invert the pair function and extract the interaction potential. The method is tested on a Lennard-Jones fluid at high density, and the reliability of its results with respect to the missing information in the structure factor data is discussed. Finally, it is applied to the experimental data of liquid sodium at 100 °C.

  19. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy

    PubMed Central

    2016-01-01

    The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile differs from that of full pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum velocity is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by wall friction and the pipe bottom slope, and the velocity variation is similar to that in a full pipe. Near the free water surface, however, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature search shows that relatively little research has addressed a practical expression for the velocity distribution in a partially-filled circular pipe. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared in light of the fluid mechanics involved, and non-extensive entropy was chosen. A new cumulative distribution function (CDF) of the velocity in a partially-filled circular pipe, expressed in terms of flow depth, was hypothesized. Combined with this CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity was analyzed. The experimental results show that velocity values estimated from the principle of maximum Tsallis wavelet entropy are in good agreement with measured values. PMID:26986064

  20. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy.

    PubMed

    Jiang, Yulin; Li, Bin; Chen, Jie

    2016-01-01

    The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile differs from that of full pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum velocity is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by wall friction and the pipe bottom slope, and the velocity variation is similar to that in a full pipe. Near the free water surface, however, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature search shows that relatively little research has addressed a practical expression for the velocity distribution in a partially-filled circular pipe. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared in light of the fluid mechanics involved, and non-extensive entropy was chosen. A new cumulative distribution function (CDF) of the velocity in a partially-filled circular pipe, expressed in terms of flow depth, was hypothesized. Combined with this CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity was analyzed. The experimental results show that velocity values estimated from the principle of maximum Tsallis wavelet entropy are in good agreement with measured values. PMID:26986064
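
    A generic numerical sketch of the POME step (not the paper's closed-form derivation): maximize the Tsallis entropy of a discretized velocity pdf subject to normalization and a prescribed mean velocity. The grid, the Tsallis index q, and the target mean are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      u = np.linspace(0.0, 2.0, 80)             # velocity grid (m/s)
      du = u[1] - u[0]
      q, u_mean = 1.5, 1.2                      # Tsallis index, target mean

      def neg_tsallis(p):
          return -(1.0 - np.sum(p ** q) * du) / (q - 1.0)

      cons = [{"type": "eq", "fun": lambda p: np.sum(p) * du - 1.0},
              {"type": "eq", "fun": lambda p: np.sum(p * u) * du - u_mean}]
      p0 = np.full(u.size, 0.5)                 # uniform pdf on [0, 2]
      res = minimize(neg_tsallis, p0, bounds=[(1e-9, None)] * u.size,
                     constraints=cons, method="SLSQP", options={"maxiter": 300})
      print("mean of fitted pdf:", round(float(np.sum(res.x * u) * du), 3))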

  1. Location of Cu2+ in CHA zeolite investigated by X-ray diffraction using the Rietveld/maximum entropy method

    PubMed Central

    Andersen, Casper Welzel; Bremholm, Martin; Vennestrøm, Peter Nicolai Ravnborg; Blichfeld, Anders Bank; Lundegaard, Lars Fahl; Iversen, Bo Brummerstedt

    2014-01-01

    Accurate structural models of reaction centres in zeolite catalysts are a prerequisite for mechanistic studies and further improvements to the catalytic performance. The Rietveld/maximum entropy method is applied to synchrotron powder X-ray diffraction data on fully dehydrated CHA-type zeolites with and without loading of catalytically active Cu2+ for the selective catalytic reduction of NOx with NH3. The method identifies the known Cu2+ sites in the six-membered ring and a previously unobserved site in the eight-membered ring. The sum of the refined Cu occupancies for these two sites matches the chemical analysis and thus all the Cu is accounted for. It is furthermore shown that approximately 80% of the Cu2+ is located in the new 8-ring site for an industrially relevant CHA zeolite with Si/Al = 15.5 and Cu/Al = 0.45. Density functional theory calculations are used to corroborate the positions and identity of the two Cu sites, leading to the most complete structural description of dehydrated silicoaluminate CHA loaded with catalytically active Cu2+ cations. PMID:25485118

  2. A Bayesian approach to incorporating maximum entropy-derived signal parameter statistics into the receiver operating characteristic (ROC) curves

    NASA Astrophysics Data System (ADS)

    Culver, R. Lee; Sibul, Leon H.; Bradley, David L.; Ballard, Jeffrey A.; Camin, H. John

    2005-09-01

    Our goal is to develop a probabilistic sonar performance prediction methodology that can make use of limited knowledge of random or uncertain environment, target, and sonar system parameters, but does not make unwarranted assumptions. The maximum entropy method (MEM) can be used to construct probability density functions (pdfs) for relevant environmental and source parameters, and an ocean acoustic propagation model can use those pdfs to predict the variability of received signal parameters. At this point, the MEM can be used once again to produce signal parameter pdfs. A Bayesian framework allows these pdfs to be incorporated into the signal processor to produce ROC curves in which, for example, the signal-to-noise ratio (SNR) is a random variable for which a pdf has been calculated. One output of such a processor could be a range-dependent probability of detection for fixed probability of false alarm, which would be more useful than the conventional range of the day that is still in use in some areas. [Work supported by ONR Code 321US.]
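
    The last step described above can be sketched as follows, assuming an equal-variance Gaussian detection model and a lognormal SNR pdf standing in for the MEM-derived one: the detection probability at fixed false-alarm rate is marginalized over the SNR distribution.

      import numpy as np
      from scipy.stats import norm, lognorm

      def pd_given_snr(pfa, snr_db):
          # Deflection for the Gaussian model; the dB-to-amplitude conversion
          # is itself an assumption of this toy.
          d = 10.0 ** (snr_db / 20.0)
          return norm.sf(norm.isf(pfa) - d)

      pfa = 1e-3
      snr_pdf = lognorm(s=0.5, scale=8.0)    # assumed stand-in for the MEM pdf
      snr = np.linspace(0.1, 40.0, 400)      # SNR grid (dB)
      w = snr_pdf.pdf(snr)
      w /= w.sum()                           # discrete weights on the grid
      pd_marginal = np.sum(pd_given_snr(pfa, snr) * w)
      print(f"P_d at P_fa = {pfa:g}: {pd_marginal:.3f}")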

  3. Toward the Application of the Maximum Entropy Production Principle to a Broader Range of Far From Equilibrium Dissipative Systems

    NASA Astrophysics Data System (ADS)

    Lineweaver, C. H.

    2005-12-01

    The principle of Maximum Entropy Production (MEP) is being usefully applied to a wide range of non-equilibrium processes including flows in planetary atmospheres and the bioenergetics of photosynthesis. Our goal of applying the principle of maximum entropy production to an even wider range of Far From Equilibrium Dissipative Systems (FFEDS) depends on the reproducibility of the evolution of the system from macro-state A to macro-state B. In an attempt to apply the principle of MEP to astronomical and cosmological structures, we investigate the problematic relationship between gravity and entropy. In the context of open and non-equilibrium systems, we use a generalization of the Gibbs free energy to include the sources of free energy extracted by non-living FFEDS such as hurricanes and convection cells. Redox potential gradients and thermal and pressure gradients provide the free energy for a broad range of FFEDS, both living and non-living. However, these gradients have to be within certain ranges. If the gradients are too weak, FFEDS do not appear. If the gradients are too strong, FFEDS disappear. Living and non-living FFEDS often have different source gradients (redox potential gradients vs thermal and pressure gradients) and, when they share the same gradient, they exploit different ranges of the gradient. In a preliminary attempt to distinguish living from non-living FFEDS, we investigate the parameter space of gradient type and gradient steepness.

  4. THE LICK AGN MONITORING PROJECT: VELOCITY-DELAY MAPS FROM THE MAXIMUM-ENTROPY METHOD FOR Arp 151

    SciTech Connect

    Bentz, Misty C.; Barth, Aaron J.; Walsh, Jonelle L.; Horne, Keith; Bennert, Vardha Nicola; Treu, Tommaso; Canalizo, Gabriela; Filippenko, Alexei V.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Woo, Jong-Hak

    2010-09-01

    We present velocity-delay maps for optical H I, He I, and He II recombination lines in Arp 151, recovered by fitting a reverberation model to spectrophotometric monitoring data using the maximum-entropy method. H I response is detected over the range 0-15 days, with the response confined within the virial envelope. The Balmer-line maps have similar morphologies but exhibit radial stratification, with progressively longer delays for H{gamma} to H{beta} to H{alpha}. The He I and He II response is confined within 1-2 days. There is a deficit of prompt response in the Balmer-line cores but strong prompt response in the red wings. Comparison with simple models identifies two classes that reproduce these features: free-falling gas and a half-illuminated disk with a hot spot at small radius on the receding lune. Symmetrically illuminated models with gas orbiting in an inclined disk or an isotropic distribution of randomly inclined circular orbits can reproduce the virial structure but not the observed asymmetry. Radial outflows are also largely ruled out by the observed asymmetry. A warped-disk geometry provides a physically plausible mechanism for the asymmetric illumination and hot spot features. Simple estimates show that a disk in the broad-line region of Arp 151 could be unstable to warping induced by radiation pressure. Our results demonstrate the potential power of detailed modeling combined with monitoring campaigns at higher cadence to characterize the gas kinematics and physical processes that give rise to the broad emission lines in active galactic nuclei.

  5. Topological properties of hydrogen bonds and covalent bonds from charge densities obtained by the maximum entropy method (MEM)

    PubMed Central

    Netzel, Jeanette; van Smaalen, Sander

    2009-01-01

    Charge densities have been determined by the Maximum Entropy Method (MEM) from the high-resolution, low-temperature (T ≃ 20 K) X-ray diffraction data of six different crystals of amino acids and peptides. A comparison of dynamic deformation densities of the MEM with static and dynamic deformation densities of multipole models shows that the MEM may lead to a better description of the electron density in hydrogen bonds in cases where the multipole model has been restricted to isotropic displacement parameters and low-order multipoles (l_max = 1) for the H atoms. Topological properties at bond critical points (BCPs) are found to depend systematically on the bond length, but with different functions for covalent C—C, C—N and C—O bonds, and for hydrogen bonds together with covalent C—H and N—H bonds. Similar dependencies are known for AIM properties derived from static multipole densities. The ratio of potential and kinetic energy densities |V(BCP)|/G(BCP) is successfully used for a classification of hydrogen bonds according to their distance d(H⋯O) between the H atom and the acceptor atom. The classification based on MEM densities coincides with the usual classification of hydrogen bonds as strong, intermediate and weak [Jeffrey (1997). An Introduction to Hydrogen Bonding. Oxford University Press]. MEM and procrystal densities lead to similar values of the densities at the BCPs of hydrogen bonds, but differences are shown to prevail, such that it is found that only the true charge density, represented by MEM densities, the multipole model or some other method can lead to the correct characterization of chemical bonding. Our results do not confirm suggestions in the literature that the promolecule density might be sufficient for a characterization of hydrogen bonds. PMID:19767685

  6. Entropy, complexity, and Markov diagrams for random walk cancer models

    PubMed Central

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-01-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
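
    A minimal sketch of the quantities involved (with invented numbers, not the paper's autopsy-calibrated matrices): the steady state of a row-stochastic metastasis transition matrix and the entropy used to rank cancer types.

      import numpy as np

      # Row-stochastic transition matrix over 3 anatomical sites (toy values).
      T = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3],
                    [0.1, 0.4, 0.5]])

      # Steady state: the left eigenvector of T with eigenvalue 1.
      vals, vecs = np.linalg.eig(T.T)
      pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
      pi /= pi.sum()

      entropy = -np.sum(pi * np.log(pi))
      print("steady state:", np.round(pi, 3), "entropy:", round(float(entropy), 3))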

  7. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  8. Power-Law entropy corrected holographic dark energy model

    NASA Astrophysics Data System (ADS)

    Sheykhi, Ahmad; Jamil, Mubasher

    2011-10-01

    Among various scenarios to explain the acceleration of the universe expansion, the holographic dark energy (HDE) model has attracted considerable attention recently. In the derivation of the holographic energy density, the area relation of the black hole entropy plays a crucial role. Indeed, the power-law corrections to entropy appear in dealing with the entanglement of quantum fields inside and outside the horizon. Inspired by the power-law corrected entropy, we propose the so-called "power-law entropy-corrected holographic dark energy" (PLECHDE) in this Letter. We investigate the cosmological implications of this model and calculate some relevant cosmological parameters and their evolution. We also briefly study the so-called "power-law entropy-corrected agegraphic dark energy" (PLECADE).

  9. On entropy weak solutions of Hughes' model for pedestrian motion

    NASA Astrophysics Data System (ADS)

    El-Khatib, Nader; Goatin, Paola; Rosini, Massimiliano D.

    2013-04-01

    We consider a generalized version of Hughes' macroscopic model for crowd motion in the one-dimensional case. It consists of a scalar conservation law accounting for the conservation of the number of pedestrians, coupled with an eikonal equation giving the direction of the flux depending on pedestrian density. As a result of this non-trivial coupling, we have to deal with a conservation law with space-time discontinuous flux, whose discontinuity depends non-locally on the density itself. We propose a definition of entropy weak solution, which allows us to recover a maximum principle. Moreover, we study the structure of the solutions to Riemann-type problems, and we construct them explicitly for small times, depending on the choice of the running cost in the eikonal equation. In particular, aiming at the optimization of the evacuation time, we propose a strategy that is optimal in the case of high densities. All results are illustrated by numerical simulations.

  10. Electron-positron momentum density distribution of Gd from 2D ACAR data via Maximum Entropy and Cormack's methods

    NASA Astrophysics Data System (ADS)

    Pylak, M.; Kontrym-Sznajd, G.; Dobrzyński, L.

    2011-08-01

    A successful application of the Maximum Entropy Method (MEM) to the reconstruction of the electron-positron momentum density distribution in gadolinium from experimental 2D ACAR data is presented. Formally, the algorithm used was prepared for two-dimensional reconstructions from line integrals. For the first time the results of MEM, applied to such data, are compared in detail with the ones obtained by means of Cormack's method. It is also shown how the experimental uncertainties may influence the results of the latter analysis. Preliminary calculations, using the WIEN2k code, of the band structure and Fermi surface have been done as well.

  11. Connection between wave transport through disordered 1D waveguides and energy density inside the sample: A maximum-entropy approach

    NASA Astrophysics Data System (ADS)

    Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.

    2015-11-01

    We study the average energy (or particle) density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.

  12. Application of the maximum entropy method to spectral-domain optical coherence tomography for enhancing axial resolution

    NASA Astrophysics Data System (ADS)

    Takahashi, Yoshiyuki; Watanabe, Yuuki; Sato, Manabu

    2007-08-01

    For the first time, we applied the maximum entropy method (MEM) to spectral-domain optical coherence tomography to enhance axial resolution (AR). The MEM estimates the power spectrum by fitting. For an onion sample with an optimized model order of M = 70, the AR of 18.8 μm obtained by discrete Fourier transform (DFT) was improved threefold, as measured by peak widths. The calculation time of the MEM with M = 70 was 20 times longer than that of the DFT. However, further studies are needed for practical applications, because the validity of the MEM depends on the sample structures.

  13. Distributed estimation and joint probabilities estimation by entropy model

    NASA Astrophysics Data System (ADS)

    Fassinut-Mombot, B.; Zribi, M.; Choquel, J. B.

    2001-05-01

    This paper proposes the use of the Entropy Model for a distributed estimation system. The Entropy Model is an entropic technique based on the minimization of conditional entropy, developed for the Multi-Source/Sensor Information Fusion (MSIF) problem. We address the problem of distributed estimation from independent observations involving multiple sources, i.e., the problem of estimating or selecting one of several identity declarations, or hypotheses, concerning an observed object. Two problems are considered in the Entropy Model. First, in order to fuse observations using the Entropy Model, it is necessary to know or estimate the conditional probabilities and, equivalently, the joint probabilities. A common practice when estimating probability distributions from data without a priori knowledge is to prefer distributions that are as uniform as possible, that is, have maximal entropy. Next, the problem of combining (or "fusing") observations relating to identity hypotheses and selecting the most appropriate hypothesis about the object's identity is addressed. Much future work remains, but the results indicate that the Entropy Model is a promising technique for distributed estimation.

  14. Coupling diffusion and maximum entropy models to estimate thermal inertia

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  15. Maximum Entropy and the Inference of Pattern and Dynamics in Ecology

    NASA Astrophysics Data System (ADS)

    Harte, John

    Constrained maximization of information entropy yields least biased probability distributions. From physics to economics, from forensics to medicine, this powerful inference method has enriched science. Here I apply this method to ecology, using constraints derived from ratios of ecological state variables, and infer functional forms for the ecological metrics describing patterns in the abundance, distribution, and energetics of species. I show that a static version of the theory describes remarkably well observed patterns in quasi-steady-state ecosystems across a wide range of habitats, spatial scales, and taxonomic groups. A systematic pattern of failure is observed, however, for ecosystems either losing species following disturbance or diversifying in evolutionary time; I show that this problem may be remedied with a stochastic-dynamic extension of the theory.
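
    A simplified illustration of the inference step (a sketch in the spirit of, but far simpler than, the full theory): maximizing Shannon entropy over abundances n = 1..N subject to a mean-abundance constraint N/S yields a log-series-like distribution. The state variables N and S below are made up.

      import numpy as np
      from scipy.optimize import brentq

      N, S = 1000, 50                       # individuals and species (assumed)
      n = np.arange(1, N + 1)

      def mean_abundance(lam):
          w = np.exp(-lam * n)
          return np.sum(n * w) / np.sum(w)

      # Solve for the Lagrange multiplier matching the constraint <n> = N/S.
      lam = brentq(lambda l: mean_abundance(l) - N / S, 1e-6, 5.0)
      p = np.exp(-lam * n)
      p /= p.sum()
      print(f"lambda={lam:.4f}, <n>={np.sum(n * p):.1f} (target {N / S})")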

  16. Beyond Boltzmann-Gibbs statistics: Maximum entropy hyperensembles out of equilibrium

    SciTech Connect

    Crooks, Gavin E.

    2006-02-23

    What is the best description that we can construct of a thermodynamic system that is not in equilibrium, given only one, or a few, extra parameters over and above those needed for a description of the same system at equilibrium? Here, we argue the most appropriate additional parameter is the non-equilibrium entropy of the system, and that we should not attempt to estimate the probability distribution of the system, but rather the metaprobability (or hyperensemble) that the system is described by a particular probability distribution. The result is an entropic distribution with two parameters, one a non-equilibrium temperature, and the other a measure of distance from equilibrium. This dispersion parameter smoothly interpolates between certainty of a canonical distribution at equilibrium and great uncertainty as to the probability distribution as we move away from equilibrium. We deduce that, in general, large, rare fluctuations become far more common as we move away from equilibrium.

  17. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  18. Model Fit after Pairwise Maximum Likelihood.

    PubMed

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  19. Identification of a Threshold Value for the DEMATEL Method: Using the Maximum Mean De-Entropy Algorithm

    NASA Astrophysics Data System (ADS)

    Chung-Wei, Li; Gwo-Hshiung, Tzeng

    To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to extract adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between criteria for evaluating effects in E-learning programs as examples, we compare the results obtained from the respondents with those from our method, and discuss the differences between the impact-relations maps produced by these two approaches.

  20. Irreversible entropy model for damage diagnosis in resistors

    SciTech Connect

    Cuadras, Angel; Crisóstomo, Javier; Ovejas, Victoria J.; Quilez, Marcos

    2015-10-28

    We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage, because they are essentially positive by virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4–8 W, well above the 0.25 W nominal power, in order to initiate failure. Entropy was inferred from the added power and the temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between the entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
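
    The entropy bookkeeping described above can be sketched as follows, using synthetic power and temperature traces (assumed, not the authors' measurements): the irreversible entropy generation rate is dS/dt = P/T, integrated over the test.

      import numpy as np

      t = np.linspace(0.0, 60.0, 601)                   # time (s)
      P = np.full_like(t, 6.0)                          # dissipated power (W), assumed
      T = 300.0 + 150.0 * (1.0 - np.exp(-t / 15.0))     # temperature (K), assumed

      ds_dt = P / T                                     # entropy generation rate (W/K)
      S = np.sum(0.5 * (ds_dt[1:] + ds_dt[:-1]) * np.diff(t))  # trapezoidal rule
      print(f"irreversible entropy after {t[-1]:.0f} s: {S:.3f} J/K")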

  1. Entropy Based Modelling for Estimating Demographic Trends

    PubMed Central

    Kuo, Shyh-Hao; Xu, Hai-Yan; Hu, Nan; Zhao, Guangshe; Monterola, Christopher

    2015-01-01

    In this paper, an entropy-based method is proposed to forecast the demographical changes of countries. We formulate the estimation of future demographical profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases in time. The procedure of the proposed method involves three stages, namely: 1) prediction of the age distribution of a country’s population based on an “age-structured population model”; 2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an “individual household size model”; and 3) estimation of the number of households of each size based on a “total household size model”. The last stage is achieved by projecting the age distribution of the country’s population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated on real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables. PMID:26382594

  2. Comparison of the algebraic reconstruction technique with the maximum entropy reconstruction technique for a variety of detection tasks

    SciTech Connect

    Myers, K.J. (Center for Devices and Radiological Health); Hanson, K.M.

    1990-01-01

    A method for comparing reconstruction algorithms is presented based on the ability to perform certain detection tasks on the resulting images. The reconstruction algorithms compared are the algebraic reconstruction technique and the maximum entropy reconstruction method. Task performance is assessed through a Monte Carlo simulation of the complete imaging process, including the generation of a set of object scenes, followed by data-taking, reconstruction, and performance of the specified task by a machine observer. For these detection tasks the figure of merit used for comparison is the detectability index. When each algorithm is run with approximately optimized parameters, these studies find comparable values for the detectability index. 19 refs., 6 figs., 2 tabs.

  3. Maximum entropy principle for predicting response to multiple-drug exposure in bacteria and human cancer cells

    NASA Astrophysics Data System (ADS)

    Wood, Kevin; Nishida, Satoshi; Sontag, Eduardo; Cluzel, Philippe

    2012-02-01

    Drugs are commonly used in combinations larger than two for treating infectious disease. However, it is generally impossible to infer the net effect of a multi-drug combination on cell growth directly from the effects of individual drugs. We combined experiments with maximum entropy methods to develop a mechanism-independent framework for calculating the response of both bacteria and human cancer cells to a large variety of drug combinations comprised of anti-microbial or anti-cancer drugs. We experimentally show that the cellular responses to drug pairs are sufficient to infer the effects of larger drug combinations in the gram-negative bacterium Escherichia coli, the gram-positive bacterium Staphylococcus aureus, and human breast cancer and melanoma cell lines. Remarkably, the accurate predictions of this framework suggest that the multi-drug response obeys statistical rather than chemical laws for combinations larger than two. Consequently, these findings offer a new strategy for the rational design of therapies using large drug combinations.
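
    A sketch of the pairwise-to-triple prediction idea follows. The Isserlis-like composition rule used below (triple response from single and pairwise relative growth rates) is quoted from memory and should be treated as an assumption rather than the paper's verified formula.

      # All growth-rate values are invented; the formula itself is an assumption.
      def predict_triple(g, gp):
          """g: single-drug relative growth rates; gp: pairwise rates."""
          a, b, c = "A", "B", "C"
          return (gp[a, b] * g[c] + gp[a, c] * g[b] + gp[b, c] * g[a]
                  - 2.0 * g[a] * g[b] * g[c])

      g = {"A": 0.8, "B": 0.6, "C": 0.9}
      gp = {("A", "B"): 0.5, ("A", "C"): 0.75, ("B", "C"): 0.55}
      print(f"predicted 3-drug relative growth: {predict_triple(g, gp):.3f}")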

  4. A cumulative entropy method for distribution recognition of model error

    NASA Astrophysics Data System (ADS)

    Liang, Yingjie; Chen, Wen

    2015-02-01

    This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. In terms of the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments of axially loaded CFT steel stub columns in conjunction with four building codes: Japan (AIJ, 1997), China (DL/T, 1999), Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and root mean square deviation, the CEM provides an alternative and powerful model selection criterion for recognizing the most suitable distribution for the model error.
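
    One common definition of cumulative entropy, CE(X) = -∫ F(x) ln F(x) dx, can be estimated directly from the empirical CDF of model-error samples, as sketched below; whether this matches the paper's exact CEM statistic is an assumption, and the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)
      err = rng.standard_t(df=3, size=250)      # heavy-tailed model errors (toy)

      x = np.sort(err)
      F = np.arange(1, x.size + 1) / x.size     # empirical CDF at sample points
      integrand = np.where(F > 0, -F * np.log(F), 0.0)
      ce = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
      print(f"estimated cumulative entropy: {ce:.3f}")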

  5. Entanglement entropy of Wilson loops: Holography and matrix models

    NASA Astrophysics Data System (ADS)

    Gentle, Simon A.; Gutperle, Michael

    2014-09-01

    A half-Bogomol'nyi-Prasad-Sommerfeld circular Wilson loop in N=4 SU(N) supersymmetric Yang-Mills theory in an arbitrary representation is described by a Gaussian matrix model with a particular insertion. The additional entanglement entropy of a spherical region in the presence of such a loop was recently computed by Lewkowycz and Maldacena using exact matrix model results. In this paper we utilize the supergravity solutions that are dual to such Wilson loops in a representation with order N^2 boxes to calculate this entropy holographically. Employing the matrix model results of Gomis, Matsuura, Okuda and Trancanelli we express this holographic entanglement entropy in a form that can be compared with the calculation of Lewkowycz and Maldacena. We find complete agreement between the matrix model and holographic calculations.

  6. Coupling entropy of co-processing model on social networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhanli

    2015-08-01

    The coupling entropy of a co-processing model on social networks is investigated in this paper. As one crucial factor determining the processing ability of nodes, the information flow with potential time lag is modeled by co-processing diffusion, which couples the continuous-time processing and the discrete diffusing dynamics. Exact results for the master equation and the stationary state are obtained to disclose how the coupled dynamics forms. In order to understand the evolution of the co-processing and to design the optimal routing strategy according to maximal entropic diffusion on networks, we propose the coupling entropy, which captures both the structural characteristics and the information propagation on the social network. Based on the analysis of the co-processing model, we analyze the coupled impact of the structural factor and the information propagating factor on the coupling entropy, where the analytical results fit well with the numerical ones on scale-free social networks.

  7. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    SciTech Connect

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A.W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  8. Renyi entropies for classical string-net models

    NASA Astrophysics Data System (ADS)

    Hermanns, M.; Trebst, S.

    2014-05-01

    In quantum mechanics, string-net condensed states—a family of prototypical states exhibiting nontrivial topological order—can be classified via their long-range entanglement properties, in particular, topological corrections to the prevalent area law of the entanglement entropy. Here we consider classical analogs of such string-net models whose partition function is given by an equal-weight superposition of classical string-net configurations. Our analysis of the Shannon and Renyi entropies for a bipartition of a given system reveals that the prevalent volume law for these classical entropies is augmented by subleading topological corrections that are intimately linked to the anyonic theories underlying the construction of the classical models. We determine the universal values of these topological corrections for a number of underlying anyonic theories including SU(2)k,SU(N)1, and SU(N)2 theories.

  9. Entropy exchange and entanglement in the Jaynes-Cummings model

    NASA Astrophysics Data System (ADS)

    Boukobza, E.; Tannor, D. J.

    2005-06-01

    The Jaynes-Cummings model (JCM) is the simplest fully quantum model that describes the interaction between light and matter. We extend a previous analysis by Phoenix and Knight [Ann. Phys. 186, 381 (1988)] of the JCM by considering mixed states of both the light and matter. We present examples of qualitatively different entropic correlations. In particular, we explore the regime of entropy exchange between light and matter, i.e., where the rates of change of the two are anticorrelated. This behavior contrasts with the case of pure light-matter states, in which the rates of change of the two entropies are positively correlated and in fact identical. We give an analytical derivation of the anticorrelation phenomenon and discuss the regime of its validity. Finally, we show a strong correlation between the region of the Bloch sphere characterized by entropy exchange and that characterized by minimal entanglement as measured by the negative eigenvalues of the partially transposed density matrix.

  10. Improved model for the transit entropy of monatomic liquids

    NASA Astrophysics Data System (ADS)

    Wallace, Duane C.; Chisolm, Eric D.; Bock, Nicolas

    2009-05-01

    In the original formulation of vibration-transit (V-T) theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This model suffers two deficiencies: (a) it does not account for experimental entropy differences of ±2% among elemental liquids and (b) it implies a value of zero for the transit contribution to internal energy. The purpose of this paper is to correct these deficiencies. To this end, the V-T equation for entropy is fitted to an overall accuracy of ±0.1% to the available experimental high-temperature entropy data for elemental liquids. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution S_vib(T/θ0), where T is temperature and θ0 is the vibrational characteristic temperature, and (b) the transit contribution S_tr(T/θtr), where θtr is a scaling temperature for each liquid. The appearance of a common functional form of S_tr for all the liquids studied is a property of the experimental data, when analyzed via the V-T formula. The resulting S_tr implies the correct transit contribution to internal energy. The theoretical entropy of melting is derived in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ0, based on density-functional theory, is reported for liquid Na and Cu. Comparison of these calculations with the above analysis of experimental entropy data provides verification of V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  11. An improved model for the transit entropy of monatomic liquids

    SciTech Connect

    Wallace, Duane C; Chisolm, Eric D; Bock, Nicolas

    2009-01-01

    In the original formulation of V-T theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This model suffers two deficiencies: (a) it does not account for experimental entropy differences of ±2% among elemental liquids, and (b) it implies a value of zero for the transit contribution to internal energy. The purpose of this paper is to correct these deficiencies. To this end, the V-T equation for entropy is fitted to an overall accuracy of ±0.1% to the available experimental high temperature entropy data for elemental liquids. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution S_vib(T/θ0), where T is temperature and θ0 is the vibrational characteristic temperature, and (b) the transit contribution S_tr(T/θtr), where θtr is a scaling temperature for each liquid. The appearance of a common functional form of S_tr for all the liquids studied is a property of the experimental data, when analyzed via the V-T formula. The resulting S_tr implies the correct transit contribution to internal energy. The theoretical entropy of melting is derived, in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ0, based on density functional theory, is reported for liquid Na and Cu. Comparison of these calculations with the above analysis of experimental entropy data provides verification of V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  12. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.

  13. Configurational entropy in f(R,T) brane models

    NASA Astrophysics Data System (ADS)

    Correa, R. A. C.; Moraes, P. H. R. S.

    2016-02-01

    In this work we investigate generalized theories of gravity in the so-called configurational entropy (CE) context. We show, by means of this information-theoretical measure, that a stricter bound on the parameter of f(R,T) brane models arises from the CE. We find that these bounds are characterized by a valley region in the CE profile, where the entropy is minimal. We argue that the CE measure can play a new role and might be an important additional approach to selecting parameters in modified theories of gravitation.

  14. Entanglement entropies of the quarter filled Hubbard model

    NASA Astrophysics Data System (ADS)

    Calabrese, Pasquale; Essler, Fabian H. L.; Läuchli, Andreas M.

    2014-09-01

    We study Rényi and von Neumann entanglement entropies in the ground state of the one dimensional quarter-filled Hubbard model with periodic boundary conditions. We show that they exhibit an unexpected dependence on system size: for L = 4 mod 8 the results are in agreement with expectations based on conformal field theory, while for L = 0 mod 8 additional contributions arise. We show that these can be understood in terms of a ‘shell-filling’ effect and we develop a conformal field theory approach to calculate the additional contributions to the entropies. These analytic results are found to be in excellent agreement with density matrix renormalization group computations for weak Hubbard interactions. We argue that for larger interactions the presence of a marginal irrelevant operator in the spin sector strongly affects the entropies at the finite sizes accessible numerically and we present an effective way to take them into account.

  15. Factor Analysis of Wildfire and Risk Area Estimation in Korean Peninsula Using Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Kim, Teayeon; Lim, Chul-Hee; Lee, Woo-Kyun; Kim, YouSeung; Heo, Seongbong; Cha, Sung Eun; Kim, Seajin

    2016-04-01

    The number of wildfires and the accompanying human injuries and physical damage have increased with frequent drought. In particular, Korea experienced severe drought, and numerous wildfires occurred this year. We used the MaxEnt model to identify the major environmental factors behind wildfire and used RCP scenarios to predict future wildfire risk areas. In this study, environmental variables including topographic, anthropogenic, and meteorological data were used to determine the contributing variables of wildfire in South and North Korea, and the two were compared. As for occurrence data, we used MODIS fire data after verification. In North Korea, the AUC (Area Under the ROC Curve) value was 0.890, which was high enough to explain the distribution of wildfires. South Korea had a lower AUC value than North Korea and a high mean standard deviation, which means the same environmental variables have low predictive power there. The AUC value for South Korea is expected to improve with additional environmental variables such as distance from trails and wildfire management systems. For instance, fire occurring within the DMZ (demilitarized zone, a 4 km boundary around the 38th parallel) has a decisive influence on fire risk area in South Korea, but not in North Korea. The contribution of each environmental variable was more evenly distributed in North Korea than in South Korea. This means South Korea is dependent on a few particular variables, whereas North Korea can be explained by a number of variables with evenly distributed contributions. Although the AUC value and standard deviation for South Korea were not high enough to predict wildfire, the result is still meaningful for identifying the scientific and social factors in which certain environmental variables carry great weight, by examining their response curves. We also produced a future wildfire risk area map for the whole Korean peninsula using the same model. Across the four RCP scenarios, it was found that severe climate change would shift the wildfire risk area northward. Especially North
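
    A minimal feature-matching MaxEnt over grid cells, in the spirit of (but far simpler than) the MaxEnt software used in such studies, can be sketched as follows; the features, occurrences, and plain gradient-ascent fit are all toy assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n_cells, n_feat = 500, 3
      X = rng.standard_normal((n_cells, n_feat))      # environmental covariates
      lam_true = np.array([1.0, -0.5, 0.0])
      p_true = np.exp(X @ lam_true)
      p_true /= p_true.sum()
      occ = rng.choice(n_cells, size=200, p=p_true)   # fire occurrence cells

      lam = np.zeros(n_feat)
      target = X[occ].mean(axis=0)                    # empirical feature means
      for _ in range(2000):
          p = np.exp(X @ lam)
          p /= p.sum()
          lam += 0.1 * (target - X.T @ p)             # match model feature means
      print("recovered weights:", np.round(lam, 2))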

  16. Extension of spray nozzle correlations to the prediction of drop size distributions using principles of maximum entropy

    NASA Astrophysics Data System (ADS)

    Sankagiri, N.; Ruff, G. A.

    1993-01-01

    For years, the design and performance evaluation of many existing liquid spray systems have made use of the many empirical correlations for the bulk properties of a spray, such as mean drop size, spread angle, etc. However, more detailed information, such as the drop size distribution, is required to evaluate critical performance parameters such as NOx emission in internal combustion engines and the combustion efficiency of a hazardous waste incinerator. The principles of conservation of mass, momentum, and energy can be applied through the maximum entropy formulation to estimate the joint drop size-velocity distribution, provided that some information about the bulk properties of the spray exists from empirical correlations. A general method for this determination is described in this paper, and differences from previous work are highlighted. Comparisons between the predicted and experimental results verify that this method does yield a good estimation of the drop size distribution for certain applications. Other uses for this methodology in spray analysis are also discussed.

  17. Maximum entropy state of the quasi-geostrophic bi-disperse point vortex system: bifurcation phenomena under periodic boundary conditions

    NASA Astrophysics Data System (ADS)

    Funakoshi, Satoshi; Sato, Tomoyoshi; Miyazaki, Takeshi

    2012-06-01

    We investigate the statistical mechanics of quasi-geostrophic point vortices of mixed sign (bi-disperse system) numerically and theoretically. Direct numerical simulations under periodic boundary conditions are performed using a fast special-purpose computer for molecular dynamics (GRAPE-DR). Clustering of point vortices of like sign is observed and two-dimensional (2D) equilibrium states are formed. It is shown that they are the solutions of the 2D mean-field equation, i.e. the sinh-Poisson equation. The sinh-Poisson equation is generalized to study the 3D nature of the equilibrium states, and a new mean-field equation with the 3D Laplace operator is derived based on the maximum entropy theory. 3D solutions are obtained at very low energy level. These solution branches, however, cannot be traced up to the higher energy level at which the direct numerical simulations are performed, and transitions to 2D solution branches take place when the energy is increased.

  18. METSP: A Maximum-Entropy Classifier Based Text Mining Tool for Transporter-Substrate Identification with Semistructured Text

    PubMed Central

    Zhao, Min; Chen, Yanming; Qu, Dacheng; Qu, Hong

    2015-01-01

    The substrates of a transporter are not only useful for inferring the function of the transporter, but also important for discovering compound-compound interactions and for reconstructing metabolic pathways. Though plenty of data has been accumulated with the development of new technologies such as in vitro transporter assays, the search for substrates of transporters is far from complete. In this article, we introduce METSP, a maximum-entropy classifier devoted to retrieving transporter-substrate pairs (TSPs) from semistructured text. Based on the high-quality annotation from UniProt, METSP achieves high precision and recall in cross-validation experiments. When METSP is applied to 182,829 human transporter annotation sentences in UniProt, it identifies 3942 sentences with transporter and compound information. Finally, 1547 high-confidence human TSPs are identified for further manual curation, among which 58.37% are pairs with novel substrates not annotated in public transporter databases. METSP is the first efficient tool to extract TSPs from semistructured annotation text in UniProt. This tool can help to determine the precise substrates and drugs of transporters, thus facilitating drug-target prediction, metabolic network reconstruction, and literature classification. PMID:26495291
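
    Since a maximum-entropy classifier is equivalent to (multinomial) logistic regression, a minimal stand-in for a METSP-like sentence filter can be sketched with scikit-learn; the sentences and labels below are invented, not UniProt data.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      sentences = [
          "Transports glucose across the plasma membrane.",
          "Mediates uptake of serotonin into the cell.",
          "May be involved in cell adhesion.",
          "Acts as a scaffold for kinase signaling.",
      ]
      labels = [1, 1, 0, 0]   # 1 = contains transporter-substrate information

      clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
      clf.fit(sentences, labels)
      print(clf.predict(["Catalyzes transport of chloride ions."]))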

  19. Multilevel image thresholding based on 2D histogram and maximum Tsallis entropy--a differential evolution approach.

    PubMed

    Sarkar, Soham; Das, Swagatam

    2013-12-01

    Multilevel thresholding amounts to segmenting a gray-level image into several distinct regions. This paper presents a 2D histogram based multilevel thresholding approach to improve the separation between objects. Recent studies indicate that the results obtained with 2D histogram oriented approaches are superior to those obtained with 1D histogram based techniques in the context of bi-level thresholding. Here, a method to incorporate 2D histogram related information for generalized multilevel thresholding is proposed using the maximum Tsallis entropy. Differential evolution (DE), a simple yet efficient evolutionary algorithm of current interest, is employed to improve the computational efficiency of the proposed method. The performance of DE is investigated extensively through comparison with other well-known nature inspired global optimization techniques such as genetic algorithm, particle swarm optimization, artificial bee colony, and simulated annealing. In addition, the outcome of the proposed method is evaluated using a well-known benchmark--the Berkeley segmentation data set (BSDS300) with 300 distinct images. PMID:23955760
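
    A much-simplified sketch of the objective (bi-level, 1D histogram, rather than the paper's multilevel 2D version): maximize the pseudo-additive Tsallis criterion S_A + S_B + (1 - q) S_A S_B over the threshold using SciPy's differential evolution. The synthetic image and the value of q are assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(4)
      img = np.concatenate([rng.normal(70, 12, 4000), rng.normal(170, 15, 6000)])
      hist, _ = np.histogram(np.clip(img, 0, 255), bins=256, range=(0, 255))
      p = hist / hist.sum()
      q = 0.8

      def tsallis(pp):
          s = pp.sum()
          if s <= 0:
              return 0.0
          return (1.0 - np.sum((pp[pp > 0] / s) ** q)) / (q - 1.0)

      def neg_criterion(t):
          k = int(t[0])
          sa, sb = tsallis(p[:k]), tsallis(p[k:])
          return -(sa + sb + (1.0 - q) * sa * sb)

      res = differential_evolution(neg_criterion, bounds=[(1, 255)], seed=0)
      print("threshold:", int(res.x[0]))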

  20. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
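
    To make the varying-coefficient idea concrete, the sketch below lets both the intercept and the air-temperature slope vary with day of year through annual harmonics, estimated by ordinary least squares; the data are synthetic placeholders, and the authors' actual estimation procedure may differ.

```python
# Sketch of a time-varying coefficient model: water temperature is
# regressed on air temperature with an intercept beta0(day) and slope
# beta1(day), each expanded in annual harmonics. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
day = np.arange(365)
phase = 2 * np.pi * day / 365
air = 15 + 10 * np.sin(phase - 1.7) + rng.normal(0, 2, 365)
# Toy "truth": the air-to-water slope itself varies seasonally.
water = 8 + (0.5 + 0.2 * np.sin(phase)) * air + rng.normal(0, 1, 365)

basis = np.column_stack([np.ones(365), np.sin(phase), np.cos(phase)])
# Design matrix: harmonic basis for beta0(day), plus the same basis
# multiplied by air temperature for beta1(day).
X = np.hstack([basis, basis * air[:, None]])
coef, *_ = np.linalg.lstsq(X, water, rcond=None)

beta1 = basis @ coef[3:]  # recovered time-varying slope on air temp
print(np.round(beta1[[0, 91, 182, 273]], 2))  # ~0.5 + 0.2*sin(phase)
```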

  1. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys.

    PubMed

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A W; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-01-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design. PMID:26593056

  2. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    DOE PAGES

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A.W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; et al

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  3. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    PubMed Central

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A. W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K.; Dahmen, Karin A.

    2015-01-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-Le Chatelier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design. PMID:26593056

  4. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579
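
    For context, the Ordinary Kriging baseline that the BME approach is compared against can be sketched with the third-party pykrige package (not referenced in the paper); the coordinates and respiration values below are hypothetical. A BME implementation would additionally fold the soil-temperature "soft data" into the estimate, for which no widely adopted Python library exists, so only the baseline is shown.

```python
# Ordinary Kriging baseline sketch with pykrige; inputs are hypothetical
# sampling locations and soil respiration values, not the study's data.
import numpy as np
from pykrige.ok import OrdinaryKriging

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.0, 1.0, 3.0, 0.5])
resp = np.array([2.1, 2.8, 2.4, 3.0, 2.2])  # soil respiration (toy values)

ok = OrdinaryKriging(x, y, resp, variogram_model="spherical")
gridx = np.linspace(0, 4, 20)
gridy = np.linspace(0, 3, 20)
z_pred, z_var = ok.execute("grid", gridx, gridy)
print(z_pred.shape)  # (20, 20) interpolated respiration surface
```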

  5. A thermodynamic interpretation of Budyko and L'vovich formulations of annual water balance: Proportionality Hypothesis and maximum entropy production

    NASA Astrophysics Data System (ADS)

    Wang, Dingbao; Zhao, Jianshi; Tang, Yin; Sivapalan, Murugesu

    2015-04-01

    The paper forms part of the search for a thermodynamic explanation for the empirical Budyko Curve, addressing a long-standing research question in hydrology. Here this issue is pursued by invoking the Proportionality Hypothesis underpinning the Soil Conservation Service (SCS) curve number method widely used for estimating direct runoff at the event scale. In this case, the Proportionality Hypothesis posits that the ratio of continuing abstraction to its potential value is equal to the ratio of direct runoff to its potential value. Recently, the validity of the Proportionality Hypothesis has been extended to the partitioning of precipitation into runoff and evaporation at the annual time scale as well. In this case, the Proportionality Hypothesis dictates that the ratio of continuing evaporation to its potential value is equal to the ratio of runoff to its potential value. The Budyko Curve could then be seen as the straightforward outcome of the application of the Proportionality Hypothesis to estimate mean annual water balance. In this paper, we go further and demonstrate that the Proportionality Hypothesis itself can be seen as a result of the application of the thermodynamic principle of Maximum Entropy Production (MEP). In this way, we demonstrate a possible thermodynamic basis for the Proportionality Hypothesis, and consequently for the Budyko Curve. As a further extension, the L'vovich formulation for the two-stage partitioning of annual precipitation is also demonstrated to be a result of MEP: one for the competition between soil wetting and fast flow during the first stage; another for the competition between evaporation and base flow during the second stage.
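
    For readers unfamiliar with the SCS curve number method invoked here, the event-scale Proportionality Hypothesis and the runoff equation that follows from it are, in standard notation (F is continuing abstraction, S its potential value, Q direct runoff, P precipitation, and I_a the initial abstraction):

```latex
% Event-scale SCS curve-number proportionality and the resulting
% runoff equation (standard textbook form)
\frac{F}{S} = \frac{Q}{P - I_a}
\qquad\Longrightarrow\qquad
Q = \frac{(P - I_a)^2}{P - I_a + S}
```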

  6. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  7. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  8. Holography and entropy bounds in the plane wave matrix model

    SciTech Connect

    Bousso, Raphael; Mints, Aleksey L.

    2006-06-15

    As a quantum theory of gravity, matrix theory should provide a realization of the holographic principle, in the sense that a holographic theory should contain one binary degree of freedom per Planck area. We present evidence that Bekenstein's entropy bound, which is related to area differences, is manifest in the plane wave matrix model. If holography is implemented in this way, we predict crossover behavior at strong coupling when the energy exceeds N² in units of the mass scale.

  9. A Bayesian Maximum Entropy approach to address the change of support problem in the spatial analysis of childhood asthma prevalence across North Carolina.

    PubMed

    Lee, Seung-Jae; Yeatts, Karin B; Serre, Marc L

    2009-01-01

    The spatial analysis of data observed at different spatial observation scales leads to the change of support problem (COSP). A solution to the COSP widely used in linear spatial statistics consists in explicitly modeling the spatial autocorrelation of the variable observed at different spatial scales. We present a novel approach that takes advantage of the non-linear Bayesian Maximum Entropy (BME) extension of linear spatial statistics to address the COSP directly without relying on the classical linear approach. Our procedure consists in modeling data observed over large areas as soft data for the process at the local scale. We demonstrate the application of our approach to obtain spatially detailed maps of childhood asthma prevalence across North Carolina (NC). Because of the high prevalence of childhood asthma in NC, the small number problem is not an issue, so we can focus our attention solely to the COSP of integrating prevalence data observed at the county-level together with data observed at a targeted local scale equivalent to the scale of school-districts. Our spatially detailed maps can be used for different applications ranging from exploratory and hypothesis generating analyses to targeting intervention and exposure mitigation efforts. PMID:20300553

  10. A Bayesian Maximum Entropy approach to address the change of support problem in the spatial analysis of childhood asthma prevalence across North Carolina

    PubMed Central

    LEE, SEUNG-JAE; YEATTS, KARIN; SERRE, MARC L.

    2009-01-01

    The spatial analysis of data observed at different spatial observation scales leads to the change of support problem (COSP). A solution to the COSP widely used in linear spatial statistics consists in explicitly modeling the spatial autocorrelation of the variable observed at different spatial scales. We present a novel approach that takes advantage of the non-linear Bayesian Maximum Entropy (BME) extension of linear spatial statistics to address the COSP directly without relying on the classical linear approach. Our procedure consists in modeling data observed over large areas as soft data for the process at the local scale. We demonstrate the application of our approach to obtain spatially detailed maps of childhood asthma prevalence across North Carolina (NC). Because of the high prevalence of childhood asthma in NC, the small number problem is not an issue, so we can focus our attention solely to the COSP of integrating prevalence data observed at the county-level together with data observed at a targeted local scale equivalent to the scale of school-districts. Our spatially detailed maps can be used for different applications ranging from exploratory and hypothesis generating analyses to targeting intervention and exposure mitigation efforts. PMID:20300553

  11. Directional entropy based model for diffusivity-driven tumor growth.

    PubMed

    de Oliveira, Marcelo E; Neto, Luiz M G

    2016-04-01

    In this work, we present and investigate a multiscale model to simulate 3D growth of glioblastomas (GBMs) that incorporates features of the tumor microenvironment and derives macroscopic growth laws from microscopic tissue structure information. We propose a normalized version of the Shannon entropy as an alternative measure of directional anisotropy for estimating the diffusivity tensor in cases where the latter is unknown. In our formulation, the tumor aggressiveness and morphological behavior are tissue-type dependent, i.e. alterations in white and gray matter regions (which can e.g. be induced by normal aging in healthy individuals or neurodegenerative diseases) affect both tumor growth rates and tumor morphology. The feasibility of this new conceptual approach is supported by previous observations that the fractal dimension, which correlates with the Shannon entropy we calculate, is a quantitative parameter that characterizes the variability of brain tissue, thus justifying the further evaluation of this approach. PMID:27105991
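
    A minimal sketch of a normalized Shannon entropy used as a directional-anisotropy score is given below: it returns 0 for a single dominant direction and 1 for a fully isotropic histogram. The direction histogram is a hypothetical stand-in; the paper's estimator operates on tissue structure information.

```python
# Normalized Shannon entropy of a direction histogram: 0 = strongly
# anisotropic (one dominant direction), 1 = fully isotropic. The
# histogram values below are hypothetical placeholders.
import numpy as np

def normalized_entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(len(counts)))

print(normalized_entropy([10, 0, 0, 0]))  # one dominant direction -> 0.0
print(normalized_entropy([5, 5, 5, 5]))   # isotropic -> 1.0
```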

  12. Stability of ecological industry chain: an entropy model approach.

    PubMed

    Wang, Qingsong; Qiu, Shishou; Yuan, Xueliang; Zuo, Jian; Cao, Dayong; Hong, Jinglan; Zhang, Jian; Dong, Yong; Zheng, Ying

    2016-07-01

    A novel methodology is proposed in this study to examine the stability of an ecological industry chain network based on entropy theory. This methodology is developed according to the associated dissipative structure characteristics, i.e., complexity, openness, and nonlinearity. As defined in the methodology, the network organization is the object of study, while the main focus is the identification of core enterprises and core industry chains. It is proposed that the chain network should be established around the core enterprise, while supplementation of the core industry chain helps to improve system stability, which is verified quantitatively. A relational entropy model can be used to identify the core enterprise and the core eco-industry chain. It determines the core of the network organization and the core eco-industry chain through the link form and direction of node enterprises. Similarly, the conductive mechanism of different node enterprises can be examined quantitatively despite the absence of key data. A structural entropy model can be employed to solve the problem of the degree of order for the network organization. Results showed that the stability of the entire system could be enhanced by the supplemented chain around the core enterprise in the eco-industry chain network organization. As a result, the sustainability of the entire system could be further improved. PMID:27055893

  13. Solving maximum cut problems in the Adleman-Lipton model.

    PubMed

    Xiao, Dongmei; Li, Wenxia; Zhang, Zhizhou; He, Lin

    2005-12-01

    In this paper, we consider a procedure for solving maximum cut problems in the Adleman-Lipton model. The procedure works in O(n²) steps for maximum cut problems of an undirected graph with n vertices. PMID:16236426

  14. Extensive ground state entropy in supersymmetric lattice models

    SciTech Connect

    Eerten, Hendrik van

    2005-12-15

    We present the result of calculations of the Witten index for a supersymmetric lattice model on lattices of various type and size. Because the model remains supersymmetric at finite lattice size, the Witten index can be calculated using row-to-row transfer matrices and the calculations are similar to calculations of the partition function at negative activity -1. The Witten index provides a lower bound on the number of ground states. We find strong numerical evidence that the Witten index grows exponentially with the number of sites of the lattice, implying that the model has extensive entropy in the ground state.

  15. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

    In economics and the social sciences, inequality measures such as the Gini index, the Pietra index, etc., are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income. The most widely known of these models are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their relative performances with respect to the generalized beta of the second kind family are compared.
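
    Schematically, the construction is the usual maximum entropy problem with an extra constraint. The block below is a sketch in which g(x) stands in for the generalized Gini functional (whose exact form is not reproduced here), giving an exponential-family solution via Lagrange multipliers.

```latex
% Schematic maximum entropy problem with moment-type constraints and
% its exponential-family solution; g(x) is a stand-in for the
% generalized Gini functional.
\max_{p}\ -\int p(x)\,\ln p(x)\,dx
\quad \text{s.t.} \quad
\int p(x)\,dx = 1,\qquad
\int x\,p(x)\,dx = \mu,\qquad
\int g(x)\,p(x)\,dx = G,
\qquad
p^{*}(x) \propto \exp\!\big(-\lambda_{1}\,x - \lambda_{2}\,g(x)\big)
```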

  16. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative effect between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
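
    A minimal sketch of the estimation step, assuming scikit-learn's EM-based GaussianMixture as the maximum likelihood fitter; the two synthetic series below are placeholders for the stock market and rubber price data.

```python
# Two-component normal mixture fitted by maximum likelihood (EM) with
# scikit-learn; the series are synthetic stand-ins for the paper's data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
stock = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(1, 0.5, 200)])
rubber = 0.8 - 0.3 * stock + rng.normal(0, 0.2, 500)  # toy negative link
X = np.column_stack([stock, rubber])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.weights_)  # mixing proportions of the latent classes
print(gm.means_)    # component means in (stock, rubber) space
```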

  17. Maximum Entropy Method and Charge Flipping, a Powerful Combination to Visualize the True Nature of Structural Disorder from in situ X-ray Powder Diffraction Data

    SciTech Connect

    Samy, A.; Dinnebier, R; van Smaalen, S; Jansen, M

    2010-01-01

    In a systematic approach, the ability of the Maximum Entropy Method (MEM) to reconstruct the most probable electron density of highly disordered crystal structures from X-ray powder diffraction data was evaluated. As a case study, the ambient temperature crystal structures of disordered α-Rb₂[C₂O₄] and α-Rb₂[CO₃] and ordered δ-K₂[C₂O₄] were investigated in detail with the aim of revealing the 'true' nature of the apparent disorder. Different combinations of F (based on phased structure factors) and G constraints (based on structure-factor amplitudes) from different sources were applied in MEM calculations. In particular, a new combination of the MEM with the recently developed charge-flipping algorithm with histogram matching for powder diffraction data (pCF) was successfully introduced to avoid the inevitable bias of the phases of the structure-factor amplitudes by the Rietveld model. Completely ab initio electron-density distributions have been obtained with the MEM applied to a combination of structure-factor amplitudes from Le Bail fits with phases derived from pCF. All features of the crystal structures, in particular the disorder of the oxalate and carbonate anions, and the displacements of the cations, are clearly obtained. This approach bears the potential of a fast method of electron-density determination, even for highly disordered materials. All the MEM maps obtained in this work were compared with the MEM map derived from the best Rietveld refined model. In general, the phased observed structure factors obtained from Rietveld refinement (applying F and G constraints) were found to give the closest description of the experimental data and thus lead to the most accurate image of the actual disorder.

  18. Single-particle spectral density of the unitary Fermi gas: Novel approach based on the operator product expansion, sum rules and the maximum entropy method

    SciTech Connect

    Gubler, Philipp; Yamamoto, Naoki; Hatsuda, Tetsuo; Nishida, Yusuke

    2015-05-15

    Making use of the operator product expansion, we derive a general class of sum rules for the imaginary part of the single-particle self-energy of the unitary Fermi gas. The sum rules are analyzed numerically with the help of the maximum entropy method, which allows us to extract the single-particle spectral density as a function of both energy and momentum. These spectral densities contain basic information on the properties of the unitary Fermi gas, such as the dispersion relation and the superfluid pairing gap, for which we obtain reasonable agreement with the available results based on quantum Monte Carlo simulations.

  19. Reprint of: Connection between wave transport through disordered 1D waveguides and energy density inside the sample: A maximum-entropy approach

    NASA Astrophysics Data System (ADS)

    Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.

    2016-08-01

    We study the average energy - or particle - density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.

  20. Emergence of spacetime dynamics in entropy corrected and braneworld models

    SciTech Connect

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E. E-mail: mhd@shirazu.ac.ir

    2013-04-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p²) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.

  1. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  2. Shifting Distributions of Adult Atlantic Sturgeon Amidst Post-Industrialization and Future Impacts in the Delaware River: a Maximum Entropy Approach

    PubMed Central

    Breece, Matthew W.; Oliver, Matthew J.; Cimino, Megan A.; Fox, Dewayne A.

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960’s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570

  3. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960's. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570

  4. Entropy Corrected Holographic Dark Energy f(T) Gravity Model

    NASA Astrophysics Data System (ADS)

    Sharif, M.; Rani, Shamaila

    2014-01-01

    This paper is devoted to the study of the power-law entropy-corrected holographic dark energy (ECHDE) model in the framework of f(T) gravity. We assume an infrared (IR) cutoff in terms of the Granda-Oliveros (GO) length and discuss the constructed f(T) model in interacting as well as non-interacting scenarios. We explore some cosmological parameters like the equation of state (EoS), deceleration, and statefinder parameters, as well as the ω_T-ω_T′ analysis. The EoS and deceleration parameters indicate phantom behavior of the accelerated expansion of the universe. It is mentioned here that the statefinder trajectories give results consistent with the ΛCDM limit, while the evolution trajectory of the ω_T-ω_T′ phase plane does not approach the ΛCDM limit for either the interacting or the non-interacting case.

  5. Relevance Data for Language Models Using Maximum Likelihood.

    ERIC Educational Resources Information Center

    Bodoff, David; Wu, Bin; Wong, K. Y. Michael

    2003-01-01

    Presents a preliminary empirical test of a maximum likelihood approach to using relevance data for training information retrieval parameters. Discusses similarities to language models; the unification of document-oriented and query-oriented views; tests on data sets; algorithms and scalability; and the effectiveness of maximum likelihood…

  6. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of the daily maximum temperature is proposed. First, a deseasonalization procedure based on the truncated Fourier expansion is adopted. Then, the Johnson transformation functions are applied for data normalization. Finally, a fractionally integrated autoregressive moving average model is used to reproduce both short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10⁵ years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied to estimate the return periods of long sequences of days with maximum temperature above prefixed thresholds.

  7. Modeling the Overalternating Bias with an Asymmetric Entropy Measure.

    PubMed

    Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea

    2016-01-01

    Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418
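
    To illustrate the fitting machinery, the sketch below tunes the order of a Rényi entropy to hypothetical randomness ratings with SciPy's differential_evolution (the optimizer named in the abstract). The ratings are made-up numbers peaking at an alternation rate of about 0.6; note that a symmetric measure such as Rényi's entropy necessarily peaks at 0.5, which is precisely why the asymmetric Marcellin measure fits the overalternating bias better.

```python
# Fit the Renyi order to hypothetical randomness ratings with
# differential evolution; the ratings are invented for illustration.
import numpy as np
from scipy.optimize import differential_evolution

p_alt = np.array([0.1, 0.3, 0.5, 0.6, 0.7, 0.9])   # alternation rates
rating = np.array([0.2, 0.6, 0.9, 1.0, 0.8, 0.3])  # made-up ratings

def renyi(p, alpha):
    # Renyi entropy of a Bernoulli(p) alternation process, per symbol.
    return np.log(p ** alpha + (1 - p) ** alpha) / (1.0 - alpha)

def loss(params):
    alpha = params[0]
    pred = renyi(p_alt, alpha)
    pred = pred / pred.max()  # rescale to the rating range
    return np.sum((pred - rating) ** 2)

res = differential_evolution(loss, bounds=[(1.05, 8.0)], seed=0)
print(res.x, res.fun)  # residual misfit reflects the symmetry mismatch
```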

  8. Modeling the Overalternating Bias with an Asymmetric Entropy Measure

    PubMed Central

    Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea

    2016-01-01

    Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418

  9. Models, Entropy and Information of Temporal Social Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Karsai, Márton; Bianconi, Ginestra

    Temporal social networks are characterized by heterogeneous duration of contacts, which can either follow a power-law distribution, such as in face-to-face interactions, or a Weibull distribution, such as in mobile-phone communication. Here we model the dynamics of face-to-face interaction and mobile phone communication by a reinforcement dynamics, which explains the data observed in these different types of social interactions. We quantify the information encoded in the dynamics of these networks by the entropy of temporal networks. Finally, we show evidence that human dynamics is able to modulate the information present in social network dynamics when it follows circadian rhythms and when it is interfacing with a new technology such as the mobile-phone communication technology.

  10. Proper encoding for snapshot-entropy scaling in two-dimensional classical spin models

    NASA Astrophysics Data System (ADS)

    Matsueda, Hiroaki; Ozaki, Dai

    2015-10-01

    We reexamine the snapshot entropy of the Ising and three-state Potts models on the L×L square lattice. Focusing on how to encode the spin snapshot, we find that the entropy at Tc scales asymptotically as S ~ (1/3) ln L, which strongly reminds us of the entanglement entropy in one-dimensional quantum critical systems. This finding seems to support the idea that the snapshot entropy after the proper encoding is related to the holographic entanglement entropy. On the other hand, the anomalous scaling S_χ ~ χ^η ln χ for the coarse-grained snapshot entropy holds even for the proper encoding. These features originate in the fact that the largest singular value of the snapshot matrix is regulated by the proper encoding.
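
    One common way to define a snapshot entropy in this literature is through the normalized singular value spectrum of the snapshot matrix; the sketch below assumes that definition, and the random ±1 matrix is a placeholder for an actual Monte Carlo snapshot of the Ising model.

```python
# Snapshot entropy from the normalized singular spectrum of an L x L
# spin configuration (one common definition); the random matrix below
# is a placeholder for a real Monte Carlo snapshot.
import numpy as np

def snapshot_entropy(snapshot):
    s = np.linalg.svd(snapshot.astype(float), compute_uv=False)
    lam = s**2 / np.sum(s**2)  # normalized singular value spectrum
    lam = lam[lam > 0]
    return float(-np.sum(lam * np.log(lam)))

L = 64
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L))  # placeholder "snapshot"
print(snapshot_entropy(spins))
```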

  11. Assessing Bayesian model averaging uncertainty of groundwater modeling based on information entropy method

    NASA Astrophysics Data System (ADS)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong; Zhu, Xiaobin; Long, Yuqiao

    2016-07-01

    Because of groundwater conceptualization uncertainty, multi-model methods are usually used and the corresponding uncertainties are estimated by integrating Markov chain Monte Carlo (MCMC) and Bayesian model averaging (BMA) methods. Generally, the variance method is used to measure the uncertainties of a BMA prediction. The total variance of the ensemble prediction is decomposed into within-model and between-model variances, which represent the uncertainties derived from the parameters and the conceptual model, respectively. However, the uncertainty of a probability distribution cannot be comprehensively quantified by variance alone. A new measuring method based on information entropy theory is proposed in this study. Because the actual BMA process can hardly meet the ideal mutually exclusive, collectively exhaustive condition, the BMA predictive uncertainty is decomposed into parameter, conceptual model, and overlapped uncertainties. The overlapped uncertainty is induced by the combination of predictions from correlated model structures. In this paper, five simple analytical functions are first used to illustrate the feasibility of the variance and information entropy methods. A discrete distribution example shows that information entropy can be more appropriate than variance for describing between-model uncertainty. Two continuous distribution examples show that the two methods are consistent in measuring a normal distribution, and that information entropy is more appropriate than variance for describing a bimodal distribution. The two examples of BMA uncertainty decomposition demonstrate that the two methods are relatively consistent in assessing the uncertainty of a unimodal BMA prediction, while information entropy is more informative in describing the uncertainty decomposition of a bimodal BMA prediction. Then, based on a synthetic groundwater model, the variance and information entropy methods are used to assess the BMA uncertainty of groundwater modeling. The uncertainty assessments of
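
    The discrete-distribution point can be reproduced in a few lines: the two distributions below have identical variance, yet their Shannon entropies differ, illustrating why variance alone can miss bimodal between-model uncertainty. The numbers are arbitrary.

```python
# Two discrete distributions with identical variance (1.0) but
# different Shannon entropies; values are arbitrary illustrations.
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Two-point (bimodal) distribution on {-1, +1}.
x_bi, p_bi = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
# Unimodal binomial-shaped distribution on {-2, ..., +2}.
x_uni = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p_uni = np.array([1, 4, 6, 4, 1]) / 16.0

for x, p in ((x_bi, p_bi), (x_uni, p_uni)):
    var = np.sum(p * (x - np.sum(x * p)) ** 2)
    print(f"variance={var:.2f}  entropy={entropy(p):.3f}")
```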

  12. Interacting Entropy-Corrected Holographic Chaplygin Gas Model

    NASA Astrophysics Data System (ADS)

    Farooq, M. Umar; Jamil, Mubasher; Rashid, Muneer A.

    2010-10-01

    Holographic dark energy (HDE) presents a dynamical view of dark energy which is consistent with the observational data and has a solid theoretical background. Its definition follows from the entropy-area relation S(A), where S and A are entropy and area respectively. In the framework of loop quantum gravity, a modified definition of HDE called “entropy-corrected holographic dark energy” (ECHDE) has been proposed recently to explain dark energy with the help of quantum corrections to the entropy-area relation. Using this new definition, we establish a correspondence between the entropy-corrected holographic dark energy and the modified variable Chaplygin gas, the new modified Chaplygin gas, and the viscous generalized Chaplygin gas, and reconstruct the corresponding scalar potentials which describe the dynamics of the scalar field.

  13. Two aspects of black hole entropy in Lanczos-Lovelock models of gravity

    NASA Astrophysics Data System (ADS)

    Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.

    2012-03-01

    We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.

  14. The entropy in supernova explosions

    SciTech Connect

    Colgate, S.A.

    1990-12-06

    The explosion of a supernova forms because of the collapse to a neutron star. In addition, an explosion requires that a region of relatively high entropy be in contact with the neutron star, persisting for a relatively protracted period of time. The high entropy region ensures that the temperature in contact with the neutron star in hydrostatic equilibrium stays below some maximum. This temperature must be low enough that neutrino emission cooling is small; otherwise the equilibrium atmosphere will collapse, adding a large accretion mass to the neutron star. A so-called normal explosion shock, which must reverse the accretion flow corresponding to a typical stellar collapse, must have sufficient strength or pressure to reverse this flow and eject the matter with 10⁵¹ ergs for a typical type II supernova. Surprisingly, the matter behind such a shock wave has a relatively low entropy, low enough that neutrino cooling would be orders of magnitude faster than the expansion rate. The resulting accretion flow would be inside the Bondi radius and result in free-fall accretion inside the expanding rarefaction wave. The accreted mass, or reimplosion mass, unless stopped by a high entropy bubble, could then exceed that of bound neutron star models. In addition, the explosion shock would be overtaken by the rarefaction wave and either disappear or at least weaken. Hence, a hot, high entropy bubble is required to support an equilibrium atmosphere in contact with a relatively cold neutron star. Subsequently, during the expansion of the high entropy bubble that drives or pushes on the shocked matter, mixing of the matter of the high entropy bubble and the lower entropy shock-ejected matter is ensured. The mixing is driven by the negative entropy gradient between the high entropy bubble accelerating the shocked matter and the lower entropy of the matter behind the shock.

  15. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study on the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric sizes of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
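
    The Jacobson-Stockmayer extrapolation mentioned here has, up to the choice of coefficient, the following form, where n is the loop length, n₀ a reference length, and c = 3/2 for ideal Gaussian chains (secondary-structure energy models typically use a fitted coefficient instead):

```latex
% Jacobson-Stockmayer-type extrapolation: loop entropy falls off
% logarithmically with loop length n beyond a reference length n_0
\Delta S(n) \;\approx\; \Delta S(n_0) \;-\; c\,k_B \ln\!\left(\frac{n}{n_0}\right)
```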

  16. A maximum entropy approach to the study of residue-specific backbone angle distributions in α-synuclein, an intrinsically disordered protein

    PubMed Central

    Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad

    2014-01-01

    α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN–Hα and HN–HN NOEs, values for ³JHNHα, ¹JHαCα, ²JCαN, and ¹JCαN, as well as chemical shifts of ¹⁵N, ¹³Cα, and ¹³C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield the best agreement with the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20–30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20–40%) than seen in the database. A generally lower population of the αR region (10–20%) is found. Analysis of ¹H–¹H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein. PMID:24976112

  17. Develop and test a solvent accessible surface area-based model in conformational entropy calculations.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2012-05-25

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms of the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests; T was always set to 298.15 K throughout the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for the subsequent entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS

  18. Application of a Boltzmann-entropy-like concept in an agent-based multilane traffic model

    NASA Astrophysics Data System (ADS)

    Sugihakim, Ryan; Alatas, Husin

    2016-01-01

    We discuss the dynamics of an agent-based multilane traffic model using three defined rules. The dynamical characteristics of the model are described by a Boltzmann traffic entropy quantity, adopting the concept of Boltzmann entropy from statistical physics. The results are analyzed using fundamental diagrams based on lane density, entropy, and the derivative of entropy with respect to density. We show that three of the four possible initial-to-equilibrium state transition processes are allowed, and demonstrate that density and entropy fluctuations occur during the transition from the initial to the equilibrium states, exhibiting the expected self-organization process. The related concept of entropy can therefore be considered a new alternative quantity for describing the complexity of traffic dynamics.
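
    A minimal sketch of a Boltzmann-style traffic entropy in the spirit of this abstract is given below: count the microstates W compatible with the observed per-lane car counts and take S = ln W. The exact quantity used in the paper may differ, and the cell counts are hypothetical.

```python
# Boltzmann-style traffic entropy sketch: S = ln W, where W counts the
# microstates compatible with the per-lane car counts. The paper's exact
# definition may differ; the numbers here are hypothetical.
from math import comb, log

def lane_entropy(cells, cars_per_lane):
    # Each lane contributes ln C(cells, n): the number of ways n
    # indistinguishable cars can occupy `cells` road sites.
    return sum(log(comb(cells, n)) for n in cars_per_lane)

print(lane_entropy(100, [10, 10, 10]))  # evenly loaded lanes
print(lane_entropy(100, [28, 1, 1]))    # congested single lane -> lower S
```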

  19. The multi-hemoglobin system of the hydrothermal vent tube worm Riftia pachyptila. II. Complete polypeptide chain composition investigated by maximum entropy analysis of mass spectra.

    PubMed

    Zal, F; Lallier, F H; Green, B N; Vinogradov, S N; Toulmond, A

    1996-04-12

    The deep-sea tube worm Riftia pachyptila Jones possesses a complex of three extracellular Hbs: two in the vascular compartment, V1 (approximately 3500 kDa) and V2 (approximately 400 kDa), and one in the coelomic cavity, C1 (approximately 400 kDa). These native Hbs, their dissociation products and derivatives were subjected to electrospray ionization mass spectrometry (ESI-MS). The data were analyzed by the maximum entropy deconvolution system. We identified three groups of peaks for V1 Hb, at approximately 16, 23-27, and 30 kDa, corresponding to (i) two monomeric globin chains, b (Mr 16,133.5) and c (Mr 16,805.9); (ii) four linker subunits, L1-L4 (Mr 23,505.2, 23,851.4, 26,342.4, and 27,425.8, respectively); and (iii) one disulfide-bonded dimer D1 (Mr 31,720.7) composed of globin chains d (Mr 15,578.5) and e (Mr 16,148.3). V2 and C1 Hbs had no linkers and contained a glycosylated monomeric globin chain, a (Mr 15,933.4), and a second dimer D2 (Mr 32,511.7) composed of chains e and f (Mr 16,368.1). The dimer D1 was absent from C1 Hb, clearly differentiating V2 and C1 Hbs. These Hbs were also subjected to SDS-PAGE analysis for comparative purposes. The following models are proposed: ((cD1)(bD1)3) for the one-twelfth protomer of V1 Hb, and ((cD)(bD)6(aD)) (D corresponding to either D1 or D2) for V2 and C1 Hbs. HBL V1 Hb would be composed of 180 polypeptide chains, with 144 globin chains and 36 linker chains, each twelfth being in contact with three linker subunits, giving a total molecular mass of 3285 kDa. V2 and C1 would be composed of 24 globin chains, giving total molecular masses of 403 kDa and 406 kDa, respectively. These results are in excellent agreement with experimental Mr values determined by STEM mass mapping and MALLS. PMID:8621529

  20. Evaluation of the reliability of the maximum entropy method for reconstructing 3D and 4D NOESY-type NMR spectra of proteins.

    PubMed

    Shigemitsu, Yoshiki; Ikeya, Teppei; Yamamoto, Akihiro; Tsuchie, Yuusuke; Mishima, Masaki; Smith, Brian O; Güntert, Peter; Ito, Yutaka

    2015-02-01

    Despite their advantages in analysis, 4D NMR experiments are still infrequently used as a routine tool in protein NMR projects due to the long duration of the measurement and limited digital resolution. Recently, new acquisition techniques for speeding up multidimensional NMR experiments, such as nonlinear sampling, in combination with non-Fourier transform data processing methods have been proposed to be beneficial for 4D NMR experiments. Maximum entropy (MaxEnt) methods have been utilised for reconstructing nonlinearly sampled multi-dimensional NMR data. However, the artefacts arising from MaxEnt processing, particularly in NOESY spectra, have not yet been clearly assessed in comparison with other methods, such as quantitative maximum entropy, multidimensional decomposition, and compressed sensing. We compared MaxEnt with other methods in reconstructing 3D NOESY data acquired with variously reduced sparse sampling schedules and found that MaxEnt is robust, quick and competitive with other methods. Next, nonlinear sampling and MaxEnt processing were applied to 4D NOESY experiments, and the effect of the artefacts of MaxEnt was evaluated by calculating 3D structures from the NOE-derived distance restraints. Our results demonstrated that sufficiently converged and accurate structures (RMSD of 0.91 Å to the mean and 1.36 Å to the reference structures) were obtained even with NOESY spectra reconstructed from 1.6% randomly selected sampling points for indirect dimensions. This suggests that 3D MaxEnt processing in combination with nonlinear sampling schedules is still a useful and advantageous option for rapid acquisition of high-resolution 4D NOESY spectra of proteins. PMID:25545060

  1. Nonparametric identification and maximum likelihood estimation for hidden Markov models

    PubMed Central

    Alexandrovich, G.; Holzmann, H.; Leister, A.

    2016-01-01

    Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop a nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback–Leibler divergence of the hidden Markov model, also identifies the true parameter vector nonparametrically. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we establish the consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates and of nonparametric goodness of fit tests are investigated in a simulation study.
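
    The objective such an estimator maximizes is the HMM log-likelihood, computed with the scaled forward recursion. A minimal sketch, assuming scipy for Gaussian state-dependent densities; all parameter values are illustrative.

      import numpy as np
      from scipy.stats import norm

      def hmm_loglik(obs_dens, trans, init):
          """Scaled forward recursion: obs_dens[t, j] is the density of
          observation t under hidden state j; returns the log-likelihood."""
          loglik, alpha = 0.0, init * obs_dens[0]
          for t in range(len(obs_dens)):
              if t > 0:
                  alpha = (alpha @ trans) * obs_dens[t]
              c = alpha.sum()      # scaling constant avoids underflow
              loglik += np.log(c)
              alpha /= c
          return loglik

      x = np.array([0.1, 2.3, 1.9, -0.2, 2.1])
      dens = np.column_stack([norm.pdf(x, 0, 1), norm.pdf(x, 2, 1)])
      trans = np.array([[0.9, 0.1], [0.2, 0.8]])
      print(hmm_loglik(dens, trans, init=np.array([0.5, 0.5])))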

  2. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM-type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  3. Two-site entropy and quantum phase transitions in low-dimensional models.

    PubMed

    Legeza, O; Sólyom, J

    2006-03-24

    We propose a new approach to study quantum phase transitions in low-dimensional lattice models. It is based on studying the von Neumann entropy of two neighboring central sites in a long chain. It is demonstrated that the procedure works equally well for fermionic and spin models, and the two-site entropy is a better indicator of quantum phase transition than calculating gaps, order parameters, or the single-site entropy. The method is especially convenient when the density-matrix renormalization-group algorithm is used. PMID:16605844
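
    For chains small enough for exact diagonalization, the two-site entropy is obtained by tracing out all other sites and diagonalizing the 4x4 reduced density matrix. A minimal sketch for a spin-1/2 chain, using a random Hermitian matrix as a stand-in Hamiltonian (the paper's computations use DMRG):

      import numpy as np

      def two_site_entropy(psi, n_sites, i, j):
          """von Neumann entropy -Tr(rho ln rho) of the reduced density
          matrix of sites (i, j) of a spin-1/2 chain in pure state psi."""
          psi = psi.reshape([2] * n_sites)
          rest = tuple(k for k in range(n_sites) if k not in (i, j))
          psi = np.transpose(psi, (i, j) + rest).reshape(4, -1)
          rho = psi @ psi.conj().T                  # partial trace over 'rest'
          p = np.linalg.eigvalsh(rho)
          p = p[p > 1e-12]
          return float(-(p * np.log(p)).sum())

      rng = np.random.default_rng(0)
      h = rng.normal(size=(256, 256)); h = h + h.T   # toy 8-site "Hamiltonian"
      psi0 = np.linalg.eigh(h)[1][:, 0]              # its ground state
      print(two_site_entropy(psi0, 8, 3, 4))         # two neighboring central sites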

  4. Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.
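
    The cluster-size entropy is a plain Shannon entropy over the distribution of cluster sizes. A sketch, assuming the cultural clusters have already been identified and only their sizes are passed in:

      from collections import Counter
      from math import log

      def cluster_size_entropy(cluster_sizes):
          """S_c = -sum_s p(s) ln p(s), with p(s) the fraction of clusters
          having size s."""
          counts = Counter(cluster_sizes)
          n = sum(counts.values())
          return -sum((c / n) * log(c / n) for c in counts.values())

      print(cluster_size_entropy([100]))                  # monocultural: S_c = 0
      print(cluster_size_entropy([40, 20, 20, 10, 5, 5])) # multicultural: S_c > 0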

  5. Entropy, chaos, and excited-state quantum phase transitions in the Dicke model.

    PubMed

    Lóbez, C M; Relaño, A

    2016-07-01

    We study nonequilibrium processes in an isolated quantum system, the Dicke model, focusing on the role played by the transition from integrability to chaos and the presence of excited-state quantum phase transitions. We show that both diagonal and entanglement entropies are abruptly increased by the onset of chaos. Also, this increase ends in both cases just after the system crosses the critical energy of the excited-state quantum phase transition. The link between entropy production, the development of chaos, and the excited-state quantum phase transition is clearer for the entanglement entropy. PMID:27575109

  6. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.

  7. Entropy analysis on non-equilibrium two-phase flow models

    SciTech Connect

    Karwat, H.; Ruan, Y.Q.

    1995-09-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships.

  8. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model

    PubMed Central

    Chao, Anne; Jost, Lou; Hsieh, T. C.; Ma, K. H.; Sherwin, William B.; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information (“Shannon differentiation”) between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings. PMID:26067448

  9. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings. PMID:26067448
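
    Given allele frequencies, the entropy and differentiation measures above reduce to a few lines. A sketch assuming equal subpopulation weights and ln(k) as the normalization of the mutual information (a common choice; the paper's exact convention may differ):

      import numpy as np

      def shannon_entropy(freqs):
          """H = -sum p ln p over allele frequencies (zeros ignored)."""
          p = np.asarray(freqs, float)
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())

      def shannon_differentiation(subpop_freqs):
          """Normalized mutual information between subpopulations and alleles:
          I = H(pooled) - mean H(subpop), divided by its maximum ln(k)."""
          k = len(subpop_freqs)
          pooled = np.mean(np.asarray(subpop_freqs, float), axis=0)
          mean_h = np.mean([shannon_entropy(f) for f in subpop_freqs])
          return (shannon_entropy(pooled) - mean_h) / np.log(k)

      pops = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1]]  # allele frequencies per subpop
      print(shannon_differentiation(pops))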

  10. Comparison of spectral analysis with fast Fourier transform and maximum entropy method. Application to the role of molybdenum implantation on localized corrosion of Type 304 stainless steel

    SciTech Connect

    Beaunier, L.; Frydman, J.; Gabrielli, C.; Huet, F.; Keddam, M.

    1996-12-31

    A comparison of a spectral analysis using the fast Fourier transform (FFT) and the maximum entropy method (MEM) was carried out in the case in which both methods can be performed, that is, when several time acquisitions can be recorded. A summary of the principles of the MEM is given. Then the main properties of this method are investigated, that is, influence of the MEM order on the spectrum accuracy, validity of the low-frequency plateau usually given by this technique, overlapping of spectra measured for different frequency bandwidths, and influence of a slow evolution of the amplitude of the signal fluctuations. The susceptibility to pitting corrosion of type 304 stainless steel and type 304 modified by molybdenum (Mo) by means of ion implantation was studied. The power spectral densities (PSD) measured with the FFT and MEM techniques are in reasonable agreement, except for low electrochemical current noises (ECN) buried in the parasitic noise generated by the power supply. In that case, the FFT technique is more appropriate than the MEM, which gave qualitative results only. The type 304 stainless steel showed extensive metastable pitting leading to only a few macroscopic pits, whereas the type 304 Mo-implanted specimen showed very little metastable pitting leading to many hemispheric pits covered by the Mo-implanted layer, under which localized corrosion occurred.
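
    The MEM spectrum here is the all-pole (autoregressive) estimate from Burg's recursion. A textbook sketch, returned up to normalization, for comparison with the plain FFT periodogram; the test signal and model order are arbitrary:

      import numpy as np

      def burg_psd(x, order, nfreq=512):
          """Maximum entropy (Burg) PSD of a real signal on [0, pi]."""
          x = np.asarray(x, float) - np.mean(x)
          f, b = x.copy(), x.copy()            # forward / backward errors
          a, e = np.zeros(0), np.dot(x, x) / len(x)
          for _ in range(order):
              ff, bb = f[1:], b[:-1]
              k = 2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
              a = np.array([k]) if a.size == 0 else np.concatenate((a - k * a[::-1], [k]))
              f, b = ff - k * bb, bb - k * ff   # Levinson-style updates
              e *= 1.0 - k * k                  # shrink prediction error power
          w = np.linspace(0.0, np.pi, nfreq)
          z = np.exp(-1j * np.outer(w, np.arange(1, order + 1)))
          return w, e / np.abs(1.0 - z @ a) ** 2

      rng = np.random.default_rng(1)
      t = np.arange(1024)
      sig = np.sin(0.3 * t) + 0.5 * rng.normal(size=t.size)  # toy noise record
      w, psd = burg_psd(sig, order=20)
      print(w[np.argmax(psd)])                               # peak near 0.3 rad/sample
      fft_psd = np.abs(np.fft.rfft(sig))**2 / t.size         # periodogram, for comparison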

  11. Electron density topology of high-pressure Ba₈Si₄₆ from a combined Rietveld and maximum-entropy analysis

    SciTech Connect

    Tse, John S.; Flacau, Roxana; Desgreniers, Serge; Iitaka, Toshiaki; Jiang, J. Z.

    2007-11-01

    Under pressure, Ba₈Si₄₆ is found to undergo an isostructural transition, as observed by Raman spectroscopy, extended x-ray-absorption fine structure, and x-ray diffraction. Rietveld analysis of the x-ray diffraction data shows a homothetic contraction of the host lattice after the structural transition at 17 GPa. Using the Rietveld and maximum-entropy methods, we have performed an analysis of high resolution x-ray diffraction patterns collected from ambient to 30 GPa obtained in a diamond anvil cell using He as a quasihydrostatic pressure transmitting medium. The results indicate unambiguously that the homothetic phase transition at about 17 GPa is due to an extensive rehybridization of the Si atoms leading to a transfer of valence electrons from the bonding to the interstitial region. Consequently, the Si-Si bonds are weakened substantially at high density, leading to an abrupt collapse of the unit cell volume without a change in crystalline structure. The transition pressure and the change in the chemical bonding are remarkably similar to that observed in elemental Si-V.

  12. Constant Entropy Properties for an Approximate Model of Equilibrium Air

    NASA Technical Reports Server (NTRS)

    Hansen, C. Frederick; Hodge, Marion E.

    1961-01-01

    Approximate analytic solutions for properties of equilibrium air up to 15,000 K have been programmed for machine computation. Temperature, compressibility, enthalpy, specific heats, and speed of sound are tabulated as constant entropy functions of temperature. The reciprocal of acoustic impedance and its integral with respect to pressure are also given for the purpose of evaluating the Riemann constants for one-dimensional, isentropic flow.

  13. Improved model for the transit entropy of monatomic liquids

    NASA Astrophysics Data System (ADS)

    Chisolm, Eric; Bock, Nicolas; Wallace, Duane

    2010-03-01

    In the original formulation of vibration-transit (V-T) theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This implied that the transit contribution to energy vanishes, which is incorrect. Here we develop a new formulation that corrects this deficiency. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution Svib(T/θ0), where T is temperature and θ0 is the vibrational characteristic temperature, and (b) the transit contribution Str(T/θtr), where θtr is a scaling temperature for each liquid. The appearance of a common functional form of Str for all the liquids studied is deduced from the experimental data, when analyzed via the V-T formula. The theoretical entropy of melting is derived, in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ0 for Na and Cu, based on density functional theory, provides verification of our analysis and V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  14. Hierarchical Bayesian spatio-temporal modeling and entropy-based network design

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Jin, B.; Chan, E.

    2012-12-01

    Typical spatio-temporal data include temperature, precipitation, atmospheric pressure, ozone concentration, personal income, infection prevalence, and mosquito populations, among others. In this paper, such data in a given region are modeled by hierarchical Bayesian kriging, and an environmental network design problem is also explored. For demonstration, we consider the ozone concentrations in the Toronto region of Ontario, Canada. There are many missing observations in the data. To proceed, we first formulate the hierarchical spatio-temporal model in terms of observed data. We then fill in some missing observations so that the data have a staircase structure. Thus, in light of Le and Zidek (2006), we model the ozone concentrations in the Toronto region by hierarchical Bayesian kriging and derive a conditional predictive distribution of the ozone concentrations over unknown locations. To decide whether a new monitoring station needs to be added or an existing station can be closed down, we solve this environmental network design problem by using the principle of maximum entropy.

  15. Rényi entropy perspective on topological order in classical toric code models

    NASA Astrophysics Data System (ADS)

    Helmes, Johannes; Stéphan, Jean-Marie; Trebst, Simon

    2015-09-01

    Concepts of information theory are increasingly used to characterize collective phenomena in condensed matter systems, such as the use of entanglement entropies to identify emergent topological order in interacting quantum many-body systems. Here, we employ classical variants of these concepts, in particular Rényi entropies and their associated mutual information, to identify topological order in classical systems. As with their quantum counterparts, the presence of topological order can be identified in such classical systems via a universal, subleading contribution to the prevalent volume and boundary laws of the classical Rényi entropies. We demonstrate that an additional subleading O(1) contribution generically arises for all Rényi entropies S(n) with n ≥ 2 when driving the system towards a phase transition, e.g., into a conventionally ordered phase. This additional subleading term, which we dub the connectivity contribution, tracks back to partial subsystem ordering and is proportional to the number of connected parts in a given bipartition. Notably, the Levin-Wen summation scheme, typically used to extract the topological contribution to the Rényi entropies, does not fully eliminate this additional connectivity contribution in this classical context. This indicates that the distillation of topological order from Rényi entropies requires an additional level of scrutiny to distinguish topological from nontopological O(1) contributions. This is also the case for quantum systems, for which we discuss which entropies are sensitive to these connectivity contributions. We showcase these findings by extensive numerical simulations of a classical variant of the toric code model, for which we study the stability of topological order in the presence of a magnetic field and at finite temperatures from a Rényi entropy perspective.

  16. Tracking instantaneous entropy in heartbeat dynamics through inhomogeneous point-process nonlinear models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-01-01

    Measures of entropy have proved to be powerful quantifiers of complex nonlinear systems, particularly when applied to stochastic series of heartbeat dynamics. Despite the remarkable achievements obtained through standard definitions of approximate and sample entropy, a time-varying definition of entropy characterizing the physiological dynamics at each moment in time is still missing. To this end, we propose two novel measures of entropy based on the inhomogeneous point-process theory. The RR interval series is modeled through probability density functions (pdfs) which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through such probability functions, the proposed indices are able to provide instantaneous tracking of autonomic nervous system complexity. Of note, the distance between the time-varying phase-space vectors is calculated through the Kolmogorov-Smirnov distance of two pdfs. Experimental results, obtained from the analysis of RR interval series extracted from ten healthy subjects during stand-up tasks, suggest that the proposed entropy indices provide instantaneous tracking of the heartbeat complexity, also allowing for the definition of complexity variability indices. PMID:25571453

  17. Holographic f(T)-gravity model with power-law entropy correction

    NASA Astrophysics Data System (ADS)

    Karami, K.; Asadzadeh, S.; Abdolmaleki, A.; Safari, Z.

    2013-10-01

    Using the correspondence between the f(T)-gravity model and the holographic dark energy model with the power-law entropy correction, we reconstruct the holographic f(T)-gravity model with the power-law entropy correction. We fit the model parameters by using the latest observational data including type Ia supernovae, baryon acoustic oscillations, cosmic microwave background, and Hubble parameter data. We also check the viability of our model using a cosmographic analysis approach. Using the best-fit values of the model, we obtain the evolutionary behavior of the effective torsion equation-of-state parameter of the power-law entropy-corrected holographic f(T)-gravity model, as well as the deceleration parameter of the Universe. We also investigate different energy conditions in our model. Furthermore, we examine the validity of the generalized second law of gravitational thermodynamics. Finally, we examine the growth rate of the matter density perturbation in our model. We conclude that in the power-law entropy-corrected holographic f(T)-gravity model, the Universe begins in a matter-dominated phase and approaches a de Sitter regime at late times, as expected. It can also justify the transition from the quintessence state to the phantom regime in the near past, as indicated by recent observations. Moreover, this model is consistent with current data, it passes the cosmographic test, and it fits the data of the growth factor as well as the ΛCDM model.

  18. Computational realizations of the entropy condition in modeling congested traffic flow. Final report

    SciTech Connect

    Bui, D.D.; Nelson, P.; Narasimhan, S.L.

    1992-04-01

    Existing continuum models of traffic flow tend to provide somewhat unrealistic predictions for conditions of congested flow. Previous approaches to modeling congested flow conditions are based on various types of special treatments at the congested freeway sections. Ansorge (Transpn. Res. B, 24B(1990), 133-143) has suggested that such difficulties might be substantially alleviated, even for the simple conservation model of Lighthill and Whitham, if the entropy condition were incorporated into the numerical schemes. In this report the numerical aspects and effects of incorporating the entropy condition in congested traffic flow problems are discussed. Results for simple scenarios involving dissipation of traffic jams suggest that Godunov's method, which is a numerical technique that incorporates the entropy condition, is more accurate than two alternative methods. Similarly, numerical results for this method, applied to simple model problems involving formation of traffic jams, appear at least as realistic as those obtained from the well-known code of FREFLO.
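
    Godunov's method resolves each cell interface with the entropy-satisfying Riemann solution, which is exactly why simulated jams dissipate physically. A minimal sketch for the Lighthill-Whitham conservation model with a Greenshields flux q(rho) = rho(1 - rho), on a periodic road in normalized units:

      import numpy as np

      def godunov_lwr(rho, steps, dt=0.4, dx=1.0):
          """Godunov scheme for rho_t + q(rho)_x = 0, with q = rho(1 - rho)."""
          q = lambda r: r * (1.0 - r)
          for _ in range(steps):
              rl, rr = rho, np.roll(rho, -1)   # states on each side of every interface
              # Godunov flux for a concave q with maximum at rho = 1/2
              flux = np.where(rl <= rr,
                              np.minimum(q(rl), q(rr)),          # shock side
                              np.where((rl > 0.5) & (rr < 0.5),
                                       q(0.5),                   # transonic rarefaction
                                       np.maximum(q(rl), q(rr))))
              rho = rho - (dt / dx) * (flux - np.roll(flux, 1))
          return rho

      rho0 = np.where(np.arange(100) < 50, 0.9, 0.2)  # a jam ahead of light traffic
      print(godunov_lwr(rho0, steps=200)[::10].round(2))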

  19. A stochastic Pella-Tomlinson model and its maximum sustainable yield.

    PubMed

    Bordet, Charles; Rivest, Louis-Paul

    2014-11-01

    This paper investigates the biological reference points, such as the maximum sustainable yield (MSY), for the Pella-Tomlinson and the Fox surplus production models (SPM) in the presence of a multiplicative environmental noise. These models are used in fisheries stock assessment as a firsthand tool for the elaboration of harvesting strategies. We derive conditions on the environmental noise distribution that ensure that the biomass process for an SPM has a stationary distribution, so that extinction is avoided. Explicit results about the stationary behavior of the biomass distribution are provided for a particular specification of the noise. The consideration of random noise in the MSY calculations leads to more conservative harvesting targets than deterministic models. The derivations account for a possible noise autocorrelation that represents the occurrence of spells of good and bad years. The impact of the noise is found to be more severe for the Pella-Tomlinson model when the asymmetry parameter p is large, while it is less important for the Fox model. PMID:24992235

  20. Theoretical study of the position of the transition state for unimolecular reactions: an entropy model

    NASA Astrophysics Data System (ADS)

    Zou, Jian-Wei; Chen, Wei-Chen; Kao, Che-Lun; Yu, Chin-Hui

    2004-01-01

    An entropy model that can be used to quantitatively estimate the position of the transition state for unimolecular reactions is presented. A series of 12 isomeric reactions has been investigated to validate this model. It has been shown that the position of the transition state predicted by the entropy model (χS≠) is qualitatively consistent with the Hammond postulate (HP) except for the isomerizations of FSSF and CH3SH. The inconsistency for these two reactions may well be ascribed to the dissociative character of their transition states, which would lead the entropy to deviate from normal unimolecular behavior. Comparisons of χS≠ values with other quantities characterizing the position of the transition state have also been made.

  1. Pathway model, superstatistics, Tsallis statistics, and a generalized measure of entropy

    NASA Astrophysics Data System (ADS)

    Mathai, A. M.; Haubold, H. J.

    2007-02-01

    The pathway model of Mathai [A pathway to matrix-variate gamma and normal densities, Linear Algebra Appl. 396 (2005) 317-328] is shown to be inferable from the maximization of a certain generalized entropy measure. This entropy is a variant of the generalized entropy of order α, considered in Mathai and Rathie [Basic Concepts in Information Theory and Statistics: Axiomatic Foundations and Applications, Wiley Halsted, New York and Wiley Eastern, New Delhi, 1975], and it is also associated with Shannon, Boltzmann-Gibbs, Rényi, Tsallis, and Havrda-Charvát entropies. The generalized entropy measure introduced here is also shown to have interesting statistical properties and it can be given probabilistic interpretations in terms of inaccuracy measure, expected value, and information content in a scheme. Particular cases of the pathway model are shown to be Tsallis statistics [C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479-487] and superstatistics introduced by Beck and Cohen [Superstatistics, Physica A 322 (2003) 267-275]. The pathway model's connection to fractional calculus is illustrated by considering a fractional reaction equation.

  2. Modeling of groundwater productivity in northeastern Wasit Governorate, Iraq using frequency ratio and Shannon's entropy models

    NASA Astrophysics Data System (ADS)

    Al-Abadi, Alaa M.

    2015-04-01

    In recent years, delineation of groundwater productivity zones plays an increasingly important role in sustainable management of groundwater resources throughout the world. In this study, the groundwater productivity index (GWPI) of northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in the framework of GIS. Eight factors believed to influence the groundwater occurrence in the study area were selected and used as the input data. These factors were elevation (m), slope angle (degree), geology, soil, aquifer transmissivity (m²/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/sec) was prepared. 47 boreholes (70 %) were used as training data and the remaining 21 (30 %) were used for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results of the analysis indicate that the FR model with a success rate of 87.4 % and prediction rate of 86.9 % performed slightly better than Shannon's entropy model with a success rate of 84.4 % and prediction rate of 82.4 %. The resultant groundwater productivity index was classified into five classes using the natural breaks classification scheme: very low, low, moderate, high, and very high. The high-very high classes for the FR and Shannon's entropy models occurred within 30 % (217 km²) and 31 % (220 km²), respectively, indicating low productivity conditions of the aquifer system. Both models were capable of predicting the GWPI with very good results, but FR was better in terms of success and prediction rates. The results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.
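
    The frequency ratio of one factor class is simply the ratio of two shares: the class's share of training boreholes over its share of the study area. A sketch with illustrative numbers (47 training boreholes as in the study; the class count and areas are made up):

      def frequency_ratio(class_boreholes, class_area, total_boreholes, total_area):
          """FR > 1 marks a class favourable for groundwater productivity."""
          return (class_boreholes / total_boreholes) / (class_area / total_area)

      # e.g. a hypothetical transmissivity class covering 12% of the area
      # but holding about 30% of the high-yield training boreholes
      print(frequency_ratio(class_boreholes=14, class_area=120.0,
                            total_boreholes=47, total_area=1000.0))  # about 2.5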

  3. A minimum-entropy-production criterion to compare credit risk models

    NASA Astrophysics Data System (ADS)

    Baltazar-Larios, Fernando; Longoria, Pablo Padilla

    2012-09-01

    We study the problem of comparing several models for determining credit risk, which could, in general, be based on very different methodologies. A natural question arises as to which one could be considered more reliable. We give evidence, both numerical and analytical, that selecting the model that minimizes the relative entropy production from period to period is the best choice.
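
    Under this criterion, each candidate model's period-to-period relative entropy (Kullback-Leibler divergence) is accumulated, and the model with the smallest total is preferred. A sketch with made-up loss distributions:

      import numpy as np

      def relative_entropy(p, q, eps=1e-12):
          """D(p || q) between two discretized distributions."""
          p = np.asarray(p, float) + eps
          q = np.asarray(q, float) + eps
          p, q = p / p.sum(), q / q.sum()
          return float((p * np.log(p / q)).sum())

      def entropy_production(period_dists):
          """Total period-to-period relative entropy of one model."""
          return sum(relative_entropy(period_dists[t + 1], period_dists[t])
                     for t in range(len(period_dists) - 1))

      model_a = [[0.70, 0.20, 0.10], [0.68, 0.22, 0.10], [0.67, 0.22, 0.11]]
      model_b = [[0.70, 0.20, 0.10], [0.40, 0.35, 0.25], [0.70, 0.18, 0.12]]
      print(entropy_production(model_a), entropy_production(model_b))  # prefer a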

  4. Modeling East African tropical glaciers during the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Doughty, Alice; Kelly, Meredith; Russell, James; Jackson, Margaret; Anderson, Brian; Nakileza, Robert

    2016-04-01

    The timing and magnitude of tropical glacier fluctuations since the last glacial maximum could elucidate how climatic signals transfer between hemispheres. We focus on ancient glaciers of the East African Rwenzori Mountains, Uganda/D.R. Congo, where efforts to map and date the moraines are ongoing. We use a coupled mass balance - ice flow model to infer past climate by simulating glacier extents that match the mapped and dated LGM moraines. A range of possible temperature/precipitation change combinations (e.g. -15% precipitation and -7 °C temperature change) allows simulated glaciers to fit the LGM moraines dated to 20,140 ± 610 and 23,370 ± 470 years old.

  5. Minimal length effects on entanglement entropy of spherically symmetric black holes in the brick wall model

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Yang, Haitang; Ying, Shuxuan

    2016-01-01

    We compute the black hole horizon entanglement entropy for a massless scalar field in the brick wall model by incorporating the minimal length. Taking the minimal length effects on the occupation number n(ω, l) and the Hawking temperature into consideration, we obtain the leading ultraviolet (UV) divergent term and the subleading logarithmic term in the entropy. The leading divergent term scales with the horizon area. The subleading logarithmic term is the same as that in the usual brick wall model without the minimal length.

  6. Fisher information, Rényi entropy power and quantum phase transition in the Dicke model

    NASA Astrophysics Data System (ADS)

    Nagy, Á.; Romera, E.

    2012-07-01

    Fisher information, Rényi entropy power and the Fisher-Rényi information product are presented for the Dicke model. There is a quantum phase transition in this quantum optical model. It is pointed out that there is an abrupt change in the Fisher information, Rényi entropy power, and the Fisher, Shannon and Rényi lengths at the transition point. It is found that these quantities diverge as the characteristic length does around the critical value of the coupling strength λc for any value of the parameter β.

  7. Thermospheric density model biases at the 23rd sunspot maximum

    NASA Astrophysics Data System (ADS)

    Pardini, C.; Moe, K.; Anselmo, L.

    2012-07-01

    Uncertainties in the neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of the orbit prediction and determination process at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, or to create even more precise and sophisticated tools. Special attention has also been paid to identifying more appropriate solar and geomagnetic indices. However, the operational models still suffer from weaknesses. Even though a number of studies have been carried out in the last few years to define the performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models, widely used in spacecraft operations, were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. During the time span considered, for each satellite and atmospheric density model, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were

  8. Linearized model collision operators for multiple ion species plasmas and gyrokinetic entropy balance equations

    SciTech Connect

    Sugama, H.; Watanabe, T.-H.; Nunami, M.

    2009-11-15

    Linearized model collision operators for multiple ion species plasmas are presented that conserve particles, momentum, and energy and satisfy adjointness relations and Boltzmann's H-theorem even for collisions between different particle species with unequal temperatures. The model collision operators are also written in the gyrophase-averaged form that can be applied to the gyrokinetic equation. Balance equations for the turbulent entropy density, the energy of electromagnetic fluctuations, the turbulent transport fluxes of particle and heat, and the collisional dissipation are derived from the gyrokinetic equation including the collision term and Maxwell equations. It is shown that, in the steady turbulence, the entropy produced by the turbulent transport fluxes is dissipated in part by collisions in the nonzonal-mode region and in part by those in the zonal-mode region after the nonlinear entropy transfer from nonzonal to zonal modes.

  9. Fine structure of the entanglement entropy in the O(2) model

    NASA Astrophysics Data System (ADS)

    Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z. Y.; Meurice, Y.

    2016-01-01

    We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture.

  10. Fine structure of the entanglement entropy in the O(2) model.

    PubMed

    Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z Y; Meurice, Y

    2016-01-01

    We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture. PMID:26871055

  11. Entity Relation Detection with Factorial Hidden Markov Models and Maximum Entropy Discriminant Latent Dirichlet Allocations

    ERIC Educational Resources Information Center

    Li, Dingcheng

    2011-01-01

    Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…

  12. Onboard magnetic field modeling for Solar Maximum Mission /SMM/

    NASA Technical Reports Server (NTRS)

    Headrick, R. D.; Markley, F. L.

    1977-01-01

    Analysis and simulation results are presented for magnetic field models for use in attitude acquisition onboard Solar Maximum Mission (SMM). A study was made of the degree of the spherical harmonic expansion of the magnetic field required, considering mission requirements, modeling errors, and magnetometer quantization and biases. It is shown that a fifth-degree field is sufficient to provide two-degree roll angle determination accuracy with a residual magnetic bias of 10 milligauss. Also, a spherical harmonic expansion for the McIlwain L-parameter is included for the first time. This parameter will be telemetered to ground with experimental data. The fifth-degree expansion will provide the L-parameter to within two percent of accepted values. The additional onboard computational burden is the storage of 36 coefficients and an increase of about 15% in computation time. Prototype flight code was developed which is anticipated to require about 2000 bytes of core storage and 30 milliseconds of computation time per orbit point on the NSSC-1 computer.

  13. Cosmic-ray modulation at solar maximum: modeling

    NASA Astrophysics Data System (ADS)

    Kota, J.; Jokipii, J.

    The modulation of the galactic and anomalous cosmic rays is a result of the energy loss cosmic rays suffer during their passage through the heliospheric magnetic and electric fields. By contrast with the years of quiet heliosphere, which can be described with a tilted dipole model that remains stable for several solar rotations, cosmic-ray modulation during the periods of the active Sun is thought to be dominated by transient events. Propagating disturbances forming global merged interaction regions (GMIRs) act as propagating barriers. The heliospheric current sheet (HCS) dividing the opposite polarities of the heliospheric magnetic field (HMF) becomes highly tilted and may contain a significant quadrupole component, leading to a warped current sheet with a profound north-south asymmetry. We present numerical simulations to model cosmic-ray transport and acceleration in the heliosphere during solar maximum. Our 2-D and 3-D codes are extended to include several transients. We consider various complex configurations of the HMF, as well as a dynamical variation of the tilted current sheet, involving meridional field components. We discuss the effects of GMIRs on galactic and anomalous cosmic rays, and compare the time evolution of the two different species, as the disturbance propagates outward through the termination shock (TS) into the heliosheath. Some aspects of cosmic-ray modulation beyond the TS, in the subsonic heliosheath will also be addressed.

  14. Correspondence between entropy-corrected holographic and Gauss-Bonnet dark-energy models

    NASA Astrophysics Data System (ADS)

    Setare, M. R.; Jamil, Mubasher

    2010-11-01

    In the present work we investigate the cosmological implications of the entropy-corrected holographic dark-energy (ECHDE) density in the Gauss-Bonnet framework. This is motivated by the loop quantum gravity corrections to the entropy-area law. Assuming the two cosmological scenarios are valid simultaneously, we show that there is a correspondence between the ECHDE scenario in a flat universe and the phantom dark-energy model in the framework of the Gauss-Bonnet theory with a potential. This correspondence leads consistently to an accelerating universe.

  15. Identifying topological order in the Shastry-Sutherland model via entanglement entropy

    NASA Astrophysics Data System (ADS)

    Ronquillo, David; Peterson, Michael

    2015-03-01

    It is known that for a topologically ordered state the area law for the entanglement entropy shows a negative universal additive constant contribution, - γ , called the topological entanglement entropy. We theoretically study the entanglement entropy of the two-dimensional Shastry-Sutherland quantum antiferromagnet using exact diagonalization on clusters of 16 and 24 spins. By utilizing the Kitaev-Preskill construction [A. Kitaev and J. Preskill, Phys. Rev. Lett. 96, 110404 (2006)] we extract a finite topological term, - γ , in the region of bond-strength parameter space corresponding to high geometrical frustration. Thus, we provide strong evidence for the existence of an exotic topologically ordered state and shed light on the nature of this model's strongly frustrated, and long controversial, intermediate phase. We acknowledge California State University Long Beach Office of Research and Sponsored Programs. Published as Phys. Rev. B 90, 201108(R) (2014).

  16. Entropy in the Bak-Sneppen Model for Self-Organized Criticality

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Bin

    2003-03-01

    The distributions of fitness on the sites of one- and two-dimensional lattices are studied for the nearest-neighbour Bak-Sneppen model of self-organized criticality. The distributions show complicated behaviour, indicating that the system is far from equilibrium. By introducing the "energy" of a site, the entropy flow from the system to its environment is investigated.
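
    The nearest-neighbour dynamics are simple enough to state in a few lines: repeatedly replace the minimal fitness and its two neighbours with fresh random numbers. A sketch on a 1D ring; the critical threshold near 0.667 emerges from the dynamics rather than being put in by hand:

      import numpy as np

      def bak_sneppen(n_sites=200, steps=100_000, seed=0):
          """1D nearest-neighbour Bak-Sneppen model; returns final fitnesses."""
          rng = np.random.default_rng(seed)
          fit = rng.random(n_sites)
          for _ in range(steps):
              i = int(np.argmin(fit))                  # weakest site
              for j in (i - 1, i, (i + 1) % n_sites):  # it and its neighbours
                  fit[j] = rng.random()
          return fit

      fit = bak_sneppen()
      print(fit.min(), np.quantile(fit, 0.05))  # region below ~2/3 nearly empty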

  17. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is so large that synoptic large-scale circulations are accommodated, they can be used for the simulation of the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put differently, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  18. Construction of BGK Models with a Family of Kinetic Entropies for a Given System of Conservation Laws

    NASA Astrophysics Data System (ADS)

    Bouchut, F.

    1999-04-01

    We introduce a general framework for kinetic BGK models. We assume to be given a system of hyperbolic conservation laws with a family of Lax entropies, and we characterize the BGK models that lead to this system in the hydrodynamic limit, and that are compatible with the whole family of entropies. This is obtained by a new characterization of Maxwellians as entropy minimizers that can take into account the simultaneous minimization problems corresponding to the family of entropies. We deduce a general procedure to construct such BGK models, and we show how classical examples enter the framework. We apply our theory to isentropic gas dynamics and full gas dynamics, and in both cases we obtain new BGK models satisfying all entropy inequalities.

  19. Entropy production analysis for hump characteristics of a pump turbine model

    NASA Astrophysics Data System (ADS)

    Li, Deyou; Gong, Ruzhi; Wang, Hongjie; Xiang, Gaoming; Wei, Xianzhu; Qin, Daqing

    2016-06-01

    The hump characteristic is one of the main problems for the stable operation of pump turbines in pump mode. However, traditional methods cannot directly reflect the energy dissipation in the hump region. In this paper, 3D simulations are carried out using the SST k-ω turbulence model in pump mode under different guide vane openings. The numerical results agree with the experimental data. The entropy production theory is introduced to determine the flow losses in the whole passage, based on the numerical simulation. The variation of entropy production under different guide vane openings is presented. The results show that entropy production varies in a wave-like manner, with peaks under different guide vane openings that correspond to troughs in the external characteristic curves. Entropy production mainly happens in the runner, guide vanes and stay vanes for a pump turbine in pump mode. Finally, the entropy production rate distribution in the runner, guide vanes and stay vanes is analyzed for four points under the 18 mm guide vane opening in the hump region. The analysis indicates that the losses of the runner and guide vanes lead to the hump characteristics. In addition, the losses mainly occur in the runner inlet near the band and on the suction surface of the blades. In the guide vanes and stay vanes, the losses come from the pressure surface of the guide vanes and the wake effects of the vanes. Entropy production analysis thus offers a new way to find the causes of hump characteristics in a pump turbine, and it could provide basic theoretical guidance for the loss analysis of hydraulic machinery.

  20. A simple modelling approach for prediction of standard state real gas entropy of pure materials.

    PubMed

    Bagheri, M; Borhani, T N G; Gandomi, A H; Manan, Z A

    2014-01-01

    The performance of an energy conversion system depends on exergy analysis and entropy generation minimisation. A new simple four-parameter equation is presented in this paper to predict the standard state absolute entropy of real gases (SSTD). The model development and validation were accomplished using the Linear Genetic Programming (LGP) method and a comprehensive dataset of 1727 widely used materials. The proposed model was compared with the results obtained using a three-layer feed forward neural network model (FFNN model). The root-mean-square error (RMSE) and the coefficient of determination (r²) of all data obtained for the LGP model were 52.24 J/(mol K) and 0.885, respectively. Several statistical assessments were used to evaluate the predictive power of the model. In addition, this study provides an appropriate understanding of the most important molecular variables for exergy analysis. Compared with the LGP based model, the application of FFNN improved the r² to 0.914. The developed model is useful in the design of materials to achieve a desired entropy value. PMID:25158071

  1. First-passage time for subdiffusion: The nonadditive entropy approach versus the fractional model

    NASA Astrophysics Data System (ADS)

    Kosztołowicz, Tadeusz; Lewandowska, Katarzyna D.

    2012-08-01

    We study the similarities and differences between different models concerning subdiffusion. More particularly, we calculate first passage time (FPT) distributions for subdiffusion, derived from Green's functions of nonlinear equations obtained from Sharma-Mittal's, Tsallis's, and Gauss's nonadditive entropies. Then we compare these with FPT distributions calculated from a fractional model using a subdiffusion equation with a fractional time derivative. All of the Green's functions give us exactly the same standard relation ⟨(Δx)²⟩ = 2D_α t^α which characterizes subdiffusion (0 < α < 1), but generally the FPT distributions are not equivalent to one another. We show here that the FPT distribution for the fractional model is asymptotically equal to that of the Sharma-Mittal model in the long-time limit only if, in the latter case, one of the three parameters describing the Sharma-Mittal entropy, r, depends on α and satisfies the specific equation derived in this paper, whereas the other two models mentioned above give FPT distributions different from the fractional model's. Green's functions obtained from the Sharma-Mittal and fractional models, for r obtained from this particular equation, are very similar to each other. We also discuss the interpretation of subdiffusion models based on nonadditive entropies and the possibilities of the experimental measurement of subdiffusion model parameters.

  2. Last Glacial Maximum in South America: Proxies and Model Results

    NASA Astrophysics Data System (ADS)

    Wainer, I.; Ledru, M. P.; Clauzet, G.; Otto-Bliesner, B.; Brady, E.

    2003-04-01

    The lack of paleo proxies to define Full Glacial conditions in South America (see COHMAP 1988) prevented accurate climatic reconstruction until recently. It is believed that full glacial climates throughout South America were cooler than today by about 5°C, with moisture patterns showing distinct regional differences. Results show that from Equator to pole, four areas can be characterized from lacustrine records, travertine and speleothem analyses: the first region, between 0 and 25°S latitude, recorded a hiatus in sedimentation with an absence of organic matter deposition in all lowland records, while the Amazonian-moisture-dependent Andean forests were drastically reduced, giving way to open vegetation. Climates were defined as drier than today, with less precipitation and reduced soil moisture supply. On the other hand, observations on travertines on the northeastern coastal area of the state of Bahia (also at low latitudes) attest to a climate more humid than today. South of 25°S, in the temperate regions of northern Patagonia, lake levels were higher than today, and snow precipitation in southern Bolivia increased, with an accompanying increase in speleothem formation in southern Brazil. This was interpreted as being associated with moister and cooler climates than today in this area. At higher latitudes the low lake-levels recorded indicate an arid climate. These observations based on paleodata are compared to the analysis of simulation results from the Paleoclimate version of the National Center for Atmospheric Research coupled climate system model (PALEO CCSM) for the Last Glacial Maximum and present day. Analysis of the LGM wind simulation for the tropical Atlantic shows that the convergence zone does not extend all the way into the continent. This prevents moisture inflow into the adjacent continental area (equatorial NE Brazil). Paleo proxy results, as explained above, are consistent with this scenario. At higher latitudes (south of 50

  3. Spectral Modeling of SNe Ia Near Maximum Light: Probing the Characteristics of Hydrodynamical Models

    NASA Astrophysics Data System (ADS)

    Baron, E.; Bongard, Sebastien; Branch, David; Hauschildt, Peter H.

    2006-07-01

    We have performed detailed non-local thermodynamic equilibrium (NLTE) spectral synthesis modeling of two types of one-dimensional hydrodynamical models: the very highly parameterized deflagration model W7, and two delayed-detonation models. We find that, overall, both models do about equally well at fitting well-observed SNe Ia near maximum light. However, the Si II λ6150 feature of W7 is systematically too fast, whereas for the delayed-detonation models it is also somewhat too fast but significantly better than that of W7. We find that a parameterized mixed model does the best job of reproducing the Si II λ6150 line near maximum light, and we study the differences in the models that lead to better fits to normal SNe Ia. We discuss what is required of a hydrodynamical model to fit the spectra of observed SNe Ia near maximum light.

  4. Entropy Based Detection of DDoS Attacks in Packet Switching Network Models

    NASA Astrophysics Data System (ADS)

    Lawniczak, Anna T.; Wu, Hao; di Stefano, Bruno

    Distributed denial-of-service (DDoS) attacks are network-wide attacks that cannot be detected or stopped easily. They affect "natural" spatio-temporal packet traffic patterns, i.e. the "natural distributions" of packets passing through the routers, and thus the "natural" information entropy profiles, a sort of "fingerprint", of normal packet traffic. We study whether, by monitoring the information entropy of packet traffic through selected routers, one may detect DDoS attacks or anomalous packet traffic in packet switching network (PSN) models. Our simulations show that the considered DDoS attacks of "ping" type cause shifts in the information entropy profiles of packet traffic monitored even at small sets of routers, and that these shifts are easier to detect when static routing is used instead of dynamic routing. Thus, network-wide monitoring of the information entropy of packet traffic at properly selected routers may provide a means for detecting DDoS attacks and other anomalous packet traffic.
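
    As a hedged illustration of the detection idea (not the authors' PSN simulator; the window size, address alphabet, and traffic generator below are assumptions), one can track the Shannon entropy of the packet source distribution in consecutive windows and look for abrupt shifts:

```python
import numpy as np
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given by counts."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_profile(packet_sources, window=1000):
    """Entropy of the source-address distribution in consecutive windows."""
    return [
        shannon_entropy(Counter(packet_sources[i:i + window]))
        for i in range(0, len(packet_sources) - window + 1, window)
    ]

# Toy traffic: uniform background, then a flood from a few sources.
rng = np.random.default_rng(1)
normal = rng.integers(0, 256, size=5000)
attack = rng.integers(0, 4, size=5000)     # entropy drops sharply here
profile = entropy_profile(np.concatenate([normal, attack]).tolist())
print(["%.2f" % h for h in profile])
```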

  5. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    PubMed

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty, so it is highly desirable to develop techniques and systems for remotely monitoring their condition and analyzing their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms. PMID:25258726

  6. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    PubMed Central

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty, so it is highly desirable to develop techniques and systems for remotely monitoring their condition and analyzing their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms. PMID:25258726
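
    A minimal sketch of the uniformity idea behind the model, assuming a plain (non-kernelized) Shannon entropy over normalized thermocouple readings; the sensor values and function name are illustrative, not from the paper:

```python
import numpy as np

def temperature_uniformity_entropy(temps_kelvin):
    """Normalize exhaust thermocouple readings into a probability-like
    vector and return its Shannon entropy; the maximum ln(n) is reached
    for perfectly uniform temperatures, so a drop flags gas-path faults."""
    t = np.asarray(temps_kelvin, dtype=float)
    p = t / t.sum()
    return float(-(p * np.log(p)).sum())

healthy = [820, 818, 822, 819, 821, 820]   # nearly uniform ring of sensors
faulty  = [820, 818, 700, 819, 821, 820]   # one cold sector
for temps in (healthy, faulty):
    h = temperature_uniformity_entropy(temps)
    print(h, np.log(len(temps)))  # compare against the uniform maximum
```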

  7. Assessment Model of Ecoenvironmental Vulnerability Based on Improved Entropy Weight Method

    PubMed Central

    Zhang, Xianqi; Wang, Chenbo; Li, Enkuan; Xu, Cundong

    2014-01-01

    Assessment of ecoenvironmental vulnerability plays an important role in guiding regional planning and the construction and protection of the ecological environment, and it requires comprehensive consideration of regional resources, environment, ecology, society, and other factors. Based on the driving mechanisms and evolution characteristics of ecoenvironmental vulnerability in the cold and arid regions of China, a novel evaluation index system for ecoenvironmental vulnerability is proposed in this paper. To address the disadvantages of the conventional entropy weight method, an improved entropy weight assessment model for ecoenvironmental vulnerability is developed and applied to evaluate the ecoenvironmental vulnerability of western Jilin Province, China. The assessment results indicate that the model is suitable for ecoenvironmental vulnerability assessment, providing a more reasonable evaluation criterion, more distinct insights, and results consistent with the practical conditions. The model can provide a new method for regional ecoenvironmental vulnerability evaluation. PMID:25133260
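
    The abstract does not spell out the improvement itself, but the conventional entropy weight method it builds on is easy to state. A minimal sketch follows (the score matrix is invented for illustration):

```python
import numpy as np

def entropy_weights(X):
    """Conventional entropy weight method: indicators whose values vary
    more across alternatives (lower entropy) receive larger weights.
    X is an (alternatives x indicators) matrix of positive scores."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                      # column-wise proportions
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)         # entropy of each indicator
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()

# Toy example: 4 regions scored on 3 vulnerability indicators.
scores = [[0.6, 0.2, 0.9],
          [0.5, 0.8, 0.7],
          [0.7, 0.3, 0.8],
          [0.4, 0.9, 0.6]]
print(entropy_weights(scores))
```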

  8. Configurational entropy modeling of the viscosity of hydrous albitic, granitic and rhyolitic melts

    NASA Astrophysics Data System (ADS)

    Whittington, A.; Bouhifd, A.; Hofmeister, A.; Richet, P.; Romine, W.

    2008-12-01

    The transition from ductile to brittle behavior in silicate melts occurs when strain rates exceed relaxation rates, which are closely related to the viscosity. Viscosity is very sensitive to temperature, melt composition and dissolved water content. We present 210 new viscosity data for obsidians from Mono Crater, California, containing up to 1.2 wt.% H2O. In conjunction with literature data, we used configurational entropy theory to model the viscosity (η) and glass transition temperature (Tg) of hydrous synthetic NaAlSi3O8 and haplogranite melts, and complex (natural) leucogranites and high-silica rhyolites. In the equation log η = A_e + B_e/(T·S^conf(T)), the term S^conf is the configurational entropy, which varies with T depending on the configurational heat capacity of the melt. The variables A_e, B_e, and S^conf(Tg) were parameterized as functions of water content for each of the four data sets. With the simplest assumption of ideal mixing between silicate and water components, configurational entropy models with between 4 and 10 fitting parameters reproduce experimentally determined η-T-X(H2O) relationships significantly better than previous literature models based on empirical equations, and have the advantage of being based on thermodynamic theory. Our preferred configurational entropy models have root-mean-square deviations of 0.26 log units for NaAlSi3O8 (n = 77), 0.16 log units for haplogranite (n = 55), 0.28 log units for peraluminous granites (n = 79), and 0.36 log units for Mono Crater rhyolites (n = 262). The majority of the data were collected in the viscosity range 10^8 to 10^13 Pa·s, so the new models place tight constraints on the glass transition in silicic melts, especially at low water contents relevant to conduits and domes.
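
    A worked sketch of the configurational entropy (Adam-Gibbs-type) viscosity equation, assuming a constant configurational heat capacity so that S^conf(T) = S^conf(Tg) + C^conf·ln(T/Tg); all parameter values below are illustrative, not the paper's fitted ones:

```python
import numpy as np

def log_viscosity(T, A_e, B_e, S_conf_Tg, C_conf, Tg):
    """Adam-Gibbs-type viscosity: log10(eta) = A_e + B_e / (T * S_conf(T)),
    with S_conf(T) = S_conf(Tg) + C_conf * ln(T / Tg) for a constant
    configurational heat capacity C_conf."""
    S_conf = S_conf_Tg + C_conf * np.log(T / Tg)
    return A_e + B_e / (T * S_conf)

# Illustrative parameter values only (not the paper's fitted ones).
T = np.array([900.0, 1000.0, 1100.0, 1200.0])          # K
print(log_viscosity(T, A_e=-3.5, B_e=1.2e5, S_conf_Tg=8.0,
                    C_conf=5.0, Tg=850.0))             # log10(Pa s)
```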

  9. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    NASA Astrophysics Data System (ADS)

    Karami, K.; Khaledian, M. S.; Jamil, Mubasher

    2011-02-01

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.

  10. Delayed feedback control via minimum entropy strategy in an economic model

    NASA Astrophysics Data System (ADS)

    Salarieh, Hassan; Alasty, Aria

    2008-02-01

    In this paper, the minimum entropy (ME) algorithm for controlling chaos is applied to the Behrens-Feichtinger model, a discrete-time dynamical system that models a drug market. The ME control is implemented through delayed feedback. It is assumed that the dynamic equations of the system are not known, so the proper feedback gain cannot be obtained analytically from the system equations. In the ME approach the feedback gain is obtained and adapted in such a way that the entropy of the system converges to zero, and hence a fixed point of the system is stabilized. Application of the proposed method with different economic control strategies is numerically investigated. Simulation results show the effectiveness of the ME method in controlling chaos in economic systems with unknown dynamic equations.
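
    A hedged toy version of the idea (the paper's Behrens-Feichtinger model and adaptive gain update are replaced here by a logistic map and a brute-force gain scan): apply delayed feedback u_n = K(x_{n-1} - x_n) and choose the gain that minimizes the entropy of the orbit's histogram, which vanishes when a fixed point is stabilized.

```python
import numpy as np

def orbit_entropy(K, n_steps=4000, r=3.9, bins=50):
    """Run the logistic map with delayed feedback u_n = K*(x_{n-1} - x_n)
    and return the Shannon entropy of the orbit's histogram; a stabilized
    fixed point gives (near-)zero entropy."""
    x_prev, x = 0.3, 0.4
    xs = []
    for _ in range(n_steps):
        u = K * (x_prev - x)
        x_prev, x = x, float(np.clip(r * x * (1 - x) + u, 0.0, 1.0))
        xs.append(x)
    hist, _ = np.histogram(xs[n_steps // 2:], bins=bins, range=(0, 1))
    p = hist[hist > 0] / hist[hist > 0].sum()
    return float(-(p * np.log(p)).sum())

# Pick the gain that minimizes orbit entropy, as a stand-in for the
# adaptive ME update described in the paper.
gains = np.linspace(-1.0, 1.0, 81)
best = min(gains, key=orbit_entropy)
print(best, orbit_entropy(best))
```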

  11. Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2013-01-01

    Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…

  12. Generalized isotropic Lipkin-Meshkov-Glick models: ground state entanglement and quantum entropies

    NASA Astrophysics Data System (ADS)

    Carrasco, José A.; Finkel, Federico; González-López, Artemio; Rodríguez, Miguel A.; Tempesta, Piergiulio

    2016-03-01

    We introduce a new class of generalized isotropic Lipkin-Meshkov-Glick models with su(m+1) spin and long-range non-constant interactions, whose non-degenerate ground state is a Dicke state of su(m+1) type. We evaluate in closed form the reduced density matrix of a block of L spins when the whole system is in its ground state, and study the corresponding von Neumann and Rényi entanglement entropies in the thermodynamic limit. We show that both of these entropies scale as a log L when L tends to infinity, where the coefficient a is equal to (m − k)/2 in the ground state phase with k vanishing su(m+1) magnon densities. In particular, our results show that none of these generalized Lipkin-Meshkov-Glick models are critical, since when L → ∞ their Rényi entropy R_q becomes independent of the parameter q. We have also computed the Tsallis entanglement entropy of the ground state of these generalized su(m+1) Lipkin-Meshkov-Glick models, finding that it can be made extensive by an appropriate choice of its parameter only when m − k ≥ 3. Finally, in the su(3) case we construct in detail the phase diagram of the ground state in parameter space, showing that it is determined in a simple way by the weights of the fundamental representation of su(3). This is also true in the su(m+1) case; for instance, we prove that the region for which all the magnon densities are non-vanishing is an (m + 1)-simplex in R^m whose vertices are the weights of the fundamental representation of su(m+1).

  13. Primordial trispectrum from entropy perturbations in multifield DBI model

    SciTech Connect

    Gao, Xian; Hu, Bin E-mail: hubin@itp.ac.cn

    2009-08-01

    We investigate the primordial trispectra of the general multifield DBI inflationary model. In contrast with the single field model, the entropic modes can source the curvature perturbations on super-horizon scales, so we calculate the contributions to the trispectra from the interaction of four entropic modes mediated by one adiabatic mode, in the large transfer limit (T_RS >> 1). We obtain the general form of the 4-point correlation functions and plot the shape diagrams in two specific momentum configurations, the "equilateral configuration" and the "specialized configuration". Our figures show that the two different momentum configurations can easily be distinguished.

  14. Identification of mixing barriers in chemistry-climate model simulations using Rényi entropy

    NASA Astrophysics Data System (ADS)

    Krützmann, N. C.; McDonald, A. J.; George, S. E.

    2008-03-01

    This study examines how the Rényi entropy statistical measure (RE; a generalization of Shannon entropy) can be applied to long-lived tracer data (e.g. methane) to understand mixing in the stratosphere. To show that RE can be used for this task, we focus on the southern hemisphere stratosphere and the significant impact of the Antarctic polar vortex on the dynamics in this region. Using methane data from simulations of the chemistry-climate model SOCOL, we find clear patterns consistent with those identified in previous studies of mixing. RE has the significant benefit that it is data driven and requires considerably less computational effort than other techniques. This initial study suggests that RE has significant potential as a quantitative measure for analyzing mixing in the atmosphere.
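
    A minimal sketch of the measure itself (the tracer values and bin count are invented; the paper applies this to SOCOL methane fields): the Rényi entropy of the empirical tracer distribution, which drops when a transport barrier splits the values into distinct, concentrated populations.

```python
import numpy as np

def renyi_entropy(values, q=2.0, bins=64):
    """Rényi entropy of order q of the empirical distribution of a
    tracer field; q -> 1 recovers the Shannon entropy."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum())
    return float(np.log((p ** q).sum()) / (1.0 - q))

# Toy tracer samples: well-mixed air vs. air separated by a barrier
# (two tight populations give a lower entropy).
rng = np.random.default_rng(2)
mixed = rng.normal(1.7, 0.05, size=5000)                 # e.g. CH4, ppm
barrier = np.concatenate([rng.normal(1.4, 0.02, size=2500),
                          rng.normal(1.8, 0.02, size=2500)])
print(renyi_entropy(mixed, q=2), renyi_entropy(barrier, q=2))
```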

  15. Exact finite reduced density matrix and von Neumann entropy for the Calogero model

    NASA Astrophysics Data System (ADS)

    Osenda, Omar; Pont, Federico M.; Okopińska, Anna; Serra, Pablo

    2015-12-01

    The information content of systems of continuous quantum variables is usually studied using a number of well-known approximation methods. The approximations are made to obtain the spectrum, eigenfunctions or reduced density matrices that are essential for calculating the entropy-like quantities that quantify information. Even in the sparse cases where the spectrum and eigenfunctions are exactly known, the entanglement spectrum (the spectrum of the reduced density matrices that characterize the problem) must be obtained in an approximate fashion. In this work, we obtain analytically a finite representation of the reduced density matrices of the ground state of the N-particle Calogero model for a discrete set of values of the interaction parameter. As a consequence, the exact entanglement spectrum and von Neumann entropy are worked out.

  16. The Baldwin-Lomax model for separated and wake flows using the entropy envelope concept

    NASA Technical Reports Server (NTRS)

    Brock, J. S.; Ng, W. F.

    1992-01-01

    Implementation of the Baldwin-Lomax algebraic turbulence model is difficult and ambiguous within flows characterized by strong viscous-inviscid interactions and flow separations. A new method of implementation is proposed which uses an entropy envelope concept and is demonstrated to ensure the proper evaluation of modeling parameters. The method is simple, computationally fast, and applicable to both wake and boundary layer flows. The method is general, making it applicable to any turbulence model which requires the automated determination of the proper maxima of a vorticity-based function. The new method is evaluated within two test cases involving strong viscous-inviscid interaction.

  17. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
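
    For contrast with the exact-likelihood approach, the seasonal Yule-Walker moment estimation mentioned in the abstract is easy to sketch for the simplest periodic model, a PAR(1); the coefficients and season count below are illustrative assumptions:

```python
import numpy as np

def simulate_par1(phi, sigma, n_years, rng):
    """Simulate a periodic AR(1): x_t = phi[s] * x_{t-1} + sigma[s] * e_t,
    where s = t mod len(phi) is the season."""
    S = len(phi)
    x = np.zeros(n_years * S)
    for t in range(1, len(x)):
        s = t % S
        x[t] = phi[s] * x[t - 1] + sigma[s] * rng.normal()
    return x

def yule_walker_par1(x, S):
    """Season-by-season moment estimates of the PAR(1) coefficients."""
    t = np.arange(len(x))
    phi_hat = []
    for s in range(S):
        mask = (t % S == s) & (t > 0)
        num = (x[t[mask]] * x[t[mask] - 1]).mean()
        den = (x[t[mask] - 1] ** 2).mean()
        phi_hat.append(num / den)
    return np.array(phi_hat)

rng = np.random.default_rng(3)
x = simulate_par1(phi=[0.8, 0.3, 0.5, 0.6], sigma=[1, 2, 1, 1.5],
                  n_years=2000, rng=rng)
print(yule_walker_par1(x, S=4))  # should approach [0.8, 0.3, 0.5, 0.6]
```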

  18. Absolute Entropy and Energy of Carbon Dioxide Using the Two-Phase Thermodynamic Model.

    PubMed

    Huang, Shao-Nung; Pascal, Tod A; Goddard, William A; Maiti, Prabal K; Lin, Shiang-Tai

    2011-06-14

    The two-phase thermodynamic (2PT) model is used to determine the absolute entropy and energy of carbon dioxide over a wide range of conditions from molecular dynamics trajectories. The 2PT method determines the thermodynamic properties by applying the proper statistical mechanical partition function to the normal modes of a fluid. The vibrational density of states (DoS), obtained from the Fourier transform of the velocity autocorrelation function, converges quickly, allowing the free energy, entropy, and other thermodynamic properties to be determined from short 20-ps MD trajectories. The anharmonic effects in the vibrations are accounted for by the broadening of the normal modes into bands from sampling the velocities over the trajectory. The low-frequency diffusive modes, which lead to a finite DoS at zero frequency, are accounted for by considering the DoS as a superposition of gas-phase and solid-phase components (two phases). The analytical decomposition of the DoS allows for an evaluation of the properties contributed by different types of molecular motion. We show that this 2PT analysis leads to accurate predictions of the entropy and energy of CO2 over a wide range of conditions (from the triple point to the critical point, for both the vapor and the liquid phases along the saturation line). This allows the equation of state of CO2 to be determined, limited only by the accuracy of the force field. We also validated that the 2PT entropy agrees with that determined from thermodynamic integration, but 2PT requires only a fraction of the time. A complication for CO2 is that its equilibrium configuration is linear, which would give only two rotational modes, but during the dynamics it is never exactly linear, so that there is a third mode from rotation about the molecular axis. In this work, we show how to treat such linear molecules in the 2PT framework. PMID:26596450
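
    A minimal sketch of the first step of the 2PT pipeline, assuming equal masses and a toy one-dimensional trajectory: the density of states as the Fourier transform of the velocity autocorrelation function, computed via the Wiener-Khinchin theorem. The 2PT gas/solid decomposition itself is not reproduced here.

```python
import numpy as np

def density_of_states(velocities, dt):
    """Vibrational density of states (up to normalization) from the
    Fourier transform of the velocity autocorrelation function.
    velocities has shape (n_frames, n_atoms, 3); equal masses assumed.
    A diffusive component would appear as finite weight at f = 0."""
    n = velocities.shape[0]
    v = velocities.reshape(n, -1)
    # Autocorrelation per degree of freedom via FFT (Wiener-Khinchin).
    V = np.fft.rfft(v, n=2 * n, axis=0)
    acf = np.fft.irfft((V * V.conj()).real, axis=0)[:n].sum(axis=1)
    acf /= np.arange(n, 0, -1)            # unbiased lag normalization
    dos = np.abs(np.fft.rfft(acf)) * dt   # spectrum of the VACF
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, dos

# Toy trajectory: one 1D harmonic mode at 20 Hz plus white noise.
dt, n = 1e-3, 8192
t = np.arange(n) * dt
rng = np.random.default_rng(4)
vel = np.cos(2 * np.pi * 20 * t) + rng.normal(0, 0.2, n)
freqs, dos = density_of_states(vel.reshape(n, 1, 1), dt)
print(freqs[np.argmax(dos[1:]) + 1])  # peak near the 20 Hz mode
```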

  19. RNA Thermodynamic Structural Entropy

    PubMed Central

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three-dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs); however, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element; we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy; and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner'99 and Turner'04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http
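
    The quantity being computed can be illustrated on a toy ensemble (real implementations such as RNAentropy use dynamic programming over all secondary structures rather than explicit enumeration; the energies below are invented):

```python
import numpy as np

R = 0.0019872  # gas constant, kcal/(mol K)

def structural_entropy(energies, T=310.0):
    """Conformational entropy -sum_s p(s) ln p(s) of a Boltzmann ensemble
    over a (toy) finite set of secondary-structure free energies."""
    E = np.asarray(energies, dtype=float)
    w = np.exp(-(E - E.min()) / (R * T))   # shift for numerical stability
    p = w / w.sum()
    return float(-(p * np.log(p)).sum())

# Toy ensemble: one dominant structure vs. many near-degenerate ones.
print(structural_entropy([-10.0, -4.0, -3.5, -3.0]))   # low entropy
print(structural_entropy([-5.0, -4.9, -4.8, -4.7]))    # higher entropy
```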

  20. Calculation of the Entropy of Lattice Polymer Models from Monte Carlo Trajectories.

    PubMed

    White, Ronald P; Funt, Jason; Meirovitch, Hagai

    2005-07-20

    While lattice models are used extensively for macromolecules (synthetic polymers, proteins, etc.), calculation of the absolute entropy, S, and the free energy, F, from a given Monte Carlo (MC) trajectory is not straightforward. Recently we developed the hypothetical scanning MC (HSMC) method for calculating S and F of fluids. Here we extend HSMC to self-avoiding walks on a square lattice and discuss its wide applicability to complex polymer lattice models. HSMC is independent of existing techniques and thus constitutes an independent research tool; it provides rigorous upper and lower bounds for F, which can be obtained from a very small sample and even from a single chain conformation. PMID:16912812

  1. Two dimensional velocity distribution in open channels using Renyi entropy

    NASA Astrophysics Data System (ADS)

    Kumbhakar, Manotosh; Ghoshal, Koeli

    2016-05-01

    In this study, the entropy concept is employed to describe the two-dimensional velocity distribution in an open channel. Using the principle of maximum entropy, the velocity distribution is derived by maximizing the Rényi entropy, treating the dimensionless velocity as a random variable. The derived velocity equation is capable of describing the variation of velocity along both the vertical and transverse directions, with maximum velocity occurring on or below the water surface. The developed model of the velocity distribution is tested against field and laboratory observations and is also compared with existing entropy-based velocity distributions. The present model shows good agreement with the observed data, and its prediction accuracy is comparable with the other existing models.

  2. MODELING VERY LONG BASELINE INTERFEROMETRIC IMAGES WITH THE CROSS-ENTROPY GLOBAL OPTIMIZATION TECHNIQUE

    SciTech Connect

    Caproni, A.; Toffoli, R. T.; Monteiro, H.; Abraham, Z.; Teixeira, D. M.

    2011-07-20

    We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained with the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting
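
    A generic sketch of the cross-entropy optimization loop (sample, select an elite subset, refit the sampling distribution), applied here to a one-dimensional single-Gaussian fit rather than the paper's elliptical multi-source images; the population sizes and toy objective are assumptions:

```python
import numpy as np

def cross_entropy_fit(objective, dim, n_samples=200, n_elite=20,
                      n_iter=100, seed=0):
    """Generic cross-entropy minimization: sample parameter vectors from a
    Gaussian, keep the elite fraction, refit the Gaussian, repeat."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        scores = np.apply_along_axis(objective, 1, samples)
        elite = samples[np.argsort(scores)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Toy "image" fit: recover position, width and amplitude of one 1D Gaussian.
x = np.linspace(-5, 5, 101)
observed = 3.0 * np.exp(-0.5 * ((x - 1.5) / 0.7) ** 2)

def sq_error(p):
    pos, width, amp = p
    model = amp * np.exp(-0.5 * ((x - pos) / max(abs(width), 1e-3)) ** 2)
    return ((model - observed) ** 2).sum()

print(cross_entropy_fit(sq_error, dim=3))  # approx [1.5, +/-0.7, 3.0]
```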

  3. Modeling the Evolution of Protein Domain Architectures Using Maximum Parsimony

    PubMed Central

    Fong, Jessica H.; Geer, Lewis Y.; Panchenko, Anna R.; Bryant, Stephen H.

    2007-01-01

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture “neighbors” identified in this way may lead to new insights about the evolution of protein function. PMID:17166515

  4. Configurational Entropy Revisited

    NASA Astrophysics Data System (ADS)

    Lambert, Frank L.

    2007-09-01

    Entropy change is categorized in some prominent general chemistry textbooks as being either positional (configurational) or thermal. In those texts, the accompanying emphasis on the dispersal of matter—independent of energy considerations and thus in discord with kinetic molecular theory—is most troubling. This article shows that the variants of entropy can be treated from a unified viewpoint and argues that to decrease students' confusion about the nature of entropy change these variants of entropy should be merged. Molecular energy dispersal in space is implicit but unfortunately tacit in the cell models of statistical mechanics that develop the configurational entropy change in gas expansion, fluids mixing, or the addition of a non-volatile solute to a solvent. Two factors are necessary for entropy change in chemistry. An increase in thermodynamic entropy is enabled in a process by the motional energy of molecules (that, in chemical reactions, can arise from the energy released from a bond energy change). However, entropy increase is only actualized if the process results in a larger number of arrangements for the system's energy, that is, a final state that involves the most probable distribution for that energy under the new constraints. Positional entropy should be eliminated from general chemistry instruction and, especially benefiting "concrete minded" students, it should be replaced by emphasis on the motional energy of molecules as enabling entropy change.

  5. Modeling specific heat and entropy change in La(Fe-Mn-Si)13-H compounds

    NASA Astrophysics Data System (ADS)

    Piazzi, Marco; Bennati, Cecilia; Curcio, Carmen; Kuepferling, Michaela; Basso, Vittorio

    2016-02-01

    In this paper we model the magnetocaloric effect of the LaFexMnySiz-H1.65 compound (x + y + z = 13), a system with a transition temperature finely tunable around room temperature by Mn substitution. The thermodynamic model takes into account the coupling between magnetism and specific volume, as introduced by Bean and Rodbell. We find good qualitative agreement between the experimental and modeled entropy change −Δs(H, T). The main result is that the magnetoelastic coupling drives the phase transition of the system, changing it from second to first order as a model parameter η is varied. It is also responsible for a decrease of −Δs at the transition, due to a small lattice contribution to the entropy counteracting the effect of the magnetic one. The role of Mn is reflected exclusively in a decrease of the strength of the exchange interaction, while the value of the coefficient β, responsible for the coupling between volume and exchange energy, is independent of the Mn content and appears to be an intrinsic property of the La(Fe-Si)13 structure.

  6. An entropy spring model for the Young's modulus change of biodegradable polymers during biodegradation.

    PubMed

    Wang, Ying; Han, Xiaoxiao; Pan, Jingzhe; Sinka, Csaba

    2010-01-01

    This paper presents a model for the change in Young's modulus of biodegradable polymers due to hydrolytic cleavage of the polymer chains. The model is based on the entropy spring theory for amorphous polymers. It is assumed that isolated polymer chain cleavages and very short polymer chains do not affect the entropy change in a linear biodegradable polymer during its deformation. It is then possible to relate the Young's modulus to the average molecular weight in a computer-simulated hydrolysis process of polymer chain scissions. The experimental data obtained by Tsuji [Tsuji, H., 2002. Autocatalytic hydrolysis of amorphous-made polylactides: Effects of L-lactide content, tacticity, and enantiomeric polymer blending. Polymers 43, 1789-1796] for poly(L-lactic acid) and poly(D-lactic acid) are examined using the model. It is shown that the model can provide a common thread through Tsuji's experimental data. A further numerical case study demonstrates that the Young's modulus obtained using very thin samples, such as those used by Tsuji, cannot be directly used to calculate the load carried by a device made of the same polymer but of various thicknesses. This is because the Young's modulus varies significantly within a biodegradable device due to the heterogeneous nature of the hydrolysis reaction. The governing equations for biodegradation and the relation between the Young's modulus and average molecular weight can be combined to calculate the load transfer from a degrading device to a healing bone. PMID:19878898

  7. Saturation field entropies of antiferromagnetic Ising models: Ladders and the kagome lattice

    NASA Astrophysics Data System (ADS)

    Varma, Vipin Kerala

    2013-10-01

    Saturation field entropies of antiferromagnetic Ising models on quasi-one-dimensional lattices (ladders) and the kagome lattice are calculated. The former are evaluated exactly by constructing the corresponding transfer matrices, while the latter calculation uses Binder's algorithm for efficiently and exactly computing the partition function of over 1300 spins to give S_kag/k_B = 0.393589(6). We comment on the relation of the kagome lattice to the experimental situation in the spin-ice compound Dy2Ti2O7.
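
    A hedged sketch of the transfer-matrix counting for ladders, assuming the saturation-field ground-state manifold maps onto "no two adjacent flipped spins" (hard-square-type) configurations; the specific constraint for the ladders in the paper may differ. Width 1 recovers the known single-chain value ln[(1+√5)/2] ≈ 0.4812:

```python
import numpy as np
from itertools import product

def saturation_entropy_ladder(width):
    """Entropy per site, S/k_B = ln(lambda_max)/width, of the transfer
    matrix counting configurations with no two adjacent flipped spins
    (an assumed hard-square-type ground-state manifold)."""
    states = [s for s in product((0, 1), repeat=width)
              if all(not (s[i] and s[i + 1]) for i in range(width - 1))]
    n = len(states)
    T = np.zeros((n, n))
    for a, sa in enumerate(states):
        for b, sb in enumerate(states):
            if all(not (x and y) for x, y in zip(sa, sb)):
                T[a, b] = 1.0
    lam = np.linalg.eigvalsh(T).max()
    return np.log(lam) / width

for w in (1, 2, 3, 4):
    print(w, saturation_entropy_ladder(w))
# width 1 gives ln((1 + sqrt(5)) / 2) ~ 0.4812 for the single chain
```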

  8. Power Law and Logarithmic Entropy Corrected Ricci Dark Energy Models in Brans-Dicke Chameleon Cosmology

    NASA Astrophysics Data System (ADS)

    Pasqua, Antonio; Assaf, Khudhair A.; Aly, Ayman A.

    2013-10-01

    In this work, we study the power-law and logarithmic entropy corrected versions of the Ricci Dark Energy (RDE) model in the framework of Brans-Dicke cosmology non-minimally coupled with a chameleon scalar field ϕ. Considering the presence of interaction between Dark Energy (DE) and Dark Matter (DM), we derive the expressions of some relevant cosmological parameters, i.e. the equation of state parameter ω_D, the deceleration parameter q, and the evolution of the energy density parameter Ω'_D.

  9. Renyi entropy as a statistical entropy for complex systems

    NASA Astrophysics Data System (ADS)

    Bashkirov, A. G.

    2006-11-01

    To describe a complex system, we propose using the Renyi entropy depending on the parameter q (0 < q ≤ 1) and passing into the Gibbs-Shannon entropy at q = 1. The maximum principle for the Renyi entropy yields a Renyi distribution that passes into the Gibbs canonical distribution at q = 1. The thermodynamic entropy of the complex system is defined as the Renyi entropy for the Renyi distribution. In contrast to the usual entropy based on the Gibbs-Shannon entropy, the Renyi entropy increases as the distribution deviates from the Gibbs distribution (the deviation is estimated by the parameter η = 1 − q) and reaches its maximum at the maximum possible value η_max. As this occurs, the Renyi distribution becomes a power-law distribution. The parameter η can be regarded as an order parameter. At η = 0, the derivative of the thermodynamic entropy with respect to η exhibits a jump, which indicates a kind of phase transition into a more ordered state. The evolution of the system toward further order in this phase state is accompanied by an entropy gain. This means that, in accordance with the second law of thermodynamics, a natural evolution in the direction of self-organization is preferable.

  10. Interstitial Zn atoms do the trick in thermoelectric zinc antimonide, Zn4Sb3: a combined maximum entropy method X-ray electron density and ab initio electronic structure study.

    PubMed

    Cargnoni, Fausto; Nishibori, Eiji; Rabiller, Philippe; Bertini, Luca; Snyder, G Jeffrey; Christensen, Mogens; Gatti, Carlo; Iversen, Bo Brummerstadt

    2004-08-20

    The experimental electron density of the high-performance thermoelectric material Zn4Sb3 has been determined by maximum entropy method (MEM) analysis of short-wavelength synchrotron powder diffraction data. These data are found to be more accurate than conventional single-crystal data due to the reduction of common systematic errors, such as absorption, extinction and anomalous scattering. Analysis of the MEM electron density directly reveals interstitial Zn atoms and a partially occupied main Zn site. Two types of Sb atoms are observed: a free spherical ion (Sb^3-) and Sb2^4- dimers. Analysis of the MEM electron density also reveals possible Sb disorder along the c axis. The disorder, defects and vacancies are all features that contribute to the drastic reduction of the thermal conductivity of the material. Topological analysis of the thermally smeared MEM density has been carried out. Starting with the X-ray structure, ab initio computational methods have been used to deconvolute structural information from the space-time data averaging inherent to the XRD experiment. The analysis reveals how interstitial Zn atoms and vacancies affect the electronic structure and transport properties of β-Zn4Sb3. The structure consists of an ideal A12Sb10 framework in which point defects are distributed. We propose that the material is a 0.184:0.420:0.396 mixture of A12Sb10, A11BCSb10 and A10BCDSb10 cells, in which A, B, C and D are the four Zn sites in the X-ray structure. Given the similar density of states (DOS) of the A12Sb10, A11BCSb10 and A10BCDSb10 cells, one may electronically model the defective stoichiometry of the real system either by n-doping the 12-Zn atom cell or by p-doping the two 13-Zn atom cells. This leads to similar calculated Seebeck coefficients for the A12Sb10, A11BCSb10 and A10BCDSb10 cells (115.0, 123.0 and 110.3 μV K^-1 at T=670 K). The model system is therefore a p-doped semiconductor, as found experimentally. The effect is dramatic if these cells are

  11. Conciliating the nonadditive entropy approach and the fractional model formulation when describing subdiffusion

    NASA Astrophysics Data System (ADS)

    Kosztołowicz, Tadeusz; Lewandowska, Katarzyna

    2012-06-01

    We consider here two different models describing subdiffusion. One of them is derived from the Continuous Time Random Walk formalism and utilizes a subdiffusion equation with a fractional time derivative. The second model is based on the Sharma-Mittal nonadditive entropy formalism, in which the subdiffusive process is described by a nonlinear equation with ordinary derivatives. Using these two models we describe the process of a substance being released from a thick membrane, and we find the functions which determine the time evolution of the amount of substance remaining inside the membrane. We then find 'the agreement conditions' under which the two models provide the same relation defining subdiffusion and the same function characterizing the release process. These agreement conditions enable us to determine the relation between the parameters occurring in both models.

  12. Bayesian or Laplacian inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools based on the Bayesian approach, entropy, information theory, and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal, and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing, such as entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation, and, finally, general linear inverse problems.
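
    A minimal worked example of the Maximum Entropy Principle mentioned above, for the classic finite-alphabet case with a mean constraint, solved through its convex dual (the die example and target mean are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_distribution(values, target_mean):
    """Maximum Entropy Principle on a finite alphabet with a mean
    constraint: maximize -sum p ln p s.t. sum p = 1, sum p*x = target.
    Solved through the dual: p_i proportional to exp(-lambda * x_i)."""
    x = np.asarray(values, dtype=float)

    def dual(lam):
        # log-partition function plus lambda * target (dual objective)
        logZ = np.log(np.exp(-lam[0] * x).sum())
        return logZ + lam[0] * target_mean

    res = minimize(dual, x0=[0.0])
    p = np.exp(-res.x[0] * x)
    return p / p.sum()

# Jaynes' die example: maxent distribution over faces 1..6 with mean 4.5.
p = maxent_distribution(np.arange(1, 7), target_mean=4.5)
print(p, (p * np.arange(1, 7)).sum())
```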

  13. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used to deal with system modeling, where the key idea is to tune the model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches. PMID:21233046
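
    A hedged toy version of the minimum-entropy parameter selection idea (the paper uses an LS-SVM hybrid model; here a linear model and a kernel density estimate of the residuals stand in): tune parameters so that the estimated entropy of the modeling error is minimized.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

def residual_entropy(params, X, y):
    """Entropy of the modeling-error distribution, estimated with a
    Gaussian kernel density over the residuals of a linear model."""
    e = y - X @ params
    kde = gaussian_kde(e)
    return float(-np.mean(np.log(kde(e))))  # sample-average of -ln pdf

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.5, -0.7]) + rng.standard_t(df=3, size=300) * 0.3

theta0 = np.linalg.lstsq(X, y, rcond=None)[0]     # least-squares start
res = minimize(residual_entropy, theta0, args=(X, y), method="Nelder-Mead")
print(theta0, res.x)  # entropy-minimizing parameters vs. least squares
```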

  14. Entanglement entropy from corner transfer matrix in Forrester-Baxter non-unitary RSOS models

    NASA Astrophysics Data System (ADS)

    Bianchini, Davide; Ravanini, Francesco

    2016-04-01

    Using a corner transfer matrix approach, we compute the bipartite entanglement Rényi entropy in the off-critical perturbations of non-unitary conformal minimal models realised by lattice spin chain Hamiltonians related to the Forrester-Baxter RSOS models (Bianchini et al 2015 J. Stat. Mech. P03010) in regime III. This allows us to show on a set of explicit examples that the Rényi entropies for non-unitary theories rescale near criticality as the logarithm of the correlation length with a coefficient proportional to the effective central charge. This complements a similar result, recently established for the size rescaling at the critical point (Bianchini et al 2015 J. Phys. A: Math. Theor. 48 04FT01), showing the expected agreement of the two behaviours. We also compute the first subleading unusual correction to the scaling behaviour, showing that it is expressible in terms of expansions of various fractional powers of the correlation length, related to the differences Δ − Δ_min between the conformal dimensions of fields in the theory and the minimal conformal dimension. Finally, a few observations on the limit leading to the off-critical logarithmic minimal models of Pearce and Seaton (2012 J. Stat. Mech. P09014) are put forward.

  15. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-10-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
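
    A sketch of the model-averaging arithmetic behind MLBMA, using BIC-style weights as a stand-in for the KIC-based posterior model probabilities typically used with MLBMA; all likelihood values, parameter counts, and predictions below are invented:

```python
import numpy as np

def mlbma_weights(neg_log_likelihoods, n_params, n_obs, priors=None):
    """Approximate posterior model probabilities using BIC-like weights:
    BIC_k = 2*NLL_k + p_k*ln(n); w_k propto prior_k * exp(-BIC_k / 2)."""
    nll = np.asarray(neg_log_likelihoods, dtype=float)
    p = np.asarray(n_params, dtype=float)
    bic = 2.0 * nll + p * np.log(n_obs)
    prior = np.ones_like(bic) if priors is None else np.asarray(priors)
    w = prior * np.exp(-(bic - bic.min()) / 2.0)   # shift avoids underflow
    return w / w.sum()

# Three hypothetical transport models: (NLL at the ML estimate, #parameters)
w = mlbma_weights([120.3, 118.9, 125.0], n_params=[4, 6, 3], n_obs=200)
pred = np.array([2.1, 2.4, 1.8])      # each model's (toy) U(VI) prediction
print(w, (w * pred).sum())            # weights and averaged prediction
```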

  16. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the

  17. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    SciTech Connect

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of

  18. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGESBeta

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally

  19. Dynamics of Entropy in Quantum-like Model of Decision Making

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei; Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu

    2011-03-01

    We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. In this model the brain processes information by using a representation of mental states in complex Hilbert space. Driven by the master equation, the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices. By using this equilibrium point Alice determines her mixed (i.e., probabilistic) strategy with respect to Bob. Thus our model is a model of thinking through decoherence of an initially pure mental state, where decoherence is induced by interaction with memory and the external environment. In this paper we study (numerically) the dynamics of the quantum entropy of Alice's state in the process of decision making. Our analysis demonstrates that these dynamics depend nontrivially on the initial state of Alice's mind regarding her own actions and on her prediction state (for the possible actions of Bob).
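
    A toy illustration of entropy growth under decoherence (a simple dephasing channel, not the paper's master equation or game-theoretic state space): the von Neumann entropy of a qubit rises from 0 toward ln 2 as the off-diagonal coherences decay.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho) from the eigenvalues of a density matrix."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

# Pure qubit state decohering in the computational basis: off-diagonal
# terms decay as exp(-gamma*t), entropy grows from 0 toward ln 2.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())
gamma = 1.0
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-gamma * t)
    rho[1, 0] *= np.exp(-gamma * t)
    print(t, von_neumann_entropy(rho))
```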

  20. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of the rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) for fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ that are efficient at low bit rates without any high rate assumptions. Simulation results confirm the accuracy of our models. PMID:18262939

  1. The impact of resolution upon entropy and information in coarse-grained models

    SciTech Connect

    Foley, Thomas T.; Shell, M. Scott; Noid, W. G.

    2015-12-28

    By eliminating unnecessary degrees of freedom, coarse-grained (CG) models tremendously facilitate numerical calculations and theoretical analyses of complex phenomena. However, their success critically depends upon the representation of the system and the effective potential that governs the CG degrees of freedom. This work investigates the relationship between the CG representation and the many-body potential of mean force (PMF), W, which is the appropriate effective potential for a CG model that exactly preserves the structural and thermodynamic properties of a given high resolution model. In particular, we investigate the entropic component of the PMF and its dependence upon the CG resolution. This entropic component, S_W, is a configuration-dependent relative entropy that determines the temperature dependence of W. As a direct consequence of eliminating high resolution details from the CG model, the coarsening process transfers configurational entropy and information from the configuration space into S_W. In order to further investigate these general results, we consider the popular Gaussian Network Model (GNM) for protein conformational fluctuations. We analytically derive the exact PMF for the GNM as a function of the CG representation. In the case of the GNM, −TS_W is a positive, configuration-independent term that depends upon the temperature, the complexity of the protein interaction network, and the details of the CG representation. This entropic term demonstrates similar behavior for seven model proteins and also suggests, in each case, that certain resolutions provide a more efficient description of protein fluctuations. These results may provide general insight into the role of resolution for determining the information content, thermodynamic properties, and transferability of CG models. Ultimately, they may lead to a rigorous and systematic framework for optimizing the representation of CG models.

  2. The impact of resolution upon entropy and information in coarse-grained models

    NASA Astrophysics Data System (ADS)

    Foley, Thomas T.; Shell, M. Scott; Noid, W. G.

    2015-12-01

    By eliminating unnecessary degrees of freedom, coarse-grained (CG) models tremendously facilitate numerical calculations and theoretical analyses of complex phenomena. However, their success critically depends upon the representation of the system and the effective potential that governs the CG degrees of freedom. This work investigates the relationship between the CG representation and the many-body potential of mean force (PMF), W, which is the appropriate effective potential for a CG model that exactly preserves the structural and thermodynamic properties of a given high resolution model. In particular, we investigate the entropic component of the PMF and its dependence upon the CG resolution. This entropic component, SW, is a configuration-dependent relative entropy that determines the temperature dependence of W. As a direct consequence of eliminating high resolution details from the CG model, the coarsening process transfers configurational entropy and information from the configuration space into SW. In order to further investigate these general results, we consider the popular Gaussian Network Model (GNM) for protein conformational fluctuations. We analytically derive the exact PMF for the GNM as a function of the CG representation. In the case of the GNM, -TSW is a positive, configuration-independent term that depends upon the temperature, the complexity of the protein interaction network, and the details of the CG representation. This entropic term demonstrates similar behavior for seven model proteins and also suggests, in each case, that certain resolutions provide a more efficient description of protein fluctuations. These results may provide general insight into the role of resolution for determining the information content, thermodynamic properties, and transferability of CG models. Ultimately, they may lead to a rigorous and systematic framework for optimizing the representation of CG models.

  3. Conditional entropy maximization for PET image reconstruction using adaptive mesh model.

    PubMed

    Zhu, Hongqing; Shu, Huazhong; Zhou, Jian; Dai, Xiubin; Luo, Limin

    2007-04-01

    Iterative image reconstruction algorithms have been widely used in the field of positron emission tomography (PET). However, such algorithms are sensitive to noise artifacts, so the reconstruction begins to degrade when the number of iterations is high. In this paper, we propose a new algorithm to reconstruct an image from PET emission projection data by using conditional entropy maximization and an adaptive mesh model. In traditional tomographic reconstruction methods, the reconstructed image is computed directly in the pixel domain. Unlike such methods, the proposed approach estimates the nodal values from the observed projection data in a mesh domain. In our method, the initial Delaunay triangulation mesh is generated from a set of randomly selected pixel points, and it is then modified according to the pixel intensity values of the estimated image at each iteration step, in which conditional entropy maximization is used. The advantage of using the adaptive mesh model for image reconstruction is that it provides a natural spatially adaptive smoothness mechanism. In experiments using synthetic and clinical data, the proposed algorithm is found to be more robust to noise than the common pixel-based MLEM algorithm and a mesh-based MLEM with a fixed mesh structure. PMID:17368841
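
    For reference, the pixel-based MLEM baseline that the paper compares against has a compact standard form; the system matrix and phantom below are toy assumptions:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Standard MLEM iteration for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)). The paper replaces this
    pixel-basis update with conditional entropy maximization on an
    adaptive mesh."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy 1D "scanner": random nonnegative system matrix and Poisson data.
rng = np.random.default_rng(7)
A = rng.random((120, 40))
x_true = np.zeros(40)
x_true[10:20] = 5.0
y = rng.poisson(A @ x_true)
print(np.round(mlem(A, y)[5:25], 1))
```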

  4. Thermodynamic efficiency and entropy production in the climate system.

    PubMed

    Lucarini, Valerio

    2009-08-01

    We present an outlook on the climate system thermodynamics. First, we construct an equivalent Carnot engine with efficiency η and frame the Lorenz energy cycle in a macroscale thermodynamic context. Then, by exploiting the second law, we prove that the lower bound to the entropy production is η times the integrated absolute value of the internal entropy fluctuations. An exergetic interpretation is also proposed. Finally, the controversial maximum entropy production principle is reinterpreted as requiring the joint optimization of heat transport and mechanical work production. These results provide tools for climate change analysis and for the validation of climate models. PMID:19792088

  5. Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution

    SciTech Connect

    Beretta, Gian Paolo

    2006-02-15

    We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i with i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = −p_i ln p_i − α(t) p_i − β(t) e_i p_i, where α(t) and β(t) are functionals of the p_i(t) that maintain invariant the mean energy E = Σ_{i=1}^N e_i p_i(t) and the normalization condition 1 = Σ_{i=1}^N p_i(t). The entropy S(t) = −k_B Σ_{i=1}^N p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest entropy state. By casting the rate equations in terms not of the p_i but of their positive square roots √p_i, they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72
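
    A rough numerical sketch of these rate equations (assuming all p_i strictly positive; at each step α and β are found by solving the 2×2 linear system that the two invariants impose):

        import numpy as np

        def sea_step(p, e, tau=1.0, dt=1e-3):
            # dp_i/dt = (-p_i ln p_i - alpha p_i - beta e_i p_i) / tau
            s = -p * np.log(p)
            # d/dt sum(p) = 0 and d/dt sum(e*p) = 0 determine alpha and beta
            A = np.array([[p.sum(), (e * p).sum()],
                          [(e * p).sum(), (e**2 * p).sum()]])
            b = np.array([s.sum(), (e * s).sum()])
            alpha, beta = np.linalg.solve(A, b)
            return p + dt * (s - alpha * p - beta * e * p) / tau

    Iterating sea_step drives p toward the canonical distribution while the mean energy and normalization stay fixed (up to the Euler discretization error).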

  6. Combined Population Dynamics and Entropy Modelling Supports Patient Stratification in Chronic Myeloid Leukemia

    NASA Astrophysics Data System (ADS)

    Brehme, Marc; Koschmieder, Steffen; Montazeri, Maryam; Copland, Mhairi; Oehler, Vivian G.; Radich, Jerald P.; Brümmendorf, Tim H.; Schuppert, Andreas

    2016-04-01

    Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer progression, biomarker identification and the design of individualized therapies. Using chronic myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification at unprecedented resolution. Linking CD34+ similarity as a disease progression marker to patient-derived gene expression entropy separated established CML progression stages and uncovered additional heterogeneity within disease stages. Importantly, our patient data informed model enables quantitative approximation of individual patients’ disease history within chronic phase (CP) and significantly separates “early” from “late” CP. Our findings provide a novel rationale for personalized and genome-informed disease progression risk assessment that is independent and complementary to conventional measures of CML disease burden and prognosis.

  9. Integrals, Expectation-Values and Entropy.

    NASA Astrophysics Data System (ADS)

    Barron, Arthur Randall

    1982-03-01

    The maximum entropy principle, one of the cornerstones of equilibrium statistical mechanics, has been introduced into probability theory by E. T. Jaynes as part of a rational strategy for making plausible inferences from incomplete information. The conventional maximum entropy formalism, involving the familiar machinery of partition functions, is practically the same in both classical and quantum mechanical formulations of statistical mechanics. The present work undertakes to extend the maximum entropy principle to a generalized abstract formulation of probability theory, encompassing the familiar classical and quantal models as well as certain more exotic models uncovered by G. W. Mackey in his axiomatization of quantum mechanics--the so-called quantum logics. In this more general approach, the conventional machinery of partition functions is not available. Instead, one makes use of a family of conditional entropy functions. In its dependence on the constraint conditions, the conditional entropy enjoys concavity and monotonicity properties analogous to those of the phenomenological entropy in equilibrium thermodynamics. The new formalism is able to take in stride the possibility that the constraints, although consistent, may fail to determine a unique maximum entropy state (probability distribution). Examples which demonstrate this possibility are readily constructed in both classical and quantal models of probability theory. One observes that, in the convex set of states compatible with the constraints, there is none of greatest entropy; typically this happens at or beyond a "barrier" where the conventional partition function becomes singular. Such examples should not simply be dismissed as "pathological"; they may perhaps have interesting physical interpretations (e.g., turbulence, disorder, chaos). In carrying out the above program it is essential to recognize that the expectation-values of an unbounded observable (real random variable) need not be finite: they

  10. Entropy generation analysis for film boiling: A simple model of quenching

    NASA Astrophysics Data System (ADS)

    Lotfi, Ali; Lakzian, Esmail

    2016-04-01

    In this paper, quenching in high-temperature materials processing is modeled as a superheated isothermal flat plate. In such processes, a liquid flows over a highly superheated surface for cooling, so the surface and the liquid are separated by a vapor layer that forms where the liquid contacts the superheated surface; this is known as forced film boiling. As an objective, the distribution of entropy generation in laminar forced film boiling is obtained by similarity solution, for the first time for quenching processes. The governing PDEs of laminar film boiling (continuity, momentum, and energy) are reduced to ODEs, and a dimensionless equation for entropy generation inside the liquid boundary layer and vapor layer is obtained. The ODEs are then solved by applying the 4th-order Runge-Kutta method with a shooting procedure. Moreover, the Bejan number is used as a design criterion for a qualitative study of the rate of cooling, and the effects of plate speed on the quenching process are studied. It is observed that for high plate speeds the rate of cooling (heat transfer) is higher.
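
    To illustrate the reduce-and-shoot machinery, here is a sketch for the classical Blasius boundary-layer equation as a stand-in; the paper's coupled vapor/liquid film-boiling system is more involved, but the numerical idea (Runge-Kutta integration plus shooting on the unknown wall value) is the same:

        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def rhs(eta, y):                        # f''' + 0.5 f f'' = 0
            f, fp, fpp = y
            return [fp, fpp, -0.5 * f * fpp]

        def residual(s):                        # shoot on f''(0) = s
            sol = solve_ivp(rhs, [0.0, 10.0], [0.0, 0.0, s], rtol=1e-8)
            return sol.y[1, -1] - 1.0           # require f'(infinity) = 1

        s_star = brentq(residual, 0.1, 1.0)     # ~0.332 for Blasius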

  11. Rényi entropy of a line in two-dimensional Ising models

    NASA Astrophysics Data System (ADS)

    Stéphan, J.-M.; Misguich, G.; Pasquier, V.

    2010-09-01

    We consider the two-dimensional Ising model on an infinitely long cylinder and study the probabilities p_i to observe a given spin configuration i along a circular section of the cylinder. These probabilities also occur as eigenvalues of reduced density matrices in some Rokhsar-Kivelson wave functions. We analyze the subleading constant to the Rényi entropy R_n = [1/(1−n)] ln(Σ_i p_i^n) and discuss its scaling properties at the critical point. Studying three different microscopic realizations, we provide numerical evidence that it is universal and behaves in a steplike fashion as a function of n, with a discontinuity at the Shannon point n = 1. As a consequence, a field theoretical argument based on the replica trick would fail to give the correct value at this point. We nevertheless compute it numerically with high precision. Two other values of the Rényi parameter are of special interest: n = 1/2 and n = ∞ are related in a simple way to the Affleck-Ludwig boundary entropies associated to free and fixed boundary conditions, respectively.
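
    A small numerical helper for the quantity just defined (illustrative only; the substance of the paper is in computing the p_i, not in evaluating R_n):

        import numpy as np

        def renyi_entropy(p, n):
            # R_n = ln(sum_i p_i**n) / (1 - n); the n -> 1 limit is the
            # Shannon entropy, and n -> infinity gives -ln(max_i p_i)
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            if np.isclose(n, 1.0):
                return -np.sum(p * np.log(p))
            return np.log(np.sum(p ** n)) / (1.0 - n)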

  12. Classification of Internal Carotid Artery Doppler Signals Using Hidden Markov Model and Wavelet Transform with Entropy

    NASA Astrophysics Data System (ADS)

    Uğuz, Harun; Kodaz, Halife

    Doppler ultrasound has usually been preferred for investigating artery conditions in the last two decades, since it is a non-invasive and low-risk method. In this study, a biomedical system based on a Discrete Hidden Markov Model (DHMM) has been developed to classify internal carotid artery Doppler signals recorded from 191 subjects (136 of them suffered from internal carotid artery stenosis and the rest were healthy subjects). The developed system comprises three stages. In the first stage, for feature extraction, the obtained Doppler signals were separated into sub-bands using the Discrete Wavelet Transform (DWT). In the second stage, the entropy of each sub-band was calculated using the Shannon entropy algorithm to reduce the dimensionality of the DWT feature vectors. In the third stage, the reduced features of the carotid artery Doppler signals were used as input patterns of the DHMM classifier. Our proposed method reached 97.38% classification accuracy with the 5-fold cross validation (CV) technique. The classification results showed that the proposed method is effective for the classification of internal carotid artery Doppler signals.
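
    A plausible sketch of the feature-extraction stage (the wavelet choice, the number of levels, and the energy-based normalization are assumptions, since the abstract does not specify them):

        import numpy as np
        import pywt

        def subband_entropies(signal, wavelet="db4", level=4):
            # DWT sub-bands, each summarized by the Shannon entropy of its
            # normalized coefficient energies -> one feature per sub-band
            feats = []
            for c in pywt.wavedec(signal, wavelet, level=level):
                e = c ** 2
                p = e / e.sum()
                p = p[p > 0]
                feats.append(-np.sum(p * np.log2(p)))
            return np.array(feats)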

  13. A multinomial maximum likelihood program /MUNOML/. [in modeling sensory and decision phenomena

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1975-01-01

    A multinomial maximum likelihood program (MUNOML) for signal detection and for behavior models is discussed. It is found to be useful in day-to-day operation since it provides maximum flexibility with minimum duplicated effort. It has excellent convergence qualities and rarely goes beyond 10 iterations. A library of subroutines is being collected for use with MUNOML, including subroutines for a successive categories model and for signal detectability models.

  14. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  15. DNA Nanostructures as Models for Evaluating the Role of Enthalpy and Entropy in Polyvalent Binding

    SciTech Connect

    Nangreave, Jeanette; Yan, Hao; Liu, Yan

    2011-03-30

    DNA nanotechnology allows the design and construction of nanoscale objects that have finely tuned dimensions, orientation, and structure with remarkable ease and convenience. Synthetic DNA nanostructures can be precisely engineered to model a variety of molecules and systems, providing the opportunity to probe very subtle biophysical phenomena. In this study, several such synthetic DNA nanostructures were designed to serve as models to study the binding behavior of polyvalent molecules and gain insight into how small changes to the ligand/receptor scaffolds, intended to vary their conformational flexibility, will affect their association equilibrium. This approach has yielded a quantitative identification of the roles of enthalpy and entropy in the affinity of polyvalent DNA nanostructure interactions, which exhibit an intriguing compensating effect.

  16. Inflection points of microcanonical entropy: Monte Carlo simulation of q state Potts model on a finite square lattice

    SciTech Connect

    Praveen, E.; Satyanarayana, S. V. M.

    2014-04-24

    The traditional definition of a phase transition involves an infinitely large system in the thermodynamic limit. Finite systems such as biological proteins exhibit cooperative behavior similar to phase transitions. We employ the recently discovered analysis of inflection points of the microcanonical entropy to estimate the transition temperature of the phase transition in the q-state Potts model on a finite two-dimensional square lattice for q = 3 (second order) and q = 8 (first order). The difference of the energy density of states (DOS), Δ ln g(E) = ln g(E + ΔE) − ln g(E), exhibits a point of inflection at a value corresponding to the inverse transition temperature. This feature is common to systems exhibiting both first- as well as second-order transitions. While the difference of the DOS registers a monotonic variation around the point of inflection for systems exhibiting a second-order transition, it has an S-shape with a minimum and maximum around the point of inflection in the case of a first-order transition.
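
    A rough numerical sketch of the inflection-point analysis (assuming ln g(E) is available on a uniform energy grid, e.g. from Wang-Landau sampling; a careful treatment would smooth the DOS and restrict the search to the transition region):

        import numpy as np

        def inverse_transition_temperature(E, ln_g):
            # beta(E) = d ln g / dE is the microcanonical inverse temperature;
            # at an inflection point of ln g the curvature vanishes, and the
            # value of beta there estimates the inverse transition temperature
            beta = np.gradient(ln_g, E)
            curv = np.gradient(beta, E)
            i = np.argmin(np.abs(curv[1:-1])) + 1    # skip the endpoints
            return E[i], beta[i]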

  17. Effects of lag and maximum growth in contaminant transport and biodegradation modeling

    SciTech Connect

    Wood, B.D.; Dawson, C.N.

    1992-06-01

    The effects of time lag and maximum microbial growth on biodegradation in contaminant transport are discussed. A mathematical model is formulated that accounts for these effects, and a numerical case study is presented that demonstrates how lag influences biodegradation.

  18. Modeling the Maximum Spreading of Liquid Droplets Impacting Wetting and Nonwetting Surfaces.

    PubMed

    Lee, Jae Bong; Derome, Dominique; Guyer, Robert; Carmeliet, Jan

    2016-02-01

    Droplet impact has been imaged on different rigid, smooth, and rough substrates for three liquids with different viscosity and surface tension, with special attention to the lower impact velocity range. Of all studied parameters, only surface tension and viscosity, thus the liquid properties, clearly play a role in terms of the attained maximum spreading ratio of the impacting droplet. Surface roughness and type of surface (steel, aluminum, and parafilm) slightly affect the dynamic wettability and maximum spreading at low impact velocity. The dynamic contact angle at maximum spreading has been identified to properly characterize this dynamic spreading process, especially at low impact velocity where dynamic wetting plays an important role. The dynamic contact angle is found to be generally higher than the equilibrium contact angle, showing that statically wetting surfaces can become less wetting or even nonwetting under dynamic droplet impact. An improved energy balance model for the maximum spreading ratio is proposed based on a correct analytical modeling of the time at maximum spreading, which determines the viscous dissipation. Experiments show that the time at maximum spreading decreases with impact velocity depending on the surface tension of the liquid, and a scaling with maximum spreading diameter and surface tension is proposed. A second improvement is based on the use of the dynamic contact angle at maximum spreading, instead of quasi-static contact angles, to describe the dynamic wetting process at low impact velocity. This improved model shows good agreement with experiments for the maximum spreading ratio versus impact velocity for different liquids, and a better prediction than other models in the literature. In particular, the We^(1/2) scaling is found to be invalid at low velocities, since the curves bend over to higher maximum spreading ratios due to the dynamic wetting process. PMID:26743317
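
    For comparison, one widely cited energy-balance estimate from the literature (Pasandideh-Fard et al., 1996) is easy to state; it is shown here only as an illustrative baseline, since the paper's improved model instead uses the dynamic contact angle at maximum spreading and a corrected spreading time:

        import numpy as np

        def beta_max(We, Re, theta_deg):
            # maximum spreading ratio from an energy balance at max spreading;
            # theta_deg is the (advancing) contact angle in degrees
            theta = np.radians(theta_deg)
            return np.sqrt((We + 12.0) /
                           (3.0 * (1.0 - np.cos(theta)) + 4.0 * We / np.sqrt(Re)))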

  19. Uniform Accuracy of the Maximum Likelihood Estimates for Probabilistic Models of Biological Sequences

    PubMed Central

    Ekisheva, Svetlana

    2010-01-01

    Probabilistic models for biological sequences (DNA and proteins) have many useful applications in bioinformatics. Normally, the values of the parameters of these models have to be estimated from empirical data. However, even for the most common estimates, the maximum likelihood (ML) estimates, the properties have not been completely explored. Here we assess the uniform accuracy of the ML estimates for models of several types: the independence model, the Markov chain and the hidden Markov model (HMM). In particular, we derive rates of decay of the maximum estimation error by employing measure concentration as well as the Gaussian approximation, and compare these rates. PMID:21318122

  20. Exact valence bond entanglement entropy and probability distribution in the XXX spin chain and the Potts model.

    PubMed

    Jacobsen, J L; Saleur, H

    2008-02-29

    We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L > 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy S_VB = N_c ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model. PMID:18352661

  1. Dynamics of Information Entropies of Atom-Field Entangled States Generated via the Jaynes–Cummings Model

    NASA Astrophysics Data System (ADS)

    Pakniat, R.; Tavassoly, M. K.; Zandi, M. H.

    2016-03-01

    In this paper we have studied the dynamical evolution of Shannon information entropies in position and momentum spaces for two classes of (nonstationary) atom-field entangled states, which are obtained via the Jaynes–Cummings model and its generalization. We have focused on the interaction between two- and Ξ-type three-level atoms with the single-mode quantized field. The three-dimensional plots of entropy densities in position and momentum spaces are presented versus the corresponding coordinates and time, numerically. It is observed that for particular values of the parameters of the systems, entropy squeezing in position space occurs. Finally, we have shown that the well-known BBM (Beckner, Bialynicki-Birula and Mycielski) inequality, which is a stronger statement of the Heisenberg uncertainty relation, is properly satisfied.

  2. EEG entropy measures in anesthesia

    PubMed Central

    Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli

    2015-01-01

    Highlights: Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression; Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states; approximate entropy and sample entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthesia drugs' effect is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP) in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State Entropy (SE); three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)]; Hilbert-Huang spectral entropy (HHSE); approximate entropy (ApEn); sample entropy (SampEn); fuzzy entropy; and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets, from sevoflurane-induced and isoflurane-induced anesthesia respectively, were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. The multifractal detrended fluctuation analysis (MDFA), a non-entropy measure, was compared as well. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. Three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R2) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, these entropy measures showed an advantage in computation
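
    A minimal sketch of one of the best-performing index families above, the Shannon permutation entropy (the ordinal order of 3 and unit delay are illustrative defaults, not the study's settings):

        import numpy as np

        def permutation_entropy(x, order=3, delay=1):
            # Shannon entropy (in bits) of the ordinal-pattern distribution
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            counts = {}
            for i in range(n):
                pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
                counts[pattern] = counts.get(pattern, 0) + 1
            p = np.array(list(counts.values()), dtype=float) / n
            return -np.sum(p * np.log2(p))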

  3. Integrating Entropy and Closed Frequent Pattern Mining for Social Network Modelling and Analysis

    NASA Astrophysics Data System (ADS)

    Adnan, Muhaimenul; Alhajj, Reda; Rokne, Jon

    The recent increase in explicitly available social networks has attracted the attention of the research community to investigate how it would be possible to benefit from such a powerful model in producing effective solutions for problems in other domains where the social network is implicit; we argue that social networks do exist around us but the key issue is how to realize and analyze them. This chapter presents a novel approach for constructing a social network model by an integrated framework that first prepares the data to be analyzed and then applies entropy and frequent closed pattern mining for network construction. For a given problem, we first prepare the data by identifying items and transactions, which are the basic ingredients for frequent closed pattern mining. Items are the main objects in the problem, and a transaction is a set of items that could exist together at one time (e.g., items purchased in one visit to the supermarket). Transactions can be analyzed to discover frequent closed patterns using any of the well-known techniques. Frequent closed patterns have the advantage that they successfully capture the inherent information content of the dataset and are applicable to a broader set of domains. Entropies of the frequent closed patterns are used to keep the dimensionality of the feature vectors to a reasonable size; this is a form of feature reduction. Finally, we analyze the dynamic behavior of the constructed social network. Experiments were conducted on a synthetic dataset and on the Enron corpus email dataset. The results presented in the chapter show that social networks extracted from a feature set of frequent closed patterns successfully carry the community structure information. Moreover, for the Enron email dataset, we present an analysis to dynamically indicate deviations from each user's individual and community profile. These indications of deviations can be very useful for identifying unusual events.

  4. The Holographic Entropy Cone

    SciTech Connect

    Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; Stoica, Bogdan; Sully, James; Walter, Michael

    2015-09-21

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.

  7. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  8. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  9. SELECTION OF CANDIDATE EUTROPHICATION MODELS FOR TOTAL MAXIMUM DAILY LOADS ANALYSES

    EPA Science Inventory

    A tiered approach was developed to evaluate candidate eutrophication models to select a common suite of models that could be used for Total Maximum Daily Loads (TMDL) analyses in estuaries, rivers, and lakes/reservoirs. Consideration for linkage to watershed models and ecologica...

  10. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.

  11. Application of nonstationary generalized logistic models for analyzing the annual maximum rainfall data in Korea

    NASA Astrophysics Data System (ADS)

    Kim, S.; Joo, K.; Kim, H.; Heo, J. H.

    2014-12-01

    Recently, various approaches to nonstationary frequency analysis have been studied, since the effect of climate change on hydrologic data has become widely recognized. Most nonstationary studies have proposed nonstationary generalized extreme value (GEV) and generalized Pareto models for annual maximum and POT (peak-over-threshold) data, respectively. However, alternatives are needed for analyzing nonstationary hydrologic data because of the complicated influence of climate change. This study proposes nonstationary generalized logistic models containing time-dependent location and scale parameters. These models allow one or both of the location and scale parameters to change linearly over time. The parameters are estimated by the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed models are applied to the annual maximum rainfall data of Korea in order to evaluate their applicability.
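
    A sketch of the likelihood such a fit maximizes, assuming Hosking's parameterization of the generalized logistic distribution with a linear trend in the location parameter only (the study's exact parameterization may differ, and a generic optimizer stands in for Newton-Raphson):

        import numpy as np
        from scipy.optimize import minimize

        def glo_nll(params, x, t):
            # location xi(t) = xi0 + xi1*t; scale alpha and shape k constant
            xi0, xi1, alpha, k = params
            if alpha <= 0.0 or abs(k) < 1e-8:        # k = 0 case not handled
                return np.inf
            z = 1.0 - k * (x - (xi0 + xi1 * t)) / alpha
            if np.any(z <= 0.0):                     # outside the support
                return np.inf
            y = -np.log(z) / k
            # GLO density: f = exp(-(1 - k) y) / (alpha (1 + exp(-y))**2)
            logf = -np.log(alpha) - (1.0 - k) * y - 2.0 * np.log1p(np.exp(-y))
            return -logf.sum()

        # fit = minimize(glo_nll, [x.mean(), 0.0, x.std(), 0.1],
        #                args=(x, t), method="Nelder-Mead")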

  12. Deriving non-homogeneous DNA Markov chain models by cluster analysis algorithm minimizing multiple alignment entropy.

    PubMed

    Borodovsky, M; Peresetsky, A

    1994-09-01

    Non-homogeneous Markov chain models can represent biologically important regions of DNA sequences. The statistical pattern that is described by these models is usually weak and was found primarily because of strong biological indications. The general method for extracting similar patterns is presented in the current paper. The algorithm incorporates cluster analysis, multiple alignment and entropy minimization. The method was first tested using the set of DNA sequences produced by Markov chain generators. It was shown that artificial gene sequences, which initially have been randomly set up along the multiple alignment panels, are aligned according to the hidden triplet phase. Then the method was applied to real protein-coding sequences and the resulting alignment clearly indicated the triplet phase and produced the parameters of the optimal 3-periodic non-homogeneous Markov chain model. These Markov models were already employed in the GeneMark gene prediction algorithm, which is used in genome sequencing projects. The algorithm can also handle the case in which the sequences to be aligned reveal different statistical patterns, such as Escherichia coli protein-coding sequences belonging to Class II and Class III. The algorithm accepts a random mix of sequences from different classes, and is able to separate them into two groups (clusters), align each cluster separately, and define a non-homogeneous Markov chain model for each sequence cluster. PMID:7952897
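
    A toy sketch of the 3-periodic (phase-dependent) Markov model that the aligned sequences ultimately parameterize; the first-order, single-sequence setting here is an illustrative simplification of the GeneMark-style estimation:

        from collections import Counter

        def periodic_markov_mle(seq, period=3, order=1):
            # one transition table per phase (codon position), ML-estimated
            # from observed counts in a string such as "ATGGCT..."
            counts = [Counter() for _ in range(period)]
            totals = [Counter() for _ in range(period)]
            for i in range(order, len(seq)):
                phase = i % period
                ctx = seq[i - order:i]
                counts[phase][(ctx, seq[i])] += 1
                totals[phase][ctx] += 1
            return [{k: v / totals[ph][k[0]] for k, v in counts[ph].items()}
                    for ph in range(period)]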

  13. Generalized Measure of Entropy, Mathai's Distributional Pathway Model, and Tsallis Statistics

    NASA Astrophysics Data System (ADS)

    Mathai, A. M.; Haubold, H. J.

    2006-11-01

    The well-known pathway model of Mathai (2005) mainly deals with the rectangular matrix-variate case. In this paper the scalar version is shown to be associated with a large number of probability models used in physics. Different families of densities are discussed, which are all connected through the pathway parameter α, generating a distributional pathway. The idea is to switch from one functional form to another through this parameter, and it is shown that basically one can proceed from the generalized type-1 beta family to the generalized type-2 beta family to the generalized gamma family when the real variable is positive, and to a wider set of families when the variable can take negative values as well. For simplicity, only the real scalar case is discussed here, but corresponding families are available when the variable is in the complex domain. A large number of densities used in physics are shown to be special cases of or associated with the pathway model, including the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein distributions. It is also shown that the pathway model is available by maximizing a generalized measure of entropy, leading to an entropic pathway. Particular cases of the pathway model are shown to cover Tsallis statistics (Tsallis, 1988) and the superstatistics introduced by Beck and Cohen (2003).

  14. Position Sensorless Control of IPMSMs Based on a Novel Flux Model Suitable for Maximum Torque Control

    NASA Astrophysics Data System (ADS)

    Matsumoto, Atsushi; Hasegawa, Masaru; Matsui, Keiju

    In this paper, a novel position sensorless control method for interior permanent magnet synchronous motors (IPMSMs), based on a novel flux model suitable for maximum torque control, is proposed. Maximum torque per ampere (MTPA) control is often used for driving IPMSMs with maximum efficiency. In order to implement this control, the parameters are generally required to be accurate. However, the inductance varies dramatically because of magnetic saturation, which has been one of the most important problems in recent years. Therefore, the conventional MTPA control method fails to achieve maximum efficiency for IPMSMs because of parameter mismatches. In this paper, first, a novel flux model insensitive to Lq is proposed for realizing position sensorless control of IPMSMs. In addition, it is shown that the proposed flux model can approximately estimate the maximum torque control (MTC) frame, which is a new coordinate frame aligned with the current vector under MTPA control. Next, a precise estimation method for the MTC frame is proposed, by which highly accurate maximum torque control can be achieved. A decoupling control algorithm based on the proposed model is also addressed. Finally, experimental results demonstrate the feasibility and effectiveness of the proposed method.
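
    For context, the textbook MTPA relation whose parameter sensitivity motivates the paper (derived by minimizing current amplitude at a given torque; the point of the proposed flux model is precisely to avoid the explicit Lq dependence shown here):

        import numpy as np

        def mtpa_id(iq, psi_a, Ld, Lq):
            # d-axis current minimizing |i| for a given iq (IPMSM, Lq > Ld)
            dL = Lq - Ld
            return psi_a / (2.0 * dL) - np.sqrt(psi_a**2 / (4.0 * dL**2) + iq**2)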

  15. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting the finite mixture model by maximum likelihood estimation, as it provides desirable asymptotic properties. In particular, it is consistent as the sample size increases to infinity, so the maximum likelihood estimator is asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
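
    In practice such a mixture likelihood is usually maximized with the EM algorithm; below is a minimal univariate two-component Gaussian sketch (the component family and the naive initialization are assumptions, since the abstract does not state them):

        import numpy as np

        def em_two_gaussians(x, iters=200):
            x = np.asarray(x, dtype=float)
            mu = np.array([x.min(), x.max()])        # crude initialization
            sig = np.array([x.std(), x.std()])
            w = np.array([0.5, 0.5])
            for _ in range(iters):
                dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
                         / (sig * np.sqrt(2.0 * np.pi))
                r = dens / dens.sum(axis=1, keepdims=True)   # E-step
                nk = r.sum(axis=0)                           # M-step
                w = nk / len(x)
                mu = (r * x[:, None]).sum(axis=0) / nk
                sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            return w, mu, sig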

  16. Role of chain entropy in an analytic model of protein binding in single-DNA stretching experiments.

    PubMed

    Lam, Pui-Man; Neumann, Richard M

    2011-09-01

    We show that the simple analytical model proposed by Zhang and Marko [Phys. Rev. E 77, 031916 (2008)] to illustrate Maxwell relations for single-DNA experiments can be improved by including the zero-force entropy of a Gaussian chain. The resulting model is in excellent agreement with the discrete persistent-chain model and is in a form convenient for analyzing experimental data. PMID:22060437

  17. Reconstruction of f(R) Gravity with Ordinary and Entropy-Corrected (m, n)-Type Holographic Dark Energy Model

    NASA Astrophysics Data System (ADS)

    Prabir, Rudra

    2016-07-01

    In this work we present a reconstruction scheme between f(R) gravity and ordinary and entropy-corrected (m, n)-type holographic dark energy. The correspondence is established and expressions for the reconstructed f(R) models are determined. Plots are generated to study the evolution of the reconstructed models. The stability of the calculated models is also investigated using the squared speed of sound in the background of the reconstructed gravities.

  18. Entropy-based neural networks model for flow duration curves at ungauged sites

    NASA Astrophysics Data System (ADS)

    Atieh, Maya; Gharabaghi, Bahram; Rudra, Ramesh

    2015-10-01

    An apportionment entropy disorder index (AEDI), capturing both temporal and spatial variability of precipitation, was introduced as a new input parameter to an artificial neural networks (ANN) model to more accurately predict flow duration curves (FDCs) at ungauged sites. The ANN model was trained on the randomly selected 2/3 of the dataset of 147 gauged streams in Ontario, and validated on the remaining 1/3. Both location and scale parameters that define the lognormal distribution for the FDCs were highly sensitive to the driving climatic factors, such as, mean annual precipitation, mean annual snowfall, and AEDI. Of the long list of watershed characteristics, the location parameter was most sensitive to drainage area, shape factor and percent area covered by natural vegetation that enhanced evapotranspiration. However, scale parameter was sensitive to drainage area, watershed slope and the baseflow index. Incorporation of the AEDI in the ANN model improved prediction performance of the location and scale parameters by 7% and 21%, respectively. A case study application of the new model for the design of micro-hydropower generators in ungauged rural headwater streams was presented.

  19. Semi-Markov regime switching interest rate models and minimal entropy measure

    NASA Astrophysics Data System (ADS)

    Hunt, Julien; Devolder, Pierre

    2011-10-01

    In this paper, we present a discrete time regime switching binomial-like model of the term structure where the regime switches are governed by a discrete time semi-Markov process. We model the evolution of the prices of zero-coupon bonds, given an initial term structure, as in the model by Ho and Lee that we aim to extend. We discuss and derive conditions for the model to be arbitrage free and relate this to the notion of martingale measure. We explicitly show that due to the extra source of uncertainty coming from the underlying semi-Markov process, there are an infinite number of equivalent martingale measures. The notion of path independence is also studied in some detail, especially in the presence of regime switches. We deal with the market incompleteness by giving an explicit characterization of the minimal entropy martingale measure. We give an application to the pricing of a European bond option both in a Markov and semi-Markov framework. Finally, we draw some conclusions.

  20. Reconstruction of f(G) gravity with ordinary and entropy-corrected (m,n)-type holographic dark energy model

    NASA Astrophysics Data System (ADS)

    Ghosh, Rahul; Debnath, Ujjal

    2014-05-01

    We have discussed the correspondence of the well-accepted f(G) gravity theory with two dark energy models: (m, n)-type holographic dark energy ((m, n)-type HDE) and entropy-corrected (m, n)-type holographic dark energy. For this purpose, we have considered the power law form of the scale factor a(t) = a_0 t^p, p > 1. The reconstructed f(G) in these models has been found and the models in both cases are found to be realistic. We have also discussed the classical stability issues in both models. The (m, n)-type HDE and its entropy-corrected version are more stable than the ordinary HDE model.

  1. Development of Daily Maximum Flare-Flux Forecast Models for Strong Solar Flares

    NASA Astrophysics Data System (ADS)

    Shin, Seulki; Lee, Jin-Yi; Moon, Yong-Jae; Chu, Hyoungseok; Park, Jongyeob

    2016-03-01

    We have developed a set of daily maximum flare-flux forecast models for strong flares (M- and X-class) using multiple linear regression (MLR) and artificial neural network (ANN) methods. Our input parameters are solar-activity data from January 1996 to December 2013 such as sunspot area, X-ray maximum, and weighted total flare flux of the previous day, as well as mean flare rates of McIntosh sunspot group (Zpc) and Mount Wilson magnetic classifications. For a training dataset, we used 61 events each of C-, M-, and X-class from January 1996 to December 2004. For a testing dataset, we used all events from January 2005 to November 2013. A comparison between our maximum flare-flux models and NOAA model based on true skill statistics (TSS) shows that the MLR model for X-class and the average of all flares (M+X-class) are much better than the NOAA model. According to the hitting fraction (HF), which is defined as a fraction of events satisfying the condition that the absolute differences of predicted and observed flare flux on a logarithm scale are smaller than or equal to 0.5, our models successfully forecast the maximum flare flux of about two-thirds of the events for strong flares. Since all input parameters for our models are easily available, the models can be operated steadily and automatically on a daily basis for space-weather services.

  2. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  3. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  4. Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.

    PubMed

    Yeadon, Maurice R; King, Mark A; Wilson, Cassie

    2006-01-01

    The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 deg s^(-1). The theoretical tetanic torque/angular velocity relationship was modelled using a four parameter function comprising two rectangular hyperbolas while the activation/angular velocity relationship was modelled using a three parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase. PMID:16389087

  5. Development of Daily Solar Maximum Flare Flux Forecast Models for Strong Flares

    NASA Astrophysics Data System (ADS)

    Shin, Seulki; Chu, Hyoungseok

    2015-08-01

    We have developed a set of daily solar maximum flare flux forecast models for strong flares using Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) methods. We consider as input parameters solar activity data from January 1996 to December 2013, such as sunspot area, X-ray maximum flare flux and weighted total flux of the previous day, and mean flare rates of McIntosh sunspot group (Zpc) and Mount Wilson magnetic classification. For the training data set, we use the same number of 61 events for each of the C-, M-, and X-classes from Jan. 1996 to Dec. 2004, while other previous models use all flares. For the testing data set, we use all flares from Jan. 2005 to Nov. 2013. The statistical parameters from contingency tables show that the ANN models are better for maximum flare flux forecasting than the MLR models. A comparison between our maximum flare flux models and previous ones based on the Heidke Skill Score (HSS) shows that all our models for X-class flares are much better than the other models. According to the Hitting Fraction (HF), defined as the fraction of events for which the absolute difference of predicted and observed flare flux on a logarithmic scale is less than or equal to 0.5, our models successfully forecast the maximum flare flux of about two-thirds of the events for strong flares. Since all input parameters for our models are easily available, the models can be operated steadily and automatically on a daily basis for space weather services.

  6. Probing local structure in the yellow phosphor LaSr₂AlO₅:Ce³⁺ by the maximum entropy method and pair distribution function analysis

    SciTech Connect

    Im, Won Bin; Page, Katharine; DenBaars, Steven P.; Seshadri, Ram

    2011-08-04

    The compound LaSr₂AlO₅ was recently introduced as a competitive Ce³⁺ host material for blue-pumped yellow phosphors for use in white light emitting diodes. A crucial feature of the crystal structure of LaSr₂AlO₅ is that La, which is the host site for Ce³⁺, is located in the 8h positions of the I4/mcm crystal structure, a site equally shared with Sr. While the average crystal structure of LaSr₂AlO₅ as revealed by Rietveld analysis of laboratory and synchrotron X-ray diffraction data suggests nothing untoward, maximum entropy method analysis of the synchrotron X-ray data reveals the existence of conspicuous non-sphericity of the electron density. Pair distribution function analysis of the data suggests that despite their occupying the same crystallographic site, La and Sr possess distinct coordination environments, and the environment around La is more compact and regular than the environment suggested by the Rietveld refinement of the average structure. The absorption and emission from Ce³⁺ centers is controlled by the local coordination and symmetry, and the use of powerful new tools in unraveling details of these strengthens the rational search for new phosphors for solid state white lighting.

  7. A statistical development of entropy for the introductory physics course

    NASA Astrophysics Data System (ADS)

    Schoepf, David C.

    2002-02-01

    Many introductory physics texts introduce the statistical basis for the definition of entropy in addition to the Clausius definition, ΔS=q/T. We use a model based on equally spaced energy levels to present a way that the statistical definition of entropy can be developed at the introductory level. In addition to motivating the statistical definition of entropy, we also develop statistical arguments to answer the following questions: (i) Why does a system approach a state of maximum number of microstates? (ii) What is the equilibrium distribution of particles? (iii) What is the statistical basis of temperature? (iv) What is the statistical basis for the direction of spontaneous energy transfer? Finally, a correspondence between the statistical and the classical Clausius definitions of entropy is made.
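
    A tiny numerical companion to the model above, assuming the intended combinatorics is the standard Einstein-solid counting of q quanta among N equally spaced oscillators:

        from math import comb, log

        def entropy(N, q, kB=1.0):
            # Omega = C(q + N - 1, q) microstates; S = kB ln Omega
            return kB * log(comb(q + N - 1, q))

        # Energy flows spontaneously in the direction that raises the total
        # entropy: entropy(30, 50) + entropy(30, 50) exceeds
        # entropy(30, 10) + entropy(30, 90) for two solids in contact.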

  8. Uncertainties in maximum entropy (ME) reconstructions

    SciTech Connect

    Bevensee, R.M.

    1987-04-01

    This paper summarizes recent work done at the Lawrence Livermore National Laboratory by the writer on the effects of statistical uncertainty and image noise in Boltzmann ME inversion. The object of this work is the formulation of a Theory of Uncertainties which would allow one to compute confidence intervals for an object parameter near an ME reference value.

  9. Determining the Tsallis parameter via maximum entropy.

    PubMed

    Conroy, J M; Miller, H G

    2015-05-01

    The nonextensive entropic measure proposed by Tsallis [C. Tsallis, J. Stat. Phys. 52, 479 (1988)] introduces a parameter, q, which is not defined but rather must be determined. The value of q is typically determined from a piece of data and then fixed over the range of interest. On the other hand, from a phenomenological viewpoint, there are instances in which q cannot be treated as a constant. We present two distinct approaches for determining q depending on the form of the equations of constraint for the particular system. In the first case the equations of constraint for the operator Ô can be written as Tr(F^q Ô) = C, where C may be an explicit function of the distribution function F. We show that in this case one can solve an equivalent maxent problem which yields q as a function of the corresponding Lagrange multiplier. As an illustration, the exact solution of the static generalized Fokker-Planck equation (GFPE) is obtained from maxent with the Tsallis entropy. As in the case where C is a constant, if q is treated as a variable within the maxent framework, the entropic measure is maximized trivially for all values of q. Therefore q must be determined from existing data. In the second case an additional equation of constraint exists which cannot be brought into the above form. In this case the additional equation of constraint may be used to determine the fixed value of q. PMID:26066124

  10. Maximum entropy approach to fuzzy control

    NASA Technical Reports Server (NTRS)

    Ramer, Arthur; Kreinovich, Vladik YA.

    1992-01-01

    For the same expert knowledge, if one uses different &- and V-operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized, and it is shown that it leads to the choice of min(a + b, 1) for V and min(a, b) for &. This choice was tried on the NASA Shuttle simulator; it leads to a maximally stable control.

  11. Steepest-entropy-ascent quantum thermodynamic modeling of decoherence in two different microscopic composite systems

    NASA Astrophysics Data System (ADS)

    Cano-Andrade, Sergio; Beretta, Gian Paolo; von Spakovsky, Michael R.

    2015-01-01

    The steepest-entropy-ascent quantum thermodynamic (SEAQT) framework is used to model the decoherence that occurs during the state evolution of two different microscopic composite systems. The test cases are a two-spin-1/2-particle composite system and a particle-photon field composite system like that experimentally studied in cavity quantum electrodynamics. The first system is used to study the characteristics of the nonlinear equation of motion of the SEAQT framework when modeling the state evolution of a microscopic composite system with particular interest in the phenomenon of decoherence. The second system is used to compare the numerical predictions of the SEAQT framework with experimental cavity quantum electrodynamic data available in the literature. For the two different numerical cases presented, the time evolution of the density operator of the composite system as well as that of the reduced operators belonging to the two constituents is traced from an initial nonequilibrium state of the composite along its relaxation towards stable equilibrium. Results show for both cases how the initial entanglement and coherence is dissipated during the state relaxation towards a state of stable equilibrium.

  12. Entropy generation analysis of magnetohydrodynamic induction devices

    NASA Astrophysics Data System (ADS)

    Salas, Hugo; Cuevas, Sergio; López de Haro, Mariano

    1999-10-01

    Magnetohydrodynamic (MHD) induction devices such as electromagnetic pumps or electric generators are analysed within the entropy-generation approach. The flow of an electrically conducting incompressible fluid in an MHD induction machine is described through the well-known Hartmann model. Irreversibilities in the system due to ohmic dissipation, flow friction and heat flow are included in the entropy-generation rate. This quantity is used to define an overall efficiency for the induction machine that accounts for the total loss caused by process irreversibility. For an MHD generator working at maximum power output with walls at constant temperature, an optimum magnetic field strength (i.e. Hartmann number) is found based on the maximum overall efficiency.

  13. Steepest-entropy-ascent quantum thermodynamic modeling of the relaxation process of isolated chemically reactive systems using density of states and the concept of hypoequilibrium state

    NASA Astrophysics Data System (ADS)

    Li, Guanchen; von Spakovsky, Michael R.

    2016-01-01

    This paper presents a study of the nonequilibrium relaxation process of chemically reactive systems using steepest-entropy-ascent quantum thermodynamics (SEAQT). The trajectory of the chemical reaction, i.e., the accessible intermediate states, is predicted and discussed. The prediction is made using a thermodynamic-ensemble approach, which does not require detailed information about the particle mechanics involved (e.g., the collision of particles). Instead, modeling the kinetics and dynamics of the relaxation process is based on the principle of steepest-entropy ascent (SEA) or maximum-entropy production, which suggests a constrained gradient dynamics in state space. The SEAQT framework is based on general definitions for energy and entropy and at least theoretically enables the prediction of the nonequilibrium relaxation of system state at all temporal and spatial scales. However, to make this not just theoretically but computationally possible, the concept of density of states is introduced to simplify the application of the relaxation model, which in effect extends the application of the SEAQT framework even to systems with an infinite number of energy eigenlevels. The energy eigenstructure of the reactive system considered here consists of an extremely large number of such levels (on the order of 10^130) and lends itself to the quasicontinuous assumption. The principle of SEA results in a unique trajectory of system thermodynamic state evolution in Hilbert space in the nonequilibrium realm, even far from equilibrium. To describe this trajectory, the concepts of subsystem hypoequilibrium state and temperature are introduced and used to characterize each system-level, nonequilibrium state. This definition of temperature is fundamental rather than phenomenological and is a generalization of the temperature defined at stable equilibrium. In addition, to deal with the large number of energy eigenlevels, the equation of motion is formulated on the basis of the density of states and a set of…

  14. Storm-surge prediction at the Tanshui estuary: development model for maximum storm surges

    NASA Astrophysics Data System (ADS)

    Tsai, C.-P.; You, C.-Y.; Chen, C.-Y.

    2013-12-01

    This study applies artificial neural networks, including both the supervised multilayer perceptron neural network and the radial basis function neural network, to the prediction of storm surges at the Tanshui estuary in Taiwan. The optimum parameters for the prediction of the maximum storm surges, based on 22 previous sets of data, are discussed. Two different neural network methods are adopted to build models for the prediction of storm surges, and the importance of each factor is also discussed. The factors relevant to the maximum storm surges, including the pressure difference, maximum wind speed and wind direction at the Tanshui estuary and the flow rate at the upstream station, are all investigated. These results can further be applied to build a neural network model for the prediction of storm surges with time series data.
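
    A hedged sketch (not the authors' code) of the supervised setup described above: predict the maximum storm surge from the four listed factors. The data below are synthetic placeholders standing in for the 22 historical events:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Columns: pressure difference, max wind speed, wind direction, upstream flow.
    X = rng.normal(size=(22, 4))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=22)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,),
                                       max_iter=5000, random_state=0))
    model.fit(X[:18], y[:18])        # train on most events ...
    print(model.predict(X[18:]))     # ... hold out the rest for testing
    ```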

  15. Development of models for maximum and time variation of storm surges at the Tanshui estuary

    NASA Astrophysics Data System (ADS)

    Tsai, C.-P.; You, C.-Y.

    2014-09-01

    In this study, artificial neural networks, including both multilayer perceptron and radial basis function neural networks, are applied for modeling and forecasting the maximum and time variation of storm surges at the Tanshui estuary in Taiwan. The physical parameters, including both the local atmospheric pressure and the wind field factors, for finding the maximum storm surges are first investigated based on the training of neural networks. Then neural network models for forecasting the time series of storm surges are accordingly developed using the major meteorological parameters with time variations. The time series of storm surges for six typhoons were used for training and testing the models, and data for three typhoons were used for model forecasting. The results show that both neural network models perform very well for forecasting the time variation of storm surges.

  16. Safety Assessment of Dangerous Goods Transport Enterprise Based on the Relative Entropy Aggregation in Group Decision Making Model

    PubMed Central

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    The safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. To address the high accident rate and severe harm in dangerous goods logistics, this paper recasts a group decision making problem, based on the ideas of integration and coordination, as a multiagent multiobjective group decision making problem; a two-stage decision model is established and applied to the safety assessment of dangerous goods transport enterprises. First, a first-level multiobjective decision model is built using a dynamic multivalue background and entropy theory. Second, experts are weighted according to the principle of clustering analysis, and relative entropy theory is used to establish a second-stage aggregation optimization model for group decision making, whose solution is then discussed. Next, after investigation and analysis, a safety evaluation index system for dangerous goods transport enterprises is established. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of the model, providing a sound decision making basis for assessing dangerous goods transport enterprises. PMID:25477954

  17. Safety assessment of dangerous goods transport enterprise based on the relative entropy aggregation in group decision making model.

    PubMed

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    The safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. To address the high accident rate and severe harm in dangerous goods logistics, this paper recasts a group decision making problem, based on the ideas of integration and coordination, as a multiagent multiobjective group decision making problem; a two-stage decision model is established and applied to the safety assessment of dangerous goods transport enterprises. First, a first-level multiobjective decision model is built using a dynamic multivalue background and entropy theory. Second, experts are weighted according to the principle of clustering analysis, and relative entropy theory is used to establish a second-stage aggregation optimization model for group decision making, whose solution is then discussed. Next, after investigation and analysis, a safety evaluation index system for dangerous goods transport enterprises is established. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of the model, providing a sound decision making basis for assessing dangerous goods transport enterprises. PMID:25477954
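
    One plausible form of the relative-entropy weighting step described above (the paper's exact second-stage model is not reproduced, and the expert scores are invented): experts whose normalized assessments lie closer, in Kullback-Leibler divergence, to the group view receive larger weights.

    ```python
    import numpy as np

    def kl(p, q):
        """Relative (Kullback-Leibler) entropy between discrete distributions."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        return float(np.sum(p * np.log(p / q)))

    # Rows: experts; columns: normalized assessment scores over criteria.
    scores = np.array([[0.20, 0.30, 0.50],
                       [0.25, 0.35, 0.40],
                       [0.40, 0.40, 0.20]])
    group = scores.mean(axis=0)
    group /= group.sum()

    d = np.array([kl(row, group) for row in scores])
    weights = np.exp(-d)
    weights /= weights.sum()
    print(weights)   # expert 3, farthest from the group view, gets the least
    ```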

  18. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  19. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  20. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    SciTech Connect

    Donnelly, Catherine

    2011-10-15

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  1. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  2. THE MAXIMUM LIKELIHOOD APPROACH TO PROBABILISTIC MODELING OF AIR QUALITY DATA

    EPA Science Inventory

    Software using maximum likelihood estimation to fit six probabilistic models is discussed. The software is designed as a tool for the air pollution researcher to determine what assumptions are valid in the statistical analysis of air pollution data for the purpose of standard set...

  3. Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Bremner, Paul

    2014-01-01

    This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows the application of the power balance method and the extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.

  4. Linear entropies in the Jaynes-Cummings model with intensity-dependent coupling in a phase-damped cavity

    NASA Astrophysics Data System (ADS)

    Zhou, Qing-Chun; Zhu, Shi-Ning

    2005-06-01

    We investigate the evolution of a quantum system described by the Jaynes-Cummings model with an arbitrary form of intensity-dependent coupling by displaying the linear entropies of the atom, field and atom-field system in the large detuning approximation. The cavity field is assumed to be coupled to a reservoir with a phase-damping coupling. The effects of cavity phase damping on the entanglement and coherence loss of such a system are studied.

  5. Power and entropy generation of an extended irreversible Brayton cycle: optimal parameters and performance

    NASA Astrophysics Data System (ADS)

    Herrera, Carlos A.; Sandoval, Jairo A.; Rosillo, Miguel E.

    2006-08-01

    Finite time thermodynamics is used to solve a new model of an extended Brayton cycle with variable-temperature heat reservoirs and finite-size heat exchangers. The model takes into account external and internal entropy generation and handles heat recovery and heat leaks to the environment in a novel way. The extended-system considerations are very important for minimizing entropy generation and maximizing second-law efficiency, profit and the ecological criterion. An optimization analysis was developed on this new model to determine its maximum power and minimum entropy generation; among the most important findings were the global maximum net power, the global minimum entropy generation, the optimum global heat exchanger size distribution, the best working-fluid specific heat ratio and the optimal fluid heat capacities, some of which had not been published previously.

  6. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  7. Numerical and analytical modelling of entropy noise in a supersonic nozzle with a shock

    NASA Astrophysics Data System (ADS)

    Leyko, M.; Moreau, S.; Nicoud, F.; Poinsot, T.

    2011-08-01

    Analytical and numerical assessments of the indirect noise generated through a nozzle are presented. The configuration corresponds to an experiment achieved at DLR by Bake et al. [The entropy wave generator (EWG): a reference case on entropy noise, Journal of Sound and Vibration 326 (2009) 574-598] where an entropy wave is generated upstream of a nozzle by an electrical heating device. Both 3-D and 2-D axisymmetric simulations are performed to demonstrate that the experiment is mostly driven by linear acoustic phenomena, including pressure wave reflection at the outlet and entropy-to-acoustic conversion in the accelerated regions. Moreover, the spatial inhomogeneity of the upstream entropy fluctuation has no visible effect for the investigated frequency range (0-100 Hz). Similar results are obtained with a purely analytical method based on the compact nozzle approximation of Marble and Candel [Acoustic disturbances from gas nonuniformities convected through a nozzle, Journal of Sound and Vibration 55 (1977) 225-243] demonstrating that the DLR results can be reproduced simply on the basis of a low-frequency compact-elements approximation. As in the present simulations, the analytical method shows that the acoustic impedance downstream of the nozzle must be accounted for to properly recover the experimental pressure signal. The analytical method can also be used to optimize the experimental parameters and avoid the interaction between transmitted and reflected waves.

  8. Entanglement entropy on fuzzy spaces

    SciTech Connect

    Dou, Djamel; Ydri, Badis

    2006-08-15

    We study the entanglement entropy of a scalar field in 2+1 spacetime where space is modeled by a fuzzy sphere and a fuzzy disc. In both models we evaluate numerically the resulting entropies and find that they are proportional to the number of boundary degrees of freedom. In the Moyal plane limit of the fuzzy disc the entanglement entropy per unit area (length) diverges if the ignored region is of infinite size. The divergence is interpreted as being of IR-UV mixing origin. In general we expect the entanglement entropy per unit area to be finite on a noncommutative space if the ignored region is of finite size.

  9. Entropy and climate. I - ERBE observations of the entropy production of the earth

    NASA Technical Reports Server (NTRS)

    Stephens, G. L.; O'Brien, D. M.

    1993-01-01

    An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
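
    A back-of-envelope sketch of the kind of estimate the record reports, using the standard result that a radiative energy flux F carried at an effective emission temperature T transports entropy at roughly (4/3)F/T; the round numbers below are common textbook values, not ERBE data:

    ```python
    def entropy_flux(F, T):
        """Entropy flux density (W m^-2 K^-1) of radiation of energy flux F
        at effective emission temperature T (blackbody factor 4/3)."""
        return 4.0 * F / (3.0 * T)

    F_abs = 240.0                    # mean absorbed solar flux, W m^-2
    T_sun, T_earth = 5760.0, 255.0   # effective emission temperatures, K
    net = entropy_flux(F_abs, T_earth) - entropy_flux(F_abs, T_sun)
    area = 4.0 * 3.141592653589793 * 6.371e6 ** 2   # Earth's surface area, m^2
    print(net * area)   # ~0.6e15 W/K, the order of the 0.68 × 10^15 W/K quoted
    ```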

  10. Entropy, matter, and cosmology.

    PubMed

    Prigogine, I; Géhéniau, J

    1986-09-01

    The role of irreversible processes corresponding to creation of matter in general relativity is investigated. The use of Landau-Lifshitz pseudotensors together with conformal (Minkowski) coordinates suggests that this creation took place in the early universe at the stage of the variation of the conformal factor. The entropy production in this creation process is calculated. It is shown that these dissipative processes lead to the possibility of cosmological models that start from empty conditions and gradually build up matter and entropy. Gravitational entropy takes a simple meaning as associated to the entropy that is necessary to produce matter. This leads to an extension of the third law of thermodynamics, as now the zero point of entropy becomes the space-time structure out of which matter is generated. The theory can be put into a convenient form using a supplementary "C" field in Einstein's field equations. The role of the C field is to express the coupling between gravitation and matter leading to irreversible entropy production. PMID:16593747

  11. Entropy, matter, and cosmology

    PubMed Central

    Prigogine, I.; Géhéniau, J.

    1986-01-01

    The role of irreversible processes corresponding to creation of matter in general relativity is investigated. The use of Landau-Lifshitz pseudotensors together with conformal (Minkowski) coordinates suggests that this creation took place in the early universe at the stage of the variation of the conformal factor. The entropy production in this creation process is calculated. It is shown that these dissipative processes lead to the possibility of cosmological models that start from empty conditions and gradually build up matter and entropy. Gravitational entropy takes a simple meaning as associated to the entropy that is necessary to produce matter. This leads to an extension of the third law of thermodynamics, as now the zero point of entropy becomes the space-time structure out of which matter is generated. The theory can be put into a convenient form using a supplementary “C” field in Einstein's field equations. The role of the C field is to express the coupling between gravitation and matter leading to irreversible entropy production. PMID:16593747

  12. Classification of EEG bursts in deep sevoflurane, desflurane and isoflurane anesthesia using AR-modeling and entropy measures.

    PubMed

    Lipping, Tarmo; Stålnacke, Juha; Olejarczyk, Elzbieta; Marciniak, Radoslaw; Jäntti, Ville

    2013-01-01

    A study relating signal patterns of burst onsets in burst-suppression EEG to the anesthetic agent or anesthesia induction protocol is presented. A dataset of 82 recordings of sevoflurane, isoflurane and desflurane anesthesia underlies the study. Three-second segments from the onsets of altogether 3214 bursts are described using AR model parameters, spectral entropy and sample entropy as features. The features are clustered using the K-means algorithm. The results indicate that no clear-cut distinction can be made between the burst patterns induced by the mentioned anesthetics, although bursts with certain properties are more common in certain patient groups. Several directions for further investigation are proposed based on visual inspection of the recordings. PMID:24110878
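
    A schematic sketch of the feature pipeline described above: per-burst AR coefficients plus a spectral-entropy feature (sample entropy omitted for brevity), clustered with K-means. The burst segments here are synthetic placeholders, not EEG:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def ar_coeffs(x, order=4):
        """Least-squares AR(order) fit: predict x[t] from its past values."""
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
        return coef

    def spectral_entropy(x):
        """Shannon entropy of the normalized power spectrum."""
        p = np.abs(np.fft.rfft(x)) ** 2
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    rng = np.random.default_rng(1)
    bursts = [rng.normal(size=300) for _ in range(50)]  # stand-ins for 3 s segments
    features = np.array([np.r_[ar_coeffs(b), spectral_entropy(b)] for b in bursts])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    print(np.bincount(labels))     # cluster sizes
    ```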

  13. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute-force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units towards a common goal.
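
    The selection rule at the heart of the inquiry engine can be stated compactly: among candidate experiments, run the one whose predictive distribution over outcomes has maximum Shannon entropy. A minimal sketch with invented outcome probabilities:

    ```python
    import numpy as np

    def shannon_entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    # Each row: predicted outcome probabilities for one candidate experiment,
    # e.g. obtained from an ensemble of posterior model samples.
    candidates = np.array([[0.9, 0.1, 0.0],
                           [0.4, 0.3, 0.3],
                           [0.5, 0.5, 0.0]])
    best = max(range(len(candidates)),
               key=lambda i: shannon_entropy(candidates[i]))
    print("run experiment", best)   # the most uncertain, hence most informative
    ```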

  14. CodonPhyML: Fast Maximum Likelihood Phylogeny Estimation under Codon Substitution Models

    PubMed Central

    Gil, Manuel; Zoller, Stefan; Anisimova, Maria

    2013-01-01

    Markov models of codon substitution naturally incorporate the structure of the genetic code and the selection intensity at the protein level, providing a more realistic representation of protein-coding sequences compared with nucleotide or amino acid models. Thus, for protein-coding genes, phylogenetic inference is expected to be more accurate under codon models. So far, phylogeny reconstruction under codon models has been elusive due to the computational difficulties of dealing with high-dimension matrices. Here, we present a fast maximum likelihood (ML) package for phylogenetic inference, CodonPhyML, offering hundreds of different codon models, the largest variety to date, for phylogeny inference by ML. CodonPhyML is tested on simulated and real data and is shown to offer excellent speed and convergence properties. In addition, CodonPhyML includes the most recent fast methods for estimating phylogenetic branch supports and provides an integral framework for model selection, including amino acid and DNA models. PMID:23436912

  15. Using maximum topology matching to explore differences in species distribution models

    USGS Publications Warehouse

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian K.; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  16. Spatiotemporal Modeling of Ozone Levels in Quebec (Canada): A Comparison of Kriging, Land-Use Regression (LUR), and Combined Bayesian Maximum Entropy–LUR Approaches

    PubMed Central

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael

    2014-01-01

    Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650

  17. Entropy distance: New quantum phenomena

    SciTech Connect

    Weis, Stephan; Knauf, Andreas

    2012-10-15

    We study a curve of Gibbsian families of complex 3 × 3 matrices and point out new features, absent in commutative finite-dimensional algebras: a discontinuous maximum-entropy inference, a discontinuous entropy distance, and non-exposed faces of the mean value set. We analyze these problems from various aspects, including convex geometry, topology, and information geometry. This research is motivated by a theory of infomax principles, where we contribute by computing first-order optimality conditions of the entropy distance.

  18. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
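
    A generic sketch of the computational scheme the report describes: maximum-likelihood estimates obtained by Newton-Raphson iteration on the score equation. Shown here for a simple one-parameter Poisson-rate likelihood, not the NDMMF equations themselves:

    ```python
    import numpy as np

    def newton_mle(score, hessian, theta0, tol=1e-10, max_iter=50):
        """Newton-Raphson on the score: theta <- theta - score/hessian."""
        theta = theta0
        for _ in range(max_iter):
            step = score(theta) / hessian(theta)
            theta -= step
            if abs(step) < tol:
                break
        return theta

    counts = np.array([3, 1, 4, 1, 5, 9, 2, 6])
    n, s = len(counts), counts.sum()
    score = lambda lam: s / lam - n       # d/d(lambda) of the log-likelihood
    hessian = lambda lam: -s / lam ** 2   # second derivative
    print(newton_mle(score, hessian, theta0=1.0), counts.mean())  # both 3.875
    ```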

  19. Maximum efficiency of state-space models of nanoscale energy conversion devices

    NASA Astrophysics Data System (ADS)

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to nonradiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  20. Maximum efficiency of state-space models of nanoscale energy conversion devices.

    PubMed

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield the Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to nonradiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage. PMID:27394100

  1. Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    PubMed Central

    Ito, Shinya; Hansen, Michael E.; Heiland, Randy; Lumsdaine, Andrew; Litke, Alan M.; Beggs, John M.

    2011-01-01

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons. PMID:22102894

  2. Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model.

    PubMed

    Ito, Shinya; Hansen, Michael E; Heiland, Randy; Lumsdaine, Andrew; Litke, Alan M; Beggs, John M

    2011-01-01

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons. PMID:22102894
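
    A stripped-down sketch of delayed transfer entropy for binary spike trains (history and message length of one bin), scanning several delays as the study advocates; the published toolbox is far more general, and the spike trains here are synthetic:

    ```python
    import numpy as np

    def transfer_entropy(x, y, delay):
        """TE from x to y at the given delay, in bits."""
        yt = y[1 + delay:]       # y "now"
        yp = y[delay:-1]         # y one bin earlier
        xp = x[1:-delay]         # x, `delay` bins before y "now"
        te = 0.0
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    p_abc = np.mean((yt == a) & (yp == b) & (xp == c))
                    if p_abc == 0.0:
                        continue
                    p_b = np.mean(yp == b)
                    p_ab = np.mean((yt == a) & (yp == b))
                    p_bc = np.mean((yp == b) & (xp == c))
                    te += p_abc * np.log2(p_abc * p_b / (p_ab * p_bc))
        return te

    rng = np.random.default_rng(2)
    x = (rng.random(20000) < 0.2).astype(int)
    flip = (rng.random(20000) < 0.05).astype(int)
    y = np.roll(x, 3) ^ flip                    # y follows x with a 3-bin lag
    print([round(transfer_entropy(x, y, d), 3) for d in (1, 2, 3, 4, 5)])
    # TE peaks at delay 3, the true lag
    ```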

  3. Saturating the holographic entropy bound

    SciTech Connect

    Bousso, Raphael; Freivogel, Ben; Leichenauer, Stefan

    2010-10-15

    The covariant entropy bound states that the entropy, S, of matter on a light sheet cannot exceed a quarter of its initial area, A, in Planck units. The gravitational entropy of black holes saturates this inequality. The entropy of matter systems, however, falls short of saturating the bound in known examples. This puzzling gap has led to speculation that a much stronger bound, S ≲ A^{3/4}, may hold true. In this note, we exhibit light sheets whose entropy exceeds A^{3/4} by arbitrarily large factors. In open Friedmann-Robertson-Walker universes, such light sheets contain the entropy visible in the sky; in the limit of early curvature domination, the covariant bound can be saturated but not violated. As a corollary, we find that the maximum observable matter and radiation entropy in universes with positive (negative) cosmological constant is of order Λ^{-1} (Λ^{-2}), and not |Λ|^{-3/4} as had hitherto been believed. Our results strengthen the evidence for the covariant entropy bound, while showing that the stronger bound S ≲ A^{3/4} is not universally valid. We conjecture that the stronger bound does hold for static, weakly gravitating systems.

  4. Artificial neural networks modeling for forecasting the maximum daily total precipitation at Athens, Greece

    NASA Astrophysics Data System (ADS)

    Nastos, P. T.; Paliatsos, A. G.; Koukouletsos, K. V.; Larissi, I. K.; Moustris, K. P.

    2014-07-01

    Extreme daily precipitation events are involved in significant environmental damage, even loss of life, because they cause adverse impacts, such as flash floods, in urban and sometimes rural areas. Thus, long-term forecasting of such events is of great importance for the preparation of local authorities in order to confront and mitigate the adverse consequences. The objective of this study is to estimate the possibility of forecasting the maximum daily precipitation for the coming year. For this reason, appropriate prognostic models, namely Artificial Neural Networks (ANNs), were developed and applied. The data used for the analysis concern annual maximum daily precipitation totals, which were recorded at the National Observatory of Athens (NOA) during the long-term period 1891-2009. To evaluate the potential of daily extreme precipitation forecasts by the applied ANNs, a different period was considered for validation than the one used for ANN training. Thus, the datasets of the period 1891-1980 were used as training datasets, while the datasets of the period 1981-2009 served as validation datasets. Appropriate statistical indices, such as the coefficient of determination (R2), the index of agreement (IA), the Root Mean Square Error (RMSE) and the Mean Bias Error (MBE), were applied to test the reliability of the models. The findings of the analysis showed that a quite satisfactory relationship (R2 = 0.482, IA = 0.817, RMSE = 16.4 mm and MBE = +5.2 mm) appears between the forecasted and the respective observed maximum daily precipitation totals one year ahead. The developed ANN seems to overestimate the maximum daily precipitation total that appeared in 1988 and underestimate the maximum in 1999, which could be attributed to the relatively low frequency of occurrence of these extreme events within the greater Athens area (GAA), having an impact on the optimal training of the ANN.

  5. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is presented, and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation of the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  6. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  7. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve with different rates, different selective pressures, or may differ in codon bias. Instead of modeling these differences by more and more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229

  8. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid-body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. In order to obtain the required accuracy, the maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  9. Spatial modeling of the highest daily maximum temperature in Korea via max-stable processes

    NASA Astrophysics Data System (ADS)

    Lee, Youngsaeng; Yoon, Sanghoo; Murshed, Md. Sharwar; Kim, Maeng-Ki; Cho, ChunHo; Baek, Hee-Jeong; Park, Jeong-Soo

    2013-11-01

    This paper examines the annual highest daily maximum temperature (DMT) in Korea by using data from 56 weather stations and employing spatial extreme modeling. Our approach is based on max-stable processes (MSP) with Schlather’s characterization. We divide the country into four regions for a better model fit and identify the best model for each region. We show that regional MSP modeling is more suitable than MSP modeling for the entire region and than the pointwise generalized extreme value distribution approach. The advantage of spatial extreme modeling is that more precise and robust return levels and some indices of the highest temperatures can be obtained both for observation stations and for locations with no observed data, helping to assess vulnerability and to downscale extreme events.
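
    For contrast with the spatial model, the pointwise baseline mentioned above is easy to sketch: fit a generalized extreme value (GEV) distribution to one station's annual maxima and read off a return level. The data below are synthetic placeholders:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    annual_max = 33.0 + rng.gumbel(loc=0.0, scale=1.2, size=40)  # deg C, invented

    shape, loc, scale = stats.genextreme.fit(annual_max)
    rl20 = stats.genextreme.ppf(1.0 - 1.0 / 20.0, shape, loc, scale)
    print(f"20-year return level: {rl20:.1f} deg C")
    ```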

  10. Entropy and the Shelf Model: A Quantum Physical Approach to a Physical Property

    ERIC Educational Resources Information Center

    Jungermann, Arnd H.

    2006-01-01

    In contrast to most other thermodynamic data, entropy values are not given in relation to a certain--more or less arbitrarily defined--zero level. They are listed in standard thermodynamic tables as absolute values of specific substances. Therefore these values describe a physical property of the listed substances. One of the main tasks of…

  11. Entropy Generation in Regenerative Systems

    NASA Technical Reports Server (NTRS)

    Kittel, Peter

    1995-01-01

    Heat exchange to the oscillating flows in regenerative coolers generates entropy. These flows are characterized by oscillating mass flows and oscillating temperatures. Heat is transferred between the flow and heat exchangers and regenerators. In the former case, there is a steady temperature difference between the flow and the heat exchangers. In the latter case, there is no mean temperature difference. In this paper a mathematical model of the entropy generated is developed for both cases. Estimates of the entropy generated by this process are given for oscillating flows in heat exchangers and in regenerators. The practical significance of this entropy is also discussed.

  12. Terror birds on the run: a mechanical model to estimate its maximum running speed

    PubMed Central

    Blanco, R. Ernesto; Jones, Washington W

    2005-01-01

    ‘Terror bird’ is a common name for the family Phorusrhacidae. These large terrestrial birds were probably the dominant carnivores on the South American continent from the Middle Palaeocene to the Pliocene–Pleistocene limit. Here we use a mechanical model based on tibiotarsal strength to estimate the maximum running speeds of three species of terror birds: Mesembriornis milneedwardsi, Patagornis marshi and a specimen of Phorusrhacinae gen. The model is tested on three living large terrestrial bird species. On the basis of the tibiotarsal strength, we propose that Mesembriornis could have used its legs to break long bones and access their marrow. PMID:16096087

  13. Modeling a cortical auxin maximum for nodulation: different signatures of potential strategies.

    PubMed

    Deinum, Eva Elisabeth; Geurts, René; Bisseling, Ton; Mulder, Bela M

    2012-01-01

    Lateral organ formation from plant roots typically requires the de novo creation of a meristem, initiated at the location of a localized auxin maximum. Legume roots can form both root nodules and lateral roots. From the basic principles of auxin transport and metabolism only a few mechanisms can be inferred for increasing the local auxin concentration: increased influx, decreased efflux, and (increased) local production. Using computer simulations we investigate the different spatio-temporal patterns resulting from each of these mechanisms in the context of a root model of a generalized legume. We apply all mechanisms to the same group of preselected cells, dubbed the controlled area. We find that each mechanism leaves its own characteristic signature. Local production by itself cannot create a strong auxin maximum. An increase of influx, as is observed in lateral root formation, can result in an auxin maximum that is spatially more confined than the controlled area. A decrease of efflux on the other hand leads to a broad maximum, which is more similar to what is observed for nodule primordia. With our prime interest in nodulation, we further investigate the dynamics following a decrease of efflux. We find that with a homogeneous change in the whole cortex, the first auxin accumulation is observed in the inner cortex. The steady state lateral location of this efflux reduced auxin maximum can be shifted by slight changes in the ratio of central to peripheral efflux carriers. We discuss the implications of this finding in the context of determinate and indeterminate nodules, which originate from different cortical positions. The patterns we have found are robust under disruption of the (artificial) tissue layout. The same patterns are therefore likely to occur in many other contexts. PMID:22654886

  14. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    SciTech Connect

    Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.

    2004-05-25

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.
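
    The averaging step itself is simple once posterior model probabilities are in hand; a sketch with placeholder numbers (not the ALRS results), using the standard BMA decomposition of predictive variance into within-model and between-model parts:

    ```python
    import numpy as np

    post = np.array([0.45, 0.30, 0.15, 0.10])   # posterior model probabilities
    mu = np.array([[-13.2, -12.8, -13.5],       # kriged log permeability at three
                   [-13.0, -12.9, -13.3],       # locations, one row per retained
                   [-13.4, -12.7, -13.6],       # variogram model (invented values)
                   [-13.1, -13.0, -13.4]])
    var = np.full_like(mu, 0.2)                 # per-model kriging variances

    mean_bma = post @ mu
    var_bma = post @ var + post @ (mu - mean_bma) ** 2
    print(mean_bma)
    print(var_bma)
    ```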

  15. MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India

    NASA Astrophysics Data System (ADS)

    Ramesh, K.; Anitha, R.

    2014-06-01

    In this study, a Multivariate Adaptive Regression Spline (MARS) based lead-seven-day minimum and maximum surface air temperature prediction system is modelled for the station Chennai, India. To emphasize the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for the minimum temperature forecast is promising for short-term forecasts (lead days 1 to 3), with a mean absolute error (MAE) of less than 1 °C, while the prediction efficiency and skill degrade for medium-term forecasts (lead days 4 to 7), with MAE slightly above 1 °C. The MAE of the maximum temperature forecast is a little higher than that of the minimum temperature forecast, varying from 0.87 °C for day one to 1.27 °C for day seven with the MARS approach. The statistical error analysis emphasizes that the MARS models perform well, with an average 0.2 °C reduction in MAE over the SVMr models across all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lag increases with both approaches.

  16. The Last Glacial Maximum in the central North Island, New Zealand: palaeoclimate inferences from glacier modelling

    NASA Astrophysics Data System (ADS)

    Eaves, Shaun R.; Mackintosh, Andrew N.; Anderson, Brian M.; Doughty, Alice M.; Townsend, Dougal B.; Conway, Chris E.; Winckler, Gisela; Schaefer, Joerg M.; Leonard, Graham S.; Calvert, Andrew T.

    2016-04-01

    Quantitative palaeoclimate reconstructions provide data for evaluating the mechanisms of past, natural climate variability. Geometries of former mountain glaciers, constrained by moraine mapping, afford the opportunity to reconstruct palaeoclimate, due to the close relationship between ice extent and local climate. In this study, we present results from a series of experiments using a 2-D coupled energy balance-ice flow model that investigate the palaeoclimate significance of Last Glacial Maximum moraines within nine catchments in the central North Island, New Zealand. We find that the former ice limits can be simulated when present-day temperatures are reduced by between 4 and 7 °C, if precipitation remains unchanged from present. The spread in the results between the nine catchments is likely to represent the combination of chronological and model uncertainties. The majority of catchments targeted require temperature decreases of 5.1 to 6.3 °C to simulate the former glaciers, which represents our best estimate of the temperature anomaly in the central North Island, New Zealand, during the Last Glacial Maximum. A decrease in precipitation of up to 25 % from present, as suggested by proxy evidence and climate models, increases the magnitude of the required temperature changes by up to 0.8 °C. Glacier model experiments using reconstructed topographies that exclude the volume of post-glacial (<15 ka) volcanism generally increased the magnitude of cooling required to simulate the former ice limits by up to 0.5 °C. Our palaeotemperature estimates expand the spatial coverage of proxy-based quantitative palaeoclimate reconstructions in New Zealand. Our results are also consistent with independent, proximal temperature reconstructions from fossil groundwater and pollen assemblages, as well as similar glacier modelling reconstructions from the central Southern Alps, which suggest air temperatures were ca. 6 °C lower than present across New Zealand during the Last Glacial Maximum.

  17. Improving prediction of hydraulic conductivity by constraining capillary bundle models to a maximum pore size

    NASA Astrophysics Data System (ADS)

    Iden, Sascha C.; Peters, Andre; Durner, Wolfgang

    2015-11-01

    The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
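
    For reference, the unmodified van Genuchten-Mualem prediction that exhibits the near-saturation artifact can be written in closed form; the paper's correction further truncates the underlying pore-bundle integral at a maximum pore radius, which is not reproduced in this sketch (parameter values are illustrative):

    ```python
    import numpy as np

    def vgm_relative_conductivity(h, alpha, n, l=0.5):
        """Mualem-predicted relative conductivity from van Genuchten retention."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
        return se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

    h = -np.logspace(-2.0, 3.0, 6)   # pressure heads (e.g. cm), wet to dry
    print(vgm_relative_conductivity(h, alpha=0.05, n=1.6))
    ```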

  18. Possible ecosystem impacts of applying maximum sustainable yield policy in food chain models.

    PubMed

    Ghosh, Bapan; Kar, T K

    2013-07-21

    This paper describes the possible impacts of maximum sustainable yield (MSY) and maximum sustainable total yield (MSTY) policy in ecosystems. In general, it is observed that exploitation at the MSY (single species) or MSTY (multispecies) level may cause the extinction of several species. In particular, for a traditional prey-predator system, fishing under combined harvesting effort at the MSTY level (if it exists) may be a sustainable policy, but if MSTY does not exist, this is due to the extinction of the predator species only. In a generalist prey-predator system, harvesting either species at the MSY level is always a sustainable policy, but harvesting both species at the MSTY level may or may not be sustainable. In addition, we have also investigated MSY and MSTY policy in traditional tri-trophic and four-trophic food chain models. PMID:23542048
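
    The flavour of the MSTY analysis can be reproduced in a few lines: at the coexistence equilibrium of a harvested prey-predator model, total sustainable yield is scanned over effort until one species is driven extinct. The model form and all parameter values below are illustrative assumptions, not those of the paper.

      import numpy as np

      r, K, a, e, d = 1.0, 1.0, 1.0, 0.5, 0.2   # assumed Lotka-Volterra parameters
      q1 = q2 = 1.0                             # catchabilities of prey and predator

      def equilibrium(E):
          # coexistence equilibrium of
          #   x' = r*x*(1 - x/K) - a*x*y - q1*E*x   (prey)
          #   y' = e*a*x*y - d*y - q2*E*y           (predator)
          x = (d + q2 * E) / (e * a)
          y = (r * (1.0 - x / K) - q1 * E) / a
          return x, y

      best = (0.0, 0.0)                         # (yield, effort)
      for E in np.linspace(0.0, 1.0, 2001):
          x, y = equilibrium(E)
          if x <= 0 or y <= 0:                  # a species is driven extinct
              break
          best = max(best, (E * (q1 * x + q2 * y), E))
      print("MSTY ~ %.4f at combined effort E ~ %.3f" % best)

    If the yield is still increasing when the predator equilibrium reaches zero, no MSTY exists within the sustainable range; pushing effort further eliminates the predator.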

  19. Evaluation of Maximum Radionuclide Groundwater Concentrations for Basement Fill Model. Zion Station Restoration Project

    SciTech Connect

    Sullivan, Terry

    2014-12-02

    ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive contamination from the plant must be reduced to levels such that an individual who receives water from the new treatment plant does not receive a radiation dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
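
    One common upper-bound screening calculation consistent with objective (b), sketched here with made-up numbers (this is not necessarily the report's exact model): the entire inventory is assumed to dissolve instantly into the fill pore water and partition linearly (Kd) between water and solids.

      # Maximum pore-water concentration under instantaneous release and
      # linear equilibrium sorption:  C_w = M / (V * (theta + rho_b * Kd))
      M     = 1.0e-3    # total activity in the fill, Ci (assumed)
      V     = 5.0e3     # fill volume, m^3 (assumed)
      theta = 0.3       # porosity (assumed)
      rho_b = 1.6e3     # bulk density, kg/m^3 (assumed)
      Kd    = 1.0e-3    # distribution coefficient, m^3/kg (assumed)

      C_w = M / (V * (theta + rho_b * Kd))   # Ci per m^3 of pore water
      C_s = Kd * C_w                         # Ci per kg of fill solids
      print(C_w, C_s)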

  20. Climate Projections from the NARCliM Project: Bayesian Model Averaging of Maximum Temperature Projections

    NASA Astrophysics Data System (ADS)

    Olson, R.; Evans, J. P.; Fan, Y.

    2015-12-01

    NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate project for Australia and the surrounding region. It dynamically downscales 4 General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span the future temperature and precipitation change space. RCMs for downscaling the GCMs were chosen based on their performance for several precipitation events over south-east Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability distribution functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially-averaged variables from the NARCliM project. The first step in the procedure is smoothing model output in order to exclude the influence of internal climate variability. Our statistical model for the model-observation residuals is a homoskedastic iid process. Model weights are determined by comparing RCM output with Australian Water Availability Project (AWAP) observations, using Monte Carlo integration. Posterior pdfs of the statistical parameters of the model-data residuals are obtained using Markov chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present preliminary results of the BMA analysis for yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.
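
    A hedged sketch of the BMA weighting step (our illustration, not the NARCliM code): with iid homoskedastic Gaussian residuals, each model's weight is proportional to its likelihood given the observations. The residual standard deviation is fixed here for brevity, whereas the paper samples its posterior with MCMC; all numbers are fake.

      import numpy as np

      obs    = np.array([30.1, 30.8, 31.4, 30.5])       # fake observations
      models = np.array([[29.8, 30.9, 31.2, 30.7],      # fake smoothed output
                         [31.0, 31.5, 32.2, 31.1],      # of three RCM/GCM pairs
                         [30.2, 30.6, 31.5, 30.4]])
      sigma  = 0.5                                      # assumed residual s.d.

      loglik  = -0.5 * np.sum(((models - obs) / sigma) ** 2, axis=1)
      weights = np.exp(loglik - loglik.max())           # likelihood-based weights
      weights /= weights.sum()

      future = np.array([2.1, 3.0, 2.4])                # fake projected anomalies
      print(weights, "BMA mean projection:", weights @ future)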

  1. Entropy measures for networks: toward an information theory of complex topologies.

    PubMed

    Anand, Kartik; Bianconi, Ginestra

    2009-10-01

    The quantification of the complexity of networks is, today, a fundamental problem in the physics of complex systems. A possible roadmap to solve the problem is via extending key concepts of information theory to networks. In this Rapid Communication we propose how to define the Shannon entropy of a network ensemble and how it relates to the Gibbs and von Neumann entropies of network ensembles. The quantities we introduce here will play a crucial role for the formulation of null models of networks through maximum-entropy arguments and will contribute to inference problems emerging in the field of complex networks. PMID:19905379
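
    A toy calculation in the spirit of the paper (ours, not the authors' code): for an ensemble of networks with N nodes and M links, the Gibbs entropy counts microstates exactly, while the Shannon entropy of the corresponding maximum-entropy ensemble treats each node pair as an independent link with probability p = M divided by the number of pairs.

      import numpy as np
      from scipy.special import gammaln

      N, M = 100, 250
      pairs = N * (N - 1) // 2

      # Gibbs entropy: log of the number of simple graphs with exactly M links
      S_gibbs = gammaln(pairs + 1) - gammaln(M + 1) - gammaln(pairs - M + 1)

      # Shannon entropy of the ensemble with independent links, p = M / pairs
      p = M / pairs
      S_shannon = -pairs * (p * np.log(p) + (1 - p) * np.log(1 - p))
      print(S_gibbs, S_shannon)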

  2. Modelling the trajectory of erratic boulders in the western Alps during the last glacial maximum

    NASA Astrophysics Data System (ADS)

    Jouvet, Guillaume; Becker, Patrick; Funk, Martin; Seguinot, Julien

    2015-04-01

    Erratic boulders of the western Alps provide valuable information about the flow field prevailing during the last glacial maximum. In particular, the origin, the exposure time and the location of several boulders identified along the Jura are well documented. The goal of this study is to corroborate this information with ice flow simulations performed with the Parallel Ice Sheet Model (PISM). PISM is capable of simulating the time evolution of a large-scale ice sheet by accounting for the dynamics of ice, englacial temperature, surface mass balance and variations of the lithosphere. The main difficulty of this exercise resides in large uncertainties concerning the climate forcing required as input to the surface mass balance model. To mimic the climate conditions prevailing during the last glacial maximum, a common approach consists of applying different temperature offsets and corrections to the precipitation patterns of present-day climate data, and selecting the parametrizations which yield the best match between modelled ice sheet extents and geomorphologically-based margin reconstructions. To better constrain our modelling results, we take advantage of erratic boulders whose origin is known. More precisely, we look for the climatic conditions which best reproduce the trajectories of the boulders from their origins to their final locations.

  3. On the maximum energy release in flux-rope models of eruptive flares

    NASA Technical Reports Server (NTRS)

    Forbes, T. G.; Priest, E. R.; Isenberg, P. A.

    1994-01-01

    We determine the photospheric boundary conditions which maximize the magnetic energy released by a loss of ideal-magnetohydrodynamic (MHD) equilibrium in two-dimensional flux-rope models. In these models a loss of equilibrium causes a transition of the flux rope to a lower magnetic energy state at a higher altitude. During the transition a vertical current sheet forms below the flux rope, and reconnection in this current sheet releases additional energy. Here we compute how much energy is released by the loss of equilibrium relative to the total energy release. When the flux-rope radius is small compared to its height, it is possible to obtain general solutions of the Grad-Shafranov equation for a wide range of boundary conditions. Variational principles can then be used to find the particular boundary condition which maximizes the magnetic energy released for a given class of conditions. We apply this procedure to a class of models known as cusp-type catastrophes, and we find that the maximum energy released by the loss of equilibrium is 20.8% of the total energy release for any model in this class. If the additional restriction is imposed that the photospheric magnetic field forms a simple arcade in the absence of coronal currents, then the maximum energy release reduces to 8.6%.

  4. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases. PMID:26289495
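
    A stripped-down sketch of the penalized-likelihood idea (our illustration; the paper uses penalized splines with cross-validated smoothing parameters): a logistic "survival" curve is fitted on a truncated-power spline basis, with a quadratic penalty on the spline coefficients. The basis, data and smoothing parameter are all assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 1, 200)                        # fake covariate (e.g. weight)
      y = rng.binomial(1, 1 / (1 + np.exp(-2 * np.sin(2 * np.pi * x))))

      knots = np.linspace(0.1, 0.9, 8)
      B = np.column_stack([np.ones_like(x), x] +        # truncated-power basis
                          [np.clip(x - k, 0, None) ** 3 for k in knots])

      lam = 1.0    # smoothing parameter; chosen by cross-validation in the paper
      def neg_pen_loglik(beta):
          eta = B @ beta
          loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
          return -loglik + lam * np.sum(beta[2:] ** 2)  # penalize spline terms only

      fit = minimize(neg_pen_loglik, np.zeros(B.shape[1]), method="BFGS")
      print("fitted coefficients:", np.round(fit.x, 3))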

  5. The early maximum likelihood estimation model of audiovisual integration in speech perception.

    PubMed

    Andersen, Tobias S

    2015-05-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP. PMID:25994715
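
    The core of any MLE-style integration model is the reliability-weighted fusion rule, shown here as a short illustration (our toy numbers, not the paper's fits): each modality is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone.

      sa2, sv2 = 4.0, 1.0      # assumed auditory / visual noise variances
      xa, xv   = 0.0, 1.0      # internal estimates on a continuous feature axis

      wa   = (1 / sa2) / (1 / sa2 + 1 / sv2)   # auditory reliability weight
      x_av = wa * xa + (1 - wa) * xv           # fused (audiovisual) estimate
      var_av = 1 / (1 / sa2 + 1 / sv2)         # fused variance < min(sa2, sv2)
      print(x_av, var_av)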

  6. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is robust to both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation study shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.
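
    The trimmed-likelihood idea can be seen in a toy Gaussian location problem (our sketch, not the paper's state-space algorithm): only the h smallest squared residuals enter the objective, so gross outliers cannot pull the estimate. The grid search below stands in for the combinatorial optimisation that the paper attacks with a parallel randomised algorithm.

      import numpy as np

      rng = np.random.default_rng(1)
      data = np.concatenate([rng.normal(0.0, 1.0, 95),    # clean observations
                             rng.normal(15.0, 1.0, 5)])   # additive outliers
      h = 90                                              # observations retained

      def trimmed_nll(mu):
          # Gaussian negative log-likelihood over the h best-fitting points
          return 0.5 * np.sort((data - mu) ** 2)[:h].sum()

      grid = np.linspace(data.min(), data.max(), 2001)
      mu_tmle = grid[np.argmin([trimmed_nll(m) for m in grid])]
      print("sample mean:", data.mean(), "trimmed MLE:", mu_tmle)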

  7. Maximum group velocity in a one-dimensional model with a sinusoidally varying staggered potential

    NASA Astrophysics Data System (ADS)

    Nag, Tanay; Sen, Diptiman; Dutta, Amit

    2015-06-01

    We use Floquet theory to study the maximum value of the stroboscopic group velocity in a one-dimensional tight-binding model subjected to an on-site staggered potential varying sinusoidally in time. The results obtained by numerically diagonalizing the Floquet operator are analyzed using a variety of analytical schemes. In the low-frequency limit we use adiabatic theory, while in the high-frequency limit the Magnus expansion of the Floquet Hamiltonian turns out to be appropriate. When the magnitude of the staggered potential is much greater or much less than the hopping, we use degenerate Floquet perturbation theory; we find that dynamical localization occurs in the former case when the maximum group velocity vanishes. Finally, starting from an "engineered" initial state where the particles (taken to be hard-core bosons) are localized in one part of the chain, we demonstrate that the existence of a maximum stroboscopic group velocity manifests in a light-cone-like spreading of the particles in real space.
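
    A compact numerical sketch of the quantity being studied (ours; the momentum-space form of the driven two-band Hamiltonian is an assumption chosen for illustration): the Floquet operator over one period is built by time-stepping, and the maximum stroboscopic group velocity is read off from the quasienergy dispersion.

      import numpy as np

      J, V, w = 1.0, 2.0, 5.0                  # hopping, drive amplitude, frequency
      T = 2 * np.pi / w
      sx = np.array([[0, 1], [1, 0]], complex)
      sz = np.array([[1, 0], [0, -1]], complex)

      def quasienergy(k, nsteps=600):
          # time-ordered product of short-time propagators over one period
          dt, U = T / nsteps, np.eye(2, dtype=complex)
          for j in range(nsteps):
              H = -2 * J * np.cos(k) * sx + V * np.sin(w * (j + 0.5) * dt) * sz
              evals, P = np.linalg.eigh(H)
              U = P @ np.diag(np.exp(-1j * evals * dt)) @ P.conj().T @ U
          return np.abs(np.angle(np.linalg.eigvals(U)[0])) / T   # upper branch

      ks = np.linspace(0.0, np.pi, 201)
      Ek = np.array([quasienergy(k) for k in ks])
      print("max stroboscopic group velocity:", np.abs(np.gradient(Ek, ks)).max())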

  8. Global model SMF2 of the F2-layer maximum height

    NASA Astrophysics Data System (ADS)

    Shubin, V. N.; Karpachev, A. T.; Telegin, V. A.; Tsybulya, K. G.

    2015-09-01

    A global model SMF2 (Satellite Model of F2 layer) of the F2-layer height was created. For its creation, data from topside sounding on board the Interkosmos-19 satellite, as well as data from radio occultation measurements in the CHAMP, GRACE, and COSMIC experiments, were used. Data from a network of ground-based sounding stations were also used. The model covers all solar activity levels, months, hours of local and universal time, longitudes, and latitudes. The model is a median one within the range of magnetic activity values Kp < 3+. The spatial-temporal distribution of hmF2 in the new model is described by mutually orthogonal functions, for which the associated Legendre polynomials are used. The temporal distribution is described by an expansion into a Fourier series in UT. The input parameters of the model are geographic coordinates, month, and time (UT or LT). The new model agrees well with the International Reference Ionosphere (IRI) model in places where there are many ground-based stations, and it more precisely describes the F2-layer height in places where they are absent: over the oceans and at the equator. Under low solar activity, the standard deviation of the SMF2 model does not exceed 14 km for all hours of the day, as compared to 26.6 km for the IRI-2012 model. The mean relative deviation is approximately a factor of 4 smaller than that of the IRI model. Under high solar activity, the maximum standard deviations of the SMF2 model reach 25 km; however, in the IRI they are higher by a factor of ~2. The mean relative deviation is a factor of ~2 smaller than in the IRI model. Thus, an hmF2 model that is more precise than IRI-2012 was created.
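
    The model's building blocks are easy to illustrate (our sketch with made-up coefficients, not the SMF2 fit): associated Legendre functions carry the latitude structure and zonal harmonics the longitude/UT structure of hmF2.

      import numpy as np
      from scipy.special import lpmv

      lat = np.deg2rad(np.linspace(-80, 80, 9))
      lon = np.deg2rad(np.linspace(0, 350, 36))
      LAT, LON = np.meshgrid(lat, lon, indexing="ij")

      def term(l, m, amp):
          # associated Legendre function in latitude times a zonal harmonic
          return amp * lpmv(m, l, np.sin(LAT)) * np.cos(m * LON)

      # toy expansion: global mean + zonal l=2 structure + wavenumber-1 term
      hmF2 = 300.0 + term(2, 0, 40.0) + term(1, 1, 15.0)   # km (assumed)
      print(hmF2.shape, hmF2.min(), hmF2.max())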

  9. Upper entropy axioms and lower entropy axioms

    SciTech Connect

    Guo, Jin-Li; Suo, Qi

    2015-04-15

    The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of Shannon–Khinchin axioms and Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case of satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.

  11. Hamiltonian formalism and path entropy maximization

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; González, Diego

    2015-10-01

    Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.

  12. Entropy jump across an inviscid shock wave

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Iollo, Angelo

    1995-01-01

    The shock jump conditions for the Euler equations in their primitive form are derived by using generalized functions. The shock profiles for specific volume, speed, and pressure are shown to be the same; however, density has a different shock profile. Careful study of the equations that govern the entropy shows that the inviscid entropy profile has a local maximum within the shock layer. We demonstrate that because of this phenomenon, the entropy propagation equation cannot be used as a conservation law.
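
    The jump itself is elementary to evaluate (our illustration, not from the paper): for a perfect gas, the Rankine-Hugoniot relations give the post-shock state, and the entropy rise follows from the pressure and density ratios.

      import numpy as np

      gamma, M1 = 1.4, 2.0    # ratio of specific heats, upstream shock Mach number
      p_ratio   = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)
      rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)

      # (s2 - s1)/cv = ln(p2/p1) - gamma * ln(rho2/rho1)  > 0 across a shock
      ds_over_cv = np.log(p_ratio) - gamma * np.log(rho_ratio)
      print(ds_over_cv)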

  13. Optimized sparse presentation-based classification method with weighted block and maximum likelihood model

    NASA Astrophysics Data System (ADS)

    He, Jun; Zuo, Tian; Sun, Bo; Wu, Xuewen; Chen, Chao

    2014-06-01

    This paper aims to apply sparse representation-based classification (SRC) to face recognition with disguise or illumination variation. Having analyzed the characteristics of general object recognition and the principle of the SRC classifier, the authors focus on evaluating blocks of a probe sample and propose an optimized SRC method based on position-preserving weighted blocks and a maximum likelihood model. The principle and implementation of the proposed method are introduced in the article, and experiments on the Yale and AR face databases are presented. The experimental results show that the proposed optimized SRC method performs better than existing methods.

  14. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  15. Support of total maximum daily load programs using spatially referenced regression models

    USGS Publications Warehouse

    McMahon, G.; Alexander, R.B.; Qian, S.

    2003-01-01

    The spatially referenced regressions on watershed attributes modeling approach, as applied to predictions of total nitrogen flux in three North Carolina river basins, addresses several information needs identified by a National Research Council evaluation of the total maximum daily load program. The model provides reach-level predictions of the probability of exceeding water-quality criteria, and estimates of total nitrogen budgets. Model estimates of point- and diffuse-source contributions and nitrogen loss rates in streams and reservoirs compared moderately well with literature estimates. Maps of reach-level predictions of nutrient inputs and delivery provide an intuitive and spatially detailed summary of the origins and fate of nutrients within a basin.

  16. Estimation of instantaneous peak flow from simulated maximum daily flow using the HBV model

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Haberlandt, Uwe

    2014-05-01

    Instantaneous peak flow (IPF) data are the foundation of the design of hydraulic structures and of flood frequency analysis. However, the long discharge records published by hydrological agencies usually contain only average daily flows, which are of little value for design in small catchments. In previous research, statistical analysis using observed peak and daily flow data was carried out to explore the link between instantaneous peak flow (IPF) and maximum daily flow (MDF), in which a multiple regression model proved to have the best performance. The objective of this study is to further investigate the suitability of the multiple regression model for post-processing simulated daily flows from hydrological modelling. Model-based flood frequency analysis makes it possible to account for changes in catchment conditions and in climate when deriving design values. Here, the HBV model is calibrated on peak flow distributions and flow duration curves using two approaches. In a two-step approach, the simulated MDF are corrected with a priori established regressions. In a one-step procedure, the regression coefficients are calibrated together with the parameters of the model. For the analysis, data from 18 mesoscale catchments in the Aller-Leine river basin in Northern Germany are used. The results show that: (1) the multiple regression model is capable of predicting the peak flows from the simulated MDF data; (2) the calibrated hydrological model reproduces well the magnitude and frequency distribution of peak flows; (3) the one-step procedure outperforms the two-step procedure regarding the estimation of peak flows.
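
    The two-step variant reduces to an ordinary least-squares fit applied to model output, as in this sketch (our fake numbers; the study's regressions are established from observed peak and daily flow data and may use other predictors).

      import numpy as np

      MDF_obs = np.array([12.0, 30.0, 55.0, 80.0, 120.0])   # observed daily maxima
      area    = np.array([50.0, 50.0, 120.0, 120.0, 300.0]) # catchment area, km^2
      IPF_obs = np.array([20.0, 48.0, 75.0, 110.0, 150.0])  # observed instantaneous peaks

      X = np.column_stack([np.ones_like(MDF_obs), MDF_obs, area])
      beta, *_ = np.linalg.lstsq(X, IPF_obs, rcond=None)     # fit IPF ~ MDF + area

      MDF_sim = np.array([25.0, 90.0])                       # simulated MDF from HBV
      X_sim = np.column_stack([np.ones_like(MDF_sim), MDF_sim,
                               np.full_like(MDF_sim, 120.0)])
      print("estimated IPF:", X_sim @ beta)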

  17. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp-interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focused on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  18. Near-horizon expansion (conformal) approach to the calculation of Black Hole entropy in `t Hooft's brick-wall model

    NASA Astrophysics Data System (ADS)

    Ordonez, Carlos

    2010-10-01

    A review and the latest results on the near-horizon expansion (conformal) approach to `t Hooft's brick-wall model calculation of Black Hole entropy developed recently by the speaker and his collaborators will be given in this talk. With mainly a graduate student audience in mind, the seminar will be pedagogical in nature, with emphasis on the ideas and logic of the methods and the insights gained with this approach more than on details. If time permits, possible future directions will also be mentioned.

  19. Information and Entropy

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
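
    The ME recipe is concrete enough for a two-minute example (ours, the classic dice setup): updating a uniform prior subject to a mean constraint yields the exponentially tilted prior, with the Lagrange multiplier fixed by the constraint.

      import numpy as np
      from scipy.optimize import brentq

      x = np.arange(1, 7)                  # die faces
      prior = np.full(6, 1 / 6)
      c = 4.5                              # constrained mean E[x] (the "information")

      def mean_at(lam):
          p = prior * np.exp(lam * x)
          return (p / p.sum()) @ x

      lam = brentq(lambda l: mean_at(l) - c, -10, 10)   # solve E[x] = c
      post = prior * np.exp(lam * x)                    # exponentially tilted prior
      post /= post.sum()
      print(lam, np.round(post, 4))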

  20. Valence bond entanglement entropy.

    PubMed

    Alet, Fabien; Capponi, Sylvain; Laflorencie, Nicolas; Mambrini, Matthieu

    2007-09-14

    We introduce for SU(2) quantum spin systems the valence bond entanglement entropy as a counting of valence bond spin singlets shared by two subsystems. For a large class of antiferromagnetic systems, it can be calculated in all dimensions with quantum Monte Carlo simulations in the valence bond basis. We show numerically that this quantity displays all features of the von Neumann entanglement entropy for several one-dimensional systems. For two-dimensional Heisenberg models, we find a strict area law for a valence bond solid state and multiplicative logarithmic corrections for the Néel phase. PMID:17930468