Science.gov

Sample records for maximum entropy models

  1. Maximum entropy model for business cycle synchronization

    NASA Astrophysics Data System (ADS)

    Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui

    2014-11-01

    The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
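
    The kind of pairwise maximum entropy (Ising) fit described above can be sketched as follows; the ±1 recession/expansion samples, learning rate, and iteration count are illustrative stand-ins, not the authors' data or code.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      N = 7                                       # e.g. the G7 economies
      data = rng.choice([-1, 1], size=(500, N))   # placeholder +/-1 state samples

      # Empirical constraints: means <s_i> and pairwise correlations <s_i s_j>.
      m_data = data.mean(axis=0)
      C_data = (data[:, :, None] * data[:, None, :]).mean(axis=0)

      states = np.array(list(itertools.product([-1, 1], repeat=N)))  # all 2^N states
      h, J = np.zeros(N), np.zeros((N, N))        # fields and couplings

      for _ in range(2000):                       # gradient ascent on the log-likelihood
          E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
          p = np.exp(E - E.max()); p /= p.sum()   # model distribution over all states
          m_model = p @ states
          C_model = np.einsum('s,si,sj->ij', p, states, states)
          h += 0.1 * (m_data - m_model)           # match means
          J += 0.1 * (C_data - C_model)           # match correlations
          np.fill_diagonal(J, 0.0)

      print("max constraint error:", np.abs(m_data - m_model).max())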

  2. Maximum entropy models of ecosystem functioning

    NASA Astrophysics Data System (ADS)

    Bertram, Jason

    2014-12-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  3. A maximum entropy model for opinions in social groups

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo

    2014-04-01

    We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
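
    A Metropolis acceptance loop of the sort used for such simulations, reduced to a toy all-to-all opinion Hamiltonian; the Hamiltonian, field, and temperature below are assumptions for illustration, and the paper's coupling to spatial positions is omitted.

      import numpy as np

      rng = np.random.default_rng(1)
      n, J, h, beta = 100, 1.0, 0.2, 2.0   # agents, coupling, field, inverse "temperature"
      s = rng.choice([-1, 1], size=n)      # +/-1 opinions
      M = s.sum()

      # Toy Hamiltonian: E = -J*(M^2 - n)/(2n) - h*M, with M the total opinion.
      for _ in range(20000):
          i = rng.integers(n)
          si = s[i]
          dE = (2.0 * J / n) * (si * M - 1.0) + 2.0 * h * si  # energy change of one flip
          if dE <= 0 or rng.random() < np.exp(-beta * dE):
              s[i] = -si                   # accept the single-opinion flip
              M -= 2 * si

      print("mean opinion:", M / n)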

  4. Maximum Entropy Fundamentals

    NASA Astrophysics Data System (ADS)

    Harremoeës, P.; Topsøe, F.

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over the development of natural
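
    For reference, the Mean Energy Model mentioned above is the textbook instance: maximizing the Shannon entropy under a normalization and a mean-energy constraint yields the Gibbs distribution.

      \max_{p}\ -\sum_i p_i \ln p_i
      \quad\text{s.t.}\quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle ;

      \frac{\partial}{\partial p_i}\Big[-\sum_j p_j \ln p_j - \lambda_0 \sum_j p_j
        - \beta \sum_j p_j E_j\Big] = 0
      \;\Longrightarrow\;
      p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}.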

  5. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transform of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is determinant for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC Advanced Grant "Bridges", BC: KEOPS ANR-CONICYT, Renvision and CM: CONICYT-FONDECYT No. 3140572.

  6. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
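
    The pole-to-complex-frequency reading described above is easy to make concrete; the AR(2) coefficients and unit sampling interval below are invented for illustration.

      import numpy as np

      a = np.array([1.5, -0.9])     # assumed AR(2): x_t = a1*x_{t-1} + a2*x_{t-2} + e_t
      dt = 1.0                      # sampling interval
      poles = np.roots(np.concatenate([[1.0], -a]))  # roots of z^2 - a1*z - a2

      # Each pole z = exp((sigma + 1j*omega)*dt) is one damped complex harmonic.
      sigma = np.log(np.abs(poles)) / dt   # damping rates
      omega = np.angle(poles) / dt         # angular frequencies
      print(list(zip(sigma.round(3), omega.round(3))))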

  7. Biomagnetic source detection by maximum entropy and graphical models.

    PubMed

    Amblard, Cécile; Lapalme, Ervig; Lina, Jean-Marc

    2004-03-01

    This article presents a new approach for detecting active sources in the cortex from magnetic field measurements on the scalp in magnetoencephalography (MEG). The solution of this ill-posed inverse problem is addressed within the framework of the maximum entropy on the mean (MEM) principle introduced by Clarke and Janday. The main ingredient of this regularization technique is a reference probability measure on the random variables of interest. These variables are the intensities of current sources distributed on the cortical surface, for which this measure encompasses all available prior information that could help to regularize the inverse problem. This measure introduces hidden Markov random variables associated with the activation state of predefined cortical regions. The MEM approach is applied within this particular probabilistic framework, and simulations show that the present methodology leads to a practical detection of cerebral activity from MEG data.

  8. Measurement scale in maximum entropy models of species abundance

    PubMed Central

    Frank, Steven A.

    2010-01-01

    The consistency of the species abundance distribution across diverse communities has attracted widespread attention. In this paper, I argue that the consistency of pattern arises because diverse ecological mechanisms share a common symmetry with regard to measurement scale. By symmetry, I mean that different ecological processes preserve the same measure of information and lose all other information in the aggregation of various perturbations. I frame these explanations of symmetry, measurement, and aggregation in terms of a recently developed extension to the theory of maximum entropy. I show that the natural measurement scale for the species abundance distribution is log-linear: the information in observations at small population sizes scales logarithmically and, as population size increases, the scaling of information grades from logarithmic to linear. Such log-linear scaling leads naturally to a gamma distribution for species abundance, which matches well with the observed patterns. Much of the variation between samples can be explained by the magnitude at which the measurement scale grades from logarithmic to linear. This measurement approach can be applied to the similar problem of allelic diversity in population genetics and to a wide variety of other patterns in biology. PMID:21265915

  9. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  10. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  11. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
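
    One way to picture the approach, as a sketch only: take the classic MaxEnt die with a fixed mean face value, treat that mean as Gaussian-distributed, and push samples through the point-MaxEnt solution to obtain a distribution over the MaxEnt probabilities. All numbers below are assumptions.

      import numpy as np
      from scipy.optimize import brentq

      faces = np.arange(1, 7)

      def maxent_probs(mean):
          # Classic point MaxEnt for a die with a fixed mean: p_k ~ exp(b*k).
          def gap(b):
              w = np.exp(b * faces)
              return (faces * w).sum() / w.sum() - mean
          b = brentq(gap, -10.0, 10.0)       # solve for the Lagrange multiplier
          w = np.exp(b * faces)
          return w / w.sum()

      rng = np.random.default_rng(2)
      means = rng.normal(4.5, 0.1, size=1000)   # uncertain constraint values
      p6 = np.array([maxent_probs(m)[5] for m in means])
      print("p(face=6): mean %.3f, sd %.3f" % (p6.mean(), p6.std()))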

  12. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    PubMed

    Stein, Richard R; Marks, Debora S; Sander, Chris

    2015-07-01

    Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
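
    For the continuous (Gaussian) case reviewed here, the pairwise maximum-entropy couplings reduce to (minus) the inverse covariance, which is what separates direct interactions from merely correlated pairs; a synthetic sketch:

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.standard_normal((2000, 5))
      X[:, 1] += 0.8 * X[:, 0]        # plant a direct interaction 0-1
      X[:, 2] += 0.8 * X[:, 1]        # and 1-2 (so 0-2 is only indirect)

      C = np.cov(X, rowvar=False)
      J = -np.linalg.inv(C)           # MaxEnt couplings ~ negative precision matrix
      print("correlation 0-2:", np.corrcoef(X[:, 0], X[:, 2])[0, 1].round(2))
      print("coupling 0-1:", J[0, 1].round(2), "| coupling 0-2:", J[0, 2].round(2))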

  13. Convex accelerated maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-04-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm, called CAMERA (Convex Accelerated Maximum Entropy Reconstruction Algorithm), is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.

  14. Modeling the Multiple-Antenna Wireless Channel Using Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Guillaud, M.; Debbah, M.; Moustakas, A. L.

    2007-11-01

    Analytical descriptions of the statistics of wireless channel models are desirable tools for communication systems engineering. When multiple antennas are available at the transmit and/or the receive side (the Multiple-Input Multiple-Output, or MIMO, case), the statistics of the matrix H representing the gains between the antennas of a transmit and a receive antenna array, and in particular the correlation between its coefficients, are known to be of paramount importance for the design of such systems. However, these characteristics depend on the operating environment, since the electromagnetic propagation paths are dictated by the surroundings of the antenna arrays, and little knowledge about these is available at the time of system design. An approach using the Maximum Entropy principle to derive probability density functions for the channel matrix, based on various degrees of knowledge about the environment, is presented. The general idea is to apply the maximum entropy principle to obtain the distribution of each parameter of interest (e.g. correlation), and then to marginalize them out to obtain the full channel distribution. It was shown in previous works, using sophisticated integrals from statistical physics, that by using the full spatial correlation matrix E{vec(H)vec(H)^H} as the intermediate modeling parameter, this method can yield surprisingly concise channel descriptions. In this case, the joint probability density function is shown to be merely a function of the Frobenius norm of the channel matrix, ||H||_F. In the present paper, we investigate the case where information about the average covariance matrix is available (e.g. through measurements). The maximum entropy distribution of the covariance is derived under this constraint. Furthermore, we consider also the doubly correlated case, where the intermediate modeling parameters are chosen as the transmit- and receive-side channel covariance matrices (respectively E{H^H H} and E{H H^H}). We compare the
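
    The Frobenius-norm result referred to above takes the standard Gaussian form: with only the average total energy E{||H||_F^2} constrained, entropy maximization gives i.i.d. complex Gaussian entries, so the density depends on H only through its Frobenius norm (sketched here; n_r and n_t denote the receive/transmit array sizes).

      p(\mathbf{H}) = \frac{1}{(\pi\sigma^2)^{n_r n_t}}
        \exp\!\left(-\frac{\|\mathbf{H}\|_F^2}{\sigma^2}\right),
      \qquad
      \mathbb{E}\{\|\mathbf{H}\|_F^2\} = n_r n_t \sigma^2 .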

  15. Steepest entropy ascent model for far-nonequilibrium thermodynamics: unified implementation of the maximum entropy production principle.

    PubMed

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium

  16. Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed

    NASA Astrophysics Data System (ADS)

    Pitchford, J.; Strager, M.; Riley, A.; Lin, L.; Anderson, J.

    2015-03-01

    We used maximum entropy to model streambank erosion potential (SEP) in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs), and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.

  17. Maximum entropy production in daisyworld

    NASA Astrophysics Data System (ADS)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
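
    A minimal Watson-Lovelock daisyworld integration, for readers who want the mechanics the paragraph describes; the constants are the commonly quoted textbook values, assumed here, and the code is a sketch rather than the paper's MEP analysis.

      import numpy as np

      A_w, A_b, A_g = 0.75, 0.25, 0.5        # albedos: white daisies, black daisies, ground
      S, L, sb = 917.0, 1.0, 5.67e-8         # insolation, luminosity, Stefan-Boltzmann
      q, gamma = 20.0, 0.3                   # local-temperature weight, death rate

      def growth(T):                         # parabolic growth rate, optimal at 295.5 K
          g = 1.0 - 0.003265 * (295.5 - T) ** 2
          return max(g, 0.0)

      aw = ab = 0.01                         # initial daisy covers
      for _ in range(1000):                  # Euler steps toward steady state
          x = 1.0 - aw - ab                  # bare ground fraction
          A = x * A_g + aw * A_w + ab * A_b  # planetary albedo
          Te = (S * L * (1.0 - A) / sb) ** 0.25
          Tw, Tb = q * (A - A_w) + Te, q * (A - A_b) + Te
          aw += 0.05 * aw * (x * growth(Tw) - gamma)
          ab += 0.05 * ab * (x * growth(Tb) - gamma)
          aw, ab = max(aw, 0.001), max(ab, 0.001)   # keep seed populations

      print("covers: white %.2f, black %.2f; Te %.1f K" % (aw, ab, Te))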

  18. Maximum entropy beam diagnostic tomography

    SciTech Connect

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs.

  19. Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett

    The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.

  20. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: A National Application.

    PubMed

    Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William

    2016-04-19

    To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius r_v was performed, and the R^2 between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario (using ozone observations only) in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the R^2 increase is over 12-fold for the DM8A and over 3.5-fold for the D24A ozone concentrations. PMID:26998937

  21. Tissue radiation response with maximum Tsallis entropy.

    PubMed

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-01

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature. PMID:21230944
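
    Schematically, the Tsallis generalization with a cutoff replaces the Boltzmann-Gibbs exponential survival factor by a q-exponential; this is a hedged reconstruction of the generic form, with D_0 a characteristic dose and [x]_+ = max(x, 0).

      S(D) \;=\; \exp_q\!\left(-\frac{D}{D_0}\right)
      \;=\; \left[\,1-(1-q)\,\frac{D}{D_0}\right]_{+}^{\frac{1}{1-q}},
      \qquad
      \lim_{q\to 1} S(D) \;=\; e^{-D/D_0},

    so that for q < 1 the survival fraction vanishes identically beyond the cutoff dose D_0/(1-q).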

  22. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks

    NASA Astrophysics Data System (ADS)

    Merchan, Lina; Nemenman, Ilya

    2016-03-01

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p>2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of densely interacting networks in certain regimes, and not necessarily as a special property of living systems. By connecting our analysis to the theory of random constraint satisfaction problems, we suggest a reason for why some biological systems may operate in this regime.

  23. Maximum entropy principle for transportation

    SciTech Connect

    Bilich, F.; Da Silva, R.

    2008-11-06

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
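
    The standard (constrained) formulation mentioned above yields the familiar doubly-constrained gravity model T_ij = A_i B_j exp(-beta c_ij), whose balancing factors are found by iterative proportional fitting; origins, destinations, costs, and beta below are made up for illustration.

      import numpy as np

      O = np.array([400.0, 300.0, 300.0])     # trips produced at each origin
      D = np.array([500.0, 250.0, 250.0])     # trips attracted to each destination
      c = np.array([[1.0, 2.0, 3.0],
                    [2.0, 1.0, 2.0],
                    [3.0, 2.0, 1.0]])         # travel-cost matrix
      beta = 0.8                              # cost-sensitivity (Lagrange multiplier)

      K = np.exp(-beta * c)
      A, B = np.ones(3), np.ones(3)
      for _ in range(200):                    # iterative proportional fitting
          A = O / (K * B).sum(axis=1)         # enforce row (origin) totals
          B = D / (K.T * A).sum(axis=1)       # enforce column (destination) totals
      T = A[:, None] * K * B[None, :]
      print(T.round(1))                       # trip table meeting both margins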

  24. A Maximum Entropy Model of the Bearded Capuchin Monkey Habitat Incorporating Topography and Spectral Unmixing Analysis

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.

    2012-07-01

    Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the study area was geometrically and atmospherically corrected and used as input in a Minimum Noise Fraction (MNF) procedure, and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our model's predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and facility of movement.
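
    The linear spectral unmixing step works as in the following sketch: each pixel spectrum is modeled as a non-negative mixture of endmember spectra (green vegetation, shade, soil). The endmember values and the mixed pixel here are invented, not taken from the Landsat scene.

      import numpy as np
      from scipy.optimize import nnls

      # Columns: endmember reflectance spectra (green veg, shade, soil) in 4 bands.
      E = np.array([[0.05, 0.02, 0.25],
                    [0.08, 0.02, 0.30],
                    [0.45, 0.03, 0.35],
                    [0.30, 0.04, 0.40]])
      pixel = 0.6 * E[:, 0] + 0.1 * E[:, 1] + 0.3 * E[:, 2]   # synthetic mixed pixel

      fractions, _ = nnls(E, pixel)           # non-negative least squares unmixing
      fractions /= fractions.sum()            # renormalize fractions to sum to one
      print("green/shade/soil fractions:", fractions.round(2))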

  25. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    PubMed

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapirs' native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species, indeed, has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia. PMID:23253371

  26. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity

    PubMed Central

    Marcatili, Paolo; Pagnani, Andrea

    2016-01-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10^-6), outperforming other sequence- and structure-based models. PMID:27074145

  27. Computational design of hepatitis C vaccines using maximum entropy models and population dynamics

    NASA Astrophysics Data System (ADS)

    Hart, Gregory; Ferguson, Andrew

    Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.

  28. Potential distribution of Mexican primates: modeling the ecological niche with the maximum entropy algorithm.

    PubMed

    Vidal-García, Francisca; Serio-Silva, Juan Carlos

    2011-07-01

    We developed a potential distribution model for the tropical rain forest species of primates of southern Mexico: the black howler monkey (Alouatta pigra), the mantled howler monkey (Alouatta palliata), and the spider monkey (Ateles geoffroyi). To do so, we applied the maximum entropy algorithm from the ecological niche modeling program MaxEnt. For each species, we used occurrence records from scientific collections, and published and unpublished sources, and we also used the 19 environmental coverage variables related to precipitation and temperature from WorldClim to develop the models. The predicted distribution of A. pigra was strongly associated with the mean temperature of the warmest quarter (23.6%), whereas the potential distributions of A. palliata and A. geoffroyi were strongly associated with precipitation during the coldest quarter (52.2 and 34.3% respectively). The potential distribution of A. geoffroyi is broader than that of the Alouatta spp. The areas with the greatest probability of presence of A. pigra and A. palliata are strongly associated with riparian vegetation, whereas the presence of A. geoffroyi is more strongly associated with the presence of rain forest. Our most significant contribution is the identification of areas with a high probability of the presence of these primate species, which is information that can be applied to planning future studies and then establishing criteria for the creation of areas for primate conservation in Mexico.

  29. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    PubMed

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping the geographical distribution and potential suitable habitats is species distribution models. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malaysian sun bear habitat is tied to tropical evergreen forests, it lives in a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potential suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear's population. PMID:23110182

  30. A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition.

    PubMed

    Siddiqi, Muhammad Hameed; Alam, Md Golam Rabiul; Hong, Choong Seon; Khan, Adil Mehmood; Choo, Hyunseung

    2016-01-01

    Research in video-based FER systems has exploded in the past decade. However, most of the previous methods work well when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and physical characteristics of the people differ from one dataset to another. Considering a single dataset keeps the variance that results from these differences to a minimum. Having a robust FER system, which can work across several datasets, is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made at the recognition module which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, by considering the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on such observations. Lastly, an algorithm is designed which predicts the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets. PMID:27635654
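
    A generic log-space Viterbi decoder of the kind the recognition module relies on; the transition and observation tables below are placeholders, not the trained MEMM.

      import numpy as np

      logA = np.log(np.array([[0.8, 0.2],      # state-transition probabilities
                              [0.3, 0.7]]))
      logB = np.log(np.array([[0.9, 0.1],      # per-state observation likelihoods
                              [0.2, 0.8]]))
      obs = [0, 0, 1, 1, 1]                    # observed symbol indices

      n_states, T = 2, len(obs)
      delta = np.zeros((T, n_states))          # best log-scores per time and state
      psi = np.zeros((T, n_states), dtype=int) # backpointers
      delta[0] = np.log(0.5) + logB[:, obs[0]]
      for t in range(1, T):
          scores = delta[t - 1][:, None] + logA    # all previous-state options
          psi[t] = scores.argmax(axis=0)
          delta[t] = scores.max(axis=0) + logB[:, obs[t]]

      path = [int(delta[-1].argmax())]
      for t in range(T - 1, 0, -1):                # backtrack most probable sequence
          path.append(int(psi[t][path[-1]]))
      print("decoded states:", path[::-1])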

  31. Zipf's law, power laws and maximum entropy

    NASA Astrophysics Data System (ADS)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
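
    The single-constraint derivation the paper argues for fits in two lines: maximizing the Shannon entropy with only the average logarithm fixed gives a pure power law.

      \max_p\; -\sum_x p(x)\ln p(x)
      \quad\text{s.t.}\quad \sum_x p(x) = 1, \qquad \sum_x p(x)\ln x = \chi
      \;\Longrightarrow\;
      p(x) = \frac{e^{-\alpha \ln x}}{Z} = \frac{x^{-\alpha}}{Z},

    with the exponent α entering as the Lagrange multiplier conjugate to the constrained average of ln x.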

  32. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km²) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km² cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
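
    A greedy farthest-point sketch conveys the iterative idea (pick, at each round, the candidate most dissimilar in standardized environmental space from the sites already chosen); this is a simplified analogue of the selection loop, not the authors' MaxEnt implementation, and the candidate data are random stand-ins.

      import numpy as np

      rng = np.random.default_rng(4)
      env = rng.random((500, 4))                 # candidate sites x 4 env. factors
      env = (env - env.mean(0)) / env.std(0)     # standardize the factors

      chosen = [0]                               # seed with an arbitrary site
      for _ in range(7):                         # select 7 more (8 sites total)
          d = np.min(np.linalg.norm(env[:, None, :] - env[chosen][None, :, :],
                                    axis=2), axis=1)
          chosen.append(int(d.argmax()))         # farthest-from-selected candidate
      print("selected site indices:", chosen)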

  33. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    NASA Astrophysics Data System (ADS)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci's classical works and looking for the quickest algorithm for obtaining the extremum of a

  34. Maximum Entropy Production Modeling of Evapotranspiration Partitioning on Heterogeneous Terrain and Canopy Cover: advantages and limitations.

    NASA Astrophysics Data System (ADS)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.

    2015-12-01

    Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge to traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB) method. MEP-ET transpiration shows remarkable agreement with the sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced to the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. MEP-ET soil evaporation, on the other hand, is in good agreement with the BREB estimates regardless of moisture conditions. The experimental design allows plot- and tree-scale quantification of evaporation and transpiration, respectively. This study confirms for the first time that the MEP-ET model, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.

  3. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has been widely used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: (a) the non-Gaussianity of the individual errors themselves, (b) the correlation between nonlinear functions of errors, and (c) the heteroscedasticity of errors within diagnostic samples. These relationships impose bounds on the skewness and kurtosis of errors which depend critically on the error variances, thus requiring a tuning of error variances to achieve consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess error variance, in the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. The method has been applied to the quality-accepted ECMWF innovations of brightness temperatures from a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
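
    A minimal sketch of the moment-constrained maximum entropy step described above: the error pdf is recovered by minimizing the convex dual of the entropy subject to moments up to fourth order. The grid and the target moments are illustrative, not values from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(-8, 8, 2001)
    dx = x[1] - x[0]
    feats = np.vstack([x, x**2, x**3, x**4])     # moment features
    target = np.array([0.0, 1.0, 0.5, 4.0])      # assumed skewed, kurtotic moments

    def dual(lam):
        # Convex dual: log Z(lam) - lam . target, with p(x) = exp(lam . f(x)) / Z.
        s = lam @ feats
        smax = s.max()
        logZ = smax + np.log(np.sum(np.exp(s - smax)) * dx)
        return logZ - lam @ target

    res = minimize(dual, np.array([0.0, -0.5, 0.0, -0.01]), method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
    lam = res.x
    p = np.exp(lam @ feats - (lam @ feats).max())
    p /= p.sum() * dx                            # normalized maxent density
    print("fitted moments:", [round(float(p @ f * dx), 3) for f in feats])
    ```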

  4. Maximum-entropy description of animal movement.

    PubMed

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes belonging to this class of maximum-entropy distributions when the constraints are purely kinematic.

  5. Maximum entropy image reconstruction from projections

    NASA Astrophysics Data System (ADS)

    Bara, N.; Murata, K.

    1981-07-01

    The maximum entropy method is applied to image reconstruction from projections whose angular coverage is restricted. Relaxation parameters are introduced into the maximum entropy reconstruction, and median filtering is applied after the iterations. These procedures improve the quality of images reconstructed from noisy projections.
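
    A toy version of such a scheme, assuming a MART-style multiplicative update (one common maximum-entropy reconstruction) with a relaxation exponent and a final median filter; the two-view geometry below stands in for the restricted angular coverage.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(1)
    n = 32
    true_img = np.zeros((n, n)); true_img[10:22, 12:20] = 1.0

    def project(img, axis):
        return img.sum(axis=axis)               # 0- and 90-degree views only

    views = [0, 1]                              # restricted angular coverage
    data = [project(true_img, a) + rng.normal(0, 0.05, n) for a in views]

    img = np.ones((n, n))                       # flat (maximum-entropy) start
    relax = 0.5                                 # relaxation parameter
    for _ in range(50):
        for a, d in zip(views, data):
            est = project(img, a)
            ratio = np.clip(d, 1e-6, None) / np.clip(est, 1e-6, None)
            corr = ratio ** relax               # relaxed multiplicative correction
            img *= corr[None, :] if a == 0 else corr[:, None]
    img = median_filter(img, size=3)            # post-iteration median filtering
    print("mean abs error:", float(np.abs(img - true_img).mean()))
    ```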

  6. Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models

    NASA Astrophysics Data System (ADS)

    Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.

    2015-11-01

    Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring network observables.

  7. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    PubMed

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast on the extrasolar planet HD 189733b. PMID:20368253
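
    The two-box MEP calculation is compact enough to sketch numerically: choose the inter-box heat transport F that maximizes entropy production under per-box radiative energy balance. The forcing values below are illustrative Earth-like numbers, not those of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    SIGMA = 5.67e-8                     # Stefan-Boltzmann constant (W m-2 K-4)
    S_warm, S_cold = 300.0, 170.0       # absorbed shortwave per box (W m-2), assumed

    def temps(F):
        # Steady-state energy balance per box: absorbed - transport = emitted.
        T1 = ((S_warm - F) / SIGMA) ** 0.25   # equatorial box exports F
        T2 = ((S_cold + F) / SIGMA) ** 0.25   # polar box imports F
        return T1, T2

    def entropy_production(F):
        T1, T2 = temps(F)
        return F * (1.0 / T2 - 1.0 / T1)      # (W m-2 K-1)

    res = minimize_scalar(lambda F: -entropy_production(F),
                          bounds=(0.0, 60.0), method="bounded")
    T1, T2 = temps(res.x)
    print(f"MEP transport {res.x:.1f} W/m2, temperature contrast {T1 - T2:.1f} K")
    ```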

  9. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  10. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies

    PubMed Central

    Lorenz, Ralph D.

    2010-01-01

    The ‘two-box model’ of planetary climate is discussed. This model has been used to demonstrate consistency of the equator–pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast on the extrasolar planet HD 189733b. PMID:20368253

  11. Analyzing Trade Dynamics from Incomplete Data in Spatial Regional Models: a Maximum Entropy Approach

    NASA Astrophysics Data System (ADS)

    Papalia, Rosa Bernardini

    2008-11-01

    Flow data are viewed as cross-classified data, and spatial interaction models are reformulated as log-linear models. Within this framework, we introduce a spatial panel data model and derive a Generalized Maximum Entropy (GME) based estimation approach. The estimator has the advantage of being consistent with the underlying data-generation process and, where available, with restrictions implied by non-sample information or past empirical evidence, while also controlling for collinearity and endogeneity problems.

  12. Principles of maximum entropy and maximum caliber in statistical physics

    NASA Astrophysics Data System (ADS)

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A.

    2013-07-01

    The variational principles called maximum entropy (MaxEnt) and maximum caliber (MaxCal) are reviewed. MaxEnt originated in the statistical physics of Boltzmann and Gibbs, as a theoretical tool for predicting the equilibrium states of thermal systems. Later, entropy maximization was also applied to matters of information, signal transmission, and image reconstruction. Recently, since the work of Shore and Johnson, MaxEnt has been regarded as a principle that is broader than either physics or information alone. MaxEnt is a procedure that ensures that inferences drawn from stochastic data satisfy basic self-consistency requirements. The different historical justifications for the entropy S = -∑_i p_i log p_i and its corresponding variational principles are reviewed. As an illustration of the broadening purview of maximum entropy principles, maximum caliber, which is path entropy maximization applied to the trajectories of dynamical systems, is also reviewed. Examples are given in which maximum caliber is used to interpret dynamical fluctuations in biology and on the nanoscale, in single-molecule and few-particle systems such as molecular motors, chemical reactions, biological feedback circuits, and diffusion in microfluidics devices.
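
    As a bite-sized illustration of maximum caliber, the sketch below maximizes path entropy over binary trajectories subject to a target mean number of state switches, which yields path weights proportional to exp(-λ·switches); the path length and target value are arbitrary.

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import brentq

    L = 10                                      # path length (time steps)
    paths = np.array(list(itertools.product([0, 1], repeat=L)))
    switches = np.abs(np.diff(paths, axis=1)).sum(axis=1)   # transitions per path

    def mean_switches(lam):
        logw = -lam * switches
        logw -= logw.max()                      # numerical stability
        w = np.exp(logw)
        return (w @ switches) / w.sum()

    target = 2.5                                # assumed observed mean switch count
    lam = brentq(lambda l: mean_switches(l) - target, -50.0, 50.0)
    w = np.exp(-lam * switches); w /= w.sum()   # maximum-caliber path ensemble
    print("lambda:", round(lam, 3), "check mean switches:", float(w @ switches))
    ```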

  13. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation entropy of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  14. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

    The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand, and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability for T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas. PMID:27270395
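
    A minimal, unregularized sketch of the Maxent idea: over background cells with environmental features, fit p(cell) ∝ exp(λ·f) so that model feature expectations match the means at presence cells. The grid, features, and presence records below are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(5000, 3))                 # background cells x features
    true_lam = np.array([1.5, -1.0, 0.0])
    q = np.exp(X @ true_lam); q /= q.sum()
    presence = rng.choice(len(X), size=200, p=q)   # synthetic presence cells
    f_bar = X[presence].mean(axis=0)               # presence feature means

    lam = np.zeros(3)
    for _ in range(2000):                          # gradient descent on the dual
        s = X @ lam
        p = np.exp(s - s.max()); p /= p.sum()      # maxent suitability over cells
        lam -= 0.5 * (p @ X - f_bar)               # gradient of log Z - lam . f_bar

    print("recovered coefficients:", np.round(lam, 2))   # close to true_lam
    ```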

  15. Deep-sea benthic megafaunal habitat suitability modelling: A global-scale maximum entropy model for xenophyophores

    NASA Astrophysics Data System (ADS)

    Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.

    2014-12-01

    Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of

  16. Maximum entropy analysis of hydraulic pipe networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2014-12-01

    A Maximum Entropy (MaxEnt) method is developed to infer mean external and internal flow rates and mean pressure gradients (potential differences) in hydraulic pipe networks, with or without sufficient constraints to render the system deterministic. The proposed method substantially extends existing methods for the analysis of flow networks (e.g. Hardy-Cross), which are applicable only to deterministic networks.

  17. Soil Moisture and Vegetation Controls on Surface Energy Balance Using the Maximum Entropy Production Model of Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Wang, J.; Parolari, A.; Huang, S. Y.

    2014-12-01

    The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations of surface energy fluxes, requiring only net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve the MEP model's ET predictions under water stress, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress function provides a means to infer ecosystem influence on the land surface state. Challenges associated with sampling the model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.

  18. Pareto versus lognormal: A maximum entropy test

    NASA Astrophysics Data System (ADS)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.

  19. Pareto versus lognormal: a maximum entropy test.

    PubMed

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.

  20. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Schick, L. H.

    This volume contains a selection of papers presented at the Tenth Annual Workshop on Maximum Entropy and Bayesian Methods. The thirty-six papers included cover a wide range of applications in areas such as economics and econometrics, astronomy and astrophysics, general physics, complex systems, image reconstruction, and probability and mathematics. Together they give an excellent state-of-the-art overview of fundamental methods of data analysis.

  1. Inferring global wind energetics from a simple Earth system model based on the principle of maximum entropy production

    NASA Astrophysics Data System (ADS)

    Karkar, S.; Paillard, D.

    2015-03-01

    The total available wind power in the atmosphere is highly debated, as is the effect large-scale wind farms would have on the climate. Bottom-up approaches, such as those proposed by wind turbine engineers, often lead to non-physical results (mostly non-conservation of energy), while top-down approaches have proven to give physically consistent results. This paper proposes an original method for calculating mean annual wind energetics in the atmosphere without resorting to heavy numerical integration of the entire dynamics. The proposed method is derived from a model based on the Maximum Entropy Production (MEP) principle, which has proven to describe the annual mean temperature and energy fluxes efficiently, despite its simplicity. Because the atmosphere is represented with only one vertical layer and there is no vertical wind component, the model fails to represent general circulation patterns such as cells or trade winds. Interestingly, however, global energetic diagnostics are well captured by the mere combination of a simple MEP model and a flux inversion method.

  2. Predicting Changes in Macrophyte Community Structure from Functional Traits in a Freshwater Lake: A Test of Maximum Entropy Model

    PubMed Central

    Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te

    2015-01-01

    Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. The results showed that the maxent model can be highly robust (99.8%) in predicting the relative abundance of macrophyte species when observed community-weighted mean (CWM) traits are used as the constraints, but relatively low (about 30%) when CWM traits fitted from the water depth gradient are used as the constraints. The measured traits differed notably in their importance for predicting species abundances, with the lowest importance for perennial growth form and the highest for leaf dry mass content. For tuber and leaf nitrogen content, there were significant shifts in their effects on species relative abundance, from positive in shallow water to negative in deep water. This result suggests that macrophyte species with tuber organs and greater leaf nitrogen content would become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology. PMID:26167856
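
    The CWM-constrained maxent calculation has a compact form: species relative abundances p_s ∝ exp(λ·t_s), with λ chosen so that predicted community-weighted means match the observed ones. The sketch below assumes this standard (CATS-style) formulation and an invented trait matrix.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    T = rng.normal(size=(17, 4))             # 17 species x 4 standardized traits
    p_true = rng.dirichlet(np.ones(17))
    cwm = p_true @ T                         # observed community-weighted means

    def dual(lam):
        # log Z(lam) - lam . cwm; its minimizer matches predicted CWMs to cwm.
        s = T @ lam
        smax = s.max()
        return smax + np.log(np.exp(s - smax).sum()) - lam @ cwm

    lam = minimize(dual, np.zeros(4), method="BFGS").x
    p = np.exp(T @ lam); p /= p.sum()        # predicted relative abundances
    print("max CWM mismatch:", float(np.abs(p @ T - cwm).max()))
    ```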

  3. Predicting Changes in Macrophyte Community Structure from Functional Traits in a Freshwater Lake: A Test of Maximum Entropy Model.

    PubMed

    Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te

    2015-01-01

    Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. The results showed that the maxent model can be highly robust (99.8%) in predicting the relative abundance of macrophyte species when observed community-weighted mean (CWM) traits are used as the constraints, but relatively low (about 30%) when CWM traits fitted from the water depth gradient are used as the constraints. The measured traits differed notably in their importance for predicting species abundances, with the lowest importance for perennial growth form and the highest for leaf dry mass content. For tuber and leaf nitrogen content, there were significant shifts in their effects on species relative abundance, from positive in shallow water to negative in deep water. This result suggests that macrophyte species with tuber organs and greater leaf nitrogen content would become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology.

  4. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is utilized to analyze the model system that depends on the modulus of the spatial weights. The negative-weight problem of DEM interpolation is addressed by building a maximum entropy model; by adding non-negativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm in a MATLAB program. The method is compared with the Yang Chizhong interpolation method and quadratic programming. The comparison shows that the maximum entropy weights conform to spatial relations, and that their accuracy is superior to the latter two methods.
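
    A minimal sketch of maximum-entropy interpolation weights of this kind: nonnegative weights w_i ∝ exp(λ·(x_i - x0)) maximize entropy subject to normalization and first-order (linear reproduction) constraints, which rules out negative weights by construction. The node layout is illustrative, and the second-order constraint of the article is omitted for brevity.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    nodes = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]], float)  # grid nodes
    x0 = np.array([0.6, 0.4])                                          # query point

    def dual(lam):
        # log Z; at the minimizer the weights satisfy sum_i w_i (x_i - x0) = 0.
        s = (nodes - x0) @ lam
        smax = s.max()
        return smax + np.log(np.exp(s - smax).sum())

    lam = minimize(dual, np.zeros(2), method="BFGS").x
    w = np.exp((nodes - x0) @ lam); w /= w.sum()   # nonnegative by construction
    print("weights:", np.round(w, 4), "reproduce x0:", w @ nodes)
    ```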

  5. Predicting the Current and Future Potential Distributions of Lymphatic Filariasis in Africa Using Maximum Entropy Ecological Niche Modelling

    PubMed Central

    Slater, Hannah; Michael, Edwin

    2012-01-01

    Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk to LF may range from 543 to 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future, depending on the climate scenario used and the thresholds applied to signify infection presence. PMID:22359670

  6. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects are mostly understated. This study develops a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall, with lagged times of up to 15 weeks relative to the variation in dengue cases, under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  7. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.

  8. Maximum entropy principle and relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    van Weert, Ch. G.

    1982-04-01

    A relativistic theory of hydrodynamics applicable beyond the hydrodynamic regime is developed on the basis of the maximum entropy principle. This allows the construction of a unique statistical operator representing the state of the system as specified by the values of the hydrodynamical densities. Special attention is paid to the thermodynamic limit and the virial theorem, which leads to an expression for the pressure in terms of the field-theoretic energy-momentum tensor of Coleman and Jackiw. It is argued that outside the hydrodynamic regime the notion of a local Gibbs relation, as usually postulated, must in general be abandoned. In the context of the linear approximation, the memory-retaining and non-local generalizations of the relativistic Navier-Stokes equations are derived from the underlying Heisenberg equations of motion. The formal similarity to the Zwanzig-Mori description of non-relativistic fluids is expounded.

  9. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  10. A multiscale maximum entropy moment closure for locally regulated space-time point process models of population dynamics.

    PubMed

    Raghib, Michael; Hill, Nicholas A; Dieckmann, Ulf

    2011-05-01

    The prevalence of structure in biological populations challenges fundamental assumptions at the heart of continuum models of population dynamics based only on mean densities (local or global). Individual-based models (IBMs) were introduced during the last decade in an attempt to overcome this limitation by following explicitly each individual in the population. Although the IBM approach has been quite useful, the capability to follow each individual usually comes at the expense of analytical tractability, which limits the generality of the statements that can be made. For the specific case of spatial structure in populations of sessile (and identical) organisms, space-time point processes with local regulation seem to cover the middle ground between analytical tractability and a higher degree of biological realism. This approach has shown that simplified representations of fecundity, local dispersal and density-dependent mortality weighted by the local competitive environment are sufficient to generate spatial patterns that mimic field observations. Continuum approximations of these stochastic processes try to distill their fundamental properties, and they keep track not only of mean densities, but also of higher-order spatial correlations. However, due to the non-linearities involved, they result in infinite hierarchies of moment equations. This leads to the problem of finding a 'moment closure'; that is, an appropriate order of (lower-order) truncation, together with a method of expressing the highest-order density not explicitly modelled in the truncated hierarchy in terms of the lower-order densities. We use the principle of constrained maximum entropy to derive a closure relationship for truncation at second order, using normalisation and the product densities of first and second orders as constraints, and apply it to one such hierarchy. The resulting 'maxent' closure is similar to the Kirkwood superposition approximation, or 'power-3' closure, but it is
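
    For reference, the Kirkwood superposition ('power-3') closure mentioned above approximates the third-order product density in terms of the lower-order ones:

    ```latex
    \rho_3(x_1, x_2, x_3) \;\approx\;
      \frac{\rho_2(x_1, x_2)\,\rho_2(x_2, x_3)\,\rho_2(x_1, x_3)}
           {\rho_1(x_1)\,\rho_1(x_2)\,\rho_1(x_3)}
    ```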

  11. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  12. Combining Experiments and Simulations Using the Maximum Entropy Principle

    PubMed Central

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124

  13. Combining experiments and simulations using the maximum entropy principle.

    PubMed

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-02-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124

  15. Maximum-Entropy Inference with a Programmable Annealer

    NASA Astrophysics Data System (ADS)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
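
    The decoding comparison can be reproduced in miniature by exhaustive enumeration rather than annealing: the posterior over bit strings is an Ising chain in a random field; maximum-likelihood decoding takes the ground state, while the finite-temperature (maximum-entropy) decoder signs the Boltzmann-averaged spins. The sizes, noise level, and couplings below are invented.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    N, trials, q, sigma = 10, 300, 0.2, 1.0
    J = 0.5 * np.log((1 - q) / q)       # prior coupling matching the flip rate
    states = np.array(list(itertools.product([-1, 1], repeat=N)))
    ml_err = mpm_err = 0

    for _ in range(trials):
        truth = np.empty(N, int)
        truth[0] = rng.choice([-1, 1])
        for i in range(1, N):           # Markov-chain message with flip prob q
            truth[i] = truth[i - 1] * (1 if rng.random() > q else -1)
        y = truth + rng.normal(0, sigma, N)
        # Posterior is an Ising chain in a random field (log-posterior = -energy).
        energy = -(states @ (y / sigma**2)) \
                 - J * (states[:, :-1] * states[:, 1:]).sum(axis=1)
        ml = states[np.argmin(energy)]                 # ground state = ML decode
        w = np.exp(-(energy - energy.min()))           # Boltzmann weights, beta = 1
        m = states.T @ w / w.sum()                     # thermal magnetizations
        mpm = np.where(m >= 0, 1, -1)                  # sign of finite-T marginal
        ml_err += int((ml != truth).sum())
        mpm_err += int((mpm != truth).sum())

    print("ML bit errors:", ml_err, " finite-temperature bit errors:", mpm_err)
    ```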

  16. Maximum-Entropy Inference with a Programmable Annealer.

    PubMed

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A

    2016-03-03

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  17. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311

  18. Modeling loop entropy.

    PubMed

    Chirikjian, Gregory S

    2011-01-01

    Proteins fold from a highly disordered state into a highly ordered one. Traditionally, the folding problem has been stated as one of predicting "the" tertiary structure from sequential information. However, new evidence suggests that the ensemble of unfolded forms may not be as disordered as once believed, and that the native form of many proteins may not be described by a single conformation, but rather an ensemble of its own. Quantifying the relative disorder in the folded and unfolded ensembles as an entropy difference may therefore shed light on the folding process. One issue that clouds discussions of "entropy" is that many different kinds of entropy can be defined: entropy associated with overall translational and rotational Brownian motion, configurational entropy, vibrational entropy, conformational entropy computed in internal or Cartesian coordinates (which can even be different from each other), conformational entropy computed on a lattice, each of the above with different solvation and solvent models, thermodynamic entropy measured experimentally, etc. The focus of this work is the conformational entropy of coil/loop regions in proteins. New mathematical modeling tools for the approximation of changes in conformational entropy during transition from unfolded to folded ensembles are introduced. In particular, models for computing lower and upper bounds on entropy for polymer models of polypeptide coils both with and without end constraints are presented. The methods reviewed here include kinematics (the mathematics of rigid-body motions), classical statistical mechanics, and information theory.

  19. Maximum power entropy method for ecological data analysis

    NASA Astrophysics Data System (ADS)

    Komori, Osamu; Eguchi, Shinto

    2015-01-01

    In ecology, predictive models of the geographical distribution of certain species are widely used to capture spatial diversity. Recently, the Maxent method, based on the Gibbs distribution, has been frequently employed to obtain reasonable accuracy for a target species distribution at a site from environmental features such as temperature, precipitation and elevation. It requires only presence data, which is a big advantage in cases where absence data are unavailable or unreliable. It also incorporates our limited knowledge about the target distribution into the model, such that the expected values of the environmental features equal their empirical averages. Moreover, the inhabiting probability of a species is easily visualized with the aid of geographical coordinate information from the Global Biodiversity Information Facility (GBIF) in the statistical software R. However, the maximum entropy distribution in Maxent is derived from the Boltzmann-Gibbs-Shannon entropy, which causes unstable estimation of the model parameters when outliers are observed in the data. To overcome this weak point, and to gain a deeper understanding of the relation among the total number of species, the Boltzmann-Gibbs-Shannon entropy and Simpson's index, we propose a maximum power entropy method based on the beta-divergence, a special case of the U-divergence. It includes the Boltzmann-Gibbs-Shannon entropy as a special case, so it can achieve better estimation of the target distribution by appropriately choosing the value of the power index beta. We demonstrate the performance of the proposed method with simulation studies as well as publicly available real data.

  20. The equivalence of minimum entropy production and maximum thermal efficiency in endoreversible heat engines.

    PubMed

    Haseli, Y

    2016-05-01

    The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
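
    A numeric sketch of an endoreversible engine with equal, unit thermal conductances illustrates the trade-off: scanning operating points shows entropy production falling as efficiency rises toward the Carnot limit, and recovers the Curzon-Ahlborn efficiency at maximum power, eta = 1 - sqrt(Tc/Th). The reservoir temperatures are illustrative.

    ```python
    import numpy as np

    Th, Tc = 500.0, 300.0                  # reservoir temperatures (K)
    T1 = np.linspace(0.51 * Th, 0.999 * Th, 4000)   # hot-side working temperature
    a = (Th - T1) / T1                     # endoreversibility: Qh/T1 = Qc/T2
    T2 = Tc / (1.0 - a)                    # cold-side working temperature
    Qh = Th - T1                           # heat drawn (conductance K = 1)
    Qc = T2 - Tc                           # heat rejected
    ok = T2 < T1                           # physically meaningful operating points
    P = Qh - Qc                            # power output
    eta = P / Qh                           # thermal efficiency
    sigma = Qc / Tc - Qh / Th              # total entropy production rate

    i = np.argmax(np.where(ok, P, -np.inf))
    print("eta at max power:", eta[i], "vs 1 - sqrt(Tc/Th):", 1 - np.sqrt(Tc / Th))
    # Near the quasi-static end, entropy production -> 0 while eta -> Carnot.
    print("eta, sigma at near-reversible point:", eta[ok][-1], sigma[ok][-1])
    ```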

  1. A coupled force-restore model of surface temperature and soil moisture using the maximum entropy production model of heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, S.-Y.; Wang, J.

    2016-07-01

    A coupled force-restore model of surface soil temperature and moisture (FRMEP) is formulated by incorporating the maximum entropy production model of surface heat fluxes and including the gravitational drainage term. The FRMEP model, driven by surface net radiation and precipitation, is independent of near-surface atmospheric variables and shows reduced sensitivity to uncertainties in model inputs and parameters compared to the classical force-restore models (FRMs). The FRMEP model was evaluated using observations from two field experiments with contrasting soil moisture conditions. The modeling errors of the FRMEP-predicted surface temperature and soil moisture are lower than those of the classical FRMs forced by observed or bulk-formula-based surface heat fluxes (bias 1 ~ 2°C versus ~4°C; 0.02 m3 m-3 versus 0.05 m3 m-3). The diurnal variations of surface temperature, soil moisture, and surface heat fluxes are well captured by the FRMEP model, as measured by the high correlations between the model predictions and observations (r ≥ 0.84). Our analysis suggests that the drainage term cannot be neglected under wet soil conditions. A 1-year simulation indicates that the FRMEP model captures the seasonal variation of surface temperature and soil moisture with biases less than 2°C and 0.01 m3 m-3 and correlation coefficients of 0.93 and 0.9 with observations, respectively.
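
    For orientation, the classical force-restore structure that FRMEP builds on can be sketched in a few lines: the surface temperature responds quickly to the ground heat flux and is restored toward a slowly varying deep temperature. The forcing, thermal inertia, and fixed ground-heat-flux fraction below are invented, and the MEP flux partition of the paper is not implemented here.

    ```python
    import numpy as np

    day = 86400.0
    omega = 2 * np.pi / day                # diurnal frequency (s-1)
    P = 1500.0                             # thermal inertia (J m-2 K-1 s-1/2), assumed
    dt = 600.0                             # time step (s)
    T, Td = 290.0, 288.0                   # surface and deep temperatures (K)

    for step in range(int(2 * day / dt)):
        t = step * dt
        Rn = max(0.0, 600.0 * np.sin(omega * t))   # toy net radiation (W m-2)
        G = 0.3 * Rn - 20.0                        # assumed ground heat flux (W m-2)
        # Force-restore: rapid response to G, slow restore toward the deep layer.
        T += dt * (np.sqrt(2 * omega) / P * G - omega * (T - Td))
        Td += dt * (T - Td) / (5 * day)            # deep layer adjusts slowly

    print("surface temperature after two days:", round(T, 2), "K")
    ```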

  2. Microcanonical origin of the maximum entropy principle for open systems.

    PubMed

    Lee, Julian; Pressé, Steve

    2012-10-01

    There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.

  3. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to empirical trade flow data, countries may spend a large fraction of export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares over different products under the product complexity constraint once the international market structure (the country-product bipartite network) is given. A maximum entropy model therefore provides a good fit to the empirical data: the data are consistent with maximum entropy subject to a constraint on the expected value of product complexity for each country. One country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052
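
    The generic shape of such a constrained maximization is worth stating (notation hypothetical; the paper's constraint is the expected product complexity):

    ```latex
    % Shares s_p over products p, constrained by expected product complexity.
    \[
      \max_{s}\; -\sum_{p} s_p \ln s_p
      \quad \text{s.t.} \quad \sum_p s_p = 1,\;\; \sum_p s_p c_p = \bar{c}
      \;\;\Longrightarrow\;\;
      s_p \propto e^{-\lambda c_p},
    \]
    % i.e. the export share decays exponentially with product complexity c_p,
    % with a single tunable multiplier lambda per country.
    ```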

  5. Maximum entropy production in environmental and ecological systems.

    PubMed

    Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M

    2010-05-12

    The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.

  6. The maximum entropy production principle: two basic questions

    PubMed Central

    Martyushev, Leonid M.

    2010-01-01

    The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics and (ii) is it possible to ‘prove’ the principle? We adduce one more proof, which is the most concise available to date. PMID:20368251

  7. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.
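
    The closed form for the unconstrained continuous case is the generalized Gaussian family (a standard result, consistent with the straight-line relationship noted above):

    ```latex
    % Among densities with a fixed pth absolute moment E|X|^p, differential
    % entropy is maximized by the generalized Gaussian
    \[
      f(x) \propto \exp\!\left(-\lambda |x|^p\right), \qquad x \in \mathbb{R},
    \]
    % which reduces to the Laplace density for p = 1 and the Gaussian for p = 2.
    ```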

  8. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O2 + O system.

    PubMed

    Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina

    2016-05-01

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energies between 0.1 and 20 eV are calculated with the double many-body expansion potential energy surface of Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV, although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions, while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT) and vibrational-rotational-translational (VRT) energy exchange and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulations to full nonequilibrium flow calculations. PMID:27155635

  9. Stationary neutrino radiation transport by maximum entropy closure

    SciTech Connect

    Bludman, S.A.; Cernohorsky, J.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation.

  10. Possible dynamical explanations for Paltridge's principle of maximum entropy production

    SciTech Connect

    Virgo, Nathaniel; Ikegami, Takashi

    2014-12-05

    Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.

  11. A maximum entropy framework for nonexponential distributions.

    PubMed

    Peterson, Jack; Dixit, Purushottam D; Dill, Ken A

    2013-12-17

    Probability distributions having power-law tails are observed in a broad range of social, economic, and biological systems. We describe here a potentially useful common framework. We derive distribution functions for situations in which a "joiner particle" k pays some form of price to enter a community, where joining costs are subject to economies of scale. Maximizing the Boltzmann-Gibbs-Shannon entropy subject to this energy-like constraint predicts a distribution having a power-law tail; it reduces to the Boltzmann distribution in the absence of economies of scale. We show that the predicted function gives excellent fits to 13 different distribution functions, ranging from friendship links in social networks, to protein-protein interactions, to the severity of terrorist attacks. This approach may give useful insights into when to expect power-law distributions in the natural and social sciences. PMID:24297895
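
    One standard route to a power-law tail of this kind, shown here generically rather than with the paper's exact cost function, is a logarithmic energy-like constraint:

    ```latex
    % Constraining the mean of ln k instead of k itself yields a power law.
    \[
      \max_{p}\; -\sum_k p(k) \ln p(k)
      \quad \text{s.t.} \quad \langle \ln k \rangle = c
      \;\;\Longrightarrow\;\;
      p(k) \propto k^{-\lambda},
    \]
    % a cost growing logarithmically with community size k (an economy of
    % scale) thus produces a power-law tail instead of the exponential
    % Boltzmann form.
    ```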

  13. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  15. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    PubMed Central

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters yields quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text into another language. Another consequence of the RGF prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction contains no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed. PMID:25955175

  16. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    NASA Technical Reports Server (NTRS)

    Hsia, Wei Shen

    1989-01-01

    A validated technology database is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One important aspect of the GF is verifying the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved and tested against the test problem, yielding a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. The design of a computer program for Hyland's OP method was attempted; due to a difficulty encountered at the stage where a special matrix factorization technique is needed to obtain the required projection matrix, the program was successful up to finding the Linear Quadratic Gaussian solution but not beyond. Numerical results are presented along with the computer programs, which employed ORACLS.

  17. Maximum Entropy Theory of Non-Ideal Detonation

    NASA Astrophysics Data System (ADS)

    Watt, Simon; Braithwaite, Martin; Brown, William Byers; Falle, Sam; Sharpe, Gary

    2009-12-01

    According to the theory of Byers Brown, in a steady-state, self-sustaining detonation the entropy production between the shock and the sonic locus is a maximum. This has been shown to hold true for all one-dimensional cases. Byers Brown also suggested a novel variational approach, maximising the global entropy generation within the detonation driving zone, to solve the problem of self-sustaining, two-dimensional steady curved detonation waves in a slab or cylindrical stick of explosive. Preliminary applications of such a variational technique, albeit with simplifying assumptions, demonstrate its potential to provide a rapid and accurate solution method for the problem. In this paper, recent progress in the development and validation of the variational maximum entropy concept for the case of weakly curved waves is reported. The predictions of the theory are compared with those of Detonation Shock Dynamics theory.

  18. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
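
    As context, here is a minimal sketch of log-likelihood ascent for a pairwise Ising maximum entropy model with moments estimated by Gibbs sampling, the setting the abstract describes. The toy data, sizes and learning rate are hypothetical, and the paper's rectification and posterior sampling are not reproduced here.

    ```python
    import numpy as np

    # Pairwise Ising maxent model P(s) ~ exp(h.s + s.J.s/2), s_i = +/-1,
    # fitted by moment matching: gradient = data moments - model moments.
    rng = np.random.default_rng(1)
    n = 5
    data = rng.choice([-1, 1], size=(2000, n))    # stand-in for binarized spikes
    emp_m = data.mean(axis=0)                     # empirical magnetizations
    emp_C = data.T @ data / len(data)             # empirical pairwise correlations

    def gibbs_sample(h, J, n_samples=2000, n_burn=200):
        """Draw samples by single-site Gibbs updates (J has zero diagonal)."""
        s = rng.choice([-1, 1], size=n)
        out = np.empty((n_samples, n))
        for t in range(n_burn + n_samples):
            for i in range(n):
                field = h[i] + J[i] @ s           # local field on spin i
                p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
                s[i] = 1 if rng.random() < p_up else -1
            if t >= n_burn:
                out[t - n_burn] = s
        return out

    h, J = np.zeros(n), np.zeros((n, n))
    eta = 0.05
    for _ in range(50):                           # steepest ascent on log-likelihood
        samp = gibbs_sample(h, J)
        h += eta * (emp_m - samp.mean(axis=0))
        J += eta * (emp_C - samp.T @ samp / len(samp))
        np.fill_diagonal(J, 0.0)                  # no self-couplings

    print("max moment mismatch:", np.abs(emp_m - samp.mean(axis=0)).max())
    ```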

  20. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including goodness-of-fit methods such as least-squares fitting and Lucy-Richardson reconstruction, as well as maximum entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform-information-content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis yields the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixons and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  1. Generalized Maximum Entropy Principle, Superstatistics and Problem of Networks Classification

    NASA Astrophysics Data System (ADS)

    Gadjiev, Bahruz

    2011-03-01

    In this paper, superstatistics methods are applied to study growing networks with exponential topology. The topologies of strongly inhomogeneous growing networks are determined. Within the framework of the maximum entropy method, the probability distribution of real networks is derived and their classification is discussed.

  2. Bayesian methods, maximum entropy, and quantum Monte Carlo

    SciTech Connect

    Gubernatis, J.E.; Silver, R.N.; Jarrell, M.

    1991-01-01

    We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time, quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data.

  3. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In crowd security research, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and to supervise the crowd. In physics, entropy is used to describe the macro state of a self-organizing system; a change in entropy indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates, and the corresponding probability distribution, using the individuals' velocity information (magnitude and direction). An entropy model is then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. They verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd macro state leads to a corresponding change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing the entropy mutation, it is possible to detect the crowd behavior macro state automatically using cameras. The results provide data support for crowd emergency prevention and for manual emergency intervention.
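
    A sketch of such an entropy-based macro-state indicator (the binning scheme here is hypothetical, not the paper's exact microstate construction): histogram the individuals' motion directions and compare the Shannon entropy with the theoretical maximum ln(n_bins).

    ```python
    import numpy as np

    def crowd_entropy(vx, vy, n_bins=16):
        ang = np.arctan2(vy, vx)                  # heading of each individual
        counts, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
        p = counts / counts.sum()                 # microstate probabilities
        p = p[p > 0]
        return -np.sum(p * np.log(p)), np.log(n_bins)

    rng = np.random.default_rng(2)
    # disordered crowd: random headings -> entropy close to the maximum
    S, S_max = crowd_entropy(*rng.normal(size=(2, 400)))
    print(f"disordered: S = {S:.2f}  (S_max = {S_max:.2f})")
    # ordered crowd: nearly common heading -> entropy far below S_max / 2
    S, _ = crowd_entropy(1.0 + 0.05 * rng.normal(size=400),
                         0.05 * rng.normal(size=400))
    print(f"ordered:    S = {S:.2f}")
    ```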

  4. What is the maximum rate at which entropy of a string can increase?

    SciTech Connect

    Ropotenko, Kostyantyn

    2009-03-15

    According to Susskind, a string falling toward a black hole spreads exponentially over the stretched horizon due to repulsive interactions of the string bits. In this paper such a string is modeled as a self-avoiding walk and the string entropy is found. It is shown that the rate at which information/entropy contained in the string spreads is the maximum rate allowed by quantum theory. The maximum rate at which the black hole entropy can increase when a string falls into a black hole is also discussed.

  5. Quantifying evaporation and transpiration fluxes of a Eucalyptus woodland in complex terrain with varying tree cover using the Maximum Entropy Production model of evapotranspiration

    NASA Astrophysics Data System (ADS)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, H.; Wang, J.; Bras, R. L.; Simmons, C. T.

    2013-12-01

    The measurement of evapotranspiration (ET) fluxes in areas with complex terrain and non-uniform vegetation cover poses a challenge to traditional techniques with fetch constraints, such as the eddy covariance method. In this study, we report the results of a field monitoring design based on the Maximum Entropy Production model of ET (MEP-ET), which quantifies evaporation from soil and transpiration from vegetation using a limited number of measurements of temperature, humidity and net radiation above soil and canopies. Following the MEP-ET model requirements, we instrumented a catchment with complex terrain and native vegetation (Eucalyptus leucoxylon) in South Australia. We deployed vertical through-canopy and near-soil temperature and humidity transects on two opposing slopes (north- and south-facing) with contrasting canopy cover and understory conditions to measure transpiration from two eucalyptus trees and soil evaporation of the area under their canopies. We compare the results with transpiration measurements from sapflow data on the same trees and with soil evaporation estimates from the Bowen Ratio Energy Balance (BREB) method. Our results show good agreement between the MEP-ET derived transpiration and evaporation and the sapflow and BREB estimates, respectively. Using LiDAR-derived canopy cover, we upscale the MEP-ET fluxes on each slope and explore the effect of terrain and vegetation cover on the partitioning of ET and the water budgets across the catchment.

  6. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18 data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
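
    A compact, generic implementation of the Burg recursion named above (illustrative only; not the instrument pipeline or data from the study):

    ```python
    import numpy as np

    def burg_ar(x, order):
        """Burg's method: AR coefficients a[1..order] and prediction error power E."""
        x = np.asarray(x, dtype=float)
        f, b = x[1:].copy(), x[:-1].copy()          # forward/backward prediction errors
        a = np.zeros(order)
        E = np.mean(x ** 2)
        for m in range(order):
            k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
            a_prev = a.copy()
            a[m] = k
            a[:m] = a_prev[:m] + k * a_prev[:m][::-1]   # Levinson-type update
            E *= 1.0 - k * k                            # shrink error power
            f, b = (f + k * b)[1:], (b + k * f)[:-1]    # order-update the errors
        return a, E

    def burg_psd(x, order, n_freq=512, dt=1.0):
        """Maximum entropy (all-pole) power spectral density estimate."""
        a, E = burg_ar(x, order)
        freqs = np.linspace(0.0, 0.5 / dt, n_freq)
        z = np.exp(-2j * np.pi * freqs * dt)
        denom = 1.0 + sum(a[m] * z ** (m + 1) for m in range(order))
        return freqs, E * dt / np.abs(denom) ** 2

    # Toy check: two closely spaced sinusoids in noise resolve at modest order.
    rng = np.random.default_rng(3)
    t = np.arange(1024)
    x = np.sin(0.20 * t) + np.sin(0.23 * t) + 0.1 * rng.normal(size=t.size)
    freqs, psd = burg_psd(x, order=30)
    print("peak near f =", freqs[np.argmax(psd)])
    ```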

  7. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring, chemically inert, colorless and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Exposure to 222Rn via inhalation is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater contributes to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation such that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model results in a leave-one-out cross-validation of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled 222Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock 238U defined on the basis of overlying stream-sediment 238U concentrations, which are widely distributed and consistently analyzed point-source data.

  8. Inverse spin glass and related maximum entropy problems.

    PubMed

    Castellana, Michele; Bialek, William

    2014-09-12

    If we have a system of binary variables and we measure the pairwise correlations among these variables, then the least structured or maximum entropy model for their joint distribution is an Ising model with pairwise interactions among the spins. Here we consider inhomogeneous systems in which we constrain, for example, not the full matrix of correlations, but only the distribution from which these correlations are drawn. In this sense, what we have constructed is an inverse spin glass: rather than choosing coupling constants at random from a distribution and calculating correlations, we choose the correlations from a distribution and infer the coupling constants. We argue that such models generate a block structure in the space of couplings, which provides an explicit solution of the inverse problem. This allows us to generate a phase diagram in the space of (measurable) moments of the distribution of correlations. We expect that these ideas will be most useful in building models for systems that are nonequilibrium statistical mechanics problems, such as networks of real neurons. PMID:25260004

  9. Triadic conceptual structure of the maximum entropy approach to evolution.

    PubMed

    Herrmann-Pillath, Carsten; Salthe, Stanley N

    2011-03-01

    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information-carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law. PMID:21055440

  10. A maximum entropy method for MEG source imaging

    SciTech Connect

    Khosla, D.; Singh, M.

    1996-12-31

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum-norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image that has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques such as functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
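
    A common form of the noise-aware maximum entropy objective sketched above (notation hypothetical; alpha trades entropy against the likelihood term):

    ```latex
    % Entropy of the image f balanced against a chi-squared data-fit term
    % for measurements d under forward operator A (one common MEM form).
    \[
      \hat{f} = \arg\max_{f \ge 0}\; \Big[ \alpha\, S(f) - \tfrac{1}{2}\chi^2(f) \Big],
      \qquad
      S(f) = -\sum_i f_i \ln \frac{f_i}{m_i}, \quad
      \chi^2(f) = \sum_k \frac{\big(d_k - (Af)_k\big)^2}{\sigma_k^2},
    \]
    % where m is the prior bias function (e.g. from fMRI) and sigma_k the
    % noise level of measurement k.
    ```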

  12. Metabolic networks evolve towards states of maximum entropy production

    PubMed Central

    Unrean, Pornkamol; Srienc, Friedrich

    2011-01-01

    A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state, we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations, the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted for the state in which the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. PMID:21903175

  13. Stationary properties of maximum-entropy random walks

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2015-10-01

    Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
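
    For the unconstrained case on a graph, the maximum path entropy walk has a standard closed form built from the principal eigenpair of the adjacency matrix; a small sketch follows (the paper's state- and path-dependent constraints are not included):

    ```python
    import numpy as np

    # Maximal-entropy random walk on an undirected graph:
    # P[i, j] = A[i, j] * psi[j] / (lam * psi[i]), with (lam, psi) the
    # principal (Perron) eigenpair of the adjacency matrix A.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)      # toy adjacency matrix
    eigvals, eigvecs = np.linalg.eigh(A)
    lam, psi = eigvals[-1], np.abs(eigvecs[:, -1]) # largest eigenpair
    P = A * psi[None, :] / (lam * psi[:, None])    # MERW transition matrix
    print("rows sum to 1:", np.allclose(P.sum(axis=1), 1.0))
    stationary = psi ** 2 / np.sum(psi ** 2)       # stationary distribution ~ psi^2
    print("stationary distribution:", stationary)
    ```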

  15. Maximum information entropy: a foundation for ecological theory.

    PubMed

    Harte, John; Newman, Erica A

    2014-07-01

    The maximum information entropy (MaxEnt) principle is a successful method of statistical inference that has recently been applied to ecology. Here, we show how MaxEnt can accurately predict patterns such as species-area relationships (SARs) and abundance distributions in macroecology and be a foundation for ecological theory. We discuss the conceptual foundation of the principle, why it often produces accurate predictions of probability distributions in science despite not incorporating explicit mechanisms, and how mismatches between predictions and data can shed light on driving mechanisms in ecology. We also review possible future extensions of the maximum entropy theory of ecology (METE), a potentially important foundation for future developments in ecological theory.
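
    For orientation, the core construction of METE as it appears in Harte's work (stated here for context; the details go beyond this abstract):

    ```latex
    % The ecosystem structure function R(n, eps) over abundance n and
    % metabolic rate eps maximizes information entropy subject to ratios of
    % the state variables S_0 (species), N_0 (individuals), E_0 (energy).
    \[
      \max_R\; -\sum_n \int R \ln R \, d\varepsilon
      \quad \text{s.t.} \quad
      \langle n \rangle = \frac{N_0}{S_0}, \qquad
      \langle n \varepsilon \rangle = \frac{E_0}{S_0},
    \]
    % from which predictions such as the abundance distribution and the
    % species-area relationship follow.
    ```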

  16. Quantum maximum entropy principle for fractional exclusion statistics.

    PubMed

    Trovato, M; Reggiani, L

    2013-01-11

    Using the Wigner representation, compatibly with the uncertainty principle, we formulate a quantum maximum entropy principle for fractional exclusion statistics. By considering anyonic systems satisfying fractional exclusion statistics, all the results available in the literature are generalized in terms of both the kind of statistics and a nonlocal description for excluson gases. Gradient quantum corrections are explicitly given at different levels of degeneracy, and classical results are recovered when ℏ→0.

  17. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    SciTech Connect

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using information gain for dimension reduction. Our maximum entropy approach combined with a rich set of features produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.

  18. Quasiparticle density of states by inversion with maximum entropy method

    NASA Astrophysics Data System (ADS)

    Sui, Xiao-Hong; Wang, Han-Ting; Tang, Hui; Su, Zhao-Bin

    2016-10-01

    We propose to extract the quasiparticle density of states (DOS) of a superconductor directly from experimentally measured superconductor-insulator-superconductor junction tunneling data by applying the maximum entropy method to nonlinear systems. The approach has the merit of model independence, with minimal a priori assumptions. Various components of the proposed method are carefully investigated, including the meaning of the targeting function, the mock function, and the role and designation of the input parameters. The validity of the developed scheme is shown by two kinds of tests for systems with known DOS. As a preliminary application to a Bi2Sr2CaCu2O8+δ sample with critical temperature Tc = 89 K, we extract the DOS from intrinsic Josephson junction current data measured at temperatures of T = 4.2 K, 45 K, 55 K, 95 K, and 130 K. The energy gap decreases with increasing temperature below Tc, while above Tc a kind of energy gap survives, which provides an angle from which to investigate the pseudogap phenomenon in high-Tc superconductors. The developed method itself may be a useful tool for future applications in various fields.

  19. Estimating Thermal Inertia with a Maximum Entropy Boundary Condition

    NASA Astrophysics Data System (ADS)

    Nearing, G.; Moran, M. S.; Scott, R.; Ponce-Campos, G.

    2012-04-01

    Thermal inertia, P [J m-2 s-1/2 K-1], is a physical property of the land surface which determines resistance to temperature change under seasonal or diurnal heating. It is a function of the volumetric heat capacity, c [J m-3 K-1], and thermal conductivity, k [W m-1 K-1], of the soil near the surface: P = √(ck). Thermal inertia of soil varies with moisture content due to the difference between the thermal properties of water and air, and a number of studies have demonstrated that it is feasible to estimate soil moisture given thermal inertia (e.g. Lu et al., 2009; Murray and Verhoef, 2007). We take the common approach to estimating thermal inertia using measurements of surface temperature by modeling the Earth's surface as a 1-dimensional homogeneous diffusive half-space. In this case, surface temperature is a function of the ground heat flux (G) boundary condition and thermal inertia, and a daily value of P was estimated by matching measured and modeled diurnal surface temperature fluctuations. The difficulty is in measuring G; we demonstrate that the new maximum entropy production (MEP) method for partitioning net radiation into surface energy fluxes (Wang and Bras, 2011) provides a suitable boundary condition for estimating P. Adding the diffusion representation of heat transfer in the soil reduces the number of free parameters in the MEP model from two to one, and we provide a sensitivity analysis which suggests that, for the purpose of estimating P, it is preferable to parameterize the coupled MEP-diffusion model by the ratio of the thermal inertia of the soil to the effective thermal inertia of convective heat transfer to the atmosphere. We used this technique to estimate thermal inertia at two semiarid, non-vegetated locations in the Walnut Gulch Experimental Watershed in southeast AZ, USA, and compared these estimates to estimates of P made using the Xue and Cracknell (1995) solution for a linearized ground heat flux boundary condition, and we found that the MEP-diffusion model produced

  20. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.

  1. The mechanics of granitoid systems and maximum entropy production rates.

    PubMed

    Hobbs, Bruce E; Ord, Alison

    2010-01-13

    A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10^4-10^7 years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topographic-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling finite-thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.

  2. Nuclear-weighted X-ray maximum entropy method - NXMEM.

    PubMed

    Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens; Brummerstedt Iversen, Bo

    2015-01-01

    Subtle structural features such as disorder and anharmonic motion may be accurately characterized from nuclear density distributions (NDDs). As a viable alternative to neutron diffraction, this paper introduces a new approach named the nuclear-weighted X-ray maximum entropy method (NXMEM) for reconstructing pseudo NDDs. It calculates an electron-weighted nuclear density distribution (eNDD), exploiting that X-ray diffraction delivers data of superior quality, requires smaller sample volumes and has higher availability. NXMEM is tested on two widely different systems: PbTe and Ba8Ga16Sn30. The first compound, PbTe, possesses a deceptively simple crystal structure on the macroscopic level that is unable to account for its excellent thermoelectric properties. The key mechanism involves local distortions, and the capability of NXMEM to probe this intriguing feature is established with simulated powder diffraction data. In the second compound, Ba8Ga16Sn30, disorder among the Ba guest atoms is analysed with both experimental and simulated single-crystal diffraction data. In all cases, NXMEM outperforms the maximum entropy method by substantially enhancing the nuclear resolution. The induced improvements correlate with the amount of available data, rendering NXMEM especially powerful for powder and low-resolution single-crystal diffraction. The NXMEM procedure can be implemented in existing software and facilitates widespread characterization of disorder in functional materials. PMID:25537384

  3. Conjugate variables in continuous maximum-entropy inference.

    PubMed

    Davis, Sergio; Gutiérrez, Gonzalo

    2012-11-01

    For a continuous maximum-entropy distribution (obtained from an arbitrary number of simultaneous constraints), we derive a general relation connecting the Lagrange multipliers and the expectation values of certain particularly constructed functions of the states of the system. From this relation, an estimator for a given Lagrange multiplier can be constructed from derivatives of the corresponding constraining function. These estimators sometimes lead to the determination of the Lagrange multipliers by way of solving a linear system, and, in general, they provide another tool to widen the applicability of Jaynes's formalism. This general relation, especially well suited for computer simulation techniques, also provides some insight into the interpretation of the hypervirial relations known in statistical mechanics and the recently derived microcanonical dynamical temperature. We illustrate the usefulness of these new relations with several applications in statistics.

  4. Test images for the maximum entropy image restoration method

    NASA Technical Reports Server (NTRS)

    Mackey, James E.

    1990-01-01

    One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.

  5. LIBOR troubles: Anomalous movements detection based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the maximum entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  6. Verification and validation of the maximum entropy method of moment reconstruction of energy dependent neutron flux

    NASA Astrophysics Data System (ADS)

    Crawford, Douglas Spencer

    Verification and validation of reconstructed neutron flux based on the maximum entropy method is presented in this paper. The verification is carried out by comparing the neutron flux spectrum from the maximum entropy method with Monte Carlo N Particle 5 version 1.40 (MCNP5) and Attila-7.1.0-beta (Attila). A spherical 100% 235U critical assembly is modeled as the test case to compare the three methods. The verification error range for the maximum entropy method is 15% to 23%, where MCNP5 is taken to be the comparison standard. The Attila relative error for the critical assembly is 20% to 35%. Validation is accomplished by comparing to a neutron flux spectrum that is back-calculated from foil activation measurements performed in the GODIVA experiment (GODIVA). The error range of the reconstructed flux compared to GODIVA is 0%-10%. The error range of the neutron flux spectrum from MCNP5 compared to GODIVA is 0%-20%, and the Attila error range compared to GODIVA is 0%-35%. The maximum entropy method for reconstructing flux is shown to be a fast, reliable method compared to either Monte Carlo methods (MCNP5) or multigroup methods with 30 energy groups (Attila), and with respect to the GODIVA experiment.

  7. In Vivo potassium-39 NMR spectra by the Burg maximum-entropy method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takanori; Minamitani, Haruyuki

    The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
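
    The Burg recursion itself is a standard algorithm; a compact sketch (applied to a synthetic two-peak signal standing in for the NMR data, which are not reproduced here) might look like:

        import numpy as np

        def burg_ar(x, order):
            """Burg maximum-entropy (autoregressive) fit: returns AR
            coefficients a (with a[0] = 1) and the residual variance E."""
            f = np.asarray(x, dtype=float).copy()   # forward prediction errors
            b = f.copy()                            # backward prediction errors
            a = np.array([1.0])
            E = np.mean(f**2)
            for _ in range(order):
                ff, bb = f[1:], b[:-1]
                k = -2.0 * ff.dot(bb) / (ff.dot(ff) + bb.dot(bb))  # reflection coefficient
                ap = np.concatenate([a, [0.0]])
                a = ap + k * ap[::-1]               # Levinson-style update
                E *= 1.0 - k**2
                f, b = ff + k * bb, bb + k * ff
            return a, E

        def burg_psd(a, E, nfreq=512):
            """Maximum-entropy power spectrum E / |A(e^{-iw})|^2 on [0, pi]."""
            w = np.linspace(0.0, np.pi, nfreq)
            A = np.exp(-1j * np.outer(w, np.arange(len(a)))) @ a
            return w, E / np.abs(A)**2

        # Toy signal: two "resonances" in noise stand in for NMR peaks; the
        # spectrum shows sharp maxima near 0.3 and 0.8 rad/sample.
        rng = np.random.default_rng(1)
        n = np.arange(1024)
        sig = np.sin(0.3 * n) + 0.5 * np.sin(0.8 * n) + 0.2 * rng.normal(size=n.size)
        w, psd = burg_psd(*burg_ar(sig, order=20))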

  8. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

    The maximum entropy production (MEP) principle holds that nonequilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the potential energy destruction rate. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and can orchestrate metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage, but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  9. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
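
    Because a maximum entropy classifier over such features is mathematically equivalent to (multinomial) logistic regression, the modeling step can be sketched with invented stand-in features (not the paper's supertag and prosodic features):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Hypothetical feature matrix: columns stand in for quantized
        # acoustic-prosodic and syntactic features; labels stand in for
        # pitch-accented (1) versus unaccented (0) syllables.
        X = rng.normal(size=(500, 6))
        y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

        # Logistic regression fit by likelihood maximization is the
        # conditional maximum entropy model for this feature set.
        clf = LogisticRegression().fit(X[:400], y[:400])
        print("held-out accuracy:", clf.score(X[400:], y[400:]))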

  10. Maximum Entropy Distributions of Scale-Invariant Processes

    NASA Astrophysics Data System (ADS)

    Nieves, V.; Wood, E.; Wang, J.; Bras, R. L.

    2010-12-01

    Scale invariance is one of the common statistical properties of many variables in nature, such as the drainage area of a river network, soil moisture and topography. The Maximum Entropy principle (MaxEnt) is proposed to show how these variables can be statistically described using their scale-invariant properties and geometric mean. The theory of MaxEnt has not been widely used in the study of this phenomenon, although it has been successful in solving a wide range of scientific and engineering problems. As a proof of concept, a MaxEnt distribution of the drainage area of a river network, a power-law probability distribution, was derived and confirmed. We then investigated two other important processes in hydro-meteorology: soil moisture and topography. A major advantage of the MaxEnt method is that the characterization of multi-scaling processes - in terms of the parameters of the corresponding probability distribution - is directly linked to their observable macroscopic properties (such as the exponents of multi-scaling moments), especially using remote sensing data. Our case studies may be viewed as evidence that the MaxEnt theory is an effective tool in dealing with complexity, and may help in improving our understanding of broad geophysical processes governed by scale-invariant laws. MaxEnt offers a universal and unified framework to characterize various multi-scaling processes.
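
    For the proof-of-concept case mentioned above, the derivation is short enough to state (a standard MaxEnt calculation, written out here for illustration): maximizing the Shannon entropy subject to normalization and a fixed geometric mean, i.e. a fixed expectation of ln x, yields a power law. In LaTeX notation:

        \max_{p}\; -\int p(x)\,\ln p(x)\,dx
        \quad\text{subject to}\quad
        \int p(x)\,dx = 1, \qquad \int p(x)\,\ln x\,dx = \mu

        \Longrightarrow\quad
        p(x) = e^{-\lambda_0 - \lambda \ln x} \;\propto\; x^{-\lambda}

    with the exponent \lambda fixed by the constraint value \mu through the Lagrange conditions.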

  11. Speed-gradient principle for description of transient dynamics in systems obeying maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Fradkov, Alexander; Krivtsov, Anton

    2011-03-01

    The speed-gradient variational principle (SG-principle) for nonstationary nonequilibrium systems is formulated and illustrated by an example. It is proposed to use the SG-principle to model transient (relaxation) dynamics for systems satisfying the maximum entropy principle. Nonstationary processes generated with the method of particle dynamics are studied. A comparison of theoretical predictions with simulation results confirms reasonable prediction accuracy.

  12. Application of the maximum relative entropy method to the physics of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan

    2016-08-01

    It is known that the Maximum relative Entropy (MrE) method can be used to both update and approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.

  13. Maximum entropy principle for stationary states underpinned by stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Ford, Ian J.

    2015-11-01

    The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.

  14. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
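
    A minimal serial sketch of the underlying optimization (a penalized maximum-entropy objective handed to an off-the-shelf optimizer; the MasPar implementation, the exact entropy measure and the stopping rule of the paper are not reproduced) is:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.signal import fftconvolve

        def mem_restore(data, psf, alpha=0.1, sigma=1.0):
            """Toy MEM restoration for small images: minimize
            chi^2/2 - alpha*S with S = -sum f ln f; practical codes instead
            tune alpha so that chi^2 matches the known noise level."""
            shape = data.shape

            def objective(fvec):
                f = np.clip(fvec.reshape(shape), 1e-12, None)
                model = fftconvolve(f, psf, mode="same")
                chi2 = np.sum((model - data)**2) / sigma**2
                S = -np.sum(f * np.log(f))
                return 0.5 * chi2 - alpha * S

            def gradient(fvec):
                f = np.clip(fvec.reshape(shape), 1e-12, None)
                model = fftconvolve(f, psf, mode="same")
                resid = (model - data) / sigma**2
                # adjoint of "convolve with psf" is correlation with psf
                # (boundary effects of mode="same" are ignored in this sketch)
                dchi2 = fftconvolve(resid, psf[::-1, ::-1], mode="same")
                return (dchi2 + alpha * (np.log(f) + 1.0)).ravel()

            x0 = np.full(data.size, max(float(data.mean()), 1e-3))
            res = minimize(objective, x0, jac=gradient, method="L-BFGS-B",
                           bounds=[(1e-12, None)] * data.size)
            return res.x.reshape(shape)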

  15. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
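
    As a toy of the maximum entropy half of this correspondence (not the field-theory machinery): fixing the mean and variance on a grid and solving for the two Lagrange multipliers recovers a discretized Gaussian, the classic MaxEnt result.

        import numpy as np
        from scipy.optimize import fsolve

        # Grid wide enough for the multipliers explored here (assumed setup).
        x = np.linspace(-5.0, 5.0, 401)
        dx = x[1] - x[0]
        target_mean, target_var = 0.5, 1.2

        def density(lams):
            # MaxEnt form for mean/variance constraints: p ~ exp(-l1*x - l2*x^2)
            w = np.exp(-lams[0] * x - lams[1] * x**2)
            return w / (w.sum() * dx)

        def residual(lams):
            p = density(lams)
            m = (p * x).sum() * dx
            v = (p * (x - m)**2).sum() * dx
            return [m - target_mean, v - target_var]

        lams = fsolve(residual, [0.0, 0.5])
        p = density(lams)   # numerically a Gaussian with the requested moments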

  16. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan, resulting in significant losses of human life and property. On average, 3.5 typhoons strike Taiwan every year, and the most serious of them, Typhoon Morakot in 2009, had an impact on Taiwan unprecedented in recorded history. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, identifying the characteristics of typhoon rainfall patterns is advantageous when estimating rainfall quantities. This study developed a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) is used to classify the typhoon events, decomposing the standard rainfall pattern of all stations for each typhoon event into EOFs and principal components (PCs); typhoon events that vary similarly in time and space are thereby grouped into similar typhoon types. Next, according to this classification, probability density functions (PDFs) are constructed in space and time by means of the multivariate maximum entropy method using the first to fourth statistical moments, which yields the probability at each station and each time. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate the rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and is suitable for government typhoon disaster prevention.

  17. Entropy-based portfolio models: Practical issues

    NASA Astrophysics Data System (ADS)

    Shirazi, Yasaman Izadparast; Sabiruzzaman, Md.; Hamzah, Nor Aishah

    2015-10-01

    Entropy is a nonparametric alternative to variance and has been used as a measure of risk in portfolio analysis. In this paper, the computation of entropy risk for a given set of data is discussed with illustration. A comparison between entropy-based portfolio models is made. We propose a natural extension of the mean-entropy portfolio to make it more general and diversified. In terms of performance, this new model is similar to the mean-entropy portfolio when applied to real and simulated data, offers a higher return if no constraint is set on the desired return, and is found to be the most diversified portfolio model.
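
    The basic risk computation can be sketched as the Shannon entropy of a histogram of returns (the paper's exact estimator and portfolio construction are not reproduced; the common bin grid below is an assumption needed to make entropies comparable across assets):

        import numpy as np

        def entropy_risk(returns, bins):
            """Shannon entropy of the empirical return distribution: a
            nonparametric dispersion measure, higher for riskier assets."""
            counts, _ = np.histogram(returns, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log(p))

        rng = np.random.default_rng(7)
        calm = rng.normal(0.0, 0.01, size=1000)   # low-volatility asset
        wild = rng.normal(0.0, 0.05, size=1000)   # high-volatility asset

        edges = np.arange(-0.25, 0.25, 0.005)     # one fixed-width bin grid
        print(entropy_risk(calm, edges), entropy_risk(wild, edges))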

  18. A novel impact identification algorithm based on a linear approximation with maximum entropy

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Meruane, V.; Ortiz-Bernardin, A.

    2016-09-01

    This article presents a novel impact identification algorithm that uses a linear approximation handled by a statistical inference model based on the maximum-entropy principle, termed linear approximation with maximum entropy (LME). Unlike other regression algorithms such as artificial neural networks (ANNs) and support vector machines, the proposed algorithm requires only one parameter to be selected, and the impact is identified after solving a convex optimization problem that has a unique solution. In addition, with LME the data are processed in a period of time comparable to that of other algorithms. The performance of the proposed methodology is validated by considering an experimental aluminum plate. Time-varying strain data are measured using four piezoceramic sensors bonded to the plate. To demonstrate the potential of the proposed approach over existing ones, results obtained via LME are compared with those of ANN and least-squares support vector machines. The results demonstrate that with a low number of sensors it is possible to accurately locate and quantify impacts on a structure, and that LME outperforms other impact identification algorithms.

  19. Generalized maximum entropy approach to quasistationary states in long-range systems

    NASA Astrophysics Data System (ADS)

    Martelloni, Gabriele; Martelloni, Gianluca; de Buyl, Pierre; Fanelli, Duccio

    2016-02-01

    Systems with long-range interactions display a short-time relaxation towards quasistationary states (QSSs) whose lifetime increases with the system size. In the paradigmatic Hamiltonian mean-field model (HMF), out-of-equilibrium phase transitions are predicted and numerically detected which separate homogeneous (zero magnetization) and inhomogeneous (nonzero magnetization) QSSs. In the former regime, the velocity distribution presents (at least) two large, symmetric bumps, which cannot be self-consistently explained by resorting to the conventional Lynden-Bell maximum entropy approach. We propose a generalized maximum entropy scheme which accounts for the pseudoconservation of additional charges, the even momenta of the single-particle distribution. The latter are set to the asymptotic values, as estimated by direct integration of the underlying Vlasov equation, which formally holds in the thermodynamic limit. Methodologically, we operate in the framework of a generalized Gibbs ensemble, as sometimes defined in statistical quantum mechanics, which contains an infinite number of conserved charges. The agreement between theory and simulations is satisfying, both above and below the out-of-equilibrium transition threshold. A previously inaccessible feature of the QSSs, the multiple bumps in the velocity profile, is resolved by our approach.

  1. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the geological lithological composition is required. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. We apply the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  2. Night vision image fusion for target detection with improved 2D maximum entropy segmentation

    NASA Astrophysics Data System (ADS)

    Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi

    2013-08-01

    Infrared and low-light-level (LLL) images are used for night vision target detection. Given the characteristics of night vision imaging and the shortcomings of traditional detection algorithms in segmenting and extracting targets, we propose a method of infrared and LLL image fusion for target detection with improved 2D maximum entropy segmentation. First, the two-dimensional histogram is improved by using the gray level and the maximum gray level in a weighted area, and weights are selected to compute the maximum entropy for segmenting the infrared and LLL images with this histogram. Compared with traditional maximum entropy segmentation, the algorithm is markedly more effective for target detection, providing both background suppression and target extraction. We then verify the validity of a multi-dimensional feature AND operation for feature-level fusion of the infrared and LLL images in target detection. Experimental results show that the detection algorithm performs well for single and multiple target detection against complex backgrounds.
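
    For reference, the classical one-dimensional maximum entropy threshold (Kapur's method) that the improved 2D variant builds on can be sketched as below; the 2D version adds a neighbourhood-averaged gray level as a second histogram axis.

        import numpy as np

        def kapur_threshold(img):
            """1D maximum entropy thresholding: choose t maximizing the sum
            of Shannon entropies of the below- and above-threshold gray-level
            distributions."""
            hist, _ = np.histogram(img, bins=256, range=(0, 256))
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 256):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                p0 = p[:t][p[:t] > 0] / w0
                p1 = p[t:][p[t:] > 0] / w1
                h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        # Toy image: dim background with a brighter block as the "target".
        img = np.full((100, 100), 40)
        img[40:60, 40:60] = 180
        print(kapur_threshold(img))   # a threshold separating the two modes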

  3. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which encompasses various recognized theoretical distributions as special cases, is well suited to estimating the design thickness of sea ice. The method of moments and the empirical curve fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviations introduced by the simplifications made in other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for the maximum entropy distribution for offshore structures mainly influenced by sea ice in winter, but using the empirical curve fitting method to reduce cost in the design of temporary and economical buildings.

  4. Maximum entropy restoration of blurred and oversaturated Hubble Space Telescope imagery.

    PubMed

    Bonavito, N L; Dorband, J E; Busse, T

    1993-10-10

    A brief introduction to image reconstruction is made and the basic concepts of the maximum entropy method are outlined. A statistical inference algorithm based on this method is presented. The algorithm is tested on simulated data and applied to real data. The latter is from a 1024 × 1024 Hubble Space Telescope image of the binary stellar system R Aquarii, which suffers from both spherical aberration and detector saturation. Under these constraints the maximum entropy method produces an image that agrees closely with observed results. The calculations were performed on the MasPar MP-1 single-instruction/multiple-data computer.

  5. The functional design of the rotary enzyme ATP synthase is consistent with maximum entropy production

    NASA Astrophysics Data System (ADS)

    Dewar, R. C.; Juretić, D.; Županović, P.

    2006-10-01

    We show that the molecular motor ATP synthase has evolved in accordance with the statistical selection principle of Maximum Shannon Entropy and one of its corollaries, Maximum Entropy Production. These principles predict an optimal angular position for the ATP-binding transition close to the experimental value; an inverse relation between the optimal gearing ratio and the proton motive force (pmf); optimal operation at an inflection point in the curve of ATP synthesis rate versus pmf, enabling rapid metabolic control; and a high optimal free energy conversion efficiency. Our results suggest a statistical interpretation for the evolutionary optimization of ATP synthase function.

  6. Determination of zero-coupon and spot rates from treasury data by maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Gzyl, Henryk; Mayoral, Silvia

    2016-08-01

    An interesting and important inverse problem in finance is the determination of spot rates or prices of zero-coupon bonds when the only information available is the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method for treating such problems, which imposes neither an analytic form on the rates or bond prices nor a model for the (random) evolution of the yields. The procedure consists of transforming the determination of the zero-coupon bond prices into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
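
    The general shape of the problem can be illustrated with a toy entropy-regularized inversion (all numbers invented; the paper's method of maximum entropy in the mean is more elaborate than the plain entropy objective used here):

        import numpy as np
        from scipy.optimize import minimize

        # Two coupon bonds, four annual cash-flow dates: the discount factors
        # d are underdetermined by the price constraints C @ d = prices, and
        # entropy maximization selects one solution among the feasible set.
        C = np.array([[3.0,   3.0, 3.0, 103.0],    # bond 1: 3% coupon, 4 years
                      [5.0, 105.0, 0.0,   0.0]])   # bond 2: 5% coupon, 2 years
        prices = np.array([100.0, 102.0])

        neg_entropy = lambda d: np.sum(d * np.log(d))   # minimize = maximize entropy

        res = minimize(neg_entropy, x0=np.full(4, 0.9),
                       bounds=[(1e-9, 1.0)] * 4,
                       constraints=[{"type": "eq", "fun": lambda d: C @ d - prices}])
        print(res.x)   # discount factors consistent with both bond prices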

  7. Bayesian Maximum Entropy Approach to Mapping Soil Moisture at the Field Scale

    NASA Astrophysics Data System (ADS)

    Dong, J.; Ochsner, T.; Cosh, M. H.

    2012-12-01

    The study of soil moisture spatial variability at the field scale is important to aid in modeling hydrological processes at the land surface. The Bayesian Maximum Entropy (BME) framework is a more general method than classical geostatistics and has not yet been applied to soil moisture spatial estimation. This research compares the effectiveness of BME versus kriging estimators for spatial prediction of soil moisture at the field scale. Surface soil moisture surveys were conducted in a 227 ha pasture at the Marena, Oklahoma In Situ Sensor Testbed (MOISST) site. Remotely sensed vegetation data will be incorporated into the soil moisture spatial prediction using the BME method. Soil moisture maps based on the BME and traditional kriging frameworks will be cross-validated and compared.

  8. Frequency-domain localization of alpha rhythm in humans via a maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Patel, Pankaj; Khosla, Deepak; Al-Dayeh, Louai; Singh, Manbir

    1997-05-01

    Generators of spontaneous human brain activity such as the alpha rhythm may be easier and more accurate to localize in the frequency domain than in the time domain, since these generators are characterized by a specific frequency range. We carried out a frequency-domain analysis of synchronous alpha sources by generating equivalent potential maps using the Fourier transform of each channel of electroencephalographic (EEG) recordings. Since the alpha rhythm recorded by EEG scalp measurements is probably produced by several independent generators, a distributed source imaging approach was considered more appropriate than a model based on a single equivalent current dipole. We used an imaging approach based on a Bayesian maximum entropy technique. Reconstructed sources were superposed on the corresponding anatomy from magnetic resonance imaging. Results from human studies suggest that the reconstructed sources responsible for the alpha rhythm are mainly located in the occipital and parieto-occipital lobes.

  9. Maximum Entropy of Effective Reaction Theory of Steady Non-ideal Detonation

    NASA Astrophysics Data System (ADS)

    Watt, Simon; Braithwaite, Martin; Byers Brown, William; Falle, Samuel; Sharpe, Gary

    2009-06-01

    According to the theory of Byers Brown, in a steady-state detonation the entropy production between the shock and the sonic locus is a maximum in a self-sustaining wave. This has been shown to hold true for all one-dimensional cases. For 2D steady curved detonation waves in a slab or cylindrical stick of explosive, Byers Brown suggested a novel variational approach for maximising the global entropy generation within the detonation driving zone, hence providing the solution of the self-sustaining detonation wave problem. Preliminary applications of such a variational technique, albeit with simplifying assumptions, demonstrate its potential to provide a rapid and accurate solution method for the problem. In this paper, recent progress in the development of the 2D variational technique and validation of the maximum entropy concept are reported. The predictions of the theory are compared with high-resolution numerical simulations and with the predictions of existing Detonation Shock Dynamics theory.

  10. Extraction of spectral functions from Dyson-Schwinger studies via the maximum entropy method

    SciTech Connect

    Nickel, Dominik

    2007-08-15

    It is shown how to apply the Maximum Entropy Method (MEM) to numerical Dyson-Schwinger studies for the extraction of spectral functions of correlators from their corresponding Euclidean propagators. Differences to the application in lattice QCD are emphasized and, as an example, the spectral functions of massless quarks in cold and dense matter are presented.

  11. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the preservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which can smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
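
    The forward model described above (chromatogram = peak-shape matrix times concentration vector) is easily written down in toy form, with a non-negative least-squares inversion standing in as the baseline that an entropy prior would refine; the peak shapes and positions below are invented:

        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(0.0, 10.0, 400)

        def peak(center, width=0.4):
            # Gaussian stand-in for the instrument's peak shape function
            return np.exp(-0.5 * ((t - center) / width)**2)

        # Peak-shape matrix: one column per candidate elution time.
        S = np.stack([peak(c) for c in np.arange(0.5, 10.0, 0.5)], axis=1)

        # Two genuinely overlapped peaks plus detector noise.
        c_true = np.zeros(S.shape[1])
        c_true[[6, 7]] = [1.0, 0.6]
        rng = np.random.default_rng(3)
        y = S @ c_true + 0.01 * rng.normal(size=t.size)

        c_hat, _ = nnls(S, y)   # MEM would add an entropy prior to this step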

  12. Maximum entropy, fractal dimension and lacunarity in quantification of cellular rejection in myocardial biopsy of patients submitted to heart transplantation

    NASA Astrophysics Data System (ADS)

    Neves, L. A.; Oliveira, F. R.; Peres, F. A.; Moreira, R. D.; Moriel, A. R.; de Godoy, M. F.; Murta Junior, L. O.

    2011-03-01

    This paper presents a method for the quantification of cellular rejection in endomyocardial biopsies of patients submitted to heart transplant. The model is based on automatic multilevel thresholding, which employs histogram quantification techniques, histogram slope percentage analysis and the calculation of maximum entropy. The structures were quantified with the aid of the multi-scale fractal dimension and lacunarity for the identification of behavior patterns in myocardial cellular rejection in order to determine the most adequate treatment for each case.

  13. Exploring the concept of maximum entropy production for the local atmosphere-glacier system

    NASA Astrophysics Data System (ADS)

    Mölg, Thomas

    2015-06-01

    The concept of maximum entropy production (MEP) is closely linked to the second law of thermodynamics, which explains spontaneous processes in the universe. In geophysics, studies have argued that planetary atmospheres and various subsystems of Earth also operate at maximum dissipation through MEP. One of the debates, however, has concerned the degree of empirical support. This article extends the topic by considering measurements from a high-altitude, cold glacier in the tropical atmosphere and a numerical model, which represents the open and nonequilibrium system of glacier-air exchanges. Results reveal that several sensitive system parameters, which are mainly tied to the shortwave radiation budget, cause MEP states at values that coincide closely with the in situ observations. Parameters that set up the forcing of the whole system, however, do not show this pattern. Empirical support for the detection of MEP states, therefore, is limited to parameters that regulate the internal efficiency of energy flow in the glacier. System constraints are shown to affect the solutions, yet not critically in the case of the two most sensitive parameters. In terms of MEP and geophysical fluids, the results suggest that the local atmosphere-glacier system might be of relevance in the further discussion. For practical purposes, the results hold promise for using MEP in single or multiparameter optimization for process-based mass balance models of glaciers.

  14. Ecosystem functioning and maximum entropy production: a quantitative test of hypotheses

    PubMed Central

    Meysman, Filip J. R.; Bruers, Stijn

    2010-01-01

    The idea that entropy production puts a constraint on ecosystem functioning is quite popular in ecological thermodynamics. Yet, until now, such claims have received little quantitative verification. Here, we examine three ‘entropy production’ hypotheses that have been forwarded in the past. The first states that increased entropy production serves as a fingerprint of living systems. The other two hypotheses invoke stronger constraints. The state selection hypothesis states that when a system can attain multiple steady states, the stable state will show the highest entropy production rate. The gradient response principle requires that when the thermodynamic gradient increases, the system's new stable state should always be accompanied by a higher entropy production rate. We test these three hypotheses by applying them to a set of conventional food web models. Each time, we calculate the entropy production rate associated with the stable state of the ecosystem. This analysis shows that the first hypothesis holds for all the food webs tested: the living state always shows an increased entropy production over the abiotic state. In contrast, the state selection and gradient response hypotheses break down when the food web incorporates more than one trophic level, indicating that they are not generally valid. PMID:20368259

  15. Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production

    PubMed Central

    Belkin, A.; Hubler, A.; Bezryadin, A.

    2015-01-01

    While the behavior of equilibrium systems is well understood, the evolution of nonequilibrium ones is much less clear. Yet, many researchers have suggested that the principle of maximum entropy production is of key importance in complex systems away from equilibrium. Here, we present a quantitative study of large ensembles of carbon nanotubes suspended in a non-conducting non-polar fluid subject to a strong electric field. Being driven out of equilibrium, the suspension spontaneously organizes into an electrically conducting state under a wide range of parameters. Such self-assembly allows the Joule heating, and therefore the entropy production in the fluid, to be maximized. Curiously, we find that the emerging self-assembled structures can start to wiggle. The wiggling takes place only until the entropy production in the suspension reaches its maximum, at which time the wiggling stops and the structure becomes quasi-stable. Thus, we provide strong evidence that the maximum entropy production principle plays an essential role in the evolution of self-organizing systems far from equilibrium. PMID:25662746

  16. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  17. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    PubMed

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  18. Structural damage assessment using linear approximation with maximum entropy and transmissibility data

    NASA Astrophysics Data System (ADS)

    Meruane, V.; Ortiz-Bernardin, A.

    2015-03-01

    Supervised learning algorithms have been proposed as a suitable alternative to model updating methods in structural damage assessment, with Artificial Neural Networks being the most frequently used. Notwithstanding, the slow learning speed and the large number of parameters that need to be tuned within the training stage have been a major bottleneck in their application. This article presents a new algorithm for real-time damage assessment that uses a linear approximation method in conjunction with antiresonant frequencies that are identified from transmissibility functions. The linear approximation is handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data are processed in a period of time comparable to that of Neural Networks. The performance of the proposed methodology is validated by considering three experimental structures: an eight-degree-of-freedom (DOF) mass-spring system, a beam, and an exhaust system of a car. To demonstrate the potential of the proposed algorithm over existing ones, the obtained results are compared with those of a model updating method based on parallel genetic algorithms and a multilayer feedforward neural network approach.

  19. Entanglement entropy in top-down models

    NASA Astrophysics Data System (ADS)

    Jones, Peter A. R.; Taylor, Marika

    2016-08-01

    We explore holographic entanglement entropy in ten-dimensional supergravity solutions. It has been proposed that entanglement entropy can be computed in such top-down models using minimal surfaces which asymptotically wrap the compact part of the geometry. We show explicitly in a wide range of examples that the holographic entanglement entropy thus computed agrees with the entanglement entropy computed using the Ryu-Takayanagi formula from the lower-dimensional Einstein metric obtained from reduction over the compact space. Our examples include not only consistent truncations but also cases in which no consistent truncation exists and Kaluza-Klein holography is used to identify the lower-dimensional Einstein metric. We then give a general proof, based on the Lewkowycz-Maldacena approach, of the top-down entanglement entropy formula.

  20. Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity

    SciTech Connect

    Ortiz, A; Puso, M A; Sukumar, N

    2009-09-04

    Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.

  1. Maximum entropy deconvolution of the optical jet of 3C 273

    NASA Technical Reports Server (NTRS)

    Evans, I. N.; Ford, H. C.; Hui, X.

    1989-01-01

    The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.

  2. Maximum information entropy principle and the interpretation of probabilities in statistical mechanics - a short review

    NASA Astrophysics Data System (ADS)

    Kuić, Domagoj

    2016-05-01

    In this paper an alternative approach to statistical mechanics based on the maximum information entropy principle (MaxEnt) is examined, specifically its close relation with the Gibbs method of ensembles. It is shown that the MaxEnt formalism is the logical extension of the Gibbs formalism of equilibrium statistical mechanics that is entirely independent of the frequentist interpretation of probabilities only as factual (i.e. experimentally verifiable) properties of the real world. Furthermore, we show that, consistently with the law of large numbers, the relative frequencies of the ensemble of systems prepared under identical conditions (i.e. identical constraints) actually correspond to the MaxEnt probabilities in the limit of a large number of systems in the ensemble. This result implies that the probabilities in statistical mechanics can be interpreted, independently of the frequency interpretation, on the basis of the maximum information entropy principle.

  3. REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.

    SciTech Connect

    UMEDA, T.; MATSUFURU, H.

    2005-07-25

    We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should examine to ensure the reliability of the results, and also apply them using mock and lattice QCD data.

  4. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    SciTech Connect

    Barletti, Luigi

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  5. Mixed memory, (non) Hurst effect, and maximum entropy of rainfall in the tropical Andes

    NASA Astrophysics Data System (ADS)

    Poveda, Germán

    2011-02-01

    Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time and the kind of temporal memory are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min, and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ~ T^β with ⟨β⟩ = 0.51, up to a timescale T_MaxEnt (70-202 h) at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables, and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, such that S_q(T) ~ T^{β(q)}, with β(q) ⩽ 0 for q ⩽ 0, and β(q) ≃ 0.5 for q ⩾ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.
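
    The entropy-growth analysis lends itself to a short numerical sketch: aggregate the series over windows of length T, estimate the Shannon entropy from a histogram, and fit the exponent β of S(T) ~ T^β. The synthetic gamma-distributed series below merely stands in for the Andean rainfall records:

        import numpy as np

        def entropy_at_scale(x, T, bins=30):
            """Shannon entropy of the series aggregated over windows of length T."""
            n = (len(x) // T) * T
            agg = x[:n].reshape(-1, T).sum(axis=1)
            counts, _ = np.histogram(agg, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -(p * np.log(p)).sum()

        rng = np.random.default_rng(5)
        rain = rng.gamma(0.1, 2.0, size=200_000)   # intermittent toy series

        scales = np.array([2, 4, 8, 16, 32, 64, 128])
        S = np.array([entropy_at_scale(rain, T) for T in scales])
        beta = np.polyfit(np.log(scales), np.log(S), 1)[0]   # slope of S(T) ~ T^beta
        print(beta)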

  6. Scour development around submarine pipelines due to current based on the maximum entropy theory

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Shi, Bing; Guo, Yakun; Xu, Weilin; Yang, Kejun; Zhao, Enjin

    2016-10-01

    This paper presents results from laboratory experiments and theoretical analysis investigating the development of scour around a submarine pipeline under steady current conditions. Experiments show that the scour process takes place in two stages: an initial rapid scour stage and a subsequent gradual scour development stage. An empirical formula for calculating the equilibrium scour depth (the maximum scour depth) is developed by using the regression method. This formula, together with the maximum entropy theory, can be applied to establish a formula to predict the scour process for a given water depth, pipeline diameter and flow velocity. Good agreement between the predicted and measured scour depths is obtained.

  7. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    PubMed

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences, which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions. PMID:26282441

  8. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  9. Maximum entropy inference of seabed attenuation parameters using ship radiated broadband noise.

    PubMed

    Knobles, D P

    2015-12-01

    The received acoustic field generated by a single passage of a research vessel on the New Jersey continental shelf is employed to infer probability distributions for the parameter values representing the frequency dependence of the seabed attenuation and the source levels of the ship. The statistical inference approach employed in the analysis is a maximum entropy methodology. The average value of the error function, needed to uniquely specify a conditional posterior probability distribution, is estimated with data samples from time periods in which the ship-receiver geometry is dominated by either the stern or bow aspect. The existence of ambiguities between the source levels and the environmental parameter values motivates an attempt to partially decouple these parameter values. The main result is the demonstration that parameter values for the attenuation (α and the frequency exponent), the sediment sound speed, and the source levels can be resolved through a model space reduction technique. The results of this multi-step statistical inference developed for ship radiated noise are then tested by processing towed source data over the same bandwidth and source track to estimate continuous wave source levels that were measured independently with a reference hydrophone on the tow body. PMID:26723313

  10. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production.

    PubMed

    Kleidon, A

    2010-05-12

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.

  12. Learning probabilities from random observables in high dimensions: the maximum entropy distribution and others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Remi

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which boosts distributions according to their entropy, with an arbitrary `temperature'. The choice of the temperature allows us to interpolate between the flat measure over all the distributions and the pointwise measure concentrated at the maximum entropy distribution. Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space. Some phase transitions are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, the distance R does not vary with the temperature, meaning that the maximum entropy distribution is not closer to the target distribution than any other distribution in the version space.
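
    The basic inference setting studied here can be reproduced numerically for small N. The sketch below (sizes and learning rate are illustrative) fits the maximum entropy distribution p(s) ∝ exp(Σ_a λ_a O_a(s)) matching M random observables by gradient ascent on the dual, enumerating all 2^N states:

        import itertools
        import numpy as np

        N, M = 8, 10
        rng = np.random.default_rng(1)
        states = np.array(list(itertools.product([0, 1], repeat=N)), float)
        O = rng.choice([-1.0, 1.0], size=(M, N))   # random observables
        feats = states @ O.T                       # O_a(s) for every state

        target = rng.uniform(size=2 ** N)          # an arbitrary target distribution
        target /= target.sum()
        mu = feats.T @ target                      # observed expectation values

        lam = np.zeros(M)
        for _ in range(2000):                      # dual ascent: grad = mu - <O>_lam
            logp = feats @ lam
            p = np.exp(logp - logp.max())
            p /= p.sum()
            lam += 0.05 * (mu - feats.T @ p)

    The replica analysis in the paper addresses what this simple fit cannot: how close such a distribution is to the target as M grows, over the whole version space.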

  13. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    SciTech Connect

    Trovato, M.; Reggiani, L.

    2011-12-15

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  14. A Maximum-Entropy approach for accurate document annotation in the biomedical domain.

    PubMed

    Tsatsaronis, George; Macari, Natalia; Torge, Sunna; Dietze, Heiko; Schroeder, Michael

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for the relevant information and makes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to terms' ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) in the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Tree classification approach, and we show that the Maximum Entropy based approach performed with a higher F-measure in both ambiguous and monosemous MeSH terms.
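
    A multinomial logistic regression is exactly a Maximum Entropy classifier, so the core of such an annotator can be sketched with scikit-learn (the corpus and labels below are toy stand-ins for MeSH-annotated training abstracts):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        docs = ["myocardial infarction risk in hypertensive patients",
                "gene expression profiling of breast carcinoma",
                "beta blocker therapy after acute myocardial infarction",
                "somatic mutations in carcinoma cell lines"]
        labels = ["Myocardial Infarction", "Carcinoma",
                  "Myocardial Infarction", "Carcinoma"]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(docs, labels)                  # MaxEnt = multinomial logistic
        print(model.predict(["carcinoma biopsy findings"]))

    A production annotator would train such a model (or a large multilabel variant) on many thousands of annotated documents rather than a toy corpus.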

  15. Application of a multiscale maximum entropy image restoration algorithm to HXMT observations

    NASA Astrophysics Data System (ADS)

    Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi

    2016-08-01

    This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), which is a collimated scan X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1–250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is adapted to the deconvolution task of HXMT for diffuse source detection and the improved method could suppress noise and improve the correlation and signal-to-noise ratio, thus proving itself a better algorithm for image restoration. Through one all-sky survey, HXMT could reach a capacity of detecting a diffuse source with maximum differential flux of 0.5 mCrab. Supported by Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and National Natural Science Foundation of China (11403014)

  17. A homotopy algorithm for synthesizing robust controllers for flexible structures via the maximum entropy design equations

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    1990-01-01

    One well-known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.

  18. Reply to ``Comment on `Mobility spectrum computational analysis using a maximum entropy approach' ''

    NASA Astrophysics Data System (ADS)

    Mironov, O. A.; Myronov, M.; Kiatgamolchai, S.; Kantser, V. G.

    2004-03-01

    In their Comment [J. Antoszewski, D. D. Redfern, L. Faraone, J. R. Meyer, I. Vurgaftman, and J. Lindemuth, Phys. Rev. E 69, 038701 (2004)] on our paper [S. Kiatgamolchai, M. Myronov, O. A. Mironov, V. G. Kantser, E. H. C. Parker, and T. E. Whall, Phys. Rev. E 66, 036705 (2002)] the authors present computational results obtained with the improved quantitative mobility spectrum analysis technique implemented in the commercial software of Lake Shore Cryotronics. We suggest that this is just information additional to the mobility spectrum analysis (MSA) in general without any direct relation to our maximum entropy MSA (ME-MSA) algorithm.

  19. Maximum entropy and the stress distribution in soft disk packings above jamming.

    PubMed

    Wu, Yegang; Teitel, S

    2015-08-01

    We show that the maximum entropy hypothesis can successfully explain the distribution of stresses on compact clusters of particles within disordered mechanically stable packings of soft, isotropically stressed, frictionless disks above the jamming transition. We show that, in our two-dimensional case, it becomes necessary to consider not only the stress but also the Maxwell-Cremona force-tile area as a constraining variable that determines the stress distribution. The importance of the force-tile area had been suggested by earlier computations on an idealized force-network ensemble. PMID:26382394

  20. High resolution VLBI polarisation imaging of AGN with the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Coughlan, Colm P.; Gabuzda, Denise C.

    2016-08-01

    Radio polarisation images of the jets of Active Galactic Nuclei (AGN) can provide a deep insight into the launching and collimation mechanisms of relativistic jets. However, even at VLBI scales, resolution is often a limiting factor in the conclusions that can be drawn from observations. The Maximum Entropy Method (MEM) is a deconvolution algorithm that can outperform the more common CLEAN algorithm in many cases, particularly when investigating structures present on scales comparable to or smaller than the nominal beam size with "super-resolution". A new implementation of the MEM suitable for single- or multiple-wavelength VLBI polarisation observations has been developed and is described here. Monte Carlo simulations comparing the performances of CLEAN and MEM at reconstructing the properties of model images are presented; these demonstrate the enhanced reliability of MEM over CLEAN when images of the fractional polarisation and polarisation angle are constructed using convolving beams that are appreciably smaller than the full CLEAN beam. The results of using this new MEM software to image VLBA observations of the AGN 0716+714 at six different wavelengths are presented, and compared to corresponding maps obtained with CLEAN. MEM and CLEAN maps of Stokes I, the polarised flux, the fractional polarisation and the polarisation angle are compared for convolving beams ranging from the full CLEAN beam down to a beam one-third of this size. MEM's ability to provide more trustworthy polarisation imaging than a standard CLEAN-based deconvolution when the convolving beam is appreciably smaller than the full CLEAN beam is discussed.

  2. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.

    PubMed

    Bergeron, Dominic; Tremblay, A-M S

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ^{2} with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software. PMID:27627408

  3. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
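
    The role of the entropy weight α can be demonstrated on a toy continuation problem. The sketch below (a generic fermionic kernel and a flat default model; none of this is the authors' optimized code) minimizes χ²/2 − αS for a few values of α; the crossover in χ²(α) from information-fitting to noise-fitting is the consistency idea described above:

        import numpy as np
        from scipy.optimize import minimize

        beta, n_tau, n_w = 10.0, 40, 80
        tau = np.linspace(0.0, beta, n_tau)
        w = np.linspace(-8.0, 8.0, n_w); dw = w[1] - w[0]
        K = np.exp(-tau[:, None] * w[None, :]) / (1.0 + np.exp(-beta * w))

        A_true = 0.6 * np.exp(-(w - 2.0) ** 2) + 0.4 * np.exp(-(w + 1.5) ** 2 / 0.5)
        A_true /= A_true.sum() * dw                  # normalized test spectrum
        sigma = 1e-3                                 # noise level of the data
        G = K @ A_true * dw + sigma * np.random.default_rng(0).normal(size=n_tau)

        D = np.full(n_w, 1.0 / (n_w * dw))           # flat default model

        def objective(u, alpha):                     # A = D*exp(u) keeps A > 0
            A = D * np.exp(u)
            chi2 = np.sum((K @ A * dw - G) ** 2) / sigma ** 2
            S = -np.sum(A * np.log(A / D)) * dw      # simple relative entropy
            return 0.5 * chi2 - alpha * S

        for alpha in (1e4, 1e2, 1e0):
            res = minimize(objective, np.zeros(n_w), args=(alpha,),
                           method="L-BFGS-B", bounds=[(-20, 5)] * n_w)
            A = D * np.exp(res.x)
            chi2 = np.sum((K @ A * dw - G) ** 2) / sigma ** 2
            print(f"alpha={alpha:g}  chi2={chi2:.1f}")   # flattens at small alpha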

  4. Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo

    2015-11-01

    In every X-ray spectroscopy measurement the influence of the detection system causes loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the energy resolution, and, in solid-state detectors (SSDs), the charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of the known a priori information and preserving the positive-defined character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. UMESTRAT has previously proved capable of resolving characteristic peaks that appeared overlapped in Si SSD measurements, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons of the spectrum, which can easily be determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding from three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered as a software improvement of the detector resolution.

  5. Spectral maximum entropy hydrodynamics of fermionic radiation: a three-moment system for one-dimensional flows

    NASA Astrophysics Data System (ADS)

    Banach, Zbigniew; Larecki, Wieslaw

    2013-06-01

    The spectral formulation of the nine-moment radiation hydrodynamics resulting from using the Boltzmann entropy maximization procedure is considered. The analysis is restricted to the one-dimensional flows of a gas of massless fermions. The objective of the paper is to demonstrate that, for such flows, the spectral nine-moment maximum entropy hydrodynamics of fermionic radiation is not a purely formal theory. We first determine the domains of admissible values of the spectral moments and of the Lagrange multipliers corresponding to them. We then prove the existence of a solution to the constrained entropy optimization problem. Due to the strict concavity of the entropy functional defined on the space of distribution functions, there exists a one-to-one correspondence between the Lagrange multipliers and the moments. The maximum entropy closure of moment equations results in the symmetric conservative system of first-order partial differential equations for the Lagrange multipliers. However, this system can be transformed into the equivalent system of conservation equations for the moments. These two systems are consistent with the additional conservation equation interpreted as the balance of entropy. Exploiting the above facts, we arrive at the differential relations satisfied by the entropy function and the additional function required to close the system of moment equations. We refer to this additional function as the moment closure function. In general, the moment closure and entropy-entropy flux functions cannot be explicitly calculated in terms of the moments determining the state of a gas. Therefore, we develop a perturbation method of calculating these functions. Some additional analytical (and also numerical) results are obtained, assuming that the maximum entropy distribution function tends to the Maxwell-Boltzmann limit.

  6. Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in Korean Peninsular

    NASA Astrophysics Data System (ADS)

    Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.

    2015-12-01

    The number of forest fires, and the accompanying human injuries and physical damage, has increased with frequent droughts. In this study, forest fire danger zones of Korea are estimated to predict and prepare for future forest fire hazards. The MaxEnt (Maximum Entropy) model, which estimates the probability distribution of occurrence, is used to delineate the forest fire hazard regions. The MaxEnt model was developed primarily for the analysis of species distributions, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS over the past 5 years (2010-2014) are used as occurrence data for the model, and meteorological, topographic and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, dry-season precipitation, annual effective humidity, dry-season effective humidity and aridity index. The result was valid based on the AUC (Area Under the Curve) value (= 0.805), which is used to assess prediction accuracy in the MaxEnt model, and the predicted forest fire locations corresponded closely with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as TWI (Topographic Wetness Index) and slope also contributed to forest fire occurrence. As a result, the east coast and the southern part of the Korean peninsula were predicted to have a high forest fire risk, whereas high-altitude mountain areas and the west coast appeared to be safe from forest fire. These results are similar to those of former studies, indicating a high risk of forest fire in accessible areas, and reflect the climatic characteristics of the east and south in the dry season. To sum up, we estimated the forest fire hazard zone with existing forest fire locations and environment variables and had
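
    The presence/background core of a MaxEnt hazard model can be sketched quickly: logistic regression on presence points versus random background points is a close, widely used stand-in for the MaxEnt species-distribution formulation. All covariate values below are invented for illustration:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        # Hypothetical covariates: [effective humidity, slope, TWI]
        X_fire = rng.normal([40.0, 15.0, 6.0], [5.0, 4.0, 1.0], size=(200, 3))
        X_back = rng.normal([55.0, 10.0, 5.0], [10.0, 6.0, 2.0], size=(1000, 3))
        X = np.vstack([X_fire, X_back])
        y = np.r_[np.ones(200), np.zeros(1000)]      # presence vs background

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))

    An AUC computed this way (here on training data, for brevity) is the same figure of merit as the 0.805 reported above; a proper evaluation would hold out test points.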

  7. Maximum entropy production, carbon assimilation, and the spatial organization of vegetation in river basins

    PubMed Central

    del Jesus, Manuel; Foti, Romano; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2012-01-01

    The spatial organization of functional vegetation types in river basins is a major determinant of their runoff production, biodiversity, and ecosystem services. The optimization of different objective functions has been suggested to control the adaptive behavior of plants and ecosystems, often without a compelling justification. Maximum entropy production (MEP), rooted in thermodynamics principles, provides a tool to justify the choice of the objective function controlling vegetation organization. The application of MEP at the ecosystem scale results in maximum productivity (i.e., maximum canopy photosynthesis) as the thermodynamic limit toward which the organization of vegetation appears to evolve. Maximum productivity, which incorporates complex hydrologic feedbacks, allows us to reproduce the spatial macroscopic organization of functional types of vegetation in a thoroughly monitored river basin, without the need for a reductionist description of the underlying microscopic dynamics. The methodology incorporates the stochastic characteristics of precipitation and the associated soil moisture on a spatially disaggregated framework. Our results suggest that the spatial organization of functional vegetation types in river basins naturally evolves toward configurations corresponding to dynamically accessible local maxima of the maximum productivity of the ecosystem. PMID:23213227

  8. Structural study of sodium-type zeolite LTA by combination of Rietveld and maximum-entropy methods

    SciTech Connect

    Ikeda, T.; Kamiyama, T.; Izumi, F.; Kodaira, T.

    1998-12-01

    The electron-density distribution in hydrated and dehydrated sodium-type zeolite LTA was visualized from X-ray powder diffraction data by combining Rietveld analysis and a maximum-entropy method. The X-ray diffraction data were analyzed on the basis of the composition Na₉₅Si₉₇Al₉₅O₃₈₄ and a structural model (space group Fm-3c) obtained by the Rietveld refinement of time-of-flight neutron powder diffraction data for the dehydrated sample. Weighted R factors, wR_F, resulting from the analysis by the maximum-entropy method reached 1.15% for the hydrated sample and 0.99% for the dehydrated one. The number of water molecules per unit cell in the hydrated sample was estimated to be ca. 255 by electron-density analysis, agreeing well with ca. 248 determined by thermogravimetry. Adsorbed water molecules are situated beside Na⁺ ions in electron-density maps. In contrast with the dehydrated sample, Na⁺ ions in eight-membered rings were moved a little by introducing water into the α-cages.

  9. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  11. Learning Gaussian mixture models with entropy-based criteria.

    PubMed

    Penalver Benavent, Antonio; Escolano Ruiz, Francisco; Saez, Juan Manuel

    2009-11-01

    In this paper, we address the problem of estimating the parameters of Gaussian mixture models. Although the expectation-maximization (EM) algorithm yields the maximum-likelihood (ML) solution, its sensitivity to the selection of the starting parameters is well known and it may converge to the boundary of the parameter space. Furthermore, the resulting mixture depends on the number of selected components, but the optimal number of kernels may be unknown beforehand. We introduce the use of the entropy of the probability density function (pdf) associated with each kernel to measure the quality of a given mixture model with a fixed number of kernels. We propose two methods to approximate the entropy of each kernel and a modification of the classical EM algorithm in order to find the optimum number of components of the mixture. Moreover, we use two stopping criteria: a novel global mixture entropy-based criterion called Gaussianity deficiency (GD) and a minimum description length (MDL) principle-based one. Our algorithm, called entropy-based EM (EBEM), starts with a unique kernel and performs only splitting, selecting the worst kernel according to GD. We have successfully tested it in probability density estimation, pattern classification, and color image segmentation. Experimental results improve on those of other state-of-the-art model order selection methods. PMID:19770090

  12. A maximum (non-extensive) entropy approach to equity options bid-ask spread

    NASA Astrophysics Data System (ADS)

    Tapiero, Oren J.

    2013-07-01

    The cross-section of options bid-ask spreads with their strikes is modelled by maximising the Kaniadakis entropy. A theoretical model results in which the bid-ask spread depends explicitly on the implied volatility, the probability of expiring at-the-money and an asymmetric information parameter (κ). Considering AIG as a test case for the period between January 2006 and October 2008, we find that information flows uniquely from the trading activity in the underlying asset to its derivatives, suggesting that κ is possibly an option-implied measure of the current state of trading liquidity in the underlying asset.

  13. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined through maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with a reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  14. Optical Spectrum Analysis of Real-Time TDDFT Using the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Toogoshi, M.; Kato, M.; Kano, S. S.; Zempo, Y.

    2014-05-01

    In the calculation of time-dependent density-functional theory in real time, we apply an external field to perturb the optimized electronic structure, and follow the time evolution of the dipole moment to calculate the oscillator strength distribution. We solve the time-dependent equation of motion, keeping track of the dipole moment as time-series data. We adopt Burg's maximum entropy method (MEM) to compute the spectrum of the oscillator strength, and apply this technique to several molecules. We find that MEM provides the oscillator strength distribution at high resolution even with half of the evolution time required by a simple FFT of the dynamic dipole moment. In this paper we show the effectiveness and efficiency of MEM in comparison with FFT. Not only the total number of time steps but also the length of the autocorrelation, the lag, plays an important role in improving the resolution of the spectrum.
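
    Burg's MEM itself is compact enough to sketch in full. The implementation below is the standard forward/backward prediction-error recursion (not the authors' code); the demo recovers a spectral line from a short noisy cosine, which illustrates the resolution advantage over a plain FFT described above:

        import numpy as np

        def burg_psd(x, order, nfft=1024):
            # Burg maximum entropy spectrum: fit an AR(order) model by
            # minimizing forward+backward prediction error, then evaluate
            # the PSD from the AR polynomial.
            x = np.asarray(x, dtype=float)
            f = x.copy(); b = x.copy()          # forward/backward errors
            a = np.array([1.0])                 # AR polynomial, a[0] = 1
            E = np.dot(x, x) / len(x)           # prediction-error power
            for m in range(order):
                fm, bm = f[m + 1:], b[m:-1]
                k = -2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
                f[m + 1:], b[m + 1:] = fm + k * bm, bm + k * fm
                a = np.concatenate((a, [0.0]))
                a = a + k * a[::-1]             # Levinson-style update
                E *= 1.0 - k * k
            freqs = np.fft.rfftfreq(nfft)
            return freqs, E / np.abs(np.fft.rfft(a, nfft)) ** 2

        n = np.arange(128)                      # deliberately short record
        x = np.cos(2 * np.pi * 0.12 * n) \
            + 0.1 * np.random.default_rng(1).normal(size=128)
        freqs, psd = burg_psd(x, order=8)
        print("peak at f =", freqs[np.argmax(psd)])   # close to 0.12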

  15. On the stability of the moments of the maximum entropy wind wave spectrum

    SciTech Connect

    Pena, H.G.

    1983-03-01

    The stability of some common wind wave parameters, computed in terms of the moments of the wave energy spectrum, has been numerically investigated as a function of the high-frequency cut-off and the degrees of freedom of the spectrum. From the Pierson-Moskowitz wave spectrum type, a sea surface profile is simulated and its wave energy spectrum is estimated by the Maximum Entropy Method (MEM). As the degrees of freedom of the MEM spectral estimation are varied, the results show a much better stability of the wave parameters as compared to the classical periodogram and correlogram spectral approaches. The stability of wave parameters as a function of high-frequency cut-off shows the same behaviour as obtained with the classical techniques.

  16. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that, for bursts and heavy tails in human dynamics, a fitted power exponent of less than 1.0 cannot correspond to a pure power-law distribution but must carry an exponential cutoff, which may have been ignored in previous studies.
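
    The claimed form can be checked by maximum likelihood on data. The sketch below (synthetic data, arbitrary parameter values) fits p(t) ∝ t^(−g) e^(−t/τ) for t ≥ tmin, with the normalization written via the upper incomplete gamma function, valid for the g < 1 regime discussed above:

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize
        from scipy.special import gammaincc, gamma as gamma_fn

        rng = np.random.default_rng(6)
        g_true, tau_true, tmin = 0.7, 50.0, 1.0
        law = stats.gamma(a=1.0 - g_true, scale=tau_true)   # same density shape
        u = rng.uniform(law.cdf(tmin), 1.0, size=5000)
        t = law.ppf(u)                                      # truncated samples

        def nll(theta):
            g, tau = theta
            if not (0.0 < g < 1.0 and tau > 0.0):
                return np.inf
            # Z = tau^(1-g) * Gamma(1-g, tmin/tau), upper incomplete gamma
            logZ = (1.0 - g) * np.log(tau) \
                   + np.log(gammaincc(1.0 - g, tmin / tau) * gamma_fn(1.0 - g))
            return np.sum(g * np.log(t) + t / tau) + t.size * logZ

        res = minimize(nll, x0=[0.5, 20.0], method="Nelder-Mead")
        print(res.x)                                        # near (0.7, 50.0)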

  17. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
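
    A toy version of the underlying maximum-entropy reconstruction problem (not the paper's Fenchel-dual optimal-grid scheme) can be run with MART-style multiplicative updates, which for strictly positive data converge to the maximum Boltzmann-Shannon entropy image consistent with the projections; with two orthogonal projections this reduces to iterative proportional fitting:

        import numpy as np

        n = 16
        truth = np.full((n, n), 0.05)          # strictly positive background
        truth[4:12, 6:10] = 1.0                # bright block
        rows, cols = truth.sum(axis=1), truth.sum(axis=0)

        x = np.full((n, n), truth.sum() / n ** 2)   # flat, max-entropy start
        for _ in range(100):
            x *= (rows / x.sum(axis=1))[:, None]    # enforce row sums
            x *= (cols / x.sum(axis=0))[None, :]    # enforce column sums
        print(abs(x.sum(axis=1) - rows).max())      # constraints met

    With only two projections the block is of course not recovered exactly; the maximum-entropy image is the least committal one consistent with the data, which is precisely why the number of projections sets the achievable grid resolution.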

  18. Verification and validation of the maximum entropy method for reconstructing neutron flux, with MCNP5, Attila-7.1.0 and the GODIVA experiment

    SciTech Connect

    Douglas S. Crawford; Tony Saad; Terry A. Ring

    2013-03-01

    Verification and validation of reconstructed neutron flux based on the maximum entropy method is presented in this paper. The verification is carried out by comparing the neutron flux spectrum from the maximum entropy method with Monte Carlo N-Particle 5 version 1.40 (MCNP5) and Attila-7.1.0-beta (Attila). A spherical 100% ²³⁵U critical assembly is modeled as the test case to compare the three methods. The verification error range for the maximum entropy method is 15–21%, where MCNP5 is taken to be the comparison standard. The Attila relative error for the critical assembly is 20–35%. Validation is accomplished by comparing against a neutron flux spectrum back-calculated from foil activation measurements performed in the GODIVA experiment (GODIVA). The error range of the reconstructed flux compared to GODIVA is 0–10%. The error range of the neutron flux spectrum from MCNP5 compared to GODIVA is 0–20%, and the Attila error range compared to GODIVA is 0–35%. The maximum entropy method is shown to be a fast, reliable method compared to either Monte Carlo methods (MCNP5) or 30-energy-group methods (Attila), and with respect to the GODIVA experiment.

  19. Forest Tree Species Distribution Mapping Using Landsat Satellite Imagery and Topographic Variables with the Maximum Entropy Method in Mongolia

    NASA Astrophysics Data System (ADS)

    Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn

    2016-06-01

    Forests are a very important ecosystem and natural resource for living things. Based on forest inventories, governments are able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it requires intensive physical labor and the costs are high, especially when surveying remote mountainous regions. A reliable forest inventory can provide more accurate and timely information for developing new and efficient approaches to forest management. Remote sensing technology has recently been used for forest investigation at large scales. To produce an informative forest inventory, forest attributes, including tree species, unavoidably need to be considered. The aim of this study is to classify forest tree species in Erdenebulgan County, Huwsgul province, Mongolia, using the Maximum Entropy method. The study area is covered by dense forest comprising almost 70% of the total territorial extension of Erdenebulgan County, located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and was also used as ground truth for the accuracy assessment of the tree species classification. Landsat images and the DEM were processed for maximum entropy modeling, and this study applied the model in two experiments. The first uses Landsat surface reflectance for tree species classification; the second incorporates terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second one which uses Landsat surface reflectance coupled

  20. Application of maximum-entropy spectral estimation to deconvolution of XPS data. [X-ray Photoelectron Spectroscopy

    NASA Technical Reports Server (NTRS)

    Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.

    1981-01-01

    A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.

  1. Configurational entropy in brane-world models

    NASA Astrophysics Data System (ADS)

    Correa, R. A. C.; da Rocha, Roldão

    2015-11-01

    In this work we investigate the entropic information in thick brane-world scenarios and its consequences. The brane-world entropic information is studied for the sine-Gordon model, and the brane-world entropic information measure is shown to be an accurate way of providing the most suitable range for the bulk AdS curvature, in particular from the informational content of physical solutions. Besides, the brane-world configurational entropy is employed to demonstrate a high degree of organisation in the structure of the system's configuration, for large values of a parameter of the sine-Gordon model other than the one related to the AdS curvature. The Gleiser and Stamatopoulos procedure is finally applied in order to achieve a precise correlation between the energy of the system and the brane-world configurational entropy.

  2. Non-equilibrium thermodynamics, maximum entropy production and Earth-system evolution.

    PubMed

    Kleidon, Axel

    2010-01-13

    The present-day atmosphere is in a unique state far from thermodynamic equilibrium. This uniqueness is for instance reflected in the high concentration of molecular oxygen and the low relative humidity in the atmosphere. Given that the concentration of atmospheric oxygen has likely increased throughout Earth-system history, we can ask whether this trend can be generalized to a trend of Earth-system evolution that is directed away from thermodynamic equilibrium, why we would expect such a trend to take place and what it would imply for Earth-system evolution as a whole. The justification for such a trend could be found in the proposed general principle of maximum entropy production (MEP), which states that non-equilibrium thermodynamic systems maintain steady states at which entropy production is maximized. Here, I justify and demonstrate this application of MEP to the Earth at the planetary scale. I first describe the non-equilibrium thermodynamic nature of Earth-system processes and distinguish processes that drive the system's state away from equilibrium from those that are directed towards equilibrium. I formulate the interactions among these processes from a thermodynamic perspective and then connect them to a holistic view of the planetary thermodynamic state of the Earth system. In conclusion, non-equilibrium thermodynamics and MEP have the potential to provide a simple and holistic theory of Earth-system functioning. This theory can be used to derive overall evolutionary trends of the Earth's past, identify the role that life plays in driving thermodynamic states far from equilibrium, identify habitability in other planetary environments and evaluate human impacts on Earth-system functioning.

  3. Entropy Based Modelling for Estimating Demographic Trends.

    PubMed

    Li, Guoqi; Zhao, Daxuan; Xu, Yi; Kuo, Shyh-Hao; Xu, Hai-Yan; Hu, Nan; Zhao, Guangshe; Monterola, Christopher

    2015-01-01

    In this paper, an entropy-based method is proposed to forecast the demographical changes of countries. We formulate the estimation of future demographical profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of age distribution is increasing in time. The procedure of the proposed method involves three stages, namely: 1) Prediction of the age distribution of a country's population based on an "age-structured population model"; 2) Estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and 3) Estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated by feeding in real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables. PMID:26382594

  4. An Instructive Model of Entropy

    ERIC Educational Resources Information Center

    Zimmerman, Seth

    2010-01-01

    This article first notes the misinterpretation of a common thought experiment, and the misleading comment that "systems tend to flow from less probable to more probable macrostates". It analyses the experiment, generalizes it and introduces a new tool of investigation, the simplectic structure. A time-symmetric model is built upon this structure,…

  5. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy.

    PubMed

    Jiang, Yulin; Li, Bin; Chen, Jie

    2016-01-01

    The flow velocity distribution in partially-filled circular pipes was investigated in this paper. The velocity profile differs from that of full-pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum velocity is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by the friction of the wall and the pipe bottom slope, and the variation of velocity is similar to that in full-pipe flow. But near the free water surface, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature search shows that relatively little research has been published on practical expressions describing the velocity distribution in partially-filled circular pipes. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared in light of the fluid mechanics involved, and non-extensive entropy was chosen. A new cumulative distribution function (CDF) of partially-filled circular pipe velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity was analyzed. The experimental results show that the estimated velocity values based on the principle of maximum Tsallis wavelet entropy are in good agreement with measured values.
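
    For comparison with the Tsallis-based result, the classical Shannon-entropy (POME) profile of Chiu is one line of algebra. The sketch below evaluates it with illustrative parameter values; ξ is a normalized coordinate from the bed to the location of maximum velocity:

        import numpy as np

        M = 4.0        # entropy parameter (illustrative)
        u_max = 1.2    # maximum velocity, m/s (illustrative)
        xi = np.linspace(0.0, 1.0, 50)
        u = (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * xi)   # Chiu profile
        phi = np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M            # mean/max ratio

    Larger M concentrates the probability weight near u_max, so fitting M to measured velocities summarizes how uniform the observed profile is.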

  6. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation

    SciTech Connect

    Liu, Jian; Miller, William H.

    2008-08-01

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made of how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when they are used as priors.

  7. Maximum entropy theory and the rapid relaxation of three-dimensional quasi-geostrophic turbulence.

    PubMed

    Schecter, David A

    2003-12-01

    Turbulent flow in a rapidly rotating stably stratified fluid (quasi-geostrophic turbulence) commonly decays toward a stable pattern of large-scale jets or vortices. A formula for the most probable three-dimensional end state, the maximum entropy state (MES), is derived using a form of Lynden-Bell statistical mechanics. The MES is determined by a set of integral invariants, including energy, as opposed to a complete description of the initial condition. A computed MES qualitatively resembles the quasistationary end state of a numerical simulation that is initialized with red noise, and relaxes for a time on the order of 100 (initial) eddy turnovers. However, the potential enstrophy of the end state, obtained from a coarsened potential vorticity distribution, exceeds that of the MES by nearly a factor of 2. The estimated errors for both theory and simulation do not account for the discrepancy. This suggests that the MES, if ever realized, requires a much longer time scale to fully develop.

  8. Maximum-entropy reconstruction method for moment-based solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2013-11-01

    We describe a method for a moment-based solution of the Boltzmann equation. This starts with moment equations for a (10 + 9N)-moment representation, N = 0, 1, 2, …. The partial-differential equations (PDEs) for these moments are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy construction of the velocity distribution function f(c, x, t), using the known moments, within a finite-box domain of single-particle-velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using a Monte-Carlo method. This allows integration of the moment PDEs in time. Illustrative examples will include zero-space-dimensional relaxation of f(c, t) from a Mott-Smith-like initial condition toward equilibrium and one-space-dimensional, finite Knudsen number, planar Couette flow. Comparison with results using the direct-simulation Monte-Carlo method will be presented.
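
    The reconstruction step at the heart of such a closure can be sketched in one dimension: given a few target moments on a finite velocity box, solve the convex dual problem for the Lagrange multipliers of f(c) = exp(λ·m(c)). The moment values below are illustrative, and only the lowest three moments are used:

        import numpy as np
        from scipy.optimize import minimize

        c = np.linspace(-5.0, 5.0, 400); dc = c[1] - c[0]   # finite box
        m = np.vstack([np.ones_like(c), c, c ** 2])         # moment functions
        mu = np.array([1.0, 0.3, 1.2])                      # target moments

        def dual(lam):      # min_lam  int exp(lam.m) dc - lam.mu  (convex)
            return np.sum(np.exp(lam @ m)) * dc - lam @ mu

        res = minimize(dual, np.array([-1.0, 0.0, -0.5]), method="Nelder-Mead")
        f = np.exp(res.x @ m)                               # maxent density
        print([float(np.sum(f * mi) * dc) for mi in m])     # ~= mu

    Stationarity of the dual enforces the moment constraints, and the finite box is what keeps the reconstruction well posed for the higher-order moment sets the paper uses.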

  9. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a Benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with Benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, although the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements, in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.

  10. Decision Aggregation in Distributed Classification by a Transductive Extension of Maximum Entropy/Improved Iterative Scaling

    NASA Astrophysics Data System (ADS)

    Miller, David J.; Zhang, Yanxin; Kesidis, George

    2008-12-01

    In many ensemble classification paradigms, the function which combines local/base classifier decisions is learned in a supervised fashion. Such methods require common labeled training examples across the classifier ensemble. However, in some scenarios where an ensemble solution is necessitated, common labeled data may not exist: (i) legacy/proprietary classifiers, and (ii) spatially distributed and/or multiple-modality sensors. In such cases, it is standard to apply fixed (untrained) decision aggregation such as voting, averaging, or naive Bayes rules. In recent work, an alternative transductive learning strategy was proposed. There, decisions on test samples were chosen aiming to satisfy constraints measured by each local classifier. This approach was shown to reliably correct for class prior mismatch and to robustly account for classifier dependencies. Significant gains in accuracy over fixed aggregation rules were demonstrated. There are two main limitations of that work. First, feasibility of the constraints was not guaranteed. Second, heuristic learning was applied. Here, we overcome these problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach and fixed rules on a number of UC Irvine datasets.

  11. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data

    NASA Astrophysics Data System (ADS)

    Das, Jayajit; Mukherjee, Sayak; Hodge, Susan

    2015-07-01

    A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n > m. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
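
    The noiseless discrete case makes the idea concrete: maximizing H(Q) subject to the pushforward constraint spreads each P(y) uniformly over the preimage f^{-1}(y), with no Lagrange multipliers to compute. The map and values below are hypothetical; the paper develops the continuous case as well.

    ```python
    import numpy as np

    xs = np.arange(6)                  # X takes 6 values
    f = lambda x: x % 3                # known many-to-one map; Y takes 3 values
    P_y = np.array([0.5, 0.3, 0.2])    # observed lower-dimensional distribution

    Q = np.empty(len(xs))
    for y, py in enumerate(P_y):
        pre = [x for x in xs if f(x) == y]
        Q[pre] = py / len(pre)         # MaxEnt: uniform within each preimage

    print(Q, Q.sum())                  # pushforward of Q reproduces P_y
    ```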

  12. Time-dependent radiative transfer through thin films: Chapman-Enskog-maximum entropy method

    NASA Astrophysics Data System (ADS)

    Abulwafa, E. M.; Hassan, T.; El-Wakil, S. A.; Razi Naqvi, K.

    2005-09-01

    Approximate solutions to the time-dependent radiative transfer equation, also called the phonon radiative transfer equation, for a plane-parallel system have been obtained by combining the flux-limited Chapman-Enskog approximation with the maximum entropy method. For problems involving heat transfer at small scales (short times and/or thin films), the results found by this combined approach are closer to the outcome of the more labour-intensive Laguerre-Galerkin technique (a moment method described recently by the authors) than the results obtained by using the diffusion equation (Fourier's law) or the telegraph equation (Cattaneo's law). The results for heat flux and temperature are presented in graphical form for x_L = 0.01, 0.1, 1 and 10, and at τ = 0.01, 0.1, 1.0 and 10, where x_L is the film thickness in mean free paths, and τ is the time in mean free times.

  13. Applying Bayesian Maximum Entropy to extrapolating local-scale water consumption in Maricopa County, Arizona

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Jae; Wentz, Elizabeth A.

    2008-01-01

    Understanding water use in the context of urban growth and climate variability requires an accurate representation of regional water use. It is challenging, however, because water use data are often unavailable, and when they are available, they are geographically aggregated to protect the identity of individuals. The present paper aims to map local-scale estimates of water use in Maricopa County, Arizona, on the basis of data aggregated to census tracts and measured only in the City of Phoenix. To complete our research goals we describe two types of data uncertainty sources (i.e., extrapolation and downscaling processes) and then generate data that account for the uncertainty sources (i.e., soft data). Our results confirm that the Bayesian Maximum Entropy (BME) mapping method of modern geostatistics is a theoretically sound approach for assimilating the soft data into mapping processes. Our results lead to increased mapping accuracy over classical geostatistics, which does not account for the soft data. The confirmed BME maps therefore provide useful knowledge on local water use variability in the whole county, which can further be applied to understanding the causal factors of urban water demand.

  14. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparison with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.

  15. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low-resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (Land Information System (LIS) and an agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.
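
    A minimal sketch of the POME profile construction under the three constraints named above (surface, mean, and bottom moisture), assuming a monotonic profile and hypothetical moisture values: MaxEnt with a mean constraint on the bounded moisture range gives a truncated-exponential pdf, and depth is mapped through its CDF.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    theta_surf, theta_bot, theta_mean = 0.10, 0.35, 0.25   # hypothetical values

    def mean_of_lambda(lam):
        # mean of pdf proportional to exp(lam*theta) on [theta_surf, theta_bot]
        t = np.linspace(theta_surf, theta_bot, 2001)
        w = np.exp(lam * (t - theta_surf))
        return np.trapz(t * w, t) / np.trapz(w, t)

    # solve for the Lagrange multiplier matching the profile-mean constraint
    lam = brentq(lambda l: mean_of_lambda(l) - theta_mean, -200.0, 200.0)

    # invert the CDF: normalized depth z/L equals cumulative probability
    t = np.linspace(theta_surf, theta_bot, 2001)
    w = np.exp(lam * (t - theta_surf))
    cdf = np.cumsum(w) / w.sum()
    print(lam, t[np.searchsorted(cdf, 0.5)])  # moisture at mid-profile depth
    ```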

  16. Quantum mechanical correlation functions, maximum entropy analytic continuation, and ring polymer molecular dynamics.

    PubMed

    Habershon, Scott; Braams, Bastiaan J; Manolopoulos, David E

    2007-11-01

    The maximum entropy analytic continuation (MEAC) and ring polymer molecular dynamics (RPMD) methods provide complementary approaches to the calculation of real time quantum correlation functions. RPMD becomes exact in the high temperature limit, where the thermal time βℏ tends to zero and the ring polymer collapses to a single classical bead. MEAC becomes most reliable at low temperatures, where βℏ exceeds the correlation time of interest and the numerical imaginary time correlation function contains essentially all of the information that is needed to recover the real time dynamics. We show here that this situation can be exploited by combining the two methods to give an improved approximation that is better than either of its parts. In particular, the MEAC method provides an ideal way to impose exact moment (or sum rule) constraints on a prior RPMD spectrum. The resulting scheme is shown to provide a practical solution to the "nonlinear operator problem" of RPMD, and to give good agreement with recent exact results for the short-time velocity autocorrelation function of liquid parahydrogen. Moreover these improvements are obtained with little extra effort, because the imaginary time correlation function that is used in the MEAC procedure can be computed at the same time as the RPMD approximation to the real time correlation function. However, there are still some problems involving long-time dynamics for which the RPMD+MEAC combination is inadequate, as we illustrate with an example application to the collective density fluctuations in liquid orthodeuterium. PMID:17994808

  17. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). By comparing the merged AOD to the Aerosol Robotic Network AOD records, the results show that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those (the correlation coefficient (0.82), root-mean-square error (0.19), and mean bias (0.059)) of the MODIS AOD. In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is also close to the accuracy of the regions where both MODIS and SeaWiFS have valid observations.

  18. Application of the maximum entropy technique in tomographic reconstruction from laser diffraction data to determine local spray drop size distribution

    NASA Astrophysics Data System (ADS)

    Yongyingsakthavorn, Pisit; Vallikul, Pumyos; Fungtammasan, Bundit; Dumouchel, Christophe

    2007-03-01

    This work proposes a new deconvolution technique to obtain local drop size distributions from line-of-sight intensity data measured by the laser diffraction technique. The tomographic reconstruction, based on the maximum entropy (ME) technique, is applied to the forward-scattered light signal from a laser beam scanning horizontally through the spray on each plane from the center to the edge of the spray, resulting in reconstructed scattered light intensities at particular points in the spray. These reconstructed intensities are in turn converted to local drop size distributions. Unlike the classical onion peeling technique or other mathematical transformation techniques that yield unrealistic negative scattered light intensity solutions, the maximum entropy constraints ensure positive light intensity. Experimental validation of the reconstructed results is achieved using a phase Doppler particle analyzer (PDPA). The results from the PDPA measurements agree very well with the proposed ME tomographic reconstruction.
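
    A toy version of the positivity-preserving reconstruction: maximize the entropy of the local intensities subject to the line-of-sight sums, with non-negativity enforced. The chord geometry and "measured" sums below are synthetic stand-ins for the scanned laser-diffraction data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 8                                    # radial cells
    I_true = np.exp(-np.linspace(0, 2, n))   # hypothetical local intensities
    A = rng.uniform(0.0, 1.0, size=(5, n))   # 5 chords crossing the cells
    b = A @ I_true                           # noiseless line-of-sight sums

    def neg_entropy(I):
        return np.sum(I * np.log(I + 1e-12))

    res = minimize(neg_entropy, x0=np.ones(n), method="SLSQP",
                   bounds=[(0, None)] * n,   # positivity, unlike onion peeling
                   constraints={"type": "eq", "fun": lambda I: A @ I - b})
    print(np.round(res.x, 3))                # a positive feasible reconstruction
    ```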

  19. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy

    PubMed Central

    2016-01-01

    The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile differs from full-filled pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum flow is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by the friction of the wall and the pipe bottom slope, and the variation of velocity is similar to that in full-filled pipe flow. But near the free water surface, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. Literature retrieval shows that relatively little research has addressed a practical expression for the velocity distribution of partially-filled circular pipes. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared according to fluid knowledge, and non-extensive entropy was chosen. A new cumulative distribution function (CDF) of partially-filled circular pipe velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity distribution was analyzed. The experimental results show that the estimated velocity values based on the principle of maximum Tsallis wavelet entropy are in good agreement with measured values. PMID:26986064
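
    The paper derives a 2D Tsallis-entropy profile; as a simpler worked instance of the same POME recipe, the sketch below evaluates the classic 1D Shannon-entropy (Chiu) profile, in which the CDF is hypothesized linear in relative depth. The entropy parameter M and u_max are hypothetical.

    ```python
    import numpy as np

    M = 2.5                         # entropy parameter (set by u_mean/u_max)
    u_max = 1.2                     # m/s, hypothetical maximum velocity
    xi = np.linspace(0.0, 1.0, 11)  # normalized depth (the CDF hypothesis)

    # Chiu profile: MaxEnt pdf of u subject to the mean-velocity constraint
    u = (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * xi)
    print(np.round(u, 3))           # grows from 0 at the bed to u_max
    ```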

  20. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy.

    PubMed

    Jiang, Yulin; Li, Bin; Chen, Jie

    2016-01-01

    The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile differs from full-filled pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum flow is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by the friction of the wall and the pipe bottom slope, and the variation of velocity is similar to that in full-filled pipe flow. But near the free water surface, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. Literature retrieval shows that relatively little research has addressed a practical expression for the velocity distribution of partially-filled circular pipes. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared according to fluid knowledge, and non-extensive entropy was chosen. A new cumulative distribution function (CDF) of partially-filled circular pipe velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity distribution was analyzed. The experimental results show that the estimated velocity values based on the principle of maximum Tsallis wavelet entropy are in good agreement with measured values. PMID:26986064

  1. THE LICK AGN MONITORING PROJECT: VELOCITY-DELAY MAPS FROM THE MAXIMUM-ENTROPY METHOD FOR Arp 151

    SciTech Connect

    Bentz, Misty C.; Barth, Aaron J.; Walsh, Jonelle L.; Horne, Keith; Bennert, Vardha Nicola; Treu, Tommaso; Canalizo, Gabriela; Filippenko, Alexei V.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Woo, Jong-Hak

    2010-09-01

    We present velocity-delay maps for optical H I, He I, and He II recombination lines in Arp 151, recovered by fitting a reverberation model to spectrophotometric monitoring data using the maximum-entropy method. H I response is detected over the range 0-15 days, with the response confined within the virial envelope. The Balmer-line maps have similar morphologies but exhibit radial stratification, with progressively longer delays for Hγ to Hβ to Hα. The He I and He II response is confined within 1-2 days. There is a deficit of prompt response in the Balmer-line cores but strong prompt response in the red wings. Comparison with simple models identifies two classes that reproduce these features: free-falling gas and a half-illuminated disk with a hot spot at small radius on the receding lune. Symmetrically illuminated models with gas orbiting in an inclined disk or an isotropic distribution of randomly inclined circular orbits can reproduce the virial structure but not the observed asymmetry. Radial outflows are also largely ruled out by the observed asymmetry. A warped-disk geometry provides a physically plausible mechanism for the asymmetric illumination and hot spot features. Simple estimates show that a disk in the broad-line region of Arp 151 could be unstable to warping induced by radiation pressure. Our results demonstrate the potential power of detailed modeling combined with monitoring campaigns at higher cadence to characterize the gas kinematics and physical processes that give rise to the broad emission lines in active galactic nuclei.

  2. The Lick AGN Monitoring Project: Velocity-delay Maps from the Maximum-entropy Method for Arp 151

    NASA Astrophysics Data System (ADS)

    Bentz, Misty C.; Horne, Keith; Barth, Aaron J.; Bennert, Vardha Nicola; Canalizo, Gabriela; Filippenko, Alexei V.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Treu, Tommaso; Woo, Jong-Hak; Walsh, Jonelle L.

    2010-09-01

    We present velocity-delay maps for optical H I, He I, and He II recombination lines in Arp 151, recovered by fitting a reverberation model to spectrophotometric monitoring data using the maximum-entropy method. H I response is detected over the range 0-15 days, with the response confined within the virial envelope. The Balmer-line maps have similar morphologies but exhibit radial stratification, with progressively longer delays for Hγ to Hβ to Hα. The He I and He II response is confined within 1-2 days. There is a deficit of prompt response in the Balmer-line cores but strong prompt response in the red wings. Comparison with simple models identifies two classes that reproduce these features: free-falling gas and a half-illuminated disk with a hot spot at small radius on the receding lune. Symmetrically illuminated models with gas orbiting in an inclined disk or an isotropic distribution of randomly inclined circular orbits can reproduce the virial structure but not the observed asymmetry. Radial outflows are also largely ruled out by the observed asymmetry. A warped-disk geometry provides a physically plausible mechanism for the asymmetric illumination and hot spot features. Simple estimates show that a disk in the broad-line region of Arp 151 could be unstable to warping induced by radiation pressure. Our results demonstrate the potential power of detailed modeling combined with monitoring campaigns at higher cadence to characterize the gas kinematics and physical processes that give rise to the broad emission lines in active galactic nuclei.

  3. Topological properties of hydrogen bonds and covalent bonds from charge densities obtained by the maximum entropy method (MEM)

    PubMed Central

    Netzel, Jeanette; van Smaalen, Sander

    2009-01-01

    Charge densities have been determined by the Maximum Entropy Method (MEM) from the high-resolution, low-temperature (T ≃ 20 K) X-ray diffraction data of six different crystals of amino acids and peptides. A comparison of dynamic deformation densities of the MEM with static and dynamic deformation densities of multipole models shows that the MEM may lead to a better description of the electron density in hydrogen bonds in cases where the multipole model has been restricted to isotropic displacement parameters and low-order multipoles (l_max = 1) for the H atoms. Topological properties at bond critical points (BCPs) are found to depend systematically on the bond length, but with different functions for covalent C—C, C—N and C—O bonds, and for hydrogen bonds together with covalent C—H and N—H bonds. Similar dependencies are known for AIM properties derived from static multipole densities. The ratio of potential and kinetic energy densities |V(BCP)|/G(BCP) is successfully used for a classification of hydrogen bonds according to their distance d(H⋯O) between the H atom and the acceptor atom. The classification based on MEM densities coincides with the usual classification of hydrogen bonds as strong, intermediate and weak [Jeffrey (1997). An Introduction to Hydrogen Bonding. Oxford University Press]. MEM and procrystal densities lead to similar values of the densities at the BCPs of hydrogen bonds, but differences are shown to prevail, such that it is found that only the true charge density, represented by MEM densities, the multipole model or some other method can lead to the correct characterization of chemical bonding. Our results do not confirm suggestions in the literature that the promolecule density might be sufficient for a characterization of hydrogen bonds. PMID:19767685

  4. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
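
    A minimal sketch of the pipeline on a hypothetical three-site model: a row-stochastic transition matrix over anatomical sites, its steady-state distribution (the analog of the autopsy-data distribution), and the Shannon entropy of that distribution.

    ```python
    import numpy as np

    P = np.array([[0.6, 0.3, 0.1],    # hypothetical row-stochastic matrix
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])

    # steady state: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()

    entropy = -np.sum(pi * np.log2(pi))
    print(pi, entropy)                # higher entropy = more spread-out spread
    ```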

  5. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  6. Entanglement entropy and entanglement spectrum of the Kitaev model.

    PubMed

    Yao, Hong; Qi, Xiao-Liang

    2010-08-20

    In this letter, we obtain an exact formula for the entanglement entropy of the ground state and all excited states of the Kitaev model. Remarkably, the entanglement entropy can be expressed in a simple separable form S = SG+SF, with SF the entanglement entropy of a free Majorana fermion system and SG that of a Z2 gauge field. The Z2 gauge field part contributes to the universal "topological entanglement entropy" of the ground state while the fermion part is responsible for the nonlocal entanglement carried by the Z2 vortices (visons) in the non-Abelian phase. Our result also enables the calculation of the entire entanglement spectrum and the more general Renyi entropy of the Kitaev model. Based on our results we propose a new quantity to characterize topologically ordered states, the capacity of entanglement, which can distinguish the states with and without topologically protected gapless entanglement spectrum.

  7. Maximum entropy and stability of a random process with a 1/f power spectrum under deterministic action

    NASA Astrophysics Data System (ADS)

    Koverda, V. P.; Skokov, V. N.

    2012-12-01

    The principle of maximum entropy has been used to analyze the stability of the resulting process observed during the interaction of a random process with a 1/f spectrum and a deterministic action in lumped and distributed systems of nonlinear stochastic differential equations describing the coupled nonequilibrium phase transitions. Under the action of a harmonic force the stable resulting process is divided into two branches depending on the amplitude of the harmonic force. Under the action of exponential relaxation in a lumped system, an increase in the damping coefficient turns the power spectrum of the resulting process into a spectrum of the Lorentz type.

  8. Towards realizable hyperbolic moment closures for viscous heat-conducting gas flows based on a maximum-entropy distribution

    NASA Astrophysics Data System (ADS)

    McDonald, James G.; Groth, Clinton P. T.

    2013-09-01

    The ability to predict continuum and transition-regime flows by hyperbolic moment methods offers the promise of several advantages over traditional techniques. These methods offer an extended range of physical validity as compared with the Navier-Stokes equations and can be used for the prediction of many non-equilibrium flows with a lower expense than particle-based methods. Also, the hyperbolic first-order nature of the resulting partial differential equations leads to mathematical and numerical advantages. Moment equations generated through an entropy-maximization principle are particularly attractive due to their apparent robustness; however, their application to practical situations involving viscous, heat-conducting gases has been hampered by several issues. Firstly, the lack of closed-form expressions for closing fluxes leads to numerical expense as many integrals of distribution functions must be computed numerically during the course of a flow computation. Secondly, it has been shown that there exist physically realizable moment states for which the entropy-maximizing problem on which the method is based cannot be solved. Following a review of the theory surrounding maximum-entropy moment closures, this paper shows that both of these problems can be addressed in practice, at least for a simplified one-dimensional gas, and that the resulting flow predictions can be surprisingly good. The numerical results described provide significant motivations for the extension of these ideas to the fully three-dimensional case.

  9. Coupling diffusion and maximum entropy models to estimate thermal inertia

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  10. The Study on Business Growth Process Management Entropy Model

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Enterprise growth is a dynamic process; the factors of enterprise development are changing all the time, so it is difficult to study the management entropy of growth-oriented enterprises from a static viewpoint. This paper characterizes the stages of business enterprise growth and puts forward a measurement model, based on enterprise management entropy, for business scale, enterprise capability, and development speed. According to the entropy measured by the model, an enterprise can adopt reform measures at the critical moment, avoiding crisis and taking the road of sustainable development.

  11. Maximum Entropy and the Inference of Pattern and Dynamics in Ecology

    NASA Astrophysics Data System (ADS)

    Harte, John

    Constrained maximization of information entropy yields least biased probability distributions. From physics to economics, from forensics to medicine, this powerful inference method has enriched science. Here I apply this method to ecology, using constraints derived from ratios of ecological state variables, and infer functional forms for the ecological metrics describing patterns in the abundance, distribution, and energetics of species. I show that a static version of the theory describes remarkably well observed patterns in quasi-steady-state ecosystems across a wide range of habitats, spatial scales, and taxonomic groups. A systematic pattern of failure is observed, however, for ecosystems either losing species following disturbance or diversifying in evolutionary time; I show that this problem may be remedied with a stochastic-dynamic extension of the theory.
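
    One concrete piece of this program can be sketched directly: given the state variables S (species) and N (individuals), the MaxEnt abundance distribution is a log-series whose Lagrange multiplier is fixed by the ratio N/S. The S and N values below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    S, N = 50, 5000                       # hypothetical state variables
    n = np.arange(1, N + 1)

    def mean_abundance(beta):
        w = np.exp(-beta * n) / n         # unnormalized log-series weights
        return np.sum(n * w) / np.sum(w)

    # solve for the multiplier that matches the mean-abundance constraint N/S
    beta = brentq(lambda b: mean_abundance(b) - N / S, 1e-8, 5.0)
    phi = np.exp(-beta * n) / n
    phi /= phi.sum()
    print(beta, phi[:5])                  # many rare species, few common ones
    ```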

  12. Beyond Boltzmann-Gibbs statistics: Maximum entropy hyperensembles out of equilibrium

    SciTech Connect

    Crooks, Gavin E.

    2006-02-23

    What is the best description that we can construct of a thermodynamic system that is not in equilibrium, given only one, or a few, extra parameters over and above those needed for a description of the same system at equilibrium? Here, we argue the most appropriate additional parameter is the non-equilibrium entropy of the system, and that we should not attempt to estimate the probability distribution of the system, but rather the metaprobability (or hyperensemble) that the system is described by a particular probability distribution. The result is an entropic distribution with two parameters, one a non-equilibrium temperature, and the other a measure of distance from equilibrium. This dispersion parameter smoothly interpolates between certainty of a canonical distribution at equilibrium and great uncertainty as to the probability distribution as we move away from equilibrium. We deduce that, in general, large, rare fluctuations become far more common as we move away from equilibrium.

  13. Model Fit after Pairwise Maximum Likelihood

    PubMed Central

    Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  14. Irreversible entropy model for damage diagnosis in resistors

    SciTech Connect

    Cuadras, Angel; Crisóstomo, Javier; Ovejas, Victoria J.; Quilez, Marcos

    2015-10-28

    We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage because they are essentially positive in virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4–8 W, which is well above the 0.25 W nominal power to initiate failure. Entropy was inferred from the added power and temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
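
    The entropy bookkeeping itself is simple to sketch: treating the dissipated electrical power as heat, the irreversible entropy generated is the time integral of P(t)/T(t). The power and temperature records below are hypothetical stand-ins for the measured signals.

    ```python
    import numpy as np

    t = np.linspace(0.0, 60.0, 601)               # s
    P = np.full_like(t, 6.0)                      # W, constant overload power
    T = 300.0 + 120.0 * (1 - np.exp(-t / 15.0))   # K, heating transient

    S_irr = np.trapz(P / T, t)                    # J/K, irreversible entropy
    print(S_irr)
    ```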

  15. Deriving the electron-phonon spectral density of MgB2 from optical data, using maximum entropy techniques.

    PubMed

    Hwang, J; Carbotte, J P

    2014-04-23

    We use maximum entropy techniques to extract an electron-phonon density from optical data for the normal state at T = 45 K of MgB2. Limiting the analysis to a range of phonon energies below 110 meV, which is sufficient for capturing all phonon structures, we find a spectral function that is in good agreement with that calculated for the quasi-two-dimensional σ-band. Extending the analysis to higher energies, up to 160 meV, we find no evidence for any additional contributions to the fluctuation spectrum, but find that the data can only be understood if the density of states is taken to decrease with increasing energy.

  16. Coupling entropy of co-processing model on social networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhanli

    2015-08-01

    Coupling entropy of a co-processing model on social networks is investigated in this paper. As one crucial factor determining the processing ability of nodes, the information flow with potential time lag is modeled by co-processing diffusion, which couples the continuous-time processing and the discrete diffusing dynamics. Exact results for the master equation and the stationary state are obtained to disclose how the process forms. In order to understand the evolution of the co-processing and to design the optimal routing strategy according to the maximal entropic diffusion on networks, we propose the coupling entropy, which encompasses the structural characteristics and the information propagation on the social network. Based on the analysis of the co-processing model, we analyze the coupling impact of the structural factor and the information-propagating factor on the coupling entropy, where the analytical results fit well with the numerical ones on scale-free social networks.

  17. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    SciTech Connect

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A.W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  18. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
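
    A Monte-Carlo sketch of the dynamics the proof concerns, with hypothetical parameters: random pairs pool their energy and split it uniformly at random, and the coarse-grained (histogram) entropy grows monotonically toward the equilibrium exponential distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    e = np.ones(100_000)                       # start far from equilibrium

    def hist_entropy(x, bins=60):
        # entropy of the coarse-grained (binned) energy distribution
        counts, _ = np.histogram(x, bins=bins, range=(0, 8))
        p = counts[counts > 0] / x.size
        return -np.sum(p * np.log(p))

    for step in range(6):
        print(step, round(hist_entropy(e), 4))  # monotonically increasing
        i = rng.permutation(e.size)             # random disjoint pairs
        a, b = i[::2], i[1::2]
        u = rng.random(a.size)                  # uniform random split
        tot = e[a] + e[b]                       # binary, energy-conserving
        e[a], e[b] = u * tot, (1 - u) * tot
    ```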

  19. Monotonic entropy growth for a nonlinear model of random exchanges

    NASA Astrophysics Data System (ADS)

    Apenko, S. M.

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific “coarse graining” of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.

  20. An improved model for the transit entropy of monatomic liquids

    SciTech Connect

    Wallace, Duane C; Chisolm, Eric D; Bock, Nicolas

    2009-01-01

    In the original formulation of V-T theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This model suffers two deficiencies: (a) it does not account for experimental entropy differences of ±2% among elemental liquids, and (b) it implies a value of zero for the transit contribution to internal energy. The purpose of this paper is to correct these deficiencies. To this end, the V-T equation for entropy is fitted to an overall accuracy of ±0.1% to the available experimental high temperature entropy data for elemental liquids. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution S_vib(T/θ_0), where T is temperature and θ_0 is the vibrational characteristic temperature, and (b) the transit contribution S_tr(T/θ_tr), where θ_tr is a scaling temperature for each liquid. The appearance of a common functional form of S_tr for all the liquids studied is a property of the experimental data, when analyzed via the V-T formula. The resulting S_tr implies the correct transit contribution to internal energy. The theoretical entropy of melting is derived, in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ_0, based on density functional theory, is reported for liquid Na and Cu. Comparison of these calculations with the above analysis of experimental entropy data provides verification of V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  1. Improved model for the transit entropy of monatomic liquids

    NASA Astrophysics Data System (ADS)

    Wallace, Duane C.; Chisolm, Eric D.; Bock, Nicolas

    2009-05-01

    In the original formulation of vibration-transit (V-T) theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This model suffers two deficiencies: (a) it does not account for experimental entropy differences of ±2% among elemental liquids and (b) it implies a value of zero for the transit contribution to internal energy. The purpose of this paper is to correct these deficiencies. To this end, the V-T equation for entropy is fitted to an overall accuracy of ±0.1% to the available experimental high-temperature entropy data for elemental liquids. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution Svib(T/θ0) , where T is temperature and θ0 is the vibrational characteristic temperature, and (b) the transit contribution Str(T/θtr) , where θtr is a scaling temperature for each liquid. The appearance of a common functional form of Str for all the liquids studied is a property of the experimental data, when analyzed via the V-T formula. The resulting Str implies the correct transit contribution to internal energy. The theoretical entropy of melting is derived in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ0 , based on density-functional theory, is reported for liquid Na and Cu. Comparison of these calculations with the above analysis of experimental entropy data provides verification of V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  2. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
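
    For reference alongside the author's brief description of HMMs, the sketch below shows the standard forward algorithm, which computes the likelihood of an observation sequence by summing over all hidden state paths. The two-state model and observation sequence are hypothetical.

    ```python
    import numpy as np

    pi = np.array([0.6, 0.4])             # initial state probabilities
    A = np.array([[0.7, 0.3],             # state transition matrix
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],             # B[state, symbol] emission probs
                  [0.2, 0.8]])
    obs = [0, 1, 1, 0]                    # observed symbol sequence

    alpha = pi * B[:, obs[0]]             # forward variables at t = 0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate, then absorb next symbol
    print(alpha.sum())                    # P(obs | model)
    ```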

  3. Factor Analysis of Wildfire and Risk Area Estimation in Korean Peninsula Using Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Kim, Teayeon; Lim, Chul-Hee; Lee, Woo-Kyun; Kim, YouSeung; Heo, Seongbong; Cha, Sung Eun; Kim, Seajin

    2016-04-01

    The number of wildfires, and the accompanying human injuries and physical damage, has been increased by frequent drought. Korea in particular experienced severe drought, and a number of wildfires occurred this year. We used the MaxEnt model to identify major environmental factors for wildfire and used RCP scenarios to predict future wildfire risk areas. In this study, environmental variables including topographic, anthropogenic, and meteorological data were used to identify the contributing variables of wildfire in South and North Korea, and the two were compared. As occurrence data, we used MODIS fire data after verification. In North Korea, the AUC (Area Under the ROC Curve) value was 0.890, which was high enough to explain the distribution of wildfires. South Korea had a lower AUC value than North Korea and a high mean standard deviation, which means there is little prospect of predicting fire with the same environmental variables. The AUC value for South Korea is expected to improve with environmental variables such as distance from trails and wildfire management systems. For instance, fires occurring within the DMZ (demilitarized zone, a 4 km boundary from the 38th parallel) have a decisive influence on fire risk area in South Korea, but not in North Korea. The contribution of each environmental variable was more evenly distributed among variables in North Korea than in South Korea. This means South Korea is dependent on a few particular variables, while North Korea can be explained by a number of variables with evenly distributed contributions. Although the AUC value and standard deviation for South Korea were not high enough to predict wildfire, the result carries significant meaning: by examining the response curves, it identifies the scientific and social factors in which certain environmental variables carry great weight. We also made a future wildfire risk area map for the whole Korean peninsula using the same model. In all four RCP scenarios, it was found that severe climate change would lead the wildfire risk area to move north. Especially North
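
    A minimal sketch of the Gibbs model underlying MaxEnt habitat/risk mapping: over grid cells with environmental features, fit weights so that the model's expected features match their empirical means over occurrence cells. The features and synthetic fire occurrences below stand in for the MODIS and environmental data; this is not the authors' configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 3))                # 500 cells, 3 env. features
    true_lam = np.array([1.0, -0.5, 0.0])
    p_true = np.exp(X @ true_lam); p_true /= p_true.sum()
    pres = rng.choice(500, size=200, p=p_true)   # synthetic fire occurrences

    target = X[pres].mean(axis=0)                # empirical feature means
    lam = np.zeros(3)
    for _ in range(500):                         # gradient ascent on likelihood
        p = np.exp(X @ lam); p /= p.sum()        # Gibbs distribution over cells
        lam += 0.5 * (target - p @ X)            # gradient = target - E_model[x]
    print(np.round(lam, 2))                      # near the generating weights
    ```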

  4. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    PubMed

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
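
    A minimal sketch of fuzzy similarity entropy for a 1D series, assuming the common exponential membership function: templates are compared with a smooth similarity degree instead of a hard tolerance, and the pattern size m and tolerance r are the knobs the paper tunes.

    ```python
    import numpy as np

    def fuzzy_entropy(x, m=2, r=0.2):
        x = (x - x.mean()) / x.std()
        def phi(mm):
            # all length-mm templates, mean-centered as in fuzzy entropy
            T = np.array([x[i:i + mm] - x[i:i + mm].mean()
                          for i in range(len(x) - mm)])
            d = np.abs(T[:, None, :] - T[None, :, :]).max(axis=2)  # Chebyshev
            sim = np.exp(-(d / r) ** 2)          # smooth membership degree
            n = len(T)
            return (sim.sum() - n) / (n * (n - 1))   # exclude self-matches
        return np.log(phi(m) / phi(m + 1))

    rng = np.random.default_rng(3)
    print(fuzzy_entropy(rng.normal(size=400)))   # higher for noisier series
    ```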

  5. d-Wave Signatures and Vortex Anomalies: A Maximum-Entropy μSR Study of RBCO Vortex States

    NASA Astrophysics Data System (ADS)

    Kong, Fanyun; Ruiz, E. J.; Punjabi, S. R.; Boekema, C.; Cooke, D. W.

    1998-03-01

    A maximum-entropy (ME) technique has been applied to transverse-field muon-spin-relaxation (μSR) vortex data of several polycrystalline cuprate superconductors (RBCO with R = Er, Gd, Ho, Eu, Y). This ME method produces spectra representing estimates for the magnetic-field distributions in the vortex state. The main vortex signals for R = Er, Gd, and Ho reveal signs of a twin peak in the field distributions below the applied field (1 kOe), as predicted for d-wave superconductivity [1]. For comparison, we have also applied the ME method to simulated μSR data using the predicted shapes. For RBCO vortex states below 10 K, low-field tails in the field distribution have been confirmed. This low-field tail may be caused by magnetic frustration in the vortex state [2] or possible CuO-chain superconductivity below 25 K [3]. Tentative results on the grain-orientation dependence of the YBCO vortex states are presented and discussed. [1] M. Franz et al., Phys Rev B53 (1996) 5795; I. Affleck et al., Phys Rev B55 (1997) R704. [2] S.Alves et al., Phys Rev B49 (1994) 12396; C. Boekema et al., Physica C235-240 (1994) 2633; J Phys Chem Solids 56 (1995) 1905. [3] C.H. Pennington et al., M2S-HTSC-V Conf Proc, Physica C282-287 (1997).

  6. Extension of spray nozzle correlations to the prediction of drop size distributions using principles of maximum entropy

    NASA Astrophysics Data System (ADS)

    Sankagiri, N.; Ruff, G. A.

    1993-01-01

    For years, the design and performance evaluation of liquid spray systems have made use of the many empirical correlations for the bulk properties of a spray, such as mean drop size and spread angle. However, more detailed information, such as the drop size distribution, is required to evaluate critical performance parameters such as NOx emission in internal combustion engines and the combustion efficiency of a hazardous waste incinerator. The principles of conservation of mass, momentum, and energy can be applied through the maximum entropy formulation to estimate the joint drop size-velocity distribution, provided that some information about the bulk properties of the spray exists from empirical correlations. A general method for this determination is described in this paper and differences from previous work are highlighted. Comparisons between the predicted and experimental results verify that this method does yield a good estimate of the drop size distribution for certain applications. Other uses for this methodology in spray analysis are also discussed.

  7. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    DOE PAGES

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A.W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; et al

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  8. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    PubMed Central

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A. W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K.; Dahmen, Karin A.

    2015-01-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly-equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design. PMID:26593056

  9. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys.

    PubMed

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A W; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-01-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly-equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design. PMID:26593056

  10. Modeling stochastic dynamics in biochemical systems with feedback using Maximum Caliber

    PubMed Central

    Pressé, S.; Ghosh, K.; Dill, K.A.

    2011-01-01

    Complex feedback systems are ubiquitous in biology. Modeling such systems with mass action laws or master equations requires information rarely measured directly. Thus rates and reaction topologies are often treated as adjustable parameters. Here we present a general stochastic modeling method for small chemical and biochemical systems with emphasis on feedback systems. The method, Maximum Caliber, is more parsimonious than others in constructing dynamical models, requiring fewer model assumptions and parameters to capture the effects of feedback. Maximum Caliber is the dynamical analog of Maximum Entropy. It uses average rate quantities and correlations obtained from short experimental trajectories to construct dynamical models. We illustrate the method on the bistable genetic toggle switch. To test our method, we generate synthetic data from an underlying stochastic model. MaxCal reliably infers the statistics of the stochastic bistability and other full dynamical distributions of the simulated data, without having to invoke complex reaction schemes. The method should be broadly applicable to other systems. PMID:21524067

  11. Does the soil's effective hydraulic conductivity adapt in order to obey the Maximum Entropy Production principle? A lab experiment

    NASA Astrophysics Data System (ADS)

    Westhoff, Martijn; Zehe, Erwin; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Dewals, Benjamin

    2015-04-01

    The Maximum Entropy Production (MEP) principle is a conjecture assuming that a medium is organized in such a way that maximum power is subtracted from a gradient driving a flux (with power being a flux times its driving gradient). This maximum power is also known as the Carnot limit. It has already been shown that the atmosphere operates close to this Carnot limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state close to the Carnot limit, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells (e.g. wind). The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts itself in such a way that it operates close to the Carnot limit. The key difference between the atmosphere and the soil lies in how their resistances adapt. The soil's hydraulic conductivity is changed either by weathering processes, which are very slow, or by the creation of preferential flow paths. In this study the latter process is simulated in a lab experiment, where we focus on the preferential flow paths created by piping. Piping is the backward erosion of sand particles subject to a large pressure gradient. Since this is a relatively fast process, it is suitable for testing in the lab. In the lab setup a horizontal sand bed connects two reservoirs that both drain freely at a level high enough to keep the sand bed saturated at all times. By adding water to only one reservoir, a horizontal pressure gradient is maintained. If the flow resistance is small, a large gradient develops, leading to piping. When pipes form, the effective flow resistance decreases; the flow through the sand bed increases and the pressure gradient decreases. At a certain point, the flow velocity is small enough to stop the pipes from growing any further. In this steady state, the effective flow resistance of

  12. Modeling maximum daily temperature using a varying coefficient regression model

    NASA Astrophysics Data System (ADS)

    Li, Han; Deng, Xinwei; Kim, Dong-Yun; Smith, Eric P.

    2014-04-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature. A good predictive model for daily maximum temperature is required because daily maximum temperature is an important measure for predicting survival of temperature-sensitive fish. To appropriately model the strong relationship between water and air temperatures at a daily time step, it is important to incorporate information related to the time of the year into the modeling. In this work, a time-varying coefficient model is used to study the relationship between air temperature and water temperature. The time-varying coefficient model enables dynamic modeling of the relationship, and can be used to understand how the air-water temperature relationship varies over time. The proposed model is applied to 10 streams in Maryland, West Virginia, Virginia, North Carolina, and Georgia using daily maximum temperatures. It provides a better fit and better predictions than those produced by a simple linear regression model or a nonlinear logistic model.
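
    A minimal sketch of a varying-coefficient fit of this kind, assuming the coefficients vary with day of year through a one-harmonic Fourier basis (the authors' exact estimator may differ); all data below are synthetic.

        import numpy as np

        # Time-varying coefficient sketch: water_t = b0(d) + b1(d) * air_t,
        # with b0, b1 expanded in a Fourier basis of the day of year d.
        rng = np.random.default_rng(0)
        d = rng.integers(1, 366, size=500)             # day of year (synthetic)
        air = 15 + 10 * np.sin(2 * np.pi * d / 365) + rng.normal(0, 3, 500)
        water = 5 + 0.6 * air + rng.normal(0, 1, 500)  # synthetic water temps

        w = 2 * np.pi * d / 365
        basis = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])

        # Design matrix: [basis, basis * air] -> coefficients of b0(d), b1(d)
        X = np.hstack([basis, basis * air[:, None]])
        coef, *_ = np.linalg.lstsq(X, water, rcond=None)

        b0 = basis @ coef[:3]   # intercept varying over the year
        b1 = basis @ coef[3:]   # air-temperature slope varying over the year
        print("slope range over the year:", b1.min().round(2), "to", b1.max().round(2))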

  13. Computational Bayesian maximum entropy solution of a stochastic advection-reaction equation in the light of site-specific information

    NASA Astrophysics Data System (ADS)

    Kolovos, Alexander; Christakos, George; Serre, Marc L.; Miller, Cass T.

    2002-12-01

    This work presents a computational formulation of the Bayesian maximum entropy (BME) approach to solve a stochastic partial differential equation (PDE) representing the advection-reaction process across space and time. The solution approach provided by BME has some important features that distinguish it from most standard stochastic PDE techniques. In addition to the physical law, the BME solution can assimilate other sources of general and site-specific knowledge, including multiple-point nonlinear space/time statistics, hard measurements, and various forms of uncertain (soft) information. There is no need to explicitly solve the moment equations of the advection-reaction law since BME allows the information contained in them to consolidate within the general knowledge base at the structural (prior) stage of the analysis. No restrictions are posed on the shape of the underlying probability distributions or the space/time pattern of the contaminant process. Solutions of nonlinear systems of equations are obtained in four space/time dimensions and efficient computational schemes are introduced to cope with complexity. The BME solution at the prior stage is in excellent agreement with the exact analytical solution obtained in a controlled environment for comparison purposes. The prior solution is further improved at the integration (posterior) BME stage by assimilating uncertain information at the data points as well as at the solution grid nodes themselves, thus leading to the final solution of the advection-reaction law in the form of the probability distribution of possible concentration values at each space/time grid node. This is the most complete way of describing a stochastic solution and provides considerable flexibility concerning the choice of the concentration realization that is more representative of the physical situation. Numerical experiments demonstrated a high solution accuracy of the computational BME approach. The BME approach can benefit from the

  14. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  15. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579
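
    BME proper needs specialized software; as a simplified stand-in that shows how an auxiliary variable enters the interpolation, the sketch below does regression kriging: regress respiration on soil temperature, then krige the residuals with a fixed exponential covariance. Coordinates, covariance parameters, and data are all hypothetical.

        import numpy as np

        # Regression kriging as a crude analogue of BME's soft-data idea.
        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 100, size=(21, 2))   # sampled point locations
        t_soil = 12 + 0.05 * xy[:, 0] + rng.normal(0, 0.5, 21)
        resp = 0.8 + 0.15 * t_soil + rng.normal(0, 0.2, 21)  # respiration

        # Step 1: linear regression on the auxiliary soil temperature
        A = np.column_stack([np.ones_like(t_soil), t_soil])
        beta, *_ = np.linalg.lstsq(A, resp, rcond=None)
        resid = resp - A @ beta

        # Step 2: simple kriging of residuals, exponential covariance
        sill, corr_range, nugget = 0.04, 30.0, 0.005
        def cov(h):
            return sill * np.exp(-h / corr_range)

        D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        C = cov(D) + nugget * np.eye(len(xy))

        x0 = np.array([50.0, 50.0])              # prediction location
        c0 = cov(np.linalg.norm(xy - x0, axis=1))
        w = np.linalg.solve(C, c0)

        t0 = 12 + 0.05 * x0[0]                   # auxiliary value at x0 (assumed known)
        pred = np.array([1.0, t0]) @ beta + w @ resid
        print(f"predicted respiration at {x0}: {pred:.3f}")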

  16. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilitistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and such as finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  17. Holography and entropy bounds in the plane wave matrix model

    SciTech Connect

    Bousso, Raphael; Mints, Aleksey L.

    2006-06-15

    As a quantum theory of gravity, matrix theory should provide a realization of the holographic principle, in the sense that a holographic theory should contain one binary degree of freedom per Planck area. We present evidence that Bekenstein's entropy bound, which is related to area differences, is manifest in the plane wave matrix model. If holography is implemented in this way, we predict crossover behavior at strong coupling when the energy exceeds N{sup 2} in units of the mass scale.

  18. Modeling sediment concentration in debris flow by Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Singh, Vijay P.; Cui, Huijuan

    2015-02-01

    Debris flow is a natural hazard that occurs in landscapes having high slopes, such as mountainous areas. It can be so powerful that it destroys whatever comes in its way: it can kill people and animals; decimate roads, bridges, railway tracks, homes and other property; and fill reservoirs. Owing to its frequent occurrence, it is receiving considerable attention these days. Of fundamental importance in debris flow modeling is the determination of the concentration of debris (or sediment) in the flow. The usual approach to determining debris flow concentration is either empirical or hydraulic. Both approaches are deterministic and therefore say nothing about the uncertainty associated with the sediment concentration in the flow. This paper proposes to model debris flow concentration using the Tsallis entropy theory. Verification of the entropy-based distribution of debris flow concentration using the data and equations reported in the literature shows that the proposed Tsallis entropy model is capable of mimicking field-observed concentrations and has potential for practical application.
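
    A numerical sketch of Tsallis-maxent on a discrete grid (illustrative, not the paper's closed-form derivation): find the concentration distribution that maximizes the Tsallis entropy subject to normalization and an assumed mean concentration.

        import numpy as np
        from scipy.optimize import minimize

        q = 1.5
        c = np.linspace(0.01, 0.99, 60)   # candidate debris concentrations
        mean_c = 0.35                     # assumed observed mean concentration

        def neg_tsallis(p):
            # Tsallis entropy S_q = (1 - sum p^q) / (q - 1), negated for minimize
            return -(1.0 - np.sum(p ** q)) / (q - 1.0)

        cons = [
            {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
            {"type": "eq", "fun": lambda p: np.sum(p * c) - mean_c},
        ]
        p0 = np.full_like(c, 1.0 / len(c))
        res = minimize(neg_tsallis, p0, bounds=[(1e-9, 1)] * len(c),
                       constraints=cons, method="SLSQP")
        p = res.x
        print("mode near c =", c[np.argmax(p)].round(2))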

  19. Stability of ecological industry chain: an entropy model approach.

    PubMed

    Wang, Qingsong; Qiu, Shishou; Yuan, Xueliang; Zuo, Jian; Cao, Dayong; Hong, Jinglan; Zhang, Jian; Dong, Yong; Zheng, Ying

    2016-07-01

    A novel methodology is proposed in this study to examine the stability of an ecological industry chain network based on entropy theory. The methodology is developed according to the associated dissipative structure characteristics, i.e., complexity, openness, and nonlinearity. As defined in the methodology, the network organization is the object, while the main focus is the identification of core enterprises and core industry chains. It is proposed that the chain network should be established around the core enterprise, and that supplementing the core industry chain helps to improve system stability, which is verified quantitatively. The relational entropy model can be used to identify the core enterprise and the core eco-industry chain. It determines the core of the network organization and the core eco-industry chain through the link form and direction of node enterprises. Similarly, the conductive mechanism of different node enterprises can be examined quantitatively despite the absence of key data. The structural entropy model can be employed to solve the problem of the degree of order of the network organization. Results showed that the stability of the entire system can be enhanced by the supplemented chain around the core enterprise in the eco-industry chain network organization. As a result, the sustainability of the entire system can be further improved.

  20. Stability of ecological industry chain: an entropy model approach.

    PubMed

    Wang, Qingsong; Qiu, Shishou; Yuan, Xueliang; Zuo, Jian; Cao, Dayong; Hong, Jinglan; Zhang, Jian; Dong, Yong; Zheng, Ying

    2016-07-01

    A novel methodology is proposed in this study to examine the stability of an ecological industry chain network based on entropy theory. The methodology is developed according to the associated dissipative structure characteristics, i.e., complexity, openness, and nonlinearity. As defined in the methodology, the network organization is the object, while the main focus is the identification of core enterprises and core industry chains. It is proposed that the chain network should be established around the core enterprise, and that supplementing the core industry chain helps to improve system stability, which is verified quantitatively. The relational entropy model can be used to identify the core enterprise and the core eco-industry chain. It determines the core of the network organization and the core eco-industry chain through the link form and direction of node enterprises. Similarly, the conductive mechanism of different node enterprises can be examined quantitatively despite the absence of key data. The structural entropy model can be employed to solve the problem of the degree of order of the network organization. Results showed that the stability of the entire system can be enhanced by the supplemented chain around the core enterprise in the eco-industry chain network organization. As a result, the sustainability of the entire system can be further improved. PMID:27055893
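
    A small sketch of one common structural-entropy formulation (degree-based Shannon entropy with an order degree R = 1 - H/H_max); the paper's exact relational and structural entropy models are not specified here, and the enterprise network below is hypothetical.

        import math
        from collections import defaultdict

        # Degree-based structural entropy of a hypothetical eco-industry chain.
        edges = [("farm", "mill"), ("mill", "plant"), ("plant", "recycler"),
                 ("recycler", "farm"), ("mill", "recycler")]

        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1

        total = sum(deg.values())
        H = -sum((k / total) * math.log(k / total) for k in deg.values())
        H_max = math.log(len(deg))
        print(f"structural entropy = {H:.3f} (order degree = {1 - H / H_max:.3f})")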

  1. A Bayesian Maximum Entropy approach to address the change of support problem in the spatial analysis of childhood asthma prevalence across North Carolina

    PubMed Central

    LEE, SEUNG-JAE; YEATTS, KARIN; SERRE, MARC L.

    2009-01-01

    The spatial analysis of data observed at different spatial observation scales leads to the change of support problem (COSP). A solution to the COSP widely used in linear spatial statistics consists in explicitly modeling the spatial autocorrelation of the variable observed at different spatial scales. We present a novel approach that takes advantage of the non-linear Bayesian Maximum Entropy (BME) extension of linear spatial statistics to address the COSP directly without relying on the classical linear approach. Our procedure consists in modeling data observed over large areas as soft data for the process at the local scale. We demonstrate the application of our approach to obtain spatially detailed maps of childhood asthma prevalence across North Carolina (NC). Because of the high prevalence of childhood asthma in NC, the small number problem is not an issue, so we can focus our attention solely on the COSP of integrating prevalence data observed at the county level together with data observed at a targeted local scale equivalent to the scale of school districts. Our spatially detailed maps can be used for different applications, ranging from exploratory and hypothesis-generating analyses to targeting intervention and exposure mitigation efforts. PMID:20300553

  2. On the possibility of obtaining non-diffused proximity functions from cloud-chamber data: II. Maximum entropy and Bayesian methods.

    PubMed

    Zaider, M; Minerbo, G N

    1988-11-01

    Maximum entropy and Bayesian methods are applied to an inversion problem which consists of unfolding diffusion from proximity functions calculated from cloud-chamber data. The solution appears to be relatively insensitive to statistical errors in the data (an important feature) given the limited number of tracks normally available from cloud-chamber measurements. It is the first time, to our knowledge, that such methods are applied to microdosimetry.

  3. Relative entropy as model selection tool in cluster expansions

    NASA Astrophysics Data System (ADS)

    Kristensen, Jesper; Bilionis, Ilias; Zabaras, Nicholas

    2013-05-01

    Cluster expansions are simplified, Ising-like models for binary alloys in which vibrational and electronic degrees of freedom are coarse grained. The usual practice is to learn the parameters of the cluster expansion by fitting the energy they predict to a finite set of ab initio calculations. In some cases, experiments suggest that such approaches may lead to overestimation of the phase transition temperature. In this work, we present a novel approach to fitting the parameters based on the relative entropy framework which, instead of energies, attempts to fit the Boltzmann distribution of the configurational degrees of freedom. We show how this leads to T-dependent parameters.

  4. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

    In economics and the social sciences, inequality measures such as the Gini index and the Pietra index are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income; the most widely known of these are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to US family total money income in 2009, 2011 and 2013, and their relative performances with respect to the generalized beta of the second kind family are compared.
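
    A numerical sketch of the constrained maxent problem on a discrete income grid, using the ordinary Gini index rather than the paper's generalized version; grid, mean, and Gini value are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        x = np.linspace(1, 200, 80)     # income grid, thousands of dollars
        mu, gini_target = 68.0, 0.45    # assumed mean and Gini constraints

        def gini(p):
            # Discrete Gini: mean absolute difference over twice the mean
            diff = np.abs(x[:, None] - x[None, :])
            return np.sum(p[:, None] * p[None, :] * diff) / (2.0 * np.sum(p * x))

        cons = [
            {"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq", "fun": lambda p: p @ x - mu},
            {"type": "eq", "fun": lambda p: gini(p) - gini_target},
        ]
        neg_entropy = lambda p: np.sum(p * np.log(p + 1e-12))
        p0 = np.full_like(x, 1.0 / len(x))
        res = minimize(neg_entropy, p0, bounds=[(1e-10, 1)] * len(x),
                       constraints=cons, method="SLSQP")
        print("converged:", res.success, "entropy =", -res.fun.round(3))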

  5. Maximum Entropy Method and Charge Flipping, a Powerful Combination to Visualize the True Nature of Structural Disorder from in situ X-ray Powder Diffraction Data

    SciTech Connect

    Samy, A.; Dinnebier, R; van Smaalen, S; Jansen, M

    2010-01-01

    In a systematic approach, the ability of the Maximum Entropy Method (MEM) to reconstruct the most probable electron density of highly disordered crystal structures from X-ray powder diffraction data was evaluated. As a case study, the ambient temperature crystal structures of disordered α-Rb₂[C₂O₄] and α-Rb₂[CO₃] and ordered δ-K₂[C₂O₄] were investigated in detail with the aim of revealing the 'true' nature of the apparent disorder. Different combinations of F constraints (based on phased structure factors) and G constraints (based on structure-factor amplitudes) from different sources were applied in MEM calculations. In particular, a new combination of the MEM with the recently developed charge-flipping algorithm with histogram matching for powder diffraction data (pCF) was successfully introduced to avoid the inevitable bias of the phases of the structure-factor amplitudes by the Rietveld model. Completely ab initio electron-density distributions have been obtained with the MEM applied to a combination of structure-factor amplitudes from Le Bail fits with phases derived from pCF. All features of the crystal structures, in particular the disorder of the oxalate and carbonate anions, and the displacements of the cations, are clearly obtained. This approach bears the potential of a fast method of electron-density determination, even for highly disordered materials. All the MEM maps obtained in this work were compared with the MEM map derived from the best Rietveld refined model. In general, the phased observed structure factors obtained from Rietveld refinement (applying F and G constraints) were found to give the closest description of the experimental data and thus lead to the most accurate image of the actual disorder.

  6. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
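
    A self-contained sketch of the standard EM estimator for a two-component normal mixture, the maximum likelihood machinery the abstract refers to; the data below are synthetic, not the stock/rubber price series used in the paper.

        import numpy as np

        # Plain EM for a two-component normal mixture on synthetic data.
        rng = np.random.default_rng(2)
        y = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 200)])

        w, mu, sd = np.array([0.5, 0.5]), np.array([-0.5, 1.0]), np.array([1.0, 1.0])
        for _ in range(200):
            # E-step: responsibilities of each component for each point
            dens = np.exp(-0.5 * ((y[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            r = w * dens
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update weights, means, standard deviations
            n_k = r.sum(axis=0)
            w = n_k / len(y)
            mu = (r * y[:, None]).sum(axis=0) / n_k
            sd = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / n_k)

        print("weights", w.round(2), "means", mu.round(2), "sds", sd.round(2))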

  7. Emergence of spacetime dynamics in entropy corrected and braneworld models

    SciTech Connect

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E. E-mail: mhd@shirazu.ac.ir

    2013-04-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p²) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.

  8. Single-particle spectral density of the unitary Fermi gas: Novel approach based on the operator product expansion, sum rules and the maximum entropy method

    SciTech Connect

    Gubler, Philipp; Yamamoto, Naoki; Hatsuda, Tetsuo; Nishida, Yusuke

    2015-05-15

    Making use of the operator product expansion, we derive a general class of sum rules for the imaginary part of the single-particle self-energy of the unitary Fermi gas. The sum rules are analyzed numerically with the help of the maximum entropy method, which allows us to extract the single-particle spectral density as a function of both energy and momentum. These spectral densities contain basic information on the properties of the unitary Fermi gas, such as the dispersion relation and the superfluid pairing gap, for which we obtain reasonable agreement with the available results based on quantum Monte-Carlo simulations.
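
    A generic maximum-entropy reconstruction sketch, not the paper's OPE sum rules: recover a positive spectral density from a few noisy integral moments by maximizing alpha*S - chi^2/2 with a Skilling-type entropy relative to a flat default model. Kernel, noise level, and alpha are all assumptions.

        import numpy as np
        from scipy.optimize import minimize

        w = np.linspace(0, 5, 60)
        dw = w[1] - w[0]
        true_rho = np.exp(-0.5 * ((w - 2.0) / 0.4) ** 2)   # hidden "truth"

        K = np.array([w ** n for n in range(4)])            # kernel: power moments
        sigma = 0.01
        data = K @ true_rho * dw + np.random.default_rng(3).normal(0, sigma, 4)

        m = np.full_like(w, true_rho.mean())                # flat default model
        alpha = 0.05                                        # entropy weight (assumed)

        def neg_Q(rho):
            # Skilling entropy S = sum(rho - m - rho*log(rho/m)) dw
            S = np.sum(rho - m - rho * np.log(rho / m)) * dw
            chi2 = np.sum(((K @ rho * dw - data) / sigma) ** 2)
            return -(alpha * S - 0.5 * chi2)

        res = minimize(neg_Q, m.copy(), bounds=[(1e-10, None)] * len(w))
        print("peak located at w =", w[np.argmax(res.x)].round(2))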

  9. Reprint of : Connection between wave transport through disordered 1D waveguides and energy density inside the sample: A maximum-entropy approach

    NASA Astrophysics Data System (ADS)

    Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.

    2016-08-01

    We study the average energy (or particle) density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample, by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.

  10. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570
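
    MaxEnt species-distribution models are closely related to logistic regression fitted on presence versus background points, so a rough sketch of the idea can be written with scikit-learn. The two predictors mirror the abstract (substrate coarseness and distance from the salt front); all values are synthetic, and this is a stand-in for, not a reproduction of, the MaxEnt software the authors used.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n_pres, n_bg = 150, 1000
        pres = np.column_stack([rng.normal(0.7, 0.15, n_pres),   # coarse substrate
                                rng.normal(20, 8, n_pres)])      # km above salt front
        bg = np.column_stack([rng.uniform(0, 1, n_bg),
                              rng.uniform(-40, 80, n_bg)])       # background sample

        X = np.vstack([pres, bg])
        y = np.r_[np.ones(n_pres), np.zeros(n_bg)]

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        cell = np.array([[0.65, 18.0]])        # a candidate river cell
        print("relative suitability:", clf.predict_proba(cell)[0, 1].round(3))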

  11. Shifting Distributions of Adult Atlantic Sturgeon Amidst Post-Industrialization and Future Impacts in the Delaware River: a Maximum Entropy Approach

    PubMed Central

    Breece, Matthew W.; Oliver, Matthew J.; Cimino, Megan A.; Fox, Dewayne A.

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species. PMID:24260570

  12. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  13. Modeling the Overalternating Bias with an Asymmetric Entropy Measure

    PubMed Central

    Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea

    2016-01-01

    Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418

  14. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  15. Modeling cancer growth and its treatment by means of statistical mechanics entropy

    NASA Astrophysics Data System (ADS)

    Khordad, R.; Rastegar Sedehi, H. R.

    2016-08-01

    In this paper, we have modeled cancer growth and its treatment based on nonextensive entropies. To this end, five nonextensive entropies are employed to model the cancer growth: the Tsallis, Rényi, Landsberg-Vedral, Abe and Escort entropies. First, we compute the growth of the cancer tumor as a function of time for all the entropies with different values of the nonextensive parameter q. As time passes, the entropies show a bounded growth of the cancer tumor size. The speed of tumor growth differs among the entropies; the Tsallis and Escort entropies give the highest and lowest speeds, respectively. For q>1, the Escort entropy cannot predict a bounded growth of the cancer tumor size. Then, we investigate the treatment of the cancer tumor by adding a cell-kill function to the evolution equation. For q<1, a constant cell-kill function is unable to reduce the cancer tumor size to zero for any of the entropies, but for q>1 a cell-kill term is suitable. According to the results, the nonextensive parameter q, the type of entropy, and the cell-kill function are important factors in modeling cancer growth and its treatment.

  16. Models, Entropy and Information of Temporal Social Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Karsai, Márton; Bianconi, Ginestra

    Temporal social networks are characterized by heterogeneous duration of contacts, which can either follow a power-law distribution, such as in face-to-face interactions, or a Weibull distribution, such as in mobile-phone communication. Here we model the dynamics of face-to-face interaction and mobile phone communication by a reinforcement dynamics, which explains the data observed in these different types of social interactions. We quantify the information encoded in the dynamics of these networks by the entropy of temporal networks. Finally, we show evidence that human dynamics is able to modulate the information present in social network dynamics when it follows circadian rhythms and when it is interfacing with a new technology such as the mobile-phone communication technology.

  17. Assessing Bayesian model averaging uncertainty of groundwater modeling based on information entropy method

    NASA Astrophysics Data System (ADS)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong; Zhu, Xiaobin; Long, Yuqiao

    2016-07-01

    Because of groundwater conceptualization uncertainty, multi-model methods are usually used and the corresponding uncertainties are estimated by integrating Markov Chain Monte Carlo (MCMC) and Bayesian model averaging (BMA) methods. Generally, the variance method is used to measure the uncertainties of BMA prediction. The total variance of the ensemble prediction is decomposed into within-model and between-model variances, which represent the uncertainties derived from the parameters and the conceptual models, respectively. However, the uncertainty of a probability distribution cannot be comprehensively quantified by variance alone. A new measuring method based on information entropy theory is proposed in this study. Because the actual BMA process can hardly meet the ideal mutually-exclusive, collectively-exhaustive condition, BMA predictive uncertainty can be decomposed into parameter, conceptual model, and overlapped uncertainties. Overlapped uncertainty is induced by the combination of predictions from correlated model structures. In this paper, five simple analytical functions are first used to illustrate the feasibility of the variance and information entropy methods. A discrete distribution example shows that information entropy can be more appropriate than variance for describing between-model uncertainty. Two continuous distribution examples show that the two methods are consistent in measuring a normal distribution, and that information entropy is more appropriate than variance for describing a bimodal distribution. The two examples of BMA uncertainty decomposition demonstrate that the two methods are relatively consistent in assessing the uncertainty of a unimodal BMA prediction, while information entropy is more informative in describing the uncertainty decomposition of a bimodal BMA prediction. Then, based on a synthetic groundwater model, the variance and information entropy methods are used to assess the BMA uncertainty of groundwater modeling. The uncertainty assessments of
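
    The entropy view of BMA uncertainty can be illustrated exactly on a discrete grid: the entropy of the model-averaged distribution splits into the weight-averaged within-model entropy plus a non-negative between-model term (a generalized Jensen-Shannon divergence). Weights and member distributions below are hypothetical.

        import numpy as np

        def H(p):
            """Shannon entropy of a discrete distribution."""
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        x = np.linspace(-5, 5, 201)
        def norm(m, s):
            d = np.exp(-0.5 * ((x - m) / s) ** 2)
            return d / d.sum()

        w = np.array([0.6, 0.4])                      # BMA model weights
        members = [norm(-1.0, 0.8), norm(2.0, 1.2)]   # two conceptual models

        mix = sum(wi * pi for wi, pi in zip(w, members))
        within = sum(wi * H(pi) for wi, pi in zip(w, members))
        between = H(mix) - within                     # >= 0, between-model share
        print(f"total {H(mix):.3f} = within {within:.3f} + between {between:.3f}")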

  18. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion is adopted. Then, Johnson transformation functions are applied for data normalization. Finally, a fractionally autoregressive integrated moving average model is used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10⁵ years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied to estimate the return periods of long sequences of days with maximum temperature above prefixed thresholds.
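
    A sketch of the first stage of this pipeline, deseasonalizing daily maxima with a truncated Fourier expansion of the annual cycle (two harmonics assumed here); the series is synthetic and the residuals would then feed the Johnson/FARIMA stages.

        import numpy as np

        rng = np.random.default_rng(5)
        days = np.arange(1, 3651)                     # ten hypothetical years
        t_max = 20 + 8 * np.sin(2 * np.pi * days / 365.25 - 1.3) \
                + rng.normal(0, 2.5, days.size)

        w = 2 * np.pi * days / 365.25
        F = np.column_stack([np.ones_like(w),
                             np.sin(w), np.cos(w),
                             np.sin(2 * w), np.cos(2 * w)])
        coef, *_ = np.linalg.lstsq(F, t_max, rcond=None)
        deseason = t_max - F @ coef                   # deseasonalized residuals
        print("residual std:", deseason.std().round(2))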

  19. Two aspects of black hole entropy in Lanczos-Lovelock models of gravity

    NASA Astrophysics Data System (ADS)

    Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.

    2012-03-01

    We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.

  20. Entropy-Based Model for Interpreting Life Systems in Traditional Chinese Medicine

    PubMed Central

    Kang, Guo-lian; Zhang, Ji-feng

    2008-01-01

    Traditional Chinese medicine (TCM) treats qi as the core of the human life systems. Starting with a hypothetical correlation between TCM qi and the entropy theory, we address in this article a holistic model for evaluating and unveiling the rule of TCM life systems. Several new concepts such as acquired life entropy (ALE), acquired life entropy flow (ALEF) and acquired life entropy production (ALEP) are propounded to interpret TCM life systems. Using the entropy theory, mathematical models are established for ALE, ALEF and ALEP, which reflect the evolution of life systems. Some criteria are given on physiological activities and pathological changes of the body in different stages of life. Moreover, a real data-based simulation shows life entropies of the human body with different ages, Cold and Hot constitutions and in different seasons in North China are coincided with the manifestations of qi as well as the life evolution in TCM descriptions. Especially, based on the comparative and quantitative analysis, the entropy-based model can nicely describe the evolution of life entropies in Cold and Hot individuals thereby fitting the Yin–Yang theory in TCM. Thus, this work establishes a novel approach to interpret the fundamental principles in TCM, and provides an alternative understanding for the complex life systems. PMID:18830452

  1. A robust channel-calibration algorithm for multi-channel in azimuth HRWS SAR imaging based on local maximum-likelihood weighted minimum entropy.

    PubMed

    Zhang, Shuang-Xi; Xing, Meng-Dao; Xia, Xiang-Gen; Liu, Yan-Yang; Guo, Rui; Bao, Zheng

    2013-12-01

    High-resolution and wide-swath (HRWS) synthetic aperture radar (SAR) is an essential tool for modern remote sensing. To effectively deal with the conflict between high resolution and low pulse repetition frequency and obtain an HRWS SAR image, multi-channel-in-azimuth SAR systems have been adopted in the literature. However, the performance of Doppler ambiguity suppression via digital beamforming suffers from channel mismatch. In this paper, a robust channel-calibration algorithm based on weighted minimum entropy is proposed for multi-channel-in-azimuth HRWS SAR imaging. The proposed algorithm is implemented in a two-step process: 1) the timing uncertainty in each channel and most of the range-invariant channel mismatches in amplitude and phase are corrected in a coarse-compensation pre-processing step; 2) after the pre-processing, only a residual range-dependent channel mismatch in phase remains, which is retrieved by a local maximum-likelihood weighted minimum entropy algorithm. A simulated multi-channel-in-azimuth HRWS SAR data experiment is used to evaluate the performance of the proposed algorithm, and real measured airborne multi-channel-in-azimuth HRWS Scan-SAR data are used to demonstrate the effectiveness of the proposed approach. PMID:23893723

  2. Improved maximum entropy method for the analysis of fluorescence spectroscopy data: evaluating zero-time shift and assessing its effect on the determination of fluorescence lifetimes.

    PubMed

    Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio

    2015-12-21

    A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The developed algorithm allows the analysis of complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays both of multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy greater than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies that are lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of the flavin adenine dinucleotide (FAD).
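
    A skeleton of the outer loop described in the abstract: scan the zero-time shift with a bounded Brent-type scalar search (scipy's minimize_scalar), refitting the decay at each trial shift. For brevity, the entropy-regularized inner MEM fit is replaced here by a plain least-squares mono-exponential fit; the decay data are synthetic.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(6)
        t = np.linspace(0, 2000, 400)                 # time axis in ps
        true_shift, tau = 35.0, 420.0
        decay = np.exp(-np.clip(t - true_shift, 0, None) / tau) \
                + rng.normal(0, 0.01, t.size)

        def red_chi2(shift):
            # Inner fit at a trial shift: linear LS on the log-decay
            ts = np.clip(t - shift, 0, None)
            mask = decay > 0.05
            slope = np.polyfit(ts[mask], np.log(decay[mask]), 1)[0]
            model = np.exp(slope * ts)
            return np.mean((decay - model) ** 2) / 0.01 ** 2

        res = minimize_scalar(red_chi2, bounds=(0.0, 100.0), method="bounded")
        print(f"recovered zero-time shift: {res.x:.1f} ps")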

  3. Develop and test a solvent accessible surface area-based model in conformational entropy calculations.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2012-05-25

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (molecular mechanics Poisson-Boltzmann surface area) and MM-GBSA (molecular mechanics generalized Born surface area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal-mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parametrized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS values, the product of temperature T and conformational entropy S, were calculated in those tests; T was set to 298.15 K throughout. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for the entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS

  4. A non-uniformly sampled 4D HCC(CO)NH-TOCSY experiment processed using maximum entropy for rapid protein sidechain assignment

    PubMed Central

    Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.

    2010-01-01

    One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments are acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives. PMID:20299257

  5. A non-uniformly sampled 4D HCC(CO)NH-TOCSY experiment processed using maximum entropy for rapid protein sidechain assignment

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.

    2010-05-01

    One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments are acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives.

  6. Evaluation of the reliability of the maximum entropy method for reconstructing 3D and 4D NOESY-type NMR spectra of proteins.

    PubMed

    Shigemitsu, Yoshiki; Ikeya, Teppei; Yamamoto, Akihiro; Tsuchie, Yuusuke; Mishima, Masaki; Smith, Brian O; Güntert, Peter; Ito, Yutaka

    2015-02-01

    Despite their advantages in analysis, 4D NMR experiments are still infrequently used as a routine tool in protein NMR projects due to the long duration of the measurement and limited digital resolution. Recently, new acquisition techniques for speeding up multidimensional NMR experiments, such as nonlinear sampling, in combination with non-Fourier transform data processing methods have been proposed to be beneficial for 4D NMR experiments. Maximum entropy (MaxEnt) methods have been utilised for reconstructing nonlinearly sampled multi-dimensional NMR data. However, the artefacts arising from MaxEnt processing, particularly, in NOESY spectra have not yet been clearly assessed in comparison with other methods, such as quantitative maximum entropy, multidimensional decomposition, and compressed sensing. We compared MaxEnt with other methods in reconstructing 3D NOESY data acquired with variously reduced sparse sampling schedules and found that MaxEnt is robust, quick and competitive with other methods. Next, nonlinear sampling and MaxEnt processing were applied to 4D NOESY experiments, and the effect of the artefacts of MaxEnt was evaluated by calculating 3D structures from the NOE-derived distance restraints. Our results demonstrated that sufficiently converged and accurate structures (RMSD of 0.91 Å to the mean and 1.36 Å to the reference structures) were obtained even with NOESY spectra reconstructed from 1.6% randomly selected sampling points for indirect dimensions. This suggests that 3D MaxEnt processing in combination with nonlinear sampling schedules is still a useful and advantageous option for rapid acquisition of high-resolution 4D NOESY spectra of proteins.

  7. A new assessment method for urbanization environmental impact: urban environment entropy model and its application.

    PubMed

    Ouyang, Tingping; Fu, Shuqing; Zhu, Zhaoyu; Kuang, Yaoqiu; Huang, Ningsheng; Wu, Zhifeng

    2008-11-01

    The thermodynamic law is one of the most widely used scientific principles. The comparability between the environmental impact of urbanization and thermodynamic entropy was systematically analyzed. Consequently, the concept of "Urban Environment Entropy" was introduced and an "Urban Environment Entropy" model was established for urbanization environmental impact assessment in this study. The model was then utilized in a case study assessing river water quality in the Pearl River Delta Economic Zone. The results indicated that the assessment results of the model are consistent with those of the equalized synthetic pollution index method. It can therefore be concluded that the Urban Environment Entropy model has high reliability and can be applied widely in urbanization environmental assessment research using many different environmental parameters.

  8. The viscosity of planetary tholeiitic melts: A configurational entropy model

    NASA Astrophysics Data System (ADS)

    Sehlke, Alexander; Whittington, Alan G.

    2016-10-01

    The viscosity (η) of silicate melts is a fundamental physical property controlling mass transfer in magmatic systems. Viscosity can span many orders of magnitude, strongly depending on temperature and composition. Several models are available that describe this dependency for terrestrial melts quite well. Planetary basaltic lavas however are distinctly different in composition, being dominantly alkali-poor, iron-rich and/or highly magnesian. We measured the viscosity of 20 anhydrous tholeiitic melts, of which 15 represent known or estimated surface compositions of Mars, Mercury, the Moon, Io and Vesta, by concentric cylinder and parallel plate viscometry. The planetary basalts span a viscosity range of 2 orders of magnitude at liquidus temperatures and 4 orders of magnitude near the glass transition, and can be more or less viscous than terrestrial lavas. We find that current models under- and overestimate superliquidus viscosities by up to 2 orders of magnitude for these compositions, and deviate even more strongly from measured viscosities toward the glass transition. We used the Adam-Gibbs theory (A-G) to relate viscosity (η) to absolute temperature (T) and the configurational entropy of the system at that temperature (S^conf), in the form log η = A_e + B_e/(T·S^conf). Heat capacities (C_P) for glasses and liquids of our investigated compositions were calculated via available literature models. We show that the A-G theory is applicable to model the viscosity of individual complex tholeiitic melts containing 10 or more major oxides as well or better than the commonly used empirical equations. We successfully modeled the global viscosity data set using a constant A_e of -3.34 ± 0.22 log units and 12 adjustable sub-parameters, which capture the compositional and temperature dependence of melt viscosity. Seven sub-parameters account for the compositional dependence of B_e and 5 for S^conf. Our model reproduces the 496 measured viscosity data points with a 1
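
    A small sketch of evaluating an Adam-Gibbs style model, log10(eta) = A_e + B_e/(T·S^conf(T)), with S^conf grown from a reference value by a constant configurational heat capacity, S^conf(T) = S_ref + Cp_conf·ln(T/T_ref). Apart from A_e, which the abstract quotes, all parameter values are placeholders, not the paper's fitted sub-parameters.

        import numpy as np

        Ae = -3.34                     # log10 Pa.s, pre-exponent from the abstract
        Be = 9.0e4                     # J/mol, hypothetical
        S_ref, T_ref = 8.0, 1000.0     # J/(mol K) at reference T, hypothetical
        Cp_conf = 10.0                 # J/(mol K), hypothetical

        def log10_eta(T):
            # Configurational entropy at T from a constant Cp_conf
            S_conf = S_ref + Cp_conf * np.log(T / T_ref)
            return Ae + Be / (T * S_conf)

        for T in (900.0, 1100.0, 1400.0):
            print(f"T = {T:6.0f} K  ->  log10(eta / Pa.s) = {log10_eta(T):6.2f}")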

  9. Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster sizes present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.
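
    A minimal sketch of the cluster-size entropy itself: flood-fill clusters of identical culture vectors on a square lattice, then take the Shannon entropy of the resulting cluster-size distribution. The lattice size and the values of F and q below are arbitrary, and the random grid stands in for a configuration produced by the actual Axelrod dynamics.

      import numpy as np
      from collections import Counter

      def cluster_sizes(culture):
          """Sizes of 4-connected clusters of identical culture vectors."""
          L = culture.shape[0]
          seen = np.zeros((L, L), dtype=bool)
          sizes = []
          for i in range(L):
              for j in range(L):
                  if seen[i, j]:
                      continue
                  stack, size = [(i, j)], 0
                  seen[i, j] = True
                  while stack:
                      x, y = stack.pop()
                      size += 1
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          nx, ny = x + dx, y + dy
                          if (0 <= nx < L and 0 <= ny < L and not seen[nx, ny]
                                  and np.array_equal(culture[nx, ny], culture[x, y])):
                              seen[nx, ny] = True
                              stack.append((nx, ny))
                  sizes.append(size)
          return sizes

      def cluster_size_entropy(sizes):
          """S_c = -sum_s p(s) ln p(s) over the distribution of cluster sizes s."""
          counts = np.array(list(Counter(sizes).values()), dtype=float)
          p = counts / counts.sum()
          return float(-(p * np.log(p)).sum())

      rng = np.random.default_rng(1)
      culture = rng.integers(0, 3, size=(20, 20, 4))   # F = 4 features, q = 3 traits
      print(cluster_size_entropy(cluster_sizes(culture)))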

  10. Entropy, chaos, and excited-state quantum phase transitions in the Dicke model.

    PubMed

    Lóbez, C M; Relaño, A

    2016-07-01

    We study nonequilibrium processes in an isolated quantum system, the Dicke model, focusing on the role played by the transition from integrability to chaos and the presence of excited-state quantum phase transitions. We show that both diagonal and entanglement entropies are abruptly increased by the onset of chaos. Also, this increase ends in both cases just after the system crosses the critical energy of the excited-state quantum phase transition. The link between entropy production, the development of chaos, and the excited-state quantum phase transition is clearer for the entanglement entropy. PMID:27575109

  11. Dynamic approximate entropy electroanatomic maps detect rotors in a simulated atrial fibrillation model.

    PubMed

    Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
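
    The core measure behind these maps, approximate entropy, is a standard algorithm; a compact version for a scalar series follows, with the conventional choices m = 2 and r = 0.2 times the standard deviation. The electrogram-specific optimisation described above is not reproduced here.

      import numpy as np

      def approximate_entropy(x, m=2, r=None):
          """ApEn(m, r) = Phi(m) - Phi(m+1); Chebyshev distance, self-matches included."""
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * x.std()
          def phi(mm):
              n = len(x) - mm + 1
              templates = np.array([x[i:i + mm] for i in range(n)])
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              c = (d <= r).mean(axis=1)        # fraction of matching templates
              return np.log(c).mean()
          return phi(m) - phi(m + 1)

      rng = np.random.default_rng(0)
      print(approximate_entropy(np.sin(np.linspace(0, 8 * np.pi, 300))))  # regular: low
      print(approximate_entropy(rng.standard_normal(300)))                # noisy: higher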

  12. A new one-dimensional radiative equilibrium model for investigating atmospheric radiation entropy flux.

    PubMed

    Wu, Wei; Liu, Yangang

    2010-05-12

    A new one-dimensional radiative equilibrium model is built to analytically evaluate the vertical profile of the Earth's atmospheric radiation entropy flux under the assumption that atmospheric longwave radiation emission behaves as a greybody and shortwave radiation as a diluted blackbody. Results show that both the atmospheric shortwave and net longwave radiation entropy fluxes increase with altitude, and the latter is about one order of magnitude greater than the former. The vertical profile of the atmospheric net radiation entropy flux follows approximately that of the atmospheric net longwave radiation entropy flux. A sensitivity study further reveals that a 'darker' atmosphere with a larger overall atmospheric longwave optical depth exhibits a smaller net radiation entropy flux at all altitudes, suggesting an intrinsic connection between the atmospheric net radiation entropy flux and the overall atmospheric longwave optical depth. These results indicate that the overall strength of the atmospheric irreversible processes at all altitudes as determined by the corresponding atmospheric net entropy flux is closely related to the amount of greenhouse gases in the atmosphere.

  13. Entropy analysis on non-equilibrium two-phase flow models

    SciTech Connect

    Karwat, H.; Ruan, Y.Q.

    1995-09-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships.

  14. An Integrated Modeling Framework for Probable Maximum Precipitation and Flood

    NASA Astrophysics Data System (ADS)

    Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.

    2015-12-01

    With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the effects of climate change on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF in the projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects associated with PMP and PMF.

  15. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model

    PubMed Central

    Chao, Anne; Jost, Lou; Hsieh, T. C.; Ma, K. H.; Sherwin, William B.; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information (“Shannon differentiation”) between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.
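
    The basic quantities involved, the Shannon entropy of allele frequencies and the mutual information between subpopulation and allele ("Shannon differentiation"), can be computed from an allele-count table as in the sketch below; the counts are invented.

      import numpy as np

      def shannon(p):
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())

      counts = np.array([[30, 10, 10],      # subpopulation 1 allele counts
                         [ 5, 25, 20]])     # subpopulation 2 allele counts
      joint = counts / counts.sum()         # joint distribution over (subpop, allele)
      H_total = shannon(joint.sum(axis=0))  # entropy of pooled allele frequencies
      H_within = sum(w * shannon(row / row.sum())
                     for w, row in zip(joint.sum(axis=1), counts))
      print(H_total, H_within, H_total - H_within)   # last term: mutual information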

  16. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.

  17. Abolishing the maximum tension principle

    NASA Astrophysics Data System (ADS)

    Dąbrowski, Mariusz P.; Gohar, H.

    2015-09-01

    We find a series of example theories for which the relativistic limit of maximum tension Fmax = c^4/(4G), represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
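
    Numerically the quoted limit is enormous; a quick check with CODATA constants (a minimal sketch using scipy.constants):

      from scipy.constants import c, G   # CODATA values of c and G

      F_max = c**4 / (4 * G)             # conjectured maximum force, c^4/(4G)
      print(f"F_max = {F_max:.3e} N")    # roughly 3.0e43 N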

  18. Constant Entropy Properties for an Approximate Model of Equilibrium Air

    NASA Technical Reports Server (NTRS)

    Hansen, C. Frederick; Hodge, Marion E.

    1961-01-01

    Approximate analytic solutions for properties of equilibrium air up to 15,000 K have been programmed for machine computation. Temperature, compressibility, enthalpy, specific heats, and speed of sound are tabulated as constant entropy functions of temperature. The reciprocal of acoustic impedance and its integral with respect to pressure are also given for the purpose of evaluating the Riemann constants for one-dimensional, isentropic flow.

  19. Improved model for the transit entropy of monatomic liquids

    NASA Astrophysics Data System (ADS)

    Chisolm, Eric; Bock, Nicolas; Wallace, Duane

    2010-03-01

    In the original formulation of vibration-transit (V-T) theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This implied that the transit contribution to energy vanishes, which is incorrect. Here we develop a new formulation that corrects this deficiency. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution Svib(T/θ0), where T is temperature and θ0 is the vibrational characteristic temperature, and (b) the transit contribution Str(T/θtr), where θtr is a scaling temperature for each liquid. The appearance of a common functional form of Str for all the liquids studied is deduced from the experimental data, when analyzed via the V-T formula. The theoretical entropy of melting is derived, in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ0 for Na and Cu, based on density functional theory, provides verification of our analysis and V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  20. Develop and Test a Solvent Accessible Surface Area-Based Model in Conformational Entropy Calculations

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2012-01-01

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (Molecular Mechanics-Poisson Boltzmann Surface Area) and MM-GBSA (Molecular Mechanics-Generalized Born Surface Area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal mode analysis (NMA), is needed to calculate the absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parameterized using a large set of small molecules whose conformational entropies were calculated at the B3LYP/6-31G* level taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS, the product of temperature T and conformational entropy S, was calculated in those tests; T was always set to 298.15 K throughout the text. First of all, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-entropy calculations): the mean squared correlation coefficient (R²) was 0.56. As to the 20 complexes, the TS changes
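
    The abstract does not give the exact functional form, so the following is only one plausible reading of the description: per-atom-type weights applied to the solvent accessible and buried surface areas, with a global parameter k balancing the two. All weights and areas below are invented for illustration.

      # Hypothetical WSAS-style conformational entropy sketch; the paper's true
      # parameterization may differ, and these weights/areas are invented.
      WEIGHTS = {"C": 0.30, "N": 0.20, "O": 0.25, "H": 0.10}   # per atom type (invented)

      def wsas_TS(atoms, k=0.5, T=298.15):
          """atoms: iterable of (atom_type, sas, bsas), areas in A^2.
          Returns an illustrative T*S value."""
          s = sum(WEIGHTS[t] * (sas + k * bsas) for t, sas, bsas in atoms)
          return T * s / 1000.0

      atoms = [("C", 12.0, 8.0), ("N", 5.0, 10.0), ("O", 7.5, 3.0), ("H", 20.0, 1.0)]
      print(wsas_TS(atoms))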

  1. Tracking instantaneous entropy in heartbeat dynamics through inhomogeneous point-process nonlinear models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-01-01

    Measures of entropy have proved to be powerful quantifiers of complex nonlinear systems, particularly when applied to stochastic series of heartbeat dynamics. Despite the remarkable achievements obtained through standard definitions of approximate and sample entropy, a time-varying definition of entropy characterizing the physiological dynamics at each moment in time is still missing. To this end, we propose two novel measures of entropy based on the inhomogeneous point-process theory. The RR interval series is modeled through probability density functions (pdfs) which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through such probability functions, the proposed indices are able to provide instantaneous tracking of autonomic nervous system complexity. Of note, the distance between the time-varying phase-space vectors is calculated through the Kolmogorov-Smirnov distance of two pdfs. Experimental results, obtained from the analysis of RR interval series extracted from ten healthy subjects during stand-up tasks, suggest that the proposed entropy indices provide instantaneous tracking of the heartbeat complexity, also allowing for the definition of complexity variability indices.

  2. Tracking instantaneous entropy in heartbeat dynamics through inhomogeneous point-process nonlinear models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-01-01

    Measures of entropy have proved to be powerful quantifiers of complex nonlinear systems, particularly when applied to stochastic series of heartbeat dynamics. Despite the remarkable achievements obtained through standard definitions of approximate and sample entropy, a time-varying definition of entropy characterizing the physiological dynamics at each moment in time is still missing. To this end, we propose two novel measures of entropy based on the inhomogeneous point-process theory. The RR interval series is modeled through probability density functions (pdfs) which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through such probability functions, the proposed indices are able to provide instantaneous tracking of autonomic nervous system complexity. Of note, the distance between the time-varying phase-space vectors is calculated through the Kolmogorov-Smirnov distance of two pdfs. Experimental results, obtained from the analysis of RR interval series extracted from ten healthy subjects during stand-up tasks, suggest that the proposed entropy indices provide instantaneous tracking of the heartbeat complexity, also allowing for the definition of complexity variability indices. PMID:25571453

  3. Computational realizations of the entropy condition in modeling congested traffic flow. Final report

    SciTech Connect

    Bui, D.D.; Nelson, P.; Narasimhan, S.L.

    1992-04-01

    Existing continuum models of traffic flow tend to provide somewhat unrealistic predictions for conditions of congested flow. Previous approaches to modeling congested flow conditions are based on various types of special treatments at the congested freeway sections. Ansorge (Transpn. Res. B, 24B (1990), 133-143) has suggested that such difficulties might be substantially alleviated, even for the simple conservation model of Lighthill and Whitham, if the entropy condition were incorporated into the numerical schemes. In this report the numerical aspects and effects of incorporating the entropy condition in congested traffic flow problems are discussed. Results for simple scenarios involving dissipation of traffic jams suggest that Godunov's method, a numerical technique that incorporates the entropy condition, is more accurate than two alternative methods. Similarly, numerical results for this method, applied to simple model problems involving the formation of traffic jams, appear at least as realistic as those obtained from the well-known code FREFLO.
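
    A minimal Godunov discretisation of the Lighthill-Whitham conservation law with the Greenshields flux q(ρ) = ρ(1 − ρ) shows how the entropy condition enters through the interface flux; the grid, time step, and initial data below are illustrative.

      import numpy as np

      def q(rho):
          return rho * (1.0 - rho)          # Greenshields flux, normalized units

      def godunov_flux(rl, rr):
          # Godunov flux: min of q over [rl, rr] if rl <= rr, else max over [rr, rl].
          # Picking the transonic value q(0.5) enforces the entropy condition.
          if rl <= rr:
              return min(q(rl), q(rr))
          if rl > 0.5 > rr:
              return q(0.5)                 # rarefaction through the sonic point
          return max(q(rl), q(rr))

      nx, dx, dt = 200, 1.0 / 200, 0.002    # CFL: dt * max|q'| / dx = 0.4 < 1
      rho = np.where(np.arange(nx) * dx < 0.5, 0.9, 0.1)   # jam dissolving downstream

      for _ in range(100):
          f = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(nx - 1)])
          rho[1:-1] -= dt / dx * (f[1:] - f[:-1])   # conservative update, ends fixed

      print(np.round(rho[::25], 3))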

  4. Maximum sustainable yields from a spatially-explicit harvest model.

    PubMed

    Takashina, Nao; Mougi, Akihiko

    2015-10-21

    Spatial heterogeneity plays an important role in complex ecosystem dynamics, and therefore is also an important consideration in sustainable resource management. However, little is known about how spatial effects can influence management targets derived from a non-spatial harvest model. Here, we extended the Schaefer model, a conventional non-spatial harvest model that is widely used in resource management, to a spatially-explicit harvest model by integrating environmental heterogeneities, as well as species exchange between patches. By comparing the maximum sustainable yields (MSY), one of the central management targets in resource management, obtained from the spatially extended model with those of the conventional model, we examined the effect of spatial heterogeneity. When spatial heterogeneity exists, we found that the Schaefer model tends to overestimate the MSY, implying the potential to cause overharvesting. In addition, by assuming a well-mixed population in the heterogeneous environment, we showed analytically that the Schaefer model always overestimates the MSY, regardless of the number of patches. The degree of overestimation becomes significant when spatial heterogeneity is marked. Collectively, these results highlight the importance of integrating spatial structure to conduct sustainable resource management.
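
    The comparison can be sketched numerically: the non-spatial Schaefer MSY is rK/4 (here with pooled capacity), while the spatial yield is found by relaxing a two-patch logistic model with dispersal to equilibrium at each effort level. All parameter values are illustrative rather than taken from the paper.

      import numpy as np

      r, K1, K2, D, qc = 1.0, 1.0, 0.2, 0.05, 1.0   # growth, capacities, dispersal, catchability

      def equilibrium_yield(E, steps=20000, dt=0.01):
          """Integrate the two-patch model to (approximate) equilibrium; return yield."""
          B1, B2 = K1, K2
          for _ in range(steps):
              dB1 = r * B1 * (1 - B1 / K1) + D * (B2 - B1) - qc * E * B1
              dB2 = r * B2 * (1 - B2 / K2) + D * (B1 - B2) - qc * E * B2
              B1, B2 = max(B1 + dt * dB1, 0.0), max(B2 + dt * dB2, 0.0)
          return qc * E * (B1 + B2)

      spatial_msy = max(equilibrium_yield(E) for E in np.linspace(0.0, 1.0, 101))
      schaefer_msy = r * (K1 + K2) / 4              # non-spatial MSY, pooled capacity
      print(spatial_msy, schaefer_msy)              # Schaefer is typically the larger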

  5. Modeling of groundwater productivity in northeastern Wasit Governorate, Iraq using frequency ratio and Shannon's entropy models

    NASA Astrophysics Data System (ADS)

    Al-Abadi, Alaa M.

    2015-04-01

    In recent years, delineation of groundwater productivity zones has played an increasingly important role in sustainable management of groundwater resources throughout the world. In this study, the groundwater productivity index of northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in a GIS framework. Eight factors believed to influence groundwater occurrence in the study area were selected and used as input data. These factors were elevation (m), slope angle (degree), geology, soil, aquifer transmissivity (m2/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/sec) was prepared; 47 boreholes (70%) were used as training data and the remaining 21 (30%) were used for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results of the analysis indicate that the FR model, with a success rate of 87.4% and a prediction rate of 86.9%, performed slightly better than Shannon's entropy model, with a success rate of 84.4% and a prediction rate of 82.4%. The resultant groundwater productivity index (GWPI) was classified into five classes using the natural break classification scheme: very low, low, moderate, high, and very high. The high-very high classes for the FR and Shannon's entropy models occurred within 30% (217 km2) and 31% (220 km2), respectively, indicating low productivity conditions of the aquifer system. Both models were capable of prospecting the GWPI with very good results, but FR was better in terms of success and prediction rates. The results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.

  6. The Maximum Disk Hypothesis and 2-D Spiral Galaxy Models

    NASA Astrophysics Data System (ADS)

    Palunas, P.; Williams, T. B.

    1995-12-01

    We present an analysis of two-dimensional Hα velocity fields and I-band surface photometry for spiral galaxies taken from the southern sky Fabry-Perot Tully-Fisher survey (Schommer et al., 1993, AJ 105, 97). We construct axi-symmetric maximum disk mass models for 75 galaxies and examine in detail the deviations from axi-symmetry in the surface brightness and kinematics for a subsample of these galaxies. The luminosity profiles and rotation curves are derived using consistent centers, position angles, and inclinations. The disk and bulge are deconvolved by fitting an exponential disk and a series expansion of Gaussians for the bulge directly to the I-band images. This helps constrain the deconvolution by exploiting geometric information as well as the distinct disk and bulge radial profiles. The final disk model is the surface brightness profile of the bulge-subtracted image. The photometric model is fitted to the rotation curve assuming a maximum disk and constant M/L's for the disk and bulge components. The overall structure of the photometric models reproduces the structure in the rotation curves in the majority of galaxies spanning a large range of morphologies and rotation widths from 120 km/s to 680 km/s. The median I-band M/L in solar units is 2.8, consistent with normal stellar populations. These results make the disk-halo conspiracy even more puzzling. The degree to which spiral galaxy mass models can reproduce small-scale structure in rotation curves is often used as evidence to support or refute the maximum disk hypothesis. However, single-slit rotation curves sample the velocity distribution only along the major axis, and photometric profiles for inclined galaxies are also sampled most heavily near the major axis. The small-scale structure can be due to local perturbations, such as spiral arms and spiral-arm streaming motions, rather than variations in the global mass distribution. We test this hypothesis by analysing azimuthal correlations in

  7. Experimental tests of the von Karman self-preservation hypothesis: decay of an electron plasma to a near-maximum entropy state

    NASA Astrophysics Data System (ADS)

    Rodgers, D.; Servidio, S.; Matthaeus, W. H.; Montgomery, D.; Mitchell, T.; Aziz, T.

    2009-12-01

    The self-preservation hypothesis of von Karman [1] implies that in three-dimensional turbulence the energy E decays as dE/dt = -a Z^3/L, where a is a constant, Z is the turbulence amplitude and L is a similarity length scale. Extensions of this idea to MHD [2] have been of great utility in solar wind and coronal heating studies. Here we conduct an experimental study of this idea in the context of two-dimensional electron plasma turbulence. In particular, we examine the time evolution that leads to dynamical relaxation of a pure electron plasma in a Malmberg-Penning (MP) trap, comparing experiments and statistical theories of weakly dissipative two-dimensional (2D) turbulence [3]. A formulation of von Karman-Howarth (vKH) self-preserving decay is presented for a 2D positive-vorticity fluid, a system that corresponds closely to a 2D electron ExB drift plasma. When the enstrophy of the meta-stable equilibrium is accounted for, the enstrophy decay follows the predicted vKH decay for a variety of initial conditions in the MP experiment. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state, evidently driven by a self-preserving decay of enstrophy. [1] T. von Kármán and L. Howarth, Proc. Roy. Soc. Lond. A, 164, 192, 1938. [2] W. H. Matthaeus, G. P. Zank, and S. Oughton. J. Plas. Phys., 56:659, 1996. [3] D. J. Rodgers, S. Servidio, W. H. Matthaeus, D. C. Montgomery, T. B. Mitchell, and T. Aziz. Phys. Rev. Lett., 102(24):244501, 2009.
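
    With L held constant the decay law closes: writing E = Z^2, dE/dt = -a E^(3/2)/L integrates to E(t) = E0 / (1 + a sqrt(E0) t / (2L))^2, which makes a quick self-check of a numerical integration possible. All constants below are arbitrary.

      import numpy as np

      a, L, E0, dt, T = 0.5, 1.0, 1.0, 1e-3, 10.0   # illustrative constants

      E, t = E0, 0.0
      while t < T:
          E += dt * (-a * E**1.5 / L)               # dE/dt = -a Z^3 / L with Z = sqrt(E)
          t += dt

      exact = E0 / (1 + a * np.sqrt(E0) * T / (2 * L))**2
      print(E, exact)                               # Euler result vs closed form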

  8. Estimating the entropy of DNA sequences.

    PubMed

    Schmitt, A O; Herzel, H

    1997-10-01

    The Shannon entropy is a standard measure for the order state of symbol sequences, such as, for example, DNA sequences. In order to incorporate correlations between symbols, the entropy of n-mers (consecutive strands of n symbols) has to be determined. Here, an assay is presented to estimate such higher order entropies (block entropies) for DNA sequences when the actual number of observations is small compared with the number of possible outcomes. The n-mer probability distribution underlying the dynamical process is reconstructed using elementary statistical principles: the theorem of asymptotic equidistribution and the Maximum Entropy Principle. Constraints are set to force the constructed distributions to adopt features which are characteristic of the real probability distribution. From the many solutions compatible with these constraints, the one with the highest entropy is the most likely one according to the Maximum Entropy Principle. An algorithm performing this procedure is expounded. It is tested by applying it to various DNA model sequences whose exact entropies are known. Finally, results for a real DNA sequence, the complete genome of the Epstein-Barr virus, are presented and compared with those of other information carriers (texts, computer source code, music). It seems as if DNA sequences possess much more freedom in the combination of the symbols of their alphabet than written language or computer source codes. PMID:9344742
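
    The naive plug-in block-entropy estimator that the reconstruction above improves upon takes a few lines; note how the estimate collapses below the true 2n bits once 4^n rivals the number of observed n-mers, which is exactly the undersampling regime the paper addresses.

      import random
      from collections import Counter
      from math import log

      def block_entropy(seq, n):
          """Plug-in Shannon entropy (bits) of the n-mer distribution of seq."""
          counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
          total = sum(counts.values())
          return -sum((c / total) * log(c / total, 2) for c in counts.values())

      random.seed(0)
      dna = "".join(random.choice("ACGT") for _ in range(10000))
      for n in (1, 2, 4, 8):
          print(n, round(block_entropy(dna, n), 3))   # ideal value for random DNA: 2n bits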

  9. Configurational entropy as a constraint for Gauss-Bonnet braneworld models

    NASA Astrophysics Data System (ADS)

    Correa, R. A. C.; Moraes, P. H. R. S.; de Souza Dutra, A.; de Paula, W.; Frederico, T.

    2016-10-01

    Configurational entropy has been revealed as a reliable method for constraining some parameters of a given model [Phys. Rev. D 92, 126005 (2015); Eur. Phys. J. C 76, 100 (2016)]. In this work, we calculate the configurational entropy in Gauss-Bonnet braneworld models. Our results restrict the range of acceptability of the Gauss-Bonnet scalar values. In this way, the information theoretical measure in Gauss-Bonnet scenarios opens a new window to probe situations where the additional parameters, responsible for the Gauss-Bonnet sector, are arbitrary.

  10. A stochastic Pella Tomlinson model and its maximum sustainable yield.

    PubMed

    Bordet, Charles; Rivest, Louis-Paul

    2014-11-01

    This paper investigates the biological reference points, such as the maximum sustainable yield (MSY), for the Pella-Tomlinson and the Fox surplus production models (SPM) in the presence of a multiplicative environmental noise. These models are used in fisheries stock assessment as a firsthand tool for the elaboration of harvesting strategies. We derive conditions on the environmental noise distribution that ensure that the biomass process for an SPM has a stationary distribution, so that extinction is avoided. Explicit results about the stationary behavior of the biomass distribution are provided for a particular specification of the noise. The consideration of random noise in the MSY calculations leads to more conservative harvesting targets than deterministic models. The derivations account for a possible noise autocorrelation that represents the occurrence of spells of good and bad years. The impact of the noise is found to be more severe on the Pella-Tomlinson model, for which the asymmetry parameter p is large, while it is less important for the Fox model.

  11. Modeling East African tropical glaciers during the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Doughty, Alice; Kelly, Meredith; Russell, James; Jackson, Margaret; Anderson, Brian; Nakileza, Robert

    2016-04-01

    The timing and magnitude of tropical glacier fluctuations since the last glacial maximum could elucidate how climatic signals transfer between hemispheres. We focus on ancient glaciers of the East African Rwenzori Mountains, Uganda/D.R. Congo, where efforts to map and date the moraines are ongoing. We use a coupled mass balance - ice flow model to infer past climate by simulating glacier extents that match the mapped and dated LGM moraines. A range of possible temperature/precipitation change combinations (e.g. -15% precipitation and -7 °C temperature change) allows simulated glaciers to fit the LGM moraines dated to 20,140 ± 610 and 23,370 ± 470 years old.

  12. Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Wells, W. R.

    1979-01-01

    A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and the trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. Longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by using numerical differentiation. The paper concludes with examples using computer-generated and real flight data.

  13. Fine structure of the entanglement entropy in the O(2) model

    NASA Astrophysics Data System (ADS)

    Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z. Y.; Meurice, Y.

    2016-01-01

    We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture.

  14. Thermospheric density model biases at the 23rd sunspot maximum

    NASA Astrophysics Data System (ADS)

    Pardini, C.; Moe, K.; Anselmo, L.

    2012-07-01

    Uncertainties in neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of orbit prediction and determination at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, or to create even more precise and sophisticated tools. Special attention has also been paid to finding more appropriate solar and geomagnetic indices. However, the operational models still suffer from weaknesses. Even though a number of studies have been carried out in the last few years to quantify performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models, widely used in spacecraft operations, were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. During the time span considered, for each satellite and atmospheric density model, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were

  15. Entity Relation Detection with Factorial Hidden Markov Models and Maximum Entropy Discriminant Latent Dirichlet Allocations

    ERIC Educational Resources Information Center

    Li, Dingcheng

    2011-01-01

    Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…

  16. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put differently, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  17. Entropy production analysis for hump characteristics of a pump turbine model

    NASA Astrophysics Data System (ADS)

    Li, Deyou; Gong, Ruzhi; Wang, Hongjie; Xiang, Gaoming; Wei, Xianzhu; Qin, Daqing

    2016-07-01

    The hump characteristic is one of the main problems for the stable operation of pump turbines in pump mode. However, traditional methods cannot directly reflect the energy dissipation in the hump region. In this paper, 3D simulations are carried out using the SST k-ω turbulence model in pump mode under different guide vane openings. The numerical results agree with the experimental data. The entropy production theory is introduced to determine the flow losses in the whole passage, based on the numerical simulation. The variation of entropy production under different guide vane openings is presented. The results show that entropy production exhibits a wave-like pattern, with peaks under different guide vane openings that correspond to wave troughs in the external characteristic curves. Entropy production mainly happens in the runner, guide vanes and stay vanes for a pump turbine in pump mode. Finally, the entropy production rate distribution in the runner, guide vanes and stay vanes is analyzed for four points under the 18 mm guide vane opening in the hump region. The analysis indicates that the losses of the runner and guide vanes lead to the hump characteristics. In addition, the losses mainly occur in the runner inlet near the band and on the suction surface of the blades. In the guide vanes and stay vanes, the losses come from the pressure surface of the guide vanes and the wake effects of the vanes. Entropy production analysis, a new approach, is carried out in this paper to find the causes of hump characteristics in a pump turbine, and it could provide some basic theoretical guidance for the loss analysis of hydraulic machinery.

  18. Entropy production analysis for hump characteristics of a pump turbine model

    NASA Astrophysics Data System (ADS)

    Li, Deyou; Gong, Ruzhi; Wang, Hongjie; Xiang, Gaoming; Wei, Xianzhu; Qin, Daqing

    2016-06-01

    The hump characteristic is one of the main problems for the stable operation of pump turbines in pump mode. However, traditional methods cannot directly reflect the energy dissipation in the hump region. In this paper, 3D simulations are carried out using the SST k-ω turbulence model in pump mode under different guide vane openings. The numerical results agree with the experimental data. The entropy production theory is introduced to determine the flow losses in the whole passage, based on the numerical simulation. The variation of entropy production under different guide vane openings is presented. The results show that entropy production exhibits a wave-like pattern, with peaks under different guide vane openings that correspond to wave troughs in the external characteristic curves. Entropy production mainly happens in the runner, guide vanes and stay vanes for a pump turbine in pump mode. Finally, the entropy production rate distribution in the runner, guide vanes and stay vanes is analyzed for four points under the 18 mm guide vane opening in the hump region. The analysis indicates that the losses of the runner and guide vanes lead to the hump characteristics. In addition, the losses mainly occur in the runner inlet near the band and on the suction surface of the blades. In the guide vanes and stay vanes, the losses come from the pressure surface of the guide vanes and the wake effects of the vanes. Entropy production analysis, a new approach, is carried out in this paper to find the causes of hump characteristics in a pump turbine, and it could provide some basic theoretical guidance for the loss analysis of hydraulic machinery.

  19. A simple modelling approach for prediction of standard state real gas entropy of pure materials.

    PubMed

    Bagheri, M; Borhani, T N G; Gandomi, A H; Manan, Z A

    2014-01-01

    The performance of an energy conversion system depends on exergy analysis and entropy generation minimisation. A new simple four-parameter equation is presented in this paper to predict the standard state absolute entropy of real gases (SSTD). The model development and validation were accomplished using the Linear Genetic Programming (LGP) method and a comprehensive dataset of 1727 widely used materials. The proposed model was compared with the results obtained using a three-layer feed-forward neural network model (FFNN model). The root-mean-square error (RMSE) and the coefficient of determination (r²) of all data obtained for the LGP model were 52.24 J/(mol K) and 0.885, respectively. Several statistical assessments were used to evaluate the predictive power of the model. In addition, this study provides an appropriate understanding of the most important molecular variables for exergy analysis. Compared with the LGP-based model, the application of FFNN improved the r² to 0.914. The developed model is useful in the design of materials to achieve a desired entropy value.

  20. The Entropy Estimation of the Physics’ Course Content on the Basis of Intradisciplinary Connections’ Information Model

    NASA Astrophysics Data System (ADS)

    Tatyana, Gnitetskaya

    2016-08-01

    In this paper the information model of intradisciplinary connections and the semantic structures method are described, and the information parameters used in the model are introduced. The question we address is how to optimize the content of a physics course. As an example, we show the differences between entropy values for the content of physics lectures on the same topic but with different logics of explanation.

  1. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    PubMed Central

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of a gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms. PMID:25258726
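
    The uniformity idea can be sketched by treating the normalized exhaust-gas temperatures of a thermocouple ring as a probability vector and taking its Shannon entropy: the entropy is maximal for perfectly uniform temperatures and drops when one burner path deviates. The readings below are invented.

      import numpy as np

      def temperature_uniformity_entropy(temps):
          """Shannon entropy of normalized exhaust temperatures (uniformity measure)."""
          p = np.asarray(temps, dtype=float)
          p = p / p.sum()
          return float(-(p * np.log(p)).sum())

      healthy = [780, 785, 782, 779, 781, 784, 780, 783]
      faulty  = [780, 785, 640, 779, 781, 784, 780, 783]   # one cold spot
      print(temperature_uniformity_entropy(healthy))       # close to ln(8) ~ 2.079
      print(temperature_uniformity_entropy(faulty))        # lower: less uniform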

  2. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    PubMed

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of a gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms. PMID:25258726

  3. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    PubMed

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of a gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms.

  4. Assessment model of ecoenvironmental vulnerability based on improved entropy weight method.

    PubMed

    Zhang, Xianqi; Wang, Chenbo; Li, Enkuan; Xu, Cundong

    2014-01-01

    Assessment of ecoenvironmental vulnerability plays an important role in guiding regional planning and the construction and protection of the ecological environment, and it requires comprehensive consideration of regional resources, environment, ecology, society, and other factors. Based on the driving mechanism and evolution characteristics of ecoenvironmental vulnerability in the cold and arid regions of China, a novel evaluation index system for ecoenvironmental vulnerability is proposed in this paper. To address the disadvantages of the conventional entropy weight method, an improved entropy weight assessment model of ecoenvironmental vulnerability is developed and applied to evaluate the ecoenvironmental vulnerability of western Jilin Province, China. The assessment results indicate that the model is suitable for ecoenvironmental vulnerability assessment, and it shows more reasonable evaluation criteria, more distinct insights, and satisfactory results consistent with practical conditions. The model can provide a new method for regional ecoenvironmental vulnerability evaluation. PMID:25133260
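
    For reference, the conventional entropy weight method that the paper improves upon works as follows: column-normalize the decision matrix, compute each indicator's Shannon entropy, and give larger weight to indicators with lower entropy (greater discriminating power). The sketch below implements this baseline with invented data; the paper's improved variant modifies this scheme.

      import numpy as np

      def entropy_weights(X):
          """Conventional entropy weight method for an m x n decision matrix."""
          X = np.asarray(X, dtype=float)
          m, _ = X.shape
          P = X / X.sum(axis=0)                      # column-normalized proportions
          with np.errstate(divide="ignore", invalid="ignore"):
              logs = np.where(P > 0, np.log(P), 0.0)
          e = -(P * logs).sum(axis=0) / np.log(m)    # entropy per indicator, in [0, 1]
          d = 1.0 - e                                # degree of diversification
          return d / d.sum()                         # normalized weights

      X = [[0.2, 30, 5.1],
           [0.4, 28, 4.9],
           [0.9, 31, 5.0]]
      print(entropy_weights(X))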

  5. Assessment Model of Ecoenvironmental Vulnerability Based on Improved Entropy Weight Method

    PubMed Central

    Zhang, Xianqi; Wang, Chenbo; Li, Enkuan; Xu, Cundong

    2014-01-01

    Assessment of ecoenvironmental vulnerability plays an important role in guiding regional planning and the construction and protection of the ecological environment, and it requires comprehensive consideration of regional resources, environment, ecology, society, and other factors. Based on the driving mechanism and evolution characteristics of ecoenvironmental vulnerability in the cold and arid regions of China, a novel evaluation index system for ecoenvironmental vulnerability is proposed in this paper. To address the disadvantages of the conventional entropy weight method, an improved entropy weight assessment model of ecoenvironmental vulnerability is developed and applied to evaluate the ecoenvironmental vulnerability of western Jilin Province, China. The assessment results indicate that the model is suitable for ecoenvironmental vulnerability assessment, and it shows more reasonable evaluation criteria, more distinct insights, and satisfactory results consistent with practical conditions. The model can provide a new method for regional ecoenvironmental vulnerability evaluation. PMID:25133260

  6. Spectral Modeling of SNe Ia Near Maximum Light: Probing the Characteristics of Hydrodynamical Models

    NASA Astrophysics Data System (ADS)

    Baron, E.; Bongard, Sebastien; Branch, David; Hauschildt, Peter H.

    2006-07-01

    We have performed detailed non-local thermodynamic equilibrium (NLTE) spectral synthesis modeling of two types of one-dimensional hydrodynamical models: the very highly parameterized deflagration model W7, and two delayed-detonation models. We find that, overall, both models do about equally well at fitting well-observed SNe Ia near maximum light. However, the Si II λ6150 feature of W7 is systematically too fast, whereas for the delayed-detonation models it is also somewhat too fast but significantly better than that of W7. We find that a parameterized mixed model does the best job of reproducing the Si II λ6150 line near maximum light, and we study the differences in the models that lead to better fits to normal SNe Ia. We discuss what is required of a hydrodynamical model to fit the spectra of observed SNe Ia near maximum light.

  7. Generalized isotropic Lipkin-Meshkov-Glick models: ground state entanglement and quantum entropies

    NASA Astrophysics Data System (ADS)

    Carrasco, José A.; Finkel, Federico; González-López, Artemio; Rodríguez, Miguel A.; Tempesta, Piergiulio

    2016-03-01

    We introduce a new class of generalized isotropic Lipkin-Meshkov-Glick models with su(m+1) spin and long-range non-constant interactions, whose non-degenerate ground state is a Dicke state of su(m+1) type. We evaluate in closed form the reduced density matrix of a block of L spins when the whole system is in its ground state, and study the corresponding von Neumann and Rényi entanglement entropies in the thermodynamic limit. We show that both of these entropies scale as a log L when L tends to infinity, where the coefficient a is equal to (m - k)/2 in the ground state phase with k vanishing su(m+1) magnon densities. In particular, our results show that none of these generalized Lipkin-Meshkov-Glick models are critical, since when L → ∞ their Rényi entropy R_q becomes independent of the parameter q. We have also computed the Tsallis entanglement entropy of the ground state of these generalized su(m+1) Lipkin-Meshkov-Glick models, finding that it can be made extensive by an appropriate choice of its parameter only when m - k ≥ 3. Finally, in the su(3) case we construct in detail the phase diagram of the ground state in parameter space, showing that it is determined in a simple way by the weights of the fundamental representation of su(3). This is also true in the su(m+1) case; for instance, we prove that the region for which all the magnon densities are non-vanishing is an (m + 1)-simplex in R^m whose vertices are the weights of the fundamental representation of su(m+1).

  8. Spatial entanglement entropy in the ground state of the Lieb-Liniger model

    NASA Astrophysics Data System (ADS)

    Herdman, C. M.; Roy, P.-N.; Melko, R. G.; Del Maestro, A.

    2016-08-01

    We consider the entanglement between two spatial subregions in the Lieb-Liniger model of bosons in one spatial dimension interacting via a contact interaction. Using ground-state path integral quantum Monte Carlo we numerically compute the Rényi entropy of the reduced density matrix of the subsystem as a measure of entanglement. Our numerical algorithm is based on a replica method previously introduced by the authors, which we extend to efficiently study the entanglement of spatial subsystems of itinerant bosons. We confirm a logarithmic scaling of the Rényi entropy with subsystem size that is expected from conformal field theory, and compute the nonuniversal subleading constant for interaction strengths ranging over two orders of magnitude. In the strongly interacting limit, we find agreement with the known free fermion result.

  9. Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2013-01-01

    Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…

  10. The Baldwin-Lomax model for separated and wake flows using the entropy envelope concept

    NASA Technical Reports Server (NTRS)

    Brock, J. S.; Ng, W. F.

    1992-01-01

    Implementation of the Baldwin-Lomax algebraic turbulence model is difficult and ambiguous within flows characterized by strong viscous-inviscid interactions and flow separations. A new method of implementation is proposed which uses an entropy envelope concept and is demonstrated to ensure the proper evaluation of modeling parameters. The method is simple, computationally fast, and applicable to both wake and boundary layer flows. It is also general, making it applicable to any turbulence model which requires the automated determination of the proper maxima of a vorticity-based function. The new method is evaluated within two test cases involving strong viscous-inviscid interaction.

  11. Entropy-driven hysteresis in a model of DNA overstretching

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen; Pronk, Sander; Geissler, Phillip

    2008-03-01

    When pulled along its axis, double-stranded DNA elongates abruptly at a force of about 65 pN. Two physical pictures have been developed to describe this overstretched state. The first proposes that strong forces induce a phase transition to a molten state consisting of unhybridized single strands. The second picture instead introduces an elongated hybridized phase, called S-DNA, structurally and thermodynamically distinct from standard B-DNA. Little thermodynamic evidence exists to discriminate directly between these competing pictures. Here we show that within a microscopic model of DNA we can distinguish between the dynamics associated with each. In experiment, considerable hysteresis in a cycle of stretching and shortening develops as temperature is increased. Since there are few possible causes of hysteresis in a system whose extent is appreciable in only one dimension, such behavior offers a discriminating test of the two pictures of overstretching. Most experiments are performed upon nicked DNA, permitting the detachment (`unpeeling') of strands. We show that the long-wavelength motion accompanying strand separation generates hysteresis, the character of which agrees with experiment only if we assume the existence of S-DNA.

  12. A Three-State Model with Loop Entropy for the Overstretching Transition of DNA

    PubMed Central

    Einert, Thomas R.; Staple, Douglas B.; Kreuzer, Hans-Jürgen; Netz, Roland R.

    2010-01-01

    We introduce a three-state model for a single DNA chain under tension that distinguishes among B-DNA, S-DNA, and M (molten or denatured) segments and at the same time correctly accounts for the entropy of molten loops, characterized by the exponent c in the asymptotic expression S ∼ −c ln n for the entropy of a loop of length n. Force extension curves are derived exactly by employing a generalized Poland-Scheraga approach and then compared to experimental data. Simultaneous fitting to force-extension data at room temperature and to the denaturation phase transition at zero force is possible and allows us to establish a global phase diagram in the force-temperature plane. Under a stretching force, the effects of the stacking energy (entering as a domain-wall energy between paired and unpaired bases) and the loop entropy are separated. Therefore, we can estimate the loop exponent c independently from the precise value of the stacking energy. The fitted value for c is small, suggesting that nicks dominate the experimental force extension traces of natural DNA. PMID:20643077

  13. RNA Thermodynamic Structural Entropy

    PubMed Central

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element; we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy; and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner’99 and Turner’04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http
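
    Because the thermodynamic structural entropy is simply the Shannon entropy of the Boltzmann ensemble, it can be computed exactly for any small, explicitly enumerated set of structures. A toy sketch follows (the free energies are hypothetical; RNAentropy evaluates the same quantity over the full Turner ensemble by dynamic programming rather than enumeration):

      import numpy as np

      RT = 0.6163  # kcal/mol at 37 degrees C

      def structural_entropy(energies):
          # S = -sum_s p(s) ln p(s) with p(s) = exp(-E(s)/RT)/Z,
          # computed stably via an energy shift: S = <(E - Emin)/RT> + ln Z'
          E = np.asarray(energies, dtype=float)
          shifted = (E - E.min()) / RT
          w = np.exp(-shifted)
          p = w / w.sum()
          return float(np.sum(p * shifted) + np.log(w.sum()))

      # Hypothetical free energies (kcal/mol) of a toy structure ensemble.
      print(structural_entropy([-8.3, -7.9, -6.1, -5.0, -4.2]))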

  14. RNA Thermodynamic Structural Entropy.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner'99 and Turner'04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http

  15. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
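
    As a rough illustration of the estimation problem, a periodic AR(1), the simplest PARMA member, can be fitted by maximizing the Gaussian conditional likelihood directly. The sketch below is hypothetical (synthetic data, period 4, generic numerical optimization) and omits the moving-average part and the exact-likelihood approximation developed in the paper:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      P = 4                                       # period (e.g., four seasons)
      phi_true = np.array([0.8, 0.3, -0.4, 0.6])
      sig_true = np.array([1.0, 0.5, 1.5, 0.8])

      # Simulate a periodic AR(1): x_t = phi_{t mod P} x_{t-1} + sig_{t mod P} e_t.
      n = 2000
      x = np.zeros(n)
      for t in range(1, n):
          s = t % P
          x[t] = phi_true[s] * x[t - 1] + sig_true[s] * rng.standard_normal()

      def neg_loglik(theta):
          # Gaussian conditional negative log-likelihood with periodic phi, sigma.
          phi, log_sig = theta[:P], theta[P:]
          sig = np.exp(log_sig)                   # enforce sigma > 0
          t = np.arange(1, n)
          s = t % P
          resid = x[1:] - phi[s] * x[:-1]
          return 0.5 * np.sum(resid**2 / sig[s]**2) + np.sum(np.log(sig[s]))

      res = minimize(neg_loglik, x0=np.zeros(2 * P), method="L-BFGS-B")
      print("phi_hat:", np.round(res.x[:P], 2))
      print("sig_hat:", np.round(np.exp(res.x[P:]), 2))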

  16. Two dimensional velocity distribution in open channels using Renyi entropy

    NASA Astrophysics Data System (ADS)

    Kumbhakar, Manotosh; Ghoshal, Koeli

    2016-05-01

    In this study, the entropy concept is employed for describing the two-dimensional velocity distribution in an open channel. Using the principle of maximum entropy, the velocity distribution is derived by maximizing the Renyi entropy, treating the dimensionless velocity as a random variable. The derived velocity equation is capable of describing the variation of velocity along both the vertical and transverse directions, with the maximum velocity occurring on or below the water surface. The developed model of velocity distribution is tested against field and laboratory observations and is also compared with existing entropy-based velocity distributions. The present model shows good agreement with the observed data, and its prediction accuracy is comparable with that of the other existing models.
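
    A discretized version of the underlying optimization makes the construction concrete: maximize the Renyi entropy of a probability vector over dimensionless velocities subject to normalization and a prescribed mean. This sketch assumes Renyi order q = 2 and an arbitrary mean velocity; the paper's continuous derivation and hydraulic constraints are more elaborate:

      import numpy as np
      from scipy.optimize import minimize

      q = 2.0                         # Renyi order (assumed for illustration)
      u = np.linspace(0.0, 1.0, 51)   # dimensionless velocity grid
      mean_u = 0.7                    # hypothetical cross-sectional mean

      def neg_renyi(p):
          # Renyi entropy H_q = ln(sum p^q) / (1 - q); minimize its negative.
          return np.log(np.sum(p ** q)) / (q - 1.0)

      cons = ({"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
              {"type": "eq", "fun": lambda p: np.sum(p * u) - mean_u})
      p0 = np.full(u.size, 1.0 / u.size)
      res = minimize(neg_renyi, p0, method="SLSQP", constraints=cons,
                     bounds=[(1e-9, 1.0)] * u.size)
      print("entropy:", -neg_renyi(res.x), "mean:", np.sum(res.x * u))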

  17. MODELING VERY LONG BASELINE INTERFEROMETRIC IMAGES WITH THE CROSS-ENTROPY GLOBAL OPTIMIZATION TECHNIQUE

    SciTech Connect

    Caproni, A.; Toffoli, R. T.; Monteiro, H.; Abraham, Z.; Teixeira, D. M.

    2011-07-20

    We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N{sub s} elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting
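
    The core of the technique (sample candidate parameters, keep an elite fraction ranked by the squared-difference performance function, and refit the sampling distribution) can be sketched on a one-component toy problem. Everything below, including the single 1-D Gaussian "jet component", the noise level, and the elite fraction, is an assumption for illustration, not the authors' setup:

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(-10, 10, 201)
      true = 2.0 * np.exp(-0.5 * ((x - 1.5) / 1.2) ** 2)   # one Gaussian source
      obs = true + 0.05 * rng.standard_normal(x.size)      # finite signal-to-noise

      def performance(theta):
          amp, mu, sig = theta
          model = amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)
          return np.sum((model - obs) ** 2)                # squared difference

      # Cross-entropy loop: sample, select the elite, refit mean and spread.
      mean = np.array([1.0, 0.0, 2.0])     # initial guesses (amp, mu, sig)
      std = np.array([2.0, 5.0, 2.0])
      n_samples, n_elite = 200, 20
      for it in range(50):
          samples = mean + std * rng.standard_normal((n_samples, 3))
          samples[:, 2] = np.abs(samples[:, 2]) + 1e-3     # keep widths positive
          scores = np.array([performance(s) for s in samples])
          elite = samples[np.argsort(scores)[:n_elite]]
          mean, std = elite.mean(axis=0), elite.std(axis=0)
          if std.max() < 1e-4:                             # distribution collapsed
              break
      print("recovered (amp, mu, sig):", np.round(mean, 3))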

  18. Efficiently Evaluating Heavy Metal Urban Soil Pollution Using an Improved Entropy-Method-Based Topsis Model.

    PubMed

    Liu, Jie; Liu, Chun; Han, Wei

    2016-10-01

    Urban soil pollution is evaluated utilizing an efficient and simple algorithmic model referred to as the entropy-method-based Topsis (EMBT) model. The model focuses on pollution source position to enhance the ability to analyze sources of pollution accurately. To our knowledge, this is the first application of EMBT to urban soil pollution analysis. The pollution degree of a sampling point is efficiently calculated by the model through a pollution degree coefficient, obtained by first utilizing the Topsis method to determine an evaluation value and then dividing the evaluation value of the sample point by the background value. The Kriging interpolation method combines the coordinates of the sampling points with the corresponding coefficients and facilitates the formation of a heavy metal distribution profile. A case study is completed, with modeling results in accordance with actual heavy metal pollution, demonstrating the accuracy and practicality of the EMBT model.
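
    A compact sketch of the entropy-method-based Topsis pipeline follows. The decision matrix is hypothetical (standing in for measured heavy-metal concentrations at sampling points); the paper additionally divides evaluation values by background values and interpolates the resulting coefficients with Kriging:

      import numpy as np

      # Rows = sampling points, columns = pollutant criteria (hypothetical values).
      X = np.array([[0.30, 25.0,  60.0],
                    [0.55, 40.0,  90.0],
                    [0.20, 18.0,  45.0],
                    [0.80, 70.0, 150.0]])

      # 1. Column-normalize and compute each criterion's Shannon entropy.
      Pm = X / X.sum(axis=0)
      k = 1.0 / np.log(X.shape[0])
      entropy = -k * np.sum(Pm * np.log(Pm), axis=0)

      # 2. Entropy weights: more dispersed criteria (lower entropy) weigh more.
      w = (1.0 - entropy) / np.sum(1.0 - entropy)

      # 3. Topsis: distances to the ideal (least polluted) and anti-ideal points.
      V = w * X / np.linalg.norm(X, axis=0)
      ideal, anti = V.min(axis=0), V.max(axis=0)   # smaller is better here
      d_best = np.linalg.norm(V - ideal, axis=1)
      d_worst = np.linalg.norm(V - anti, axis=1)
      closeness = d_worst / (d_best + d_worst)     # 1 = cleanest, 0 = most polluted
      print("closeness scores:", np.round(closeness, 3))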

  19. Efficiently Evaluating Heavy Metal Urban Soil Pollution Using an Improved Entropy-Method-Based Topsis Model.

    PubMed

    Liu, Jie; Liu, Chun; Han, Wei

    2016-10-01

    Urban soil pollution is evaluated utilizing an efficient and simple algorithmic model referred to as the entropy-method-based Topsis (EMBT) model. The model focuses on pollution source position to enhance the ability to analyze sources of pollution accurately. To our knowledge, this is the first application of EMBT to urban soil pollution analysis. The pollution degree of a sampling point is efficiently calculated by the model through a pollution degree coefficient, obtained by first utilizing the Topsis method to determine an evaluation value and then dividing the evaluation value of the sample point by the background value. The Kriging interpolation method combines the coordinates of the sampling points with the corresponding coefficients and facilitates the formation of a heavy metal distribution profile. A case study is completed, with modeling results in accordance with actual heavy metal pollution, demonstrating the accuracy and practicality of the EMBT model. PMID:27469469

  20. Modeling specific heat and entropy change in La(Fe-Mn-Si)13-H compounds

    NASA Astrophysics Data System (ADS)

    Piazzi, Marco; Bennati, Cecilia; Curcio, Carmen; Kuepferling, Michaela; Basso, Vittorio

    2016-02-01

    In this paper we model the magnetocaloric effect of the LaFexMnySiz-H1.65 compound (x + y + z = 13), a system showing a transition temperature finely tunable around room temperature by Mn substitution. The thermodynamic model takes into account the coupling between magnetism and specific volume as introduced by Bean and Rodbell. We find a good qualitative agreement between the experimental and modeled entropy change −Δs(H, T). The main result is that the magnetoelastic coupling drives the phase transition of the system, changing it from second to first order as a model parameter η is varied. It is also responsible for a decrease of −Δs at the transition, due to a small lattice contribution to the entropy counteracting the effect of the magnetic one. The role of Mn is reflected exclusively in a decrease of the strength of the exchange interaction, while the value of the coefficient β, responsible for the coupling between volume and exchange energy, is independent of the Mn content and appears to be an intrinsic property of the La(Fe-Si)13 structure.

  1. A Numerical Study of Entanglement Entropy of the Heisenberg Model on a Bethe Cluster

    NASA Astrophysics Data System (ADS)

    Friedman, Barry; Levine, Greg

    Numerical evidence is presented for a nearest-neighbor Heisenberg spin model on a Bethe cluster that, upon bisecting the cluster, the generalized Renyi entropy scales as the number of sites in the cluster. This disagrees with spin wave calculations and a naive application of the area law, but agrees with previous results for non-interacting fermions on the Bethe cluster; the scaling therefore does not appear to be an artifact of non-interacting particles. As a consequence, the area law in greater than one dimension is more subtle than generally thought, and applications of the density matrix renormalization group to Bethe clusters face difficulties, at least as a matter of principle.

  2. Interstitial Zn atoms do the trick in thermoelectric zinc antimonide, Zn4Sb3: a combined maximum entropy method X-ray electron density and ab initio electronic structure study.

    PubMed

    Cargnoni, Fausto; Nishibori, Eiji; Rabiller, Philippe; Bertini, Luca; Snyder, G Jeffrey; Christensen, Mogens; Gatti, Carlo; Iversen, Bo Brummerstedt

    2004-08-20

    The experimental electron density of the high-performance thermoelectric material Zn4Sb3 has been determined by maximum entropy method (MEM) analysis of short-wavelength synchrotron powder diffraction data. These data are found to be more accurate than conventional single-crystal data due to the reduction of common systematic errors, such as absorption, extinction and anomalous scattering. Analysis of the MEM electron density directly reveals interstitial Zn atoms and a partially occupied main Zn site. Two types of Sb atoms are observed: a free spherical ion (Sb3-) and Sb2(4-) dimers. Analysis of the MEM electron density also reveals possible Sb disorder along the c axis. The disorder, defects and vacancies are all features that contribute to the drastic reduction of the thermal conductivity of the material. Topological analysis of the thermally smeared MEM density has been carried out. Starting with the X-ray structure, ab initio computational methods have been used to deconvolute structural information from the space-time data averaging inherent to the XRD experiment. The analysis reveals how interstitial Zn atoms and vacancies affect the electronic structure and transport properties of beta-Zn4Sb3. The structure consists of an ideal A12Sb10 framework in which point defects are distributed. We propose that the material is a 0.184:0.420:0.396 mixture of A12Sb10, A11BCSb10 and A10BCDSb10 cells, in which A, B, C and D are the four Zn sites in the X-ray structure. Given the similar density of states (DOS) of the A12Sb10, A11BCSb10 and A10BCDSb10 cells, one may electronically model the defective stoichiometry of the real system either by n-doping the 12-Zn atom cell or by p-doping the two 13-Zn atom cells. This leads to similar calculated Seebeck coefficients for the A12Sb10, A11BCSb10 and A10BCDSb10 cells (115.0, 123.0 and 110.3 μV K(-1) at T=670 K). The model system is therefore a p-doped semiconductor as found experimentally. The effect is dramatic if these cells are

  3. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches. PMID:21233046
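
    The minimum-entropy variant can be illustrated on a toy regression: estimate the differential entropy of the modeling error with a kernel density estimate and pick the model parameter that minimizes it. The ridge-regularized linear model and heavy-tailed noise below are assumptions for illustration, not the paper's mineral-processing models:

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)

      def residual_entropy(e):
          # Resubstitution estimate of differential entropy:
          # H ~ -mean(log pdf_hat(e)), with pdf_hat from a Gaussian KDE.
          kde = gaussian_kde(e)
          return float(-np.mean(np.log(kde(e) + 1e-12)))

      # Toy system: y = 2x + heavy-tailed noise; candidates are ridge slopes.
      x = rng.uniform(-1, 1, 300)
      y = 2.0 * x + 0.3 * rng.standard_t(df=3, size=300)
      best = None
      for lam in [0.0, 0.1, 1.0, 10.0]:
          w = np.sum(x * y) / (np.sum(x * x) + lam)   # closed-form ridge slope
          H = residual_entropy(y - w * x)
          print(f"lam={lam:5.1f}  slope={w:6.3f}  error entropy={H:6.3f}")
          if best is None or H < best[0]:
              best = (H, lam, w)
      print("minimum-entropy choice (H, lam, slope):", best)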

  4. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.

  5. Dynamics of Entropy in Quantum-like Model of Decision Making

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei; Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu

    2011-03-01

    We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. In this model the brain processes information by using a representation of mental states in complex Hilbert space. Driven by the master equation, the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices. By using this equilibrium point Alice determines her mixed (i.e., probabilistic) strategy with respect to Bob. Thus our model is a model of thinking through decoherence of an initially pure mental state, where decoherence is induced by interaction with memory and the external environment. In this paper we study (numerically) the dynamics of the quantum entropy of Alice's state in the process of decision making. Our analysis demonstrates that these dynamics depend nontrivially on the initial state of Alice's mind concerning her own actions and on her prediction state (for the possible actions of Bob).

  6. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been proven to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results prove the precision of our models. PMID:18262939

  7. The impact of resolution upon entropy and information in coarse-grained models

    NASA Astrophysics Data System (ADS)

    Foley, Thomas T.; Shell, M. Scott; Noid, W. G.

    2015-12-01

    By eliminating unnecessary degrees of freedom, coarse-grained (CG) models tremendously facilitate numerical calculations and theoretical analyses of complex phenomena. However, their success critically depends upon the representation of the system and the effective potential that governs the CG degrees of freedom. This work investigates the relationship between the CG representation and the many-body potential of mean force (PMF), W, which is the appropriate effective potential for a CG model that exactly preserves the structural and thermodynamic properties of a given high resolution model. In particular, we investigate the entropic component of the PMF and its dependence upon the CG resolution. This entropic component, S_W, is a configuration-dependent relative entropy that determines the temperature dependence of W. As a direct consequence of eliminating high resolution details from the CG model, the coarsening process transfers configurational entropy and information from the configuration space into S_W. In order to further investigate these general results, we consider the popular Gaussian Network Model (GNM) for protein conformational fluctuations. We analytically derive the exact PMF for the GNM as a function of the CG representation. In the case of the GNM, −TS_W is a positive, configuration-independent term that depends upon the temperature, the complexity of the protein interaction network, and the details of the CG representation. This entropic term demonstrates similar behavior for seven model proteins and also suggests, in each case, that certain resolutions provide a more efficient description of protein fluctuations. These results may provide general insight into the role of resolution for determining the information content, thermodynamic properties, and transferability of CG models. Ultimately, they may lead to a rigorous and systematic framework for optimizing the representation of CG models.
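
    Since the analytical results are derived for the GNM, a minimal numerical sketch of that model is useful context: build the Kirchhoff matrix from a contact map, then read mean-square fluctuations off its pseudo-inverse and a log-determinant over the nonzero modes (the kind of spectral quantity that enters configuration-independent entropic terms in Gaussian models). The coordinates, cutoff, and units below are hypothetical:

      import numpy as np

      rng = np.random.default_rng(3)
      coords = rng.normal(scale=5.0, size=(30, 3))   # toy C-alpha positions
      cutoff = 10.0                                  # contact cutoff (assumed)

      # Kirchhoff (connectivity) matrix of the elastic network.
      d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
      gamma = -contact.astype(float)
      np.fill_diagonal(gamma, contact.sum(axis=1))

      # Mean-square fluctuations ~ diag of the pseudo-inverse (k_B T / gamma = 1).
      msf = np.diag(np.linalg.pinv(gamma))
      print("mean-square fluctuations:", np.round(msf[:5], 3))

      # Log-determinant over nonzero modes, the spectral term that feeds
      # configuration-independent entropic contributions such as -T*S_W.
      evals = np.linalg.eigvalsh(gamma)
      print("log-det (nonzero modes):", round(np.sum(np.log(evals[evals > 1e-8])), 3))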

  8. The impact of resolution upon entropy and information in coarse-grained models

    SciTech Connect

    Foley, Thomas T.; Shell, M. Scott; Noid, W. G.

    2015-12-28

    By eliminating unnecessary degrees of freedom, coarse-grained (CG) models tremendously facilitate numerical calculations and theoretical analyses of complex phenomena. However, their success critically depends upon the representation of the system and the effective potential that governs the CG degrees of freedom. This work investigates the relationship between the CG representation and the many-body potential of mean force (PMF), W, which is the appropriate effective potential for a CG model that exactly preserves the structural and thermodynamic properties of a given high resolution model. In particular, we investigate the entropic component of the PMF and its dependence upon the CG resolution. This entropic component, S_W, is a configuration-dependent relative entropy that determines the temperature dependence of W. As a direct consequence of eliminating high resolution details from the CG model, the coarsening process transfers configurational entropy and information from the configuration space into S_W. In order to further investigate these general results, we consider the popular Gaussian Network Model (GNM) for protein conformational fluctuations. We analytically derive the exact PMF for the GNM as a function of the CG representation. In the case of the GNM, −TS_W is a positive, configuration-independent term that depends upon the temperature, the complexity of the protein interaction network, and the details of the CG representation. This entropic term demonstrates similar behavior for seven model proteins and also suggests, in each case, that certain resolutions provide a more efficient description of protein fluctuations. These results may provide general insight into the role of resolution for determining the information content, thermodynamic properties, and transferability of CG models. Ultimately, they may lead to a rigorous and systematic framework for optimizing the representation of CG models.

  9. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
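
    Once each model has been calibrated, the averaging step reduces to information-criterion weights plus a within/between variance decomposition. The sketch below uses BIC-style weights on hypothetical calibration results (MLBMA proper uses KIC, which adds a Fisher-information term to the criterion):

      import numpy as np

      # Hypothetical results for three alternative reactive-transport models.
      nll   = np.array([120.3, 118.9, 125.0])  # negative log-likelihood at the MLE
      k     = np.array([4, 6, 5])              # number of calibrated parameters
      n_obs = 200
      pred  = np.array([0.82, 0.95, 0.70])     # each model's prediction
      var   = np.array([0.010, 0.020, 0.015])  # within-model predictive variance

      # BIC-style criterion and the resulting posterior model weights.
      ic = 2.0 * nll + k * np.log(n_obs)
      w = np.exp(-0.5 * (ic - ic.min()))
      w /= w.sum()

      # BMA prediction; total variance = within-model + between-model parts.
      mean = np.sum(w * pred)
      total_var = np.sum(w * (var + (pred - mean) ** 2))
      print("weights:", np.round(w, 3))
      print("BMA mean: %.3f  total variance: %.4f" % (mean, total_var))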

  10. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGES

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally

  11. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    SciTech Connect

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of

  12. Combined Population Dynamics and Entropy Modelling Supports Patient Stratification in Chronic Myeloid Leukemia

    PubMed Central

    Brehme, Marc; Koschmieder, Steffen; Montazeri, Maryam; Copland, Mhairi; Oehler, Vivian G.; Radich, Jerald P.; Brümmendorf, Tim H.; Schuppert, Andreas

    2016-01-01

    Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer progression, biomarker identification and the design of individualized therapies. Using chronic myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification at unprecedented resolution. Linking CD34+ similarity as a disease progression marker to patient-derived gene expression entropy separated established CML progression stages and uncovered additional heterogeneity within disease stages. Importantly, our patient-data-informed model enables quantitative approximation of individual patients’ disease history within chronic phase (CP) and significantly separates “early” from “late” CP. Our findings provide a novel rationale for personalized and genome-informed disease progression risk assessment that is independent and complementary to conventional measures of CML disease burden and prognosis. PMID:27048866

  13. Transfer entropy--a model-free measure of effective connectivity for the neurosciences.

    PubMed

    Vicente, Raul; Wibral, Michael; Lindner, Michael; Pipa, Gordon

    2011-02-01

    Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain's activity is thought to be internally generated and, hence, quantifying stimulus-response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model-based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity to electrophysiological data based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.
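
    For discrete data, TE can be estimated directly from plug-in probabilities. The sketch below (binary series, history length 1, a hypothetical lagged-copy coupling) shows the directional asymmetry that makes TE useful for effective connectivity; real MEG applications need continuous-data estimators and significance testing:

      import numpy as np
      from collections import Counter

      def transfer_entropy(x, y):
          # TE_{Y->X} = sum p(x+, x, y) * log2[ p(x+|x, y) / p(x+|x) ]
          triples = Counter(zip(x[1:], x[:-1], y[:-1]))
          pairs_xy = Counter(zip(x[:-1], y[:-1]))
          pairs_xx = Counter(zip(x[1:], x[:-1]))
          singles = Counter(x[:-1])
          n = len(x) - 1
          te = 0.0
          for (x1, x0, y0), c in triples.items():
              p_joint = c / n
              p_cond_xy = c / pairs_xy[(x0, y0)]
              p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
              te += p_joint * np.log2(p_cond_xy / p_cond_x)
          return te

      # Toy coupling: x copies y with a one-step lag plus 10% bit flips,
      # so information flows y -> x but not x -> y.
      rng = np.random.default_rng(4)
      y = rng.integers(0, 2, 5000)
      x = np.roll(y, 1) ^ (rng.random(5000) < 0.1).astype(int)
      print("TE y->x:", round(transfer_entropy(x, y), 3))   # clearly positive
      print("TE x->y:", round(transfer_entropy(y, x), 3))   # near zero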

  14. Combined Population Dynamics and Entropy Modelling Supports Patient Stratification in Chronic Myeloid Leukemia

    NASA Astrophysics Data System (ADS)

    Brehme, Marc; Koschmieder, Steffen; Montazeri, Maryam; Copland, Mhairi; Oehler, Vivian G.; Radich, Jerald P.; Brümmendorf, Tim H.; Schuppert, Andreas

    2016-04-01

    Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer progression, biomarker identification and the design of individualized therapies. Using chronic myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification at unprecedented resolution. Linking CD34+ similarity as a disease progression marker to patient-derived gene expression entropy separated established CML progression stages and uncovered additional heterogeneity within disease stages. Importantly, our patient-data-informed model enables quantitative approximation of individual patients’ disease history within chronic phase (CP) and significantly separates “early” from “late” CP. Our findings provide a novel rationale for personalized and genome-informed disease progression risk assessment that is independent and complementary to conventional measures of CML disease burden and prognosis.

  15. Entropy generation analysis for film boiling: A simple model of quenching

    NASA Astrophysics Data System (ADS)

    Lotfi, Ali; Lakzian, Esmail

    2016-04-01

    In this paper, quenching in high-temperature materials processing is modeled as a superheated isothermal flat plate. In such processes a liquid flows over a highly superheated surface to cool it, and the surface and the liquid are separated by a vapor layer formed where the liquid contacts the superheated surface; this is termed forced film boiling. As an objective, the distribution of entropy generation in laminar forced film boiling is obtained by similarity solution for the first time in the context of quenching processes. The governing partial differential equations of laminar film boiling, comprising continuity, momentum, and energy, are reduced to ordinary differential equations, and a dimensionless equation for the entropy generation inside the liquid boundary layer and vapor layer is obtained. The ODEs are then solved by applying the 4th-order Runge-Kutta method with a shooting procedure. Moreover, the Bejan number is used as a design criterion parameter for a qualitative study of the rate of cooling, and the effects of plate speed on the quenching process are studied. It is observed that higher plate speeds yield a higher rate of cooling (heat transfer).
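
    The numerical machinery (reduce the boundary-value problem to an ODE system, integrate with a Runge-Kutta scheme, and shoot on the unknown initial condition) can be demonstrated on the classic Blasius boundary layer, a simpler analogue of the film-boiling similarity system; the coupled liquid/vapor equations and the entropy-generation integral of the paper are beyond this sketch:

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def blasius(eta, F):
          # Blasius equation f''' + 0.5 f f'' = 0 as a first-order system.
          f, fp, fpp = F
          return [fp, fpp, -0.5 * f * fpp]

      def miss(fpp0, eta_max=10.0):
          # Shooting residual: f'(eta_max) - 1 for a guessed wall value f''(0).
          sol = solve_ivp(blasius, [0.0, eta_max], [0.0, 0.0, fpp0],
                          rtol=1e-8, atol=1e-10)
          return sol.y[1, -1] - 1.0

      fpp0 = brentq(miss, 0.1, 1.0)      # root-find the unknown initial slope
      print("f''(0) =", round(fpp0, 5))  # ~0.33206, the known Blasius value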

  16. Weighed scalar averaging in LTB dust models: part I. Statistical fluctuations and gravitational entropy

    NASA Astrophysics Data System (ADS)

    Sussman, Roberto A.

    2013-03-01

    We introduce a weighed scalar average formalism (‘q-average’) for the study of the theoretical properties and the dynamics of spherically symmetric Lemaître-Tolman-Bondi (LTB) dust models. The ‘q-scalars’ that emerge by applying the q-averages to the density, Hubble expansion and spatial curvature (which are common to FLRW models) are directly expressible in terms of curvature and kinematic invariants and identically satisfy FLRW evolution laws without the back-reaction terms that characterize Buchert's average. The local and non-local fluctuations and perturbations with respect to the q-average convey the effects of inhomogeneity through the ratio of curvature and kinematic invariants and the magnitude of radial gradients. All curvature and kinematic proper tensors that characterize the models are expressible as irreducible algebraic expansions on the metric and 4-velocity, whose coefficients are the q-scalars and their linear and quadratic local fluctuations. All invariant contractions of these tensors are quadratic fluctuations, whose q-averages are directly and exactly related to statistical correlation moments of the density and Hubble expansion scalar. We explore the application of this formalism to a definition of a gravitational entropy functional proposed by Hosoya et al (2004 Phys. Rev. Lett. 92 141302-14). We show that a positive entropy production follows from a negative correlation between fluctuations of the density and Hubble scalar, providing a brief outline on its fulfilment in various LTB models and regions. While the q-average formalism is especially suited for LTB (and Szekeres) models, it may provide a valuable theoretical insight into the properties of scalar averaging in inhomogeneous spacetimes in general.

  17. Quantum games entropy

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban

    2007-09-01

    We propose the study of quantum games from the point of view of quantum information theory and statistical mechanics. Every game can be described by a density operator, the von Neumann entropy, and the quantum replicator dynamics. There exists a strong relationship between game theory, information theory, and statistical physics; the density operator and entropy are the bonds between these theories. The analysis we propose is based on the properties of entropy, the amount of information that a player can obtain about his opponent, and a maximum or minimum entropy criterion. The natural trend of a physical system is toward its maximum entropy state, while the minimum entropy state is characteristic of a manipulated system, i.e., one that is externally controlled or imposed upon. There exist tacit rules inside a system that do not need to be specified or clarified and that drive the system toward equilibrium based on the collective welfare principle. Other rules are imposed on the system when one or many of its members violate this principle and maximize their individual welfare at the expense of the group.

  18. DNA Nanostructures as Models for Evaluating the Role of Enthalpy and Entropy in Polyvalent Binding

    SciTech Connect

    Nangreave, Jeanette; Yan, Hao; Liu, Yan

    2011-03-30

    DNA nanotechnology allows the design and construction of nanoscale objects that have finely tuned dimensions, orientation, and structure with remarkable ease and convenience. Synthetic DNA nanostructures can be precisely engineered to model a variety of molecules and systems, providing the opportunity to probe very subtle biophysical phenomena. In this study, several such synthetic DNA nanostructures were designed to serve as models to study the binding behavior of polyvalent molecules and gain insight into how small changes to the ligand/receptor scaffolds, intended to vary their conformational flexibility, will affect their association equilibrium. This approach has yielded a quantitative identification of the roles of enthalpy and entropy in the affinity of polyvalent DNA nanostructure interactions, which exhibit an intriguing compensating effect.

  19. Shear viscosity to entropy density ratio in the Boltzmann-Uehling-Uhlenbeck model

    SciTech Connect

    Li, S.X.; Fang, D. Q.; Ma, Y. G.; Zhou, C. L.

    2011-08-15

    The ratio of shear viscosity (η) to entropy density (s) for an equilibrated system is investigated in intermediate-energy heavy-ion collisions below 100A MeV within the framework of the Boltzmann-Uehling-Uhlenbeck model. After the collision system almost reaches a local equilibration, the temperature, pressure and energy density are obtained from the phase-space information, and η/s is calculated using the Green-Kubo formulas. The results show that η/s decreases with incident energy and tends toward a smaller value around 0.5, which is not so drastically different from the BNL Relativistic Heavy Ion Collider results in the present model.

  20. Side-chain conformational entropy in protein unfolded states.

    PubMed

    Creamer, T P

    2000-08-15

    The largest force disfavoring the folding of a protein is the loss of conformational entropy. A large contribution to this entropy loss is due to the side-chains, which are restricted, although not immobilized, in the folded protein. In order to accurately estimate the loss of side-chain conformational entropy that occurs upon folding, it is necessary to have accurate estimates of the amount of entropy possessed by side-chains in the ensemble of unfolded states. A new scale of side-chain conformational entropies is presented here. This scale was derived from Monte Carlo computer simulations of small peptide models. It is demonstrated that the entropies are independent of host peptide length. This new scale has the advantage over previous scales of being more precise with low standard errors. Better estimates are obtained for long (e.g., Arg and Lys) and rare (e.g., Trp and Met) side-chains. Excellent agreement with previous side-chain entropy scales is achieved, indicating that further advancements in accuracy are likely to be small at best. Strikingly, longer side-chains are found to possess a smaller fraction of the theoretical maximum entropy available than short side-chains. This indicates that rotations about torsions after χ2 are significantly affected by side-chain interactions with the polypeptide backbone. This finding invalidates previous assumptions about side-chain-backbone interactions. Proteins 2000;40:443-450.
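
    The scale described above ultimately rests on the Boltzmann-Gibbs formula applied to rotamer populations. A minimal sketch follows, with hypothetical chi1 populations chosen purely for illustration:

      import numpy as np

      R = 8.314  # J / (mol K)

      def conformational_entropy(populations):
          # S = -R * sum_i p_i ln p_i over rotamer populations.
          p = np.asarray(populations, dtype=float)
          p = p[p > 0] / p.sum()
          return -R * np.sum(p * np.log(p))

      # Hypothetical chi1 rotamer populations in the unfolded ensemble vs. the
      # folded state; the difference is the conformational entropy change.
      unfolded = [0.45, 0.35, 0.20]
      folded = [0.90, 0.07, 0.03]
      dS = conformational_entropy(folded) - conformational_entropy(unfolded)
      print("T*dS at 300 K: %.2f kJ/mol" % (300 * dS / 1000))
      # The theoretical maximum for 3 rotamers is R ln 3; per the paper, longer
      # side-chains realize a smaller fraction of their maximum entropy.
      print("R ln 3 =", round(R * np.log(3), 2), "J/(mol K)")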

  1. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  2. Exact valence bond entanglement entropy and probability distribution in the XXX spin chain and the Potts model.

    PubMed

    Jacobsen, J L; Saleur, H

    2008-02-29

    We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L > 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy S_VB = <N_c> ln 2 = (4 ln 2/π^2) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.

  3. EEG entropy measures in anesthesia

    PubMed Central

    Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli

    2015-01-01

    Highlights: Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression; Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states; Approximate Entropy and Sample Entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthetic drugs' effects is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP) in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State Entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets, from sevoflurane-induced and isoflurane-induced anesthesia respectively, were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. The multifractal detrended fluctuation analysis (MDFA), a non-entropy measure, was included for comparison. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. Three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R2) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, these entropy measures showed an advantage in computation
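
    Of the twelve indices, the permutation entropy family singled out above is also among the simplest to implement. A small sketch of Shannon (ordinal-pattern) permutation entropy, with synthetic stand-ins for an awake-like signal and a deep-anesthesia-like slow wave:

      import numpy as np
      from itertools import permutations
      from math import factorial

      def permutation_entropy(signal, order=3, delay=1, normalize=True):
          # Shannon entropy of ordinal patterns of length `order` (Bandt-Pompe).
          x = np.asarray(signal)
          n = len(x) - (order - 1) * delay
          counts = {p: 0 for p in permutations(range(order))}
          for i in range(n):
              window = x[i : i + order * delay : delay]
              counts[tuple(np.argsort(window))] += 1
          p = np.array([c for c in counts.values() if c > 0], dtype=float) / n
          H = -np.sum(p * np.log2(p))
          return H / np.log2(factorial(order)) if normalize else H

      rng = np.random.default_rng(5)
      t = np.arange(2000)
      print("noise PE    :", round(permutation_entropy(rng.standard_normal(2000)), 3))
      print("slow-wave PE:", round(permutation_entropy(np.sin(2 * np.pi * t / 200)), 3))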

  4. The Holographic Entropy Cone

    SciTech Connect

    Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; Stoica, Bogdan; Sully, James; Walter, Michael

    2015-09-21

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.

  5. The Holographic Entropy Cone

    DOE PAGES

    Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; Stoica, Bogdan; Sully, James; Walter, Michael

    2015-09-21

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.

  6. Integrating Entropy and Closed Frequent Pattern Mining for Social Network Modelling and Analysis

    NASA Astrophysics Data System (ADS)

    Adnan, Muhaimenul; Alhajj, Reda; Rokne, Jon

    The recent increase in the explicitly available social networks has attracted the attention of the research community to investigate how it would be possible to benefit from such a powerful model in producing effective solutions for problems in other domains where the social network is implicit; we argue that social networks do exist around us but the key issue is how to realize and analyze them. This chapter presents a novel approach for constructing a social network model by an integrated framework that first prepares the data to be analyzed and then applies entropy and frequent closed pattern mining for network construction. For a given problem, we first prepare the data by identifying items and transactions, which are the basic ingredients for frequent closed pattern mining. Items are the main objects in the problem, and a transaction is a set of items that could exist together at one time (e.g., items purchased in one visit to the supermarket). Transactions can be analyzed to discover frequent closed patterns using any of the well-known techniques. Frequent closed patterns have the advantage that they successfully capture the inherent information content of the dataset, and the approach is applicable to a broad set of domains. Entropies of the frequent closed patterns are used to keep the dimensionality of the feature vectors to a reasonable size; this is a kind of feature reduction process. Finally, we analyze the dynamic behavior of the constructed social network. Experiments were conducted on a synthetic dataset and on the Enron corpus email dataset. The results presented in the chapter show that social networks extracted from a feature set of frequent closed patterns successfully carry the community structure information. Moreover, for the Enron email dataset, we present an analysis to dynamically indicate deviations from each user's individual and community profile. These indications of deviations can be very useful for identifying unusual events.

  7. Hierarchical minimax entropy modeling and probabilistic principal component visualization for data exploration

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Luo, Lan; Li, Haifeng; Freedman, Matthew T.

    1999-05-01

    As a step toward understanding the complex information from data and relationships, structural and discriminative knowledge reveals insight that may prove useful in data interpretation and exploration. This paper reports the development of an automated and intelligent procedure for generating a hierarchy of minimax entropy models and principal component visualization spaces for improved data explanation. The proposed hierarchical minimax entropy modeling and probabilistic principal component projection are both statistically principled and visually effective at revealing all of the interesting aspects of the data set. The methods involve multiple uses of standard finite normal mixture models and probabilistic principal component projections. The strategy is that the top-level model and projection should explain the entire data set, best revealing the presence of clusters and relationships, while lower-level models and projections should display internal structure within individual clusters, such as the presence of subclusters and attribute trends, which might not be apparent in the higher-level models and projections. With many complementary mixture models and visualization projections, each level remains relatively simple while the complete hierarchy maintains overall flexibility yet still conveys considerable structural information. In particular, a model identification procedure is developed to select the optimal number and kernel shapes of local clusters from a class of data, resulting in a standard finite normal mixture with minimum conditional bias and variance, and a probabilistic principal component neural network is advanced to generate optimal projections, leading to a hierarchical visualization algorithm allowing the complete data set to be analyzed at the top level, with best-separated subclusters of data points analyzed at deeper levels. Hierarchical probabilistic principal component visualization involves (1) evaluation of posterior probabilities for

  8. Reconstruction of f(R) Gravity with Ordinary and Entropy-Corrected (m, n)-Type Holographic Dark Energy Model

    NASA Astrophysics Data System (ADS)

    Prabir, Rudra

    2016-07-01

    In this work we present a reconstruction scheme between f(R) gravity and ordinary and entropy-corrected (m, n)-type holographic dark energy. The correspondence is established and expressions for the reconstructed f(R) models are determined. To study the evolution of the reconstructed models, plots are generated. The stability of the calculated models is also investigated using the squared speed of sound in the background of the reconstructed gravities.

  9. SELECTION OF CANDIDATE EUTROPHICATION MODELS FOR TOTAL MAXIMUM DAILY LOADS ANALYSES

    EPA Science Inventory

    A tiered approach was developed to evaluate candidate eutrophication models to select a common suite of models that could be used for Total Maximum Daily Loads (TMDL) analyses in estuaries, rivers, and lakes/reservoirs. Consideration for linkage to watershed models and ecologica...

  10. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  11. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  12. Atomistic-level non-equilibrium model for chemically reactive systems based on steepest-entropy-ascent quantum thermodynamics

    NASA Astrophysics Data System (ADS)

    Li, Guanchen; Al-Abbasi, Omar; von Spakovsky, Michael R.

    2014-10-01

    This paper outlines an atomistic-level framework for modeling the non-equilibrium behavior of chemically reactive systems. The framework, called steepest-entropy-ascent quantum thermodynamics (SEA-QT), is based on the paradigm of intrinsic quantum thermodynamics (IQT), a theory that unifies quantum mechanics and thermodynamics into a single discipline with wide applications to the study of non-equilibrium phenomena at the atomistic level. SEA-QT is a novel approach for describing the state of chemically reactive systems as well as the kinetic and dynamic features of the reaction process without any assumptions of near-equilibrium states or weak interactions with a reservoir or bath. Entropy generation is the basis of the dissipation which takes place internal to the system and is, thus, the driving force of the chemical reaction(s). The SEA-QT non-equilibrium model is able to provide detailed information during the reaction process, providing a picture of the changes occurring in key thermodynamic properties (e.g., the instantaneous species concentrations, entropy and entropy generation, reaction coordinate, chemical affinities, reaction rate, etc.). As an illustration, the SEA-QT framework is applied to an atomistic-level chemically reactive system governed by the reaction mechanism F + H₂ ⇌ FH + H.
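
    The core idea, relaxation along the entropy gradient subject to conservation constraints, can be illustrated on a discrete-state toy system. The sketch below uses a plain Euclidean projection of the entropy gradient; SEA-QT proper works with a dissipation operator and a metric on the state space, so this is only a schematic analogue with made-up energy levels.

        import numpy as np

        def sea_step(p, e, dt=1e-3):
            """One explicit Euler step of steepest entropy ascent: follow the
            entropy gradient projected so that total probability and mean energy
            are conserved. Euclidean metric here; SEA-QT uses a more general one."""
            g = -(np.log(p) + 1.0)                  # dS/dp_i for S = -sum p ln p
            c1 = np.ones_like(p) / np.sqrt(p.size)  # constraint: sum p = 1
            c2 = e - (e @ c1) * c1                  # constraint: sum p*e = const
            c2 /= np.linalg.norm(c2)
            g -= (g @ c1) * c1 + (g @ c2) * c2      # project out constraint directions
            return p + dt * g

        e = np.array([0.0, 1.0, 2.0, 3.0])          # energy eigenlevels (illustrative)
        p = np.array([0.70, 0.10, 0.15, 0.05])      # far-from-equilibrium start
        for _ in range(20000):
            p = sea_step(p, e)
        # Relaxes toward the maximum-entropy (canonical) distribution with the
        # same mean energy; entropy rises monotonically along the trajectory.
        print(p, -(p * np.log(p)).sum(), p @ e)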

  13. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.
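
    As a hedged illustration of the "familiar probability density function" step: if received power in an overmoded cavity is taken to be exponentially distributed (a common reverberation-chamber assumption, not necessarily the distribution used in the paper), the expected maximum over N independent samples follows in closed form. The numbers here are made up.

        import numpy as np

        def max_expected(mean_power, n):
            """Expected maximum of n i.i.d. exponential draws with the given mean:
            E[max] = mean * (1 + 1/2 + ... + 1/n)."""
            return mean_power * np.sum(1.0 / np.arange(1, n + 1))

        mean_p = 2.0e-6                      # statistical mean power, watts (illustrative)
        print(max_expected(mean_p, 100))     # design margin over 100 stirred samples

        # Monte Carlo check of the closed form
        rng = np.random.default_rng(1)
        print(rng.exponential(mean_p, size=(50000, 100)).max(axis=1).mean())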

  14. Application of nonstationary generalized logistic models for analyzing the annual maximum rainfall data in Korea

    NASA Astrophysics Data System (ADS)

    Kim, S.; Joo, K.; Kim, H.; Heo, J. H.

    2014-12-01

    Recently, various approaches to nonstationary frequency analysis have been studied as the effect of climate change on hydrologic data has become widely recognized. Most nonstationary studies have proposed nonstationary generalized extreme value (GEV) and generalized Pareto models for annual maximum and peak-over-threshold (POT) data, respectively. However, various alternatives are needed to analyze nonstationary hydrologic data because of the complicated influence of climate change. This study proposes nonstationary generalized logistic models containing time-dependent location and scale parameters. These models contain either or both nonstationary location and scale parameters that change linearly over time. The parameters are estimated by the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed models are applied to the annual maximum rainfall data of Korea in order to evaluate their applicability.
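
    A minimal version of the estimation step can be written with scipy. The sketch below fits linear trends in location and scale by maximum likelihood for the zero-shape (logistic) special case of the generalized logistic family, on synthetic annual maxima and with a generic optimizer rather than the paper's Newton-Raphson; adding the shape parameter k would recover the full GLO model.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        t = np.arange(50)                                        # years
        x = rng.logistic(loc=100 + 0.8 * t, scale=12, size=50)   # synthetic annual maxima

        def nll(theta, x, t):
            """Negative log-likelihood of a logistic model with linear trends in
            location and scale (zero-shape special case of the GLO family)."""
            m0, m1, s0, s1 = theta
            mu = m0 + m1 * t
            sigma = s0 + s1 * t
            if np.any(sigma <= 0):
                return np.inf
            z = (x - mu) / sigma
            return np.sum(np.log(sigma) + z + 2 * np.log1p(np.exp(-z)))

        fit = minimize(nll, x0=[x.mean(), 0.0, x.std(), 0.0], args=(x, t),
                       method="Nelder-Mead")
        print(fit.x)   # [m0, m1, s0, s1]; compare m1 with the simulated trend 0.8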

  15. Climate change implications on maximum monthly stream flow in Cyprus using fuzzy regression models

    NASA Astrophysics Data System (ADS)

    Loukas, A.; Spiliotopoulos, M.

    2010-09-01

    Maximum stream flow data collected from the Cyprus Water Development Department and outputs of general circulation models (GCMs) are used in this study to develop statistical downscaling techniques in order to investigate the impact of climate change on stream flow at the Yermasoyia watershed, Cyprus. The Yermasoyia watershed is located on the southern side of the Troodos mountains, northeast of Limassol city, and it drains into the Yermasoyia reservoir. The watershed area is about 157 km² and its altitude ranges from 70 m up to 1400 m above mean sea level. The watershed is constituted mainly by igneous rocks, degraded basalt and handholds. The mean annual precipitation is 638 mm, while the mean annual flow is estimated at 22.5 million m³. The reservoir water surface is 110 hectares, with a maximum capacity of 13.6 million m³. Earlier studies have shown that the development of downscaling methodologies using multiple linear fuzzy regression models can give quite satisfactory results. In this study, the outputs of the SRES A2 and SRES B2 scenarios of the second version of the Canadian Coupled Global Climate Model (CGCM2) are utilized. This model is based on the earlier CGCM1 (Flato et al., 2000), but with some improvements to address shortcomings identified in the first version. Fuzzy regression is used for the downscaling of maximum monthly stream flow. The methodology is validated against independent historical data and used for the estimation of future maximum stream flow time series. From the 30 years of observed data representing the current climate, the first 25 years (1968-1993) are used for calibrating the downscaling model while the remaining 5 years (1994-1998) are used to validate the model. The model was first developed using the logarithm of observed maximum monthly streamflow as the dependent variable and 36 output parameters of the GCM as candidate independent variables. Then, five (5) independent GCM parameters were selected, namely

  16. Entropy, chaos, and excited-state quantum phase transitions in the Dicke model

    NASA Astrophysics Data System (ADS)

    Lóbez, C. M.; Relaño, A.

    2016-07-01

    We study nonequilibrium processes in an isolated quantum system—the Dicke model—focusing on the role played by the transition from integrability to chaos and the presence of excited-state quantum phase transitions. We show that both diagonal and entanglement entropies are abruptly increased by the onset of chaos. Also, this increase ends in both cases just after the system crosses the critical energy of the excited-state quantum phase transition. The link between entropy production, the development of chaos, and the excited-state quantum phase transition is clearer for the entanglement entropy.

  17. Development of Daily Maximum Flare-Flux Forecast Models for Strong Solar Flares

    NASA Astrophysics Data System (ADS)

    Shin, Seulki; Lee, Jin-Yi; Moon, Yong-Jae; Chu, Hyoungseok; Park, Jongyeob

    2016-03-01

    We have developed a set of daily maximum flare-flux forecast models for strong flares (M- and X-class) using multiple linear regression (MLR) and artificial neural network (ANN) methods. Our input parameters are solar-activity data from January 1996 to December 2013, such as sunspot area, X-ray maximum, and weighted total flare flux of the previous day, as well as mean flare rates of McIntosh sunspot group (Zpc) and Mount Wilson magnetic classifications. For a training dataset, we used 61 events each of C-, M-, and X-class from January 1996 to December 2004. For a testing dataset, we used all events from January 2005 to November 2013. A comparison between our maximum flare-flux models and the NOAA model based on true skill statistics (TSS) shows that the MLR model for X-class flares and the average over all strong flares (M+X-class) are much better than the NOAA model. According to the hitting fraction (HF), which is defined as the fraction of events for which the absolute difference between predicted and observed flare flux on a logarithmic scale is smaller than or equal to 0.5, our models successfully forecast the maximum flare flux of about two-thirds of the events for strong flares. Since all input parameters for our models are easily available, the models can be operated steadily and automatically on a daily basis for space-weather services.
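
    The two verification scores are easy to state in code. The sketch below computes TSS from a 2×2 contingency table and the hitting fraction exactly as defined in the abstract; the flux values are hypothetical.

        import numpy as np

        def true_skill_statistic(pred_event, obs_event):
            """TSS = hit rate - false alarm rate, from a 2x2 contingency table."""
            tp = np.sum(pred_event & obs_event)
            fn = np.sum(~pred_event & obs_event)
            fp = np.sum(pred_event & ~obs_event)
            tn = np.sum(~pred_event & ~obs_event)
            return tp / (tp + fn) - fp / (fp + tn)

        def hitting_fraction(pred_flux, obs_flux, tol=0.5):
            """Fraction of events whose predicted and observed maximum flare flux
            differ by at most `tol` on a log10 scale."""
            return np.mean(np.abs(np.log10(pred_flux) - np.log10(obs_flux)) <= tol)

        obs = np.array([3e-5, 8e-6, 2e-4, 5e-6])    # hypothetical X-ray fluxes, W/m^2
        pred = np.array([1e-5, 9e-6, 1e-4, 4e-5])
        m_class = 1e-5                              # M-class flux threshold
        print(true_skill_statistic(pred >= m_class, obs >= m_class))
        print(hitting_fraction(pred, obs))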

  18. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  19. Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.

    PubMed

    Yeadon, Maurice R; King, Mark A; Wilson, Cassie

    2006-01-01

    The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450° s⁻¹. The theoretical tetanic torque/angular velocity relationship was modelled using a four-parameter function comprising two rectangular hyperbolas while the activation/angular velocity relationship was modelled using a three-parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven-parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
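
    The fitting step can be sketched with scipy's curve_fit. The functional forms below (a Hill-type concentric hyperbola, a saturating eccentric branch, and a sigmoidal activation) are stand-ins with the same qualitative shape as the paper's four- and three-parameter functions, not the published equations; the data are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def voluntary(w, t0, wmax, gamma, ecc, amin, w0, k):
            """Voluntary torque = tetanic torque x activation. Tetanic curve: a
            Hill-type hyperbola for concentric velocities (w > 0) joined to a
            saturating eccentric branch; activation rises sigmoidally from amin
            (eccentric) toward 1 (fast concentric). Illustrative shapes only."""
            w = np.asarray(w, dtype=float)
            torque = np.empty_like(w)
            con = w >= 0
            torque[con] = np.clip(t0 * (wmax - w[con]) / (wmax + gamma * w[con]), 0, None)
            torque[~con] = t0 + (ecc - 1.0) * t0 * (-w[~con]) / (-w[~con] + wmax / 10.0)
            act = amin + (1.0 - amin) / (1.0 + np.exp(-(w - w0) / k))
            return torque * act

        w = np.linspace(-400, 450, 60)                        # crank velocity, deg/s
        true = voluntary(w, 250, 900, 3.0, 1.4, 0.80, -50, 60)
        obs = true + np.random.default_rng(0).normal(0, 4, w.size)

        p0 = [240, 800, 2.0, 1.3, 0.85, 0, 50]                # rough starting guess
        popt, _ = curve_fit(voluntary, w, obs, p0=p0, maxfev=20000)
        rms = np.sqrt(np.mean((voluntary(w, *popt) - obs) ** 2))
        print(popt, rms)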

  20. A statistical development of entropy for the introductory physics course

    NASA Astrophysics Data System (ADS)

    Schoepf, David C.

    2002-02-01

    Many introductory physics texts introduce the statistical basis for the definition of entropy in addition to the Clausius definition, ΔS=q/T. We use a model based on equally spaced energy levels to present a way that the statistical definition of entropy can be developed at the introductory level. In addition to motivating the statistical definition of entropy, we also develop statistical arguments to answer the following questions: (i) Why does a system approach a state of maximum number of microstates? (ii) What is the equilibrium distribution of particles? (iii) What is the statistical basis of temperature? (iv) What is the statistical basis for the direction of spontaneous energy transfer? Finally, a correspondence between the statistical and the classical Clausius definitions of entropy is made.
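
    The counting argument is easy to reproduce for equally spaced levels. The sketch below computes multiplicities for two systems sharing quanta (stars-and-bars counting), locates the most probable split, and reads off the statistical temperature from 1/T = ΔS/ΔE; unit level spacing and illustrative system sizes are assumed.

        from math import comb, log

        def multiplicity(n_osc, q):
            """Microstates for q quanta among n_osc equally spaced oscillators."""
            return comb(q + n_osc - 1, q)

        def entropy(n_osc, q, k=1.0):
            return k * log(multiplicity(n_osc, q))

        # Two systems sharing 100 quanta: the total multiplicity is sharply
        # peaked at the equilibrium split, which is the statistical answer to
        # "why does a system approach a state of maximum number of microstates?"
        nA, nB, qtot = 300, 200, 100
        omega = [multiplicity(nA, q) * multiplicity(nB, qtot - q) for q in range(qtot + 1)]
        q_eq = max(range(qtot + 1), key=lambda q: omega[q])
        print("most probable split:", q_eq, qtot - q_eq)   # ~proportional to sizes

        # Statistical temperature from 1/T = dS/dE, with unit energy spacing
        TA = 1.0 / (entropy(nA, q_eq + 1) - entropy(nA, q_eq))
        TB = 1.0 / (entropy(nB, qtot - q_eq + 1) - entropy(nB, qtot - q_eq))
        print(TA, TB)   # nearly equal at the most probable split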

  1. Entropy generation analysis of magnetohydrodynamic induction devices

    NASA Astrophysics Data System (ADS)

    Salas, Hugo; Cuevas, Sergio; López de Haro, Mariano

    1999-10-01

    Magnetohydrodynamic (MHD) induction devices such as electromagnetic pumps or electric generators are analysed within the approach of entropy generation. The flow of an electrically conducting incompressible fluid in an MHD induction machine is described through the well-known Hartmann model. Irreversibilities in the system due to ohmic dissipation, flow friction and heat flow are included in the entropy-generation rate. This quantity is used to define an overall efficiency for the induction machine that considers the total loss caused by process irreversibility. For an MHD generator working at maximum power output with walls at constant temperature, an optimum magnetic field strength (i.e., Hartmann number) is found based on the maximum overall efficiency.

  2. Maximum entropy approach to fuzzy control

    NASA Technical Reports Server (NTRS)

    Ramer, Arthur; Kreinovich, Vladik YA.

    1992-01-01

    For the same expert knowledge, if one uses different &- and ∨-operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low-quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized, and it is shown that it leads to the choice of min(a + b, 1) for ∨ and min(a, b) for &. This choice was tried on the NASA Shuttle simulator; it leads to a maximally stable control.

  3. Steepest-entropy-ascent quantum thermodynamic modeling of the relaxation process of isolated chemically reactive systems using density of states and the concept of hypoequilibrium state

    NASA Astrophysics Data System (ADS)

    Li, Guanchen; von Spakovsky, Michael R.

    2016-01-01

    This paper presents a study of the nonequilibrium relaxation process of chemically reactive systems using steepest-entropy-ascent quantum thermodynamics (SEAQT). The trajectory of the chemical reaction, i.e., the accessible intermediate states, is predicted and discussed. The prediction is made using a thermodynamic-ensemble approach, which does not require detailed information about the particle mechanics involved (e.g., the collision of particles). Instead, modeling the kinetics and dynamics of the relaxation process is based on the principle of steepest entropy ascent (SEA) or maximum entropy production, which suggests a constrained gradient dynamics in state space. The SEAQT framework is based on general definitions for energy and entropy and, at least theoretically, enables the prediction of the nonequilibrium relaxation of the system state at all temporal and spatial scales. However, to make this not just theoretically but computationally possible, the concept of density of states is introduced to simplify the application of the relaxation model, which in effect extends the application of the SEAQT framework even to infinite-energy-eigenlevel systems. The energy eigenstructure of the reactive system considered here consists of an extremely large number of such levels (on the order of 10^130) and supports the quasicontinuous assumption. The principle of SEA results in a unique trajectory of system thermodynamic state evolution in Hilbert space in the nonequilibrium realm, even far from equilibrium. To describe this trajectory, the concepts of subsystem hypoequilibrium state and temperature are introduced and used to characterize each system-level, nonequilibrium state. This definition of temperature is fundamental rather than phenomenological and is a generalization of the temperature defined at stable equilibrium. In addition, to deal with the large number of energy eigenlevels, the equation of motion is formulated on the basis of the density of states and a set of

  4. Development of Daily Solar Maximum Flare Flux Forecast Models for Strong Flares

    NASA Astrophysics Data System (ADS)

    Shin, Seulki; Chu, Hyoungseok

    2015-08-01

    We have developed a set of daily solar maximum flare flux forecast models for strong flares using multiple linear regression (MLR) and artificial neural network (ANN) methods. Our input parameters are solar-activity data from January 1996 to December 2013, such as sunspot area, X-ray maximum flare flux and weighted total flux of the previous day, and mean flare rates of McIntosh sunspot group (Zpc) and Mount Wilson magnetic classifications. For a training dataset, we use 61 events each of C-, M-, and X-class from January 1996 to December 2004, while other previous models use all flares. For a testing dataset, we use all flares from January 2005 to November 2013. The statistical parameters from contingency tables show that the ANN models are better for maximum flare flux forecasting than the MLR models. A comparison between our maximum flare flux models and the previous ones based on the Heidke Skill Score (HSS) shows that all of our models for X-class flares are much better than the other models. According to the hitting fraction (HF), which is defined as the fraction of events for which the absolute difference between predicted and observed flare flux on a logarithmic scale is less than or equal to 0.5, our models successfully forecast the maximum flare flux of about two-thirds of the events for strong flares. Since all input parameters for our models are easily available, the models can be operated steadily and automatically on a daily basis for space-weather services.

  5. Safety Assessment of Dangerous Goods Transport Enterprise Based on the Relative Entropy Aggregation in Group Decision Making Model

    PubMed Central

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    Safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. Aiming at the problem of the high accident rate and large harm in dangerous goods logistics transportation, this paper casts the group decision-making problem, based on the idea of integration and coordination, as a multiagent multiobjective group decision-making problem; a two-stage decision model was established and applied to the safety assessment of dangerous goods transport enterprises. First, we used a dynamic multivalue background and entropy theory to build the first-level multiobjective decision model. Second, experts were weighted according to the principle of clustering analysis, and, combined with relative entropy theory, a second-stage aggregation optimization model based on relative entropy in group decision making was established; the solution of the model is also discussed. Then, after investigation and analysis, we established a safety evaluation index system for dangerous goods transport enterprises. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of this model for dangerous goods transport enterprise recognition, which provides a vital decision-making basis for recognizing dangerous goods transport enterprises. PMID:25477954

  6. Safety assessment of dangerous goods transport enterprise based on the relative entropy aggregation in group decision making model.

    PubMed

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    Safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. Aiming at the problem of the high accident rate and large harm in dangerous goods logistics transportation, this paper casts the group decision-making problem, based on the idea of integration and coordination, as a multiagent multiobjective group decision-making problem; a two-stage decision model was established and applied to the safety assessment of dangerous goods transport enterprises. First, we used a dynamic multivalue background and entropy theory to build the first-level multiobjective decision model. Second, experts were weighted according to the principle of clustering analysis, and, combined with relative entropy theory, a second-stage aggregation optimization model based on relative entropy in group decision making was established; the solution of the model is also discussed. Then, after investigation and analysis, we established a safety evaluation index system for dangerous goods transport enterprises. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of this model for dangerous goods transport enterprise recognition, which provides a vital decision-making basis for recognizing dangerous goods transport enterprises. PMID:25477954
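
    One elementary building block of such schemes: the group weight vector minimizing the total relative entropy (KL divergence) to the individual expert vectors is the normalized geometric mean. The sketch below shows this single step with made-up weights; the paper's two-stage model adds clustering-based expert weighting on top of it.

        import numpy as np

        def aggregate_weights(expert_weights):
            """Group weights minimizing sum_k KL(w || w_k) over the probability
            simplex; the minimizer is the normalized geometric mean."""
            w = np.exp(np.mean(np.log(expert_weights), axis=0))
            return w / w.sum()

        # Rows: experts; columns: importance weights for five safety criteria.
        experts = np.array([
            [0.30, 0.25, 0.20, 0.15, 0.10],
            [0.25, 0.30, 0.15, 0.20, 0.10],
            [0.35, 0.20, 0.20, 0.10, 0.15],
        ])
        print(aggregate_weights(experts))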

  7. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  8. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
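
    A toy version of entropy-weighted fusion conveys the flavor of the weighting step. The sketch below fuses two registered images block-wise using local Shannon entropy as the weight; it is a much simpler stand-in for the SCM firing-time and Weber-descriptor scheme, and the input images are random placeholders.

        import numpy as np

        def local_entropy(img, win=7):
            """Shannon entropy of gray levels in non-overlapping windows,
            used as a per-region information measure."""
            ent = np.zeros((img.shape[0] // win, img.shape[1] // win))
            for i in range(ent.shape[0]):
                for j in range(ent.shape[1]):
                    block = img[i*win:(i+1)*win, j*win:(j+1)*win]
                    p = np.bincount(block.ravel(), minlength=256) / block.size
                    p = p[p > 0]
                    ent[i, j] = -(p * np.log2(p)).sum()
            return ent

        def entropy_weighted_fusion(a, b, win=7):
            """Fuse two registered images block-wise, weighting each source by
            its local entropy."""
            ea, eb = local_entropy(a, win), local_entropy(b, win)
            wa = ea / (ea + eb + 1e-12)
            wa = np.kron(wa, np.ones((win, win)))[:a.shape[0], :a.shape[1]]
            return (wa * a + (1 - wa) * b).astype(np.uint8)

        rng = np.random.default_rng(0)
        ct  = rng.integers(0, 256, (70, 70), dtype=np.uint8)   # placeholder CT slice
        mri = rng.integers(0, 256, (70, 70), dtype=np.uint8)   # placeholder MRI slice
        print(entropy_weighted_fusion(ct, mri).shape)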

  9. A technique for estimating maximum harvesting effort in a stochastic fishery model.

    PubMed

    Sarkar, Ram Rup; Chattopadhyay, J

    2003-06-01

    Exploitation of biological resources and the harvest of population species are commonly practiced in fisheries, forestry and wildlife management. Estimation of maximum harvesting effort has a great impact on the economics of fisheries and other bio-resources. The present paper deals with the problem of a bioeconomic fishery model under environmental variability. A technique for finding the maximum harvesting effort in a fluctuating environment has been developed for a two-species competitive system, which shows that under realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real-life fishery problem has been considered to obtain the inaccessible parameters of the system in a systematic way. Such studies may help resource managers to get an idea for controlling the system.
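
    The simulation side of such an analysis can be sketched directly. The fragment below estimates extinction probability for a single logistic stock with proportional harvesting and multiplicative environmental noise, then scans effort levels; it is a one-species simplification with illustrative parameter values, not the paper's two-species model.

        import numpy as np

        def extinction_probability(effort, r=1.0, k=100.0, q=0.01, sigma=0.3,
                                   x0=50.0, years=100, n_runs=2000, seed=0):
            """Monte Carlo extinction risk for a logistic stock with harvest rate
            q*E*X and multiplicative noise (Euler-Maruyama, dt = 1 year)."""
            rng = np.random.default_rng(seed)
            x = np.full(n_runs, x0)
            extinct = np.zeros(n_runs, dtype=bool)
            for _ in range(years):
                noise = sigma * x * rng.normal(size=n_runs)
                x = x + x * (r * (1 - x / k) - q * effort) + noise
                extinct |= x <= 1.0          # quasi-extinction threshold
                x = np.clip(x, 0.0, None)
            return extinct.mean()

        # Scan effort and keep the largest value whose extinction risk stays low;
        # noise pushes the safe maximum below the deterministic optimum.
        for e in (40, 60, 80, 100):
            print(e, extinction_probability(e))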

  10. Development of models for maximum and time variation of storm surges at the Tanshui estuary

    NASA Astrophysics Data System (ADS)

    Tsai, C.-P.; You, C.-Y.

    2014-09-01

    In this study, artificial neural networks, including both multilayer perceptron and radial basis function networks, are applied to modeling and forecasting the maximum and time variation of storm surges at the Tanshui estuary in Taiwan. The physical parameters, including both local atmospheric pressure and wind field factors, for finding the maximum storm surges are first investigated based on the training of the neural networks. Neural network models for forecasting the time series of storm surges are then developed using the major meteorological parameters with time variations. The time series of storm surges for six typhoons were used for training and testing the models, and data for three typhoons were used for model forecasting. The results show that both neural network models perform very well in forecasting the time variation of storm surges.
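
    A minimal stand-in for the forecasting model, using scikit-learn's multilayer perceptron on synthetic pressure and wind inputs (the radial basis function variant is not shown, and all values here are made up):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 400
        pressure = rng.normal(985, 10, n)      # local atmospheric pressure, hPa
        wind = rng.normal(20, 6, n)            # wind field factor, m/s
        # Synthetic surge driven by the two meteorological inputs plus noise
        surge = 0.02 * (1013 - pressure) + 0.015 * wind + rng.normal(0, 0.05, n)

        X = np.column_stack([pressure, wind])
        train, test = slice(0, 300), slice(300, n)   # train/forecast split

        mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        mlp.fit(X[train], surge[train])
        rmse = np.sqrt(np.mean((mlp.predict(X[test]) - surge[test]) ** 2))
        print(f"test RMSE: {rmse:.3f} m")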

  11. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  12. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  13. Long branch effects distort maximum likelihood phylogenies in simulations despite selection of the correct model.

    PubMed

    Kück, Patrick; Mayer, Christoph; Wägele, Johann-Wolfgang; Misof, Bernhard

    2012-01-01

    The aim of our study was to test the robustness and efficiency of maximum likelihood with respect to different long branch effects on multiple-taxon trees. We simulated data of different alignment lengths under two different 11-taxon trees and a broad range of different branch length conditions. The data were analyzed with the true model parameters as well as with estimated and incorrect assumptions about among-site rate variation. If length differences between connected branches strongly increase, tree inference with the correct likelihood model assumptions can fail. We found that incorporating invariant sites together with Γ distributed site rates in the tree reconstruction (Γ+I) increases the robustness of maximum likelihood in comparison with models using only Γ. The results show that for some topologies and branch lengths the reconstruction success of maximum likelihood under the correct model is still low for alignments with a length of 100,000 base positions. Altogether, the high confidence that is put in maximum likelihood trees is not always justified under certain tree shapes even if alignment lengths reach 100,000 base positions.

  14. Entropy and climate. I - ERBE observations of the entropy production of the earth

    NASA Technical Reports Server (NTRS)

    Stephens, G. L.; O'Brien, D. M.

    1993-01-01

    An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.

  15. Entropy, matter, and cosmology

    PubMed Central

    Prigogine, I.; Géhéniau, J.

    1986-01-01

    The role of irreversible processes corresponding to creation of matter in general relativity is investigated. The use of Landau-Lifshitz pseudotensors together with conformal (Minkowski) coordinates suggests that this creation took place in the early universe at the stage of the variation of the conformal factor. The entropy production in this creation process is calculated. It is shown that these dissipative processes lead to the possibility of cosmological models that start from empty conditions and gradually build up matter and entropy. Gravitational entropy takes a simple meaning as associated to the entropy that is necessary to produce matter. This leads to an extension of the third law of thermodynamics, as now the zero point of entropy becomes the space-time structure out of which matter is generated. The theory can be put into a convenient form using a supplementary “C” field in Einstein's field equations. The role of the C field is to express the coupling between gravitation and matter leading to irreversible entropy production. PMID:16593747

  16. Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Bremner, Paul

    2014-01-01

    This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and its extension to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.

  17. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
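
    The selection rule, choose the experiment whose predicted-outcome distribution has maximum entropy, fits in a few lines for a binary-outcome toy problem (locating an unknown threshold on a line); every name and value here is illustrative, not from the thesis.

        import numpy as np

        def choose_experiment(candidates, models, posterior):
            """Pick the measurement location whose predicted outcome distribution
            has maximum entropy under the current posterior: the most informative
            next experiment for a yes/no query 'is x left of the threshold?'."""
            best, best_h = None, -1.0
            for x in candidates:
                p = np.sum(posterior * (x < models))       # P(outcome = 1 | data)
                h = 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)
                if h > best_h:
                    best, best_h = x, h
            return best, best_h

        models = np.linspace(0, 1, 101)       # hypotheses for the threshold position
        posterior = np.full(models.size, 1 / models.size)
        print(choose_experiment(np.linspace(0, 1, 21), models, posterior))
        # With a flat posterior the maximum-entropy choice is the midpoint (~0.5),
        # recovering bisection as a special case of entropy-based design.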

  18. Performance of default risk model with barrier option framework and maximum likelihood estimation: Evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chou, Heng-Chih; Wang, David

    2007-11-01

    We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted models, Merton model, Z-score model and ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.

  19. A critical examination of the maximum velocity of shortening used in simulation models of human movement.

    PubMed

    Domire, Zachary J; Challis, John H

    2010-12-01

    The maximum velocity of shortening of a muscle is an important parameter in musculoskeletal models. The most commonly used values are derived from animal studies; however, these values are well above the values that have been reported for human muscle. The purpose of this study was to examine the sensitivity of simulations of maximum vertical jumping performance to the parameters describing the force-velocity properties of muscle. Simulations performed with parameters derived from animal studies produced jump heights similar to those measured in previous experimental studies, while simulations performed with parameters derived from human muscle produced jump heights much lower than previously measured. If current measurements of maximum shortening velocity in human muscle are correct, a compensating error must exist. Of the possible compensating errors that could produce this discrepancy, it was concluded that reduced muscle fibre excursion is the most likely candidate.

  20. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  1. Entropy distance: New quantum phenomena

    SciTech Connect

    Weis, Stephan; Knauf, Andreas

    2012-10-15

    We study a curve of Gibbsian families of complex 3×3 matrices and point out new features, absent in commutative finite-dimensional algebras: a discontinuous maximum-entropy inference, a discontinuous entropy distance, and non-exposed faces of the mean value set. We analyze these problems from various aspects including convex geometry, topology, and information geometry. This research is motivated by a theory of infomax principles, where we contribute by computing first order optimality conditions of the entropy distance.

  2. Saturating the holographic entropy bound

    SciTech Connect

    Bousso, Raphael; Freivogel, Ben; Leichenauer, Stefan

    2010-10-15

    The covariant entropy bound states that the entropy, S, of matter on a light sheet cannot exceed a quarter of its initial area, A, in Planck units. The gravitational entropy of black holes saturates this inequality. The entropy of matter systems, however, falls short of saturating the bound in known examples. This puzzling gap has led to speculation that a much stronger bound, S ≲ A^(3/4), may hold true. In this note, we exhibit light sheets whose entropy exceeds A^(3/4) by arbitrarily large factors. In open Friedmann-Robertson-Walker universes, such light sheets contain the entropy visible in the sky; in the limit of early curvature domination, the covariant bound can be saturated but not violated. As a corollary, we find that the maximum observable matter and radiation entropy in universes with positive (negative) cosmological constant is of order Λ^(-1) (Λ^(-2)), and not |Λ|^(-3/4) as had hitherto been believed. Our results strengthen the evidence for the covariant entropy bound, while showing that the stronger bound S ≲ A^(3/4) is not universally valid. We conjecture that the stronger bound does hold for static, weakly gravitating systems.

  3. Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    PubMed Central

    Ito, Shinya; Hansen, Michael E.; Heiland, Randy; Lumsdaine, Andrew; Litke, Alan M.; Beggs, John M.

    2011-01-01

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons. PMID:22102894
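
    A direct plug-in estimator of delayed transfer entropy for binary spike trains with one-bin messages shows why the delay extension matters. This is an independent sketch on synthetic trains, not the authors' toolbox; the true coupling delay below is set to 3 bins.

        import numpy as np

        def transfer_entropy(x, y, delay=1):
            """Transfer entropy x -> y in bits: I(y_t ; x_{t-delay} | y_{t-1}),
            estimated from joint frequencies of the three binary variables."""
            t0 = max(1, delay)
            yt, yp, xd = y[t0:], y[t0 - 1:-1], x[t0 - delay:len(x) - delay]
            te = 0.0
            for a in (0, 1):               # y_t
                for b in (0, 1):           # y_{t-1}
                    for c in (0, 1):       # x_{t-delay}
                        p_abc = np.mean((yt == a) & (yp == b) & (xd == c))
                        p_bc = np.mean((yp == b) & (xd == c))
                        p_ab = np.mean((yt == a) & (yp == b))
                        p_b = np.mean(yp == b)
                        if min(p_abc, p_bc, p_ab, p_b) > 0:
                            te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
            return te

        rng = np.random.default_rng(0)
        x = (rng.random(20000) < 0.1).astype(int)
        y = np.zeros_like(x)
        y[3:] = ((x[:-3] == 1) | (rng.random(20000 - 3) < 0.02)).astype(int)

        for d in (1, 2, 3, 4):
            print(d, round(transfer_entropy(x, y, delay=d), 4))   # peaks at d = 3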

  4. Entropy and the Shelf Model: A Quantum Physical Approach to a Physical Property

    ERIC Educational Resources Information Center

    Jungermann, Arnd H.

    2006-01-01

    In contrast to most other thermodynamic data, entropy values are not given in relation to a certain--more or less arbitrarily defined--zero level. They are listed in standard thermodynamic tables as absolute values of specific substances. Therefore these values describe a physical property of the listed substances. One of the main tasks of…

  5. Entropy Generation in Regenerative Systems

    NASA Technical Reports Server (NTRS)

    Kittel, Peter

    1995-01-01

    Heat exchange to the oscillating flows in regenerative coolers generates entropy. These flows are characterized by oscillating mass flows and oscillating temperatures. Heat is transferred between the flow and heat exchangers and regenerators. In the former case, there is a steady temperature difference between the flow and the heat exchangers. In the latter case, there is no mean temperature difference. In this paper a mathematical model of the entropy generated is developed for both cases. Estimates of the entropy generated by this process are given for oscillating flows in heat exchangers and in regenerators. The practical significance of this entropy is also discussed.

  6. Using maximum topology matching to explore differences in species distribution models

    USGS Publications Warehouse

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian K.; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models help ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model to model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allows for manual exploration of the models usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topological matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  7. Maximum efficiency of state-space models of nanoscale energy conversion devices.

    PubMed

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage. PMID:27394100

  8. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
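
    The generic shape of the computation, solving the likelihood equations by repeated Newton-Raphson iterations, is shown below for logistic regression as a stand-in; the NDMMF has its own score equations, which this sketch does not reproduce.

        import numpy as np

        def newton_raphson_ml(X, y, iters=25, tol=1e-10):
            """Maximum-likelihood estimation by Newton-Raphson: solve the score
            equations X'(y - p) = 0 for a logistic model, updating with the
            inverse Hessian at each iteration."""
            beta = np.zeros(X.shape[1])
            for _ in range(iters):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                score = X.T @ (y - p)
                hessian = -(X * (p * (1 - p))[:, None]).T @ X
                step = np.linalg.solve(hessian, -score)
                beta += step
                if np.max(np.abs(step)) < tol:
                    break
            return beta

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(500), rng.normal(size=500)])
        true_beta = np.array([-0.5, 1.5])
        y = (rng.random(500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
        print(newton_raphson_ml(X, y))   # close to [-0.5, 1.5]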

  9. Maximum efficiency of state-space models of nanoscale energy conversion devices.

    PubMed

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  10. Maximum efficiency of state-space models of nanoscale energy conversion devices

    NASA Astrophysics Data System (ADS)

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  11. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  12. Spatial modeling of the highest daily maximum temperature in Korea via max-stable processes

    NASA Astrophysics Data System (ADS)

    Lee, Youngsaeng; Yoon, Sanghoo; Murshed, Md. Sharwar; Kim, Maeng-Ki; Cho, ChunHo; Baek, Hee-Jeong; Park, Jeong-Soo

    2013-11-01

    This paper examines the annual highest daily maximum temperature (DMT) in Korea by using data from 56 weather stations and employing spatial extreme modeling. Our approach is based on max-stable processes (MSP) with Schlather’s characterization. We divide the country into four regions for a better model fit and identify the best model for each region. We show that regional MSP modeling is more suitable than MSP modeling for the entire region and the pointwise generalized extreme value distribution approach. The advantage of spatial extreme modeling is that more precise and robust return levels and some indices of the highest temperatures can be obtained for observation stations and for locations with no observed data, and so help to determine the effects and assessment of vulnerability as well as to downscale extreme events.

  13. Maximum-Likelihood Tree Estimation Using Codon Substitution Models with Multiple Partitions

    PubMed Central

    Zoller, Stefan; Boskova, Veronika; Anisimova, Maria

    2015-01-01

    Many protein sequences have distinct domains that evolve with different rates, different selective pressures, or may differ in codon bias. Instead of modeling these differences with ever more complex models of molecular evolution, we present a multipartition approach that allows maximum-likelihood phylogeny inference using different codon models at predefined partitions in the data. Partition models can, but do not have to, share free parameters in the estimation process. We test this approach with simulated data as well as in a phylogenetic study of the origin of the leucine-rich repeat regions in the type III effector proteins of the phytopathogenic bacterium Ralstonia solanacearum. Our study not only shows that a simple two-partition model resolves the phylogeny better than a one-partition model but also gives more evidence supporting the hypothesis of lateral gene transfer events between the bacterial pathogens and their eukaryotic hosts. PMID:25911229

  14. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid-body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. In order to obtain the required accuracy in the math model, the maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  15. Entanglement Entropy of Black Holes

    NASA Astrophysics Data System (ADS)

    Solodukhin, Sergey N.

    2011-12-01

    The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as ’t Hooft’s brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields, which non-minimally couple to gravity, is emphasized. The holographic description of the entanglement entropy of the blackhole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.

  16. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    PubMed Central

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: Variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability for MTMM models with categorical data are discussed. PMID:24782791

  17. Upper entropy axioms and lower entropy axioms

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Li; Suo, Qi

    2015-04-01

    The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of Shannon-Khinchin axioms and Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon-Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.
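
    For reference, the Tsallis family mentioned above can be written in its standard form (standard notation, not reproduced from the paper), with Shannon entropy recovered in the q -> 1 limit:

    ```latex
    S_q(p) = \frac{1}{q-1}\left(1 - \sum_{i=1}^{n} p_i^{\,q}\right),
    \qquad
    \lim_{q \to 1} S_q(p) = -\sum_{i=1}^{n} p_i \ln p_i .
    ```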

  18. Upper entropy axioms and lower entropy axioms

    SciTech Connect

    Guo, Jin-Li Suo, Qi

    2015-04-15

    The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of Shannon–Khinchin axioms and Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.

  19. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    SciTech Connect

    Ye, Ming; Neuman, Shlomo P.; Meyer, Philip D.

    2004-05-25

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.
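
    The final averaging step described above combines the retained models' predictions and variances using posterior model probabilities as weights. A minimal numerical sketch of that step (the weights, predictions, and variances below are made up, not the study's values):

    ```python
    # Sketch of Bayesian Model Averaging of kriging predictions at one location.
    # Posterior model probabilities, predictions, and variances are placeholders.
    import numpy as np

    post_prob = np.array([0.45, 0.30, 0.15, 0.10])      # weights of retained models
    pred      = np.array([-13.2, -13.5, -12.9, -13.1])  # kriged log permeability
    var       = np.array([0.20, 0.25, 0.18, 0.22])      # kriging variances

    mean_bma = np.sum(post_prob * pred)
    # Total BMA variance = average within-model variance + between-model spread.
    var_bma = np.sum(post_prob * var) + np.sum(post_prob * (pred - mean_bma) ** 2)
    print(f"BMA mean = {mean_bma:.3f}, BMA variance = {var_bma:.3f}")
    ```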

  20. Improving prediction of hydraulic conductivity by constraining capillary bundle models to a maximum pore size

    NASA Astrophysics Data System (ADS)

    Iden, Sascha C.; Peters, Andre; Durner, Wolfgang

    2015-11-01

    The prediction of unsaturated hydraulic conductivity from the soil water retention curve by pore-bundle models is a cost-effective and widely applied technique. One problem for conductivity predictions from retention functions with continuous derivatives, i.e. continuous water capacity functions, is that the hydraulic conductivity curve exhibits a sharp drop close to water saturation if the pore-size distribution is wide. So far this artifact has been ignored or removed by introducing an explicit air-entry value into the capillary saturation function. However, this correction leads to a retention function which is not continuously differentiable. We present a new parameterization of the hydraulic properties which uses the original saturation function (e.g. of van Genuchten) and introduces a maximum pore radius only in the pore-bundle model. In contrast to models using an explicit air entry, the resulting conductivity function is smooth and increases monotonically close to saturation. The model concept can easily be applied to any combination of retention curve and pore-bundle model. We derive closed-form expressions for the unimodal and multimodal van Genuchten-Mualem models and apply the model concept to curve fitting and inverse modeling of a transient outflow experiment. Since the new model retains the smoothness and continuous differentiability of the retention model and eliminates the sharp drop in conductivity close to saturation, the resulting hydraulic functions are physically more reasonable and ideal for numerical simulations with the Richards equation or multiphase flow models.
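
    As a hedged numerical sketch of the general idea (not the authors' closed-form expressions), the Mualem pore-bundle prediction can be evaluated from a van Genuchten retention curve with the integrand capped at a maximum pore radius, i.e. a minimum suction h0; all parameter values below are illustrative:

    ```python
    # Mualem conductivity prediction from a van Genuchten retention curve, with
    # 1/h capped at 1/h0 (a maximum pore radius). Parameters are illustrative.
    import numpy as np

    alpha, n, l, h0 = 0.05, 1.2, 0.5, 1.0   # vG parameters, tortuosity, min suction [cm]
    m = 1.0 - 1.0 / n

    def suction(se):
        """Invert the van Genuchten saturation function: h(Se)."""
        return (se ** (-1.0 / m) - 1.0) ** (1.0 / n) / alpha

    def kr(se_eval, npts=20000):
        """Relative conductivity with the pore-bundle integrand capped at 1/h0."""
        x = np.linspace(1e-9, 1.0 - 1e-9, npts)
        integrand = np.minimum(1.0 / suction(x), 1.0 / h0)
        full = np.trapz(integrand, x)
        part = np.trapz(np.where(x <= se_eval, integrand, 0.0), x)
        return se_eval ** l * (part / full) ** 2

    for se in (0.5, 0.9, 0.99, 1.0):
        print(f"Se = {se:4.2f}  ->  Kr = {kr(se):.4g}")
    ```

    Without the cap, the integrand 1/h diverges as Se approaches 1 for wide pore-size distributions, which is exactly the sharp near-saturation drop the paper eliminates.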

  1. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
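
    The core FIML idea, that each case contributes the likelihood of only its observed variables, can be sketched on a toy bivariate normal problem; this is not the multilevel SEM used in the article, and the data and parameterization are hypothetical:

    ```python
    # Toy FIML sketch: each case contributes the marginal normal likelihood of
    # its observed entries, so incomplete cases are never discarded or imputed.
    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.optimize import minimize

    data = np.array([[1.0, 2.0],
                     [0.5, np.nan],   # second variable missing
                     [np.nan, 1.5],   # first variable missing
                     [1.2, 2.2]])

    def neg_log_like(params):
        mu = params[:2]
        s1, s2, rho = np.exp(params[2]), np.exp(params[3]), np.tanh(params[4])
        cov = np.array([[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]])
        ll = 0.0
        for row in data:
            obs = ~np.isnan(row)     # likelihood over observed entries only
            ll += multivariate_normal.logpdf(row[obs], mu[obs], cov[np.ix_(obs, obs)])
        return -ll

    fit = minimize(neg_log_like, np.zeros(5), method="Nelder-Mead")
    print("FIML estimates of the two means:", fit.x[:2])
    ```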

  2. Mathematical models for maximum improvement of in vitro protein digestibility of high dietary fibre cookies.

    PubMed

    el-Moniem, G M

    1994-01-01

    High dietary fibre cookies were prepared by substituting part of the wheat flour (at 6, 12, 18 and 24%) with cereal industry by-products: corn bran (CB), rice bran (RB) and barley husk (BH). An in vitro protein digestibility assay was used to examine the effect of the substitution on protein digestibility. The applied nonlinear mathematical models indicated a high determination coefficient between experimental and predicted data (R2 > or = 0.999). A maximum in vitro protein digestibility (IVPD) of 88.4, 84.1 and 85.2% was obtained at the optimum substitution levels of wheat flour (7.9, 9.3 or 5.2%) for CB, RB and BH respectively. The maximum improvement (or minimum reduction) in IVPD from using these fibre sources in cookies ranged from -0.25% for RB to 4.9% for CB.

  3. Possible ecosystem impacts of applying maximum sustainable yield policy in food chain models.

    PubMed

    Ghosh, Bapan; Kar, T K

    2013-07-21

    This paper describes the possible impacts of maximum sustainable yield (MSY) and maximum sustainable total yield (MSTY) policies in ecosystems. In general it is observed that exploitation at the MSY (of a single species) or MSTY (of multiple species) level may cause the extinction of several species. In particular, for a traditional prey-predator system, fishing under combined harvesting effort at the MSTY level (if it exists) may be a sustainable policy, but if MSTY does not exist then this is due to the extinction of the predator species only. In a generalist prey-predator system, harvesting of any one of the species at the MSY level is always a sustainable policy, but harvesting of both species at the MSTY level may or may not be sustainable. In addition, we investigate the MSY and MSTY policies in traditional tri-trophic and four-trophic food chain models.
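
    For orientation, the single-species textbook baseline behind these policies is the logistic stock, for which MSY has a closed form; a tiny sketch with illustrative parameter values:

    ```python
    # Logistic baseline: dN/dt = r*N*(1 - N/K) - H. The sustainable yield
    # H(N) = r*N*(1 - N/K) peaks at N = K/2, giving MSY = r*K/4.
    r, K = 0.8, 1000.0   # illustrative growth rate and carrying capacity
    N_msy = K / 2.0
    MSY = r * K / 4.0
    print(f"stock at MSY: {N_msy:.0f}, maximum sustainable yield: {MSY:.0f}")
    ```

    The paper's point is that this single-species logic can fail once trophic interactions are included, since the yield-maximizing effort may drive one of the interacting species to extinction.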

  4. Evaluation of Maximum Radionuclide Groundwater Concentrations for Basement Fill Model. Zion Station Restoration Project

    SciTech Connect

    Sullivan, Terry

    2014-12-02

    ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. There is residual radioactive contamination from the plant, which needs to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.

  5. Climate Projections from the NARCliM Project: Bayesian Model Averaging of Maximum Temperature Projections

    NASA Astrophysics Data System (ADS)

    Olson, R.; Evans, J. P.; Fan, Y.

    2015-12-01

    NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate project for Australia and the surrounding region. It dynamically downscales 4 General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span future temperature and precipitation change space. RCMs for downscaling the GCMs were chosen based on their performance for several precipitation events over South-East Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability distribution functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially-averaged variables from the NARCliM project. The first step in the procedure is smoothing model output in order to exclude the influence of internal climate variability. Our statistical model for model-observation residuals is a homoskedastic iid process. Comparison of RCM output with Australian Water Availability Project (AWAP) observations is used to determine model weights through Monte Carlo integration. Posterior pdfs of statistical parameters of model-data residuals are obtained using Markov chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present the preliminary results of the BMA analysis for yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.

  6. Entropy jump across an inviscid shock wave

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Iollo, Angelo

    1995-01-01

    The shock jump conditions for the Euler equations in their primitive form are derived by using generalized functions. The shock profiles for specific volume, speed, and pressure are shown to be the same; the density, however, has a different shock profile. Careful study of the equations that govern the entropy shows that the inviscid entropy profile has a local maximum within the shock layer. We demonstrate that because of this phenomenon, the entropy propagation equation cannot be used as a conservation law.
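
    For context, the net entropy rise across such a shock follows from the standard Rankine-Hugoniot relations for a perfect gas (textbook gas dynamics, not the paper's generalized-function derivation):

    ```python
    # Entropy jump across a normal shock in a perfect gas, from the
    # Rankine-Hugoniot pressure and density ratios. gamma = 1.4 (air).
    import numpy as np

    gamma = 1.4

    def entropy_jump(M1):
        """(s2 - s1)/cv = ln[(p2/p1) * (rho2/rho1)**(-gamma)] for upstream Mach M1."""
        p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
        rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
        return np.log(p_ratio * rho_ratio**(-gamma))

    for M1 in (1.0, 1.5, 2.0, 3.0):
        print(f"M1 = {M1:3.1f}:  (s2 - s1)/cv = {entropy_jump(M1):.4f}")
    ```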

  7. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
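
    Of the measures considered, sample entropy is the classical starting point (the paper's QDE and QASE are quantization-based refinements of this family); a compact, hedged implementation of plain sample entropy:

    ```python
    # Sample entropy SampEn(m, r): negative log of the conditional probability
    # that sequences matching for m points also match for m + 1 points.
    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        x = np.asarray(x, dtype=float)
        r = r_frac * np.std(x)                   # tolerance as a fraction of std
        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
                count += np.sum(d <= r) - 1      # exclude the self-match
            return count
        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(1)
    print("white noise:", sample_entropy(rng.standard_normal(500)))
    print("sine wave  :", sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))
    ```

    A regular signal such as the sine yields a much lower value than white noise; in the paper, increases in such measures accompanied increased fall probability, age, and gait impairment.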

  8. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focused on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  9. Estimation of entropy rate in a fast physical random-bit generator using a chaotic semiconductor laser with intrinsic noise.

    PubMed

    Mikami, Takuya; Kanno, Kazutaka; Aoyama, Kota; Uchida, Atsushi; Ikeguchi, Tohru; Harayama, Takahisa; Sunada, Satoshi; Arai, Ken-ichi; Yoshimura, Kazuyuki; Davis, Peter

    2012-01-01

    We analyze the time for growth of bit entropy when generating nondeterministic bits using a chaotic semiconductor laser model. The mechanism for generating nondeterministic bits is modeled as a 1-bit sampling of the intensity of light output. Microscopic noise results in an ensemble of trajectories whose bit entropy increases with time. The time for the growth of bit entropy, called the memory time, depends on both noise strength and laser dynamics. It is shown that the average memory time decreases logarithmically with increase in noise strength. It is argued that the ratio of change in average memory time with change in logarithm of noise strength can be used to estimate the intrinsic dynamical entropy rate for this method of random bit generation. It is also shown that in this model the entropy rate corresponds to the maximum Lyapunov exponent.
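
    A hedged toy demonstration of the memory-time idea, with a noisy logistic map standing in for the laser model (the map, noise levels, and 1-bit sampling rule are illustrative): an ensemble launched from a common initial condition is 1-bit sampled at each step, and the time for the bit entropy to approach 1 bit shrinks roughly logarithmically as the noise strength grows.

    ```python
    # Ensemble of noisy chaotic trajectories from one initial condition;
    # "memory time" = steps until the 1-bit sample entropy nears 1 bit.
    import numpy as np

    rng = np.random.default_rng(2)

    def memory_time(noise, n_traj=20000, n_steps=60, threshold=0.99):
        x = np.full(n_traj, 0.3)                  # common initial condition
        for step in range(n_steps):
            x = 4.0 * x * (1.0 - x) + noise * rng.standard_normal(n_traj)
            x = np.clip(x, 1e-12, 1.0 - 1e-12)    # keep the map on [0, 1]
            p = np.mean(x > 0.5)                  # 1-bit sampling of the ensemble
            h = (-p * np.log2(p) - (1 - p) * np.log2(1 - p)) if 0 < p < 1 else 0.0
            if h >= threshold:
                return step + 1
        return n_steps

    for noise in (1e-8, 1e-6, 1e-4):
        print(f"noise = {noise:.0e}:  memory time = {memory_time(noise)} steps")
    ```

    The roughly constant drop in memory time per factor-of-100 increase in noise mirrors the logarithmic dependence reported above, with the slope set by the map's Lyapunov exponent (ln 2 per step for the logistic map at r = 4), echoing the paper's link between the entropy rate and the maximum Lyapunov exponent.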

  10. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases.

  11. Maximum group velocity in a one-dimensional model with a sinusoidally varying staggered potential

    NASA Astrophysics Data System (ADS)

    Nag, Tanay; Sen, Diptiman; Dutta, Amit

    2015-06-01

    We use Floquet theory to study the maximum value of the stroboscopic group velocity in a one-dimensional tight-binding model subjected to an on-site staggered potential varying sinusoidally in time. The results obtained by numerically diagonalizing the Floquet operator are analyzed using a variety of analytical schemes. In the low-frequency limit we use adiabatic theory, while in the high-frequency limit the Magnus expansion of the Floquet Hamiltonian turns out to be appropriate. When the magnitude of the staggered potential is much greater or much less than the hopping, we use degenerate Floquet perturbation theory; we find that dynamical localization occurs in the former case when the maximum group velocity vanishes. Finally, starting from an "engineered" initial state where the particles (taken to be hard-core bosons) are localized in one part of the chain, we demonstrate that the existence of a maximum stroboscopic group velocity manifests in a light-cone-like spreading of the particles in real space.

  12. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers considered here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation result shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  13. Global model SMF2 of the F2-layer maximum height

    NASA Astrophysics Data System (ADS)

    Shubin, V. N.; Karpachev, A. T.; Telegin, V. A.; Tsybulya, K. G.

    2015-09-01

    A global model SMF2 (Satellite Model of F2 layer) of the F2-layer height was created. For its creation, data from the topside sounding on board the Interkosmos-19 satellite, as well as the data of radio occultation measurements in the CHAMP, GRACE, and COSMIC experiments, were used. Data from a network of ground-based sounding stations were also additionally used. The model covers all solar activity levels, months, hours of local and universal time, longitudes, and latitudes. The model is a median one within the range of magnetic activity values Kp < 3+. The spatial-temporal distribution of hmF2 in the new model is described by mutually orthogonal functions, for which the associated Legendre polynomials are used. The temporal distribution is described by an expansion into a Fourier series in UT. The input parameters of the model are geographic coordinates, month, and time (UT or LT). The new model agrees well with the international model of the ionosphere IRI in places where there are many ground-based stations, and it more precisely describes the F2-layer height in places where they are absent: over the oceans and at the equator. Under low solar activity, the standard deviation in the SMF2 model does not exceed 14 km for all hours of the day, compared to 26.6 km in the IRI-2012 model, and the mean relative deviation is approximately a factor of 4 smaller than in the IRI model. Under high solar activity, the maximum standard deviations in the SMF2 model reach 25 km, whereas in the IRI they are higher by a factor of ~2; the mean relative deviation is a factor of ~2 smaller than in the IRI model. Thus, a hmF2 model that is more precise than IRI-2012 was created.
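
    The kind of expansion described, associated Legendre functions in latitude combined with Fourier harmonics in longitude, can be sketched as follows; the truncation degree and coefficients are random placeholders, not SMF2 values:

    ```python
    # Spherical-harmonic-style expansion: associated Legendre functions in
    # latitude, cosine/sine terms in longitude. Coefficients are placeholders.
    import numpy as np
    from scipy.special import lpmv

    rng = np.random.default_rng(3)
    L = 4                                        # illustrative truncation degree
    coef = {(n, k): rng.standard_normal(2)       # (cosine, sine) pair per term
            for n in range(L + 1) for k in range(n + 1)}

    def hmf2_like(lat_deg, lon_deg):
        """Evaluate the truncated expansion at geographic coordinates (degrees)."""
        mu = np.sin(np.radians(lat_deg))         # argument of the Legendre functions
        lon = np.radians(lon_deg)
        total = 0.0
        for (n, k), (a, b) in coef.items():
            total += lpmv(k, n, mu) * (a * np.cos(k * lon) + b * np.sin(k * lon))
        return total

    print("expansion value at 45N, 30E:", hmf2_like(45.0, 30.0))
    ```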

  14. The role of the deformational entropy in the miscibility of polymer blends investigated using a hybrid statistical mechanics and molecular dynamics model.

    PubMed

    Madkour, Tarek M; Salem, Sarah A; Miller, Stephen A

    2013-04-28

    To fully understand the thermodynamic nature of polymer blends and accurately predict their miscibility on a microscopic level, a hybrid model employing both statistical mechanics and molecular dynamics techniques was developed to effectively predict the total free energy of mixing. The statistical mechanics principles were used to derive an expression for the deformational entropy of the chains in the polymeric blends that could be evaluated from molecular dynamics trajectories. Evaluation of the entropy loss due to the deformation of the polymer chains, whether coiling as a result of the repulsive interactions between the blend components or swelling due to the attractive interactions between the polymeric segments, predicted a negative value for the deformational entropy, resulting in a decrease in the overall entropy change upon mixing. Molecular dynamics methods were then used to evaluate the enthalpy of mixing, the entropy of mixing, the loss in entropy due to the deformation of the polymeric chains upon mixing, and the total free energy change for a series of polar and non-polar poly(glycolic acid) (PGA) polymer blends. PMID:23493907

  15. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum neighbor weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
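
    A minimal FOCUSS-style iteration, the baseline that CMOSS modifies (CMOSS additionally pools each point's weight over its spatial neighbors), on a toy underdetermined problem; the "lead field" and data are random stand-ins for a real head model:

    ```python
    # FOCUSS-style iteratively re-weighted minimum-norm solution of A x = b.
    # A and b are random stand-ins; a real lead field comes from a head model.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sensors, n_sources = 20, 100
    A = rng.standard_normal((n_sensors, n_sources))
    x_true = np.zeros(n_sources)
    x_true[[10, 55]] = [1.0, -2.0]               # two sparse sources
    b = A @ x_true

    x = np.ones(n_sources)                       # flat initial estimate
    for _ in range(30):
        W = np.diag(np.abs(x) + 1e-12)           # weights from previous solution
        x = W @ np.linalg.pinv(A @ W) @ b        # weighted minimum-norm update
    print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
    ```

    In CMOSS the diagonal weight for each source point would be built from the previous solution at the point and its neighbors, rather than at the point alone.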

  16. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  17. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    PubMed

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html. PMID:17973343

  18. Evaluations of Bayesian and maximum likelihood methods in PK models with below-quantification-limit data.

    PubMed

    Yang, Shuying; Roger, James

    2010-01-01

    Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, these observed BQL data are nevertheless informative and generally known to be lower than the lower limit of quantification (LLQ). Setting BQLs as missing data violates the usual missing at random (MAR) assumption applied to the statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ], and can be considered as censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in the modelling of such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model, where a number of data sets were simulated from a one-compartment first-order elimination PK model. Several quantification limits were applied to each of the simulated data sets to generate data sets with certain amounts of BQL data. The average percentage of BQL ranged from 25% to 75%. Their influence on the bias and precision of all population PK model parameters such as clearance and volume of distribution under each estimation approach was explored and compared.
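
    The censored-likelihood treatment that both estimation approaches build on can be illustrated with a toy one-compartment fit, where quantified points contribute a density term and BQL points contribute the probability mass below the LLQ; the data, LLQ, and error model below are invented:

    ```python
    # Censored (Tobit-style) likelihood for BQL data in a one-compartment
    # bolus model C(t) = (D/V) * exp(-k t). All numbers are illustrative.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
    conc = np.array([8.4, 7.0, 5.0, 2.5, 0.6, np.nan, np.nan])  # nan = BQL
    LLQ, sigma = 0.4, 0.3

    def neg_log_like(params):
        dose_over_v, k = np.exp(params)
        pred = dose_over_v * np.exp(-k * t)
        bql = np.isnan(conc)
        ll = norm.logpdf(conc[~bql], pred[~bql], sigma).sum()   # quantified points
        ll += norm.logcdf(LLQ, pred[bql], sigma).sum()          # P(C < LLQ) for BQL
        return -ll

    fit = minimize(neg_log_like, np.log([10.0, 0.2]), method="Nelder-Mead")
    print("estimates (D/V, k):", np.exp(fit.x))
    ```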

  19. Essential equivalence of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) and steepest-entropy-ascent models of dissipation for nonequilibrium thermodynamics.

    PubMed

    Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo

    2015-04-01

    By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves.

  20. Essential equivalence of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) and steepest-entropy-ascent models of dissipation for nonequilibrium thermodynamics.

    PubMed

    Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo

    2015-04-01

    By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves. PMID:25974469

  1. Essential equivalence of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) and steepest-entropy-ascent models of dissipation for nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo

    2015-04-01

    By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves.

  2. Dust in High Latitudes in the Community Earth System Model since the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Albani, S.; Mahowald, N. M.

    2015-12-01

    Earth System Models are one of the main tools in modern climate research, and they provide the means to produce future climate projections. Modeling experiments of past climates are one of the pillars of the Coupled Model Intercomparison Project (CMIP) / Paleoclimate Modelling Intercomparison Project (PMIP) general strategy, aimed at understanding the climate sensitivity to varying forcings. Physical models are useful tools for studying dust transport patterns, as they allow representing the full dust cycle from sources to sinks with an internally consistent approach. Combining information from paleodust records and climate models in coherent studies can be a fruitful approach from different points of view. Based on a new quality-controlled, size- and temporally-resolved data compilation, we used the Community Earth System Model to estimate the mass balance of and variability in the global dust cycle since the Last Glacial Maximum and throughout the Holocene. We analyze the variability of the reconstructed global dust cycle under different equilibrium climate conditions from the LGM to the pre-industrial climate, compare with paleodust records focusing on the high latitudes, and discuss the uncertainties and the implications for dust and iron deposition to the oceans.

  3. Striatal and hippocampal entropy and recognition signals in category learning: simultaneous processes revealed by model-based fMRI.

    PubMed

    Davis, Tyler; Love, Bradley C; Preston, Alison R

    2012-07-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning's dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951

  4. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    PubMed Central

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951

  5. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by the Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated against the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the patient group's respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better converged measurement noise shape factor, and better model parameter tracks. Also, it is observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor values are estimated in the super-Gaussian range.

  6. Enhancing fecal coliform total maximum daily load models through bacterial source tracking

    USGS Publications Warehouse

    Hyer, K.E.; Moyer, D.L.

    2004-01-01

    Surface water impairment by fecal coliform bacteria is a water quality issue of national scope and importance. In Virginia, more than 400 stream and river segments are on the Commonwealth's 2002 303(d) list because of fecal coliform impairment. Total maximum daily loads (TMDLs) will be developed for most of these listed streams and rivers. Information regarding the major fecal coliform sources that impair surface water quality would enhance the development of effective watershed models and improve TMDLs. Bacterial source tracking (BST) is a recently developed technology for identifying the sources of fecal coliform bacteria and it may be helpful in generating improved TMDLs. Bacterial source tracking was performed, watershed models were developed, and TMDLs were prepared for three streams (Accotink Creek, Christians Creek, and Blacks Run) on Virginia's 303(d) list of impaired waters. Quality assurance of the BST work suggests that these data adequately describe the bacteria sources that are impairing these streams. Initial comparison of simulated bacterial sources with the observed BST data indicated that the fecal coliform sources were represented inaccurately in the initial model simulation. Revised model simulations (based on BST data) appeared to provide a better representation of the sources of fecal coliform bacteria in these three streams. The coupled approach of incorporating BST data into the fecal coliform transport model appears to reduce model uncertainty and should result in an improved TMDL.

  7. Gamma-ray constraints on maximum cosmogenic neutrino fluxes and UHECR source evolution models

    SciTech Connect

    Gelmini, Graciela B.; Kalashev, Oleg; Semikoz, Dmitri V. E-mail: kalashev@ms2.inr.ac.ru

    2012-01-01

    The dip model assumes that the ultra-high energy cosmic rays (UHECRs) above 10^18 eV consist exclusively of protons and is consistent with the spectrum and composition measured by HiRes. Here we present the range of cosmogenic neutrino fluxes in the dip model which are compatible with a recent determination of the extragalactic very high energy (VHE) gamma-ray diffuse background derived from 2.5 years of Fermi/LAT data. We show that the largest fluxes predicted in the dip model would be detectable by IceCube in about 10 years of observation and are within the reach of a few years of observation with the ARA project. In the incomplete UHECR model, in which protons are assumed to dominate only above 10^19 eV, the cosmogenic neutrino fluxes could be a factor of 2 or 3 larger. Any fraction of heavier nuclei in the UHECRs at these energies would reduce the maximum cosmogenic neutrino fluxes. We also consider special evolution models in which the UHECR sources are assumed to follow the evolution of either the star formation rate (SFR), the gamma-ray burst (GRB) rate, or the active galactic nuclei (AGN) rate in the Universe, and find that the last two are disfavored (and in the dip model rejected) by the new VHE gamma-ray background.

  8. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    NASA Astrophysics Data System (ADS)

    Hargreaves, J. C.; Annan, J. D.; Ohgaito, R.; Paul, A.; Abe-Ouchi, A.

    2013-03-01

    Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project, PMIP2 model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land-sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally- and annually-averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.

  9. UniEnt: uniform entropy model for the dynamics of a neuronal population

    NASA Astrophysics Data System (ADS)

    Hernandez Lahme, Damian; Nemenman, Ilya

    Sensory information and motor responses are encoded in the brain in the collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high-order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for inferring the strengths of multivariate neural interaction patterns. The method is based on a Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recording data from the salamander retina, exposing the relevance of higher-order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.

  10. Entropy and cosmology.

    NASA Astrophysics Data System (ADS)

    Zucker, M. H.

    temperature and thus, by itself, reverse entropy. The vast encompassing gravitational forces that the universe has at its disposal, forces that dominate the phase of contraction, provide the compacting, compressive mechanism that regenerates heat in an expanded, cooled universe and decreases entropy. And this phenomenon takes place without diminishing or depleting the finite amount of mass/energy with which the universe began. The fact that the universe can reverse the entropic process leads to possibilities previously ignored when assessing which of the three models (open, closed, or flat) most probably represents the future of the universe. After analyzing the models, the conclusion reached here is that the open model is only an expanded version of the closed model and therefore is not open, and the closed model will never collapse to a big crunch and, therefore, is not closed. This leaves a modified model, oscillating forever between limited phases of expansion and contraction (a universe in "dynamic equilibrium"), as the only feasible choice.

  11. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid Holocene

    NASA Astrophysics Data System (ADS)

    Hargreaves, J. C.; Annan, J. D.; Ohgaito, R.; Paul, A.; Abe-Ouchi, A.

    2012-08-01

    Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the PMIP2 model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land-sea contrast and polar amplification, although the more detailed regional scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One likely cause of this problem is that the globally- and annually-averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns. The root cause of the model-data mismatch at regional scales is unclear. If the proxy calibration is itself reliable, then representation error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.

  12. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
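
    A hedged sketch of the kind of photon-by-photon likelihood analyzed above, written as a forward recursion over recorded colors and interphoton times for a two-state model; the rates, efficiencies, and simulated data are illustrative, and equal count rates in the two states are assumed:

    ```python
    # Two-state photon-by-photon likelihood: propagate state probabilities over
    # each interphoton time, then weight by the observed photon color.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    k12, k21 = 2.0, 3.0                      # true transition rates (per photon time)
    E = np.array([0.2, 0.8])                 # FRET efficiencies of the two states

    def simulate(n_photons):
        """Photon colors (1 = acceptor) at unit count rate with 2-state switching."""
        lam, peq0 = k12 + k21, k21 / (k12 + k21)
        state, colors, dts = 0, [], []
        for _ in range(n_photons):
            dt = rng.exponential(1.0)
            peq = peq0 if state == 0 else 1.0 - peq0
            if rng.random() > peq + (1.0 - peq) * np.exp(-lam * dt):
                state = 1 - state            # exact two-state switching probability
            colors.append(int(rng.random() < E[state]))
            dts.append(dt)
        return np.array(colors), np.array(dts)

    def log_like(r12, r21, colors, dts):
        K = np.array([[-r12, r21], [r12, -r21]])
        p = np.array([r21, r12]) / (r12 + r21)   # equilibrium initial populations
        ll = 0.0
        for c, dt in zip(colors, dts):
            p = expm(K * dt) @ p                 # propagate over interphoton time
            p = p * (E if c == 1 else 1.0 - E)   # weight by the photon's color
            s = p.sum(); ll += np.log(s); p = p / s
        return ll

    colors, dts = simulate(2000)
    for r in (1.0, 2.0, 4.0):
        print(f"k12 = {r:3.1f}: log-likelihood = {log_like(r, 3.0, colors, dts):9.2f}")
    ```

    The curvature of such a log-likelihood around its maximum is what yields the parameter standard deviations studied in the paper.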

  13. Geometrical and statistical properties of vision models obtained via maximum differentiation

    NASA Astrophysics Data System (ADS)

    Malo, Jesús; Simoncelli, Eero P.

    2015-03-01

    We examine properties of perceptual image distortion models, computed as the mean squared error in the response of a 2-stage cascaded image transformation. Each stage in the cascade is composed of a linear transformation, followed by a local nonlinear normalization operation. We consider two such models. For the first, the structure of the linear transformations is chosen according to perceptual criteria: a center-surround filter that extracts local contrast, and a filter designed to select visually relevant contrast according to the Standard Spatial Observer. For the second, the linear transformations are chosen based on a statistical criterion, so as to eliminate correlations estimated from responses to a set of natural images. For both models, the parameters that govern the scale of the linear filters and the properties of the nonlinear normalization operation are chosen to achieve minimal/maximal subjective discriminability of pairs of images that have been optimized to minimize/maximize the model, respectively (we refer to this as MAximum Differentiation, or "MAD", Optimization). We find that both representations substantially reduce redundancy (mutual information), with a larger reduction occurring in the second (statistically optimized) model. We also find that both models are highly correlated with subjective scores from the TID2008 database, with slightly better performance seen in the first (perceptually chosen) model. Finally, we use a foveated version of the perceptual model to synthesize visual metamers. Specifically, we generate an example of a distorted image that is optimized so as to minimize the perceptual error over receptive fields that scale with eccentricity, demonstrating that the errors are barely visible despite a substantial MSE relative to the original image.
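
    A toy version of such a 2-stage cascade metric, with each stage a linear (here, center-surround-like) filter followed by divisive normalization and the distance taken as the MSE between final responses; the filters and constants are illustrative, not the fitted models from the paper:

    ```python
    # Toy 2-stage cascade distortion measure: linear filtering followed by
    # local divisive normalization at each stage; distance = response MSE.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def stage(x, size, c):
        lin = x - uniform_filter(x, size)             # crude center-surround step
        norm = c + uniform_filter(np.abs(lin), size)  # local activity pool
        return lin / norm                             # divisive normalization

    def model_distance(img_a, img_b):
        ra, rb = img_a, img_b
        for size, c in ((3, 0.1), (7, 0.05)):         # two cascaded stages
            ra, rb = stage(ra, size, c), stage(rb, size, c)
        return np.mean((ra - rb) ** 2)

    rng = np.random.default_rng(6)
    img = rng.random((64, 64))
    noisy = img + 0.1 * rng.standard_normal(img.shape)
    print("cascade-model distance:", model_distance(img, noisy))
    ```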

  14. Competition between Homophily and Information Entropy Maximization in Social Networks

    PubMed Central

    Zhao, Jichang; Liang, Xiao; Xu, Ke

    2015-01-01

    In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which can be stated as homophily breeding new connections. Meanwhile, the recent hypothesis of maximum information entropy has been presented as a possible origin of effective navigation in small-world networks. Through both theoretical and experimental analysis, we find that a competition exists between information entropy maximization and homophily in local structure. This competition suggests that a newly built relationship between two individuals with more common friends would lead to less information entropy gain for them. We demonstrate that both assumptions coexist in the evolution of the social network. The rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally, giving individuals strong and trusted ties. A toy model is also presented to demonstrate the competition and evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994

  15. Competition between Homophily and Information Entropy Maximization in Social Networks.

    PubMed

    Zhao, Jichang; Liang, Xiao; Xu, Ke

    2015-01-01

    In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which can be stated as homophily breeding new connections. Meanwhile, the recent hypothesis of maximum information entropy has been presented as a possible origin of effective navigation in small-world networks. Through both theoretical and experimental analysis, we find that a competition exists between information entropy maximization and homophily in local structure. This competition suggests that a newly built relationship between two individuals with more common friends would lead to less information entropy gain for them. We demonstrate that both assumptions coexist in the evolution of the social network. The rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally, giving individuals strong and trusted ties. A toy model is also presented to demonstrate the competition and evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective.

  16. Resolution enhancement of hyperspectral imagery using maximum a posteriori estimation with a stochastic mixing model

    NASA Astrophysics Data System (ADS)

    Eismann, Michael Theodore

    A maximum a posteriori estimation method is developed and tested for enhancing the spatial resolution of hyperspectral imagery using higher resolution, coincident, panchromatic or multispectral imagery. The approach incorporates a stochastic mixing model of the underlying spectral scene content to develop a cost function that simultaneously optimizes the estimated hyperspectral scene relative to the observed hyperspectral and auxiliary imagery, as well as the local statistics of the spectral mixing model. The incorporation of the stochastic mixing model is found to be the key ingredient to reconstructing sub-pixel spectral information. It provides the necessary constraints for establishing a well-conditioned linear system of equations that can be solved for the high resolution image estimate. The research presented includes a mathematical formulation of the estimation approach and stochastic mixing model, as well as enhancement results for a variety of both synthetic and actual imagery. Both direct and iterative solution methodologies are developed, the latter being necessary to effectively treat imagery with arbitrarily specified spectral and spatial response functions. The performance of the method is qualitatively and quantitatively compared to that of previously developed resolution enhancement approaches. It is found that this novel approach is generally able to reconstruct sub-pixel information in several principal components of the high resolution hyperspectral image estimate. In contrast, the enhancement for conventional methods such as principal component substitution and least-squares estimation is mostly limited to the first principal component.

  17. Last glacial maximum constraints on the Earth System model HadGEM2-ES

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.

    2015-09-01

    We investigate the response of the atmospheric and land surface components of the CMIP5/AR5 Earth System model HadGEM2-ES to pre-industrial (PI: AD 1860) and last glacial maximum (LGM: 21 kyr) boundary conditions. HadGEM2-ES comprises atmosphere, ocean and sea-ice components which are interactively coupled to representations of the carbon cycle, aerosols including mineral dust, and tropospheric chemistry. In this study, we focus on the atmosphere-only model HadGEM2-A coupled to terrestrial carbon cycle and aerosol models. This configuration is forced with monthly sea surface temperature and sea-ice fields from equivalent coupled simulations with an older version of the Hadley Centre model, HadCM3. HadGEM2-A simulates extreme cooling over the northern continents and nearly complete dieback of vegetation in Asia, giving a poor representation of the LGM environment compared with reconstructions of surface temperatures and biome distributions. The model also performs significantly worse for the LGM in comparison with its precursor AR4 model, HadCM3M2. Detailed analysis shows that the major factor behind the vegetation die-off in HadGEM2-A is a subtle change to the temperature dependence of leaf mortality within the phenology model of HadGEM2. This affects both snow-vegetation albedo and vegetation dynamics. A new set of parameters is tested for both the pre-industrial and the LGM, showing much improved vegetation coverage in both time periods, including an improved representation of needle-leaf forest coverage in Siberia for the pre-industrial. The new parameters and the resulting changes in global vegetation distribution strongly impact the simulated loading of mineral dust, an important aerosol for the LGM. The climate response in an abrupt 4× pre-industrial CO2 simulation is also analysed and shows modest regional impacts on surface temperatures across the Boreal zone.

  18. ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. III. RESULTS FROM SIMULATIONS

    SciTech Connect

    Barnes, Eric I.; Egerer, Colin P. E-mail: egerer.coli@uwlax.edu

    2015-05-20

    The equilibria formed by the self-gravitating, collisionless collapse of simple initial conditions have been investigated for decades. We present the results of our attempts to describe the equilibria formed in N-body simulations using thermodynamically motivated models. Previous work has suggested that it is possible to define distribution functions for such systems that describe maximum entropy states. These distribution functions are used to create radial density and velocity distributions for comparison to those from simulations. A wide variety of N-body code conditions are used to reduce the chance that results are biased by numerical issues. We find that a subset of the initial conditions studied leads to equilibria that can be accurately described by these models, and that direct calculation of the entropy shows maximum values being achieved.

  19. Entropy-based financial asset pricing.

    PubMed

    Ormos, Mihály; Zibriczky, Dávid

    2014-01-01

    We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases as a function of the number of securities in a portfolio, in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy system. For the empirical investigation we use daily returns of 150 randomly selected securities over a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore, we show the time-varying behavior of the beta along with entropy.
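
    The "continuous entropy" used as the risk measure is the differential entropy of the return distribution. As an illustration of the kind of estimator involved (the paper's own density estimator may differ), a histogram-based sketch with placeholder data:

      import numpy as np

      def differential_entropy(returns, bins='fd'):
          """Histogram estimate of H = -integral f(x) ln f(x) dx (in nats)."""
          counts, edges = np.histogram(returns, bins=bins)
          widths = np.diff(edges)
          p = counts / counts.sum()          # bin probabilities
          nz = p > 0
          f = p[nz] / widths[nz]             # piecewise-constant density estimate
          return -np.sum(p[nz] * np.log(f))

      # Hypothetical usage on daily log-returns of a simulated price series
      prices = np.cumprod(1 + 0.01 * np.random.default_rng(0).standard_normal(2000))
      print(differential_entropy(np.diff(np.log(prices))))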

  20. Entropy-Based Financial Asset Pricing

    PubMed Central

    Ormos, Mihály; Zibriczky, Dávid

    2014-01-01

    We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases as a function of the number of securities in a portfolio, in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy system. For the empirical investigation we use daily returns of 150 randomly selected securities over a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore, we show the time-varying behavior of the beta along with entropy. PMID:25545668

  1. Entropy Generation Across Earth's Bow Shock

    NASA Technical Reports Server (NTRS)

    Parks, George K.; McCarthy, Michael; Fu, Suiyan; Lee, E. S.; Cao, Jinbin; Goldstein, Melvyn L.; Canu, Patrick; Dandouras, Iannis S.; Reme, Henri; Fazakerley, Andrew; Lin, Naiguo; Wilber, Mark

    2011-01-01

    Earth's bow shock is a transition layer that causes an irreversible change in the state of plasma that is stationary in time. Theories predict entropy increases across the bow shock, but entropy has never been directly measured. Cluster and Double Star plasma experiments measure 3D plasma distributions upstream and downstream of the bow shock that allow calculation of Boltzmann's entropy function H and his famous H-theorem, dH/dt ≤ 0. We present the first direct measurements of entropy density changes across Earth's bow shock. We will show that this entropy generation may be part of the processes that produce the non-thermal plasma distributions, and that it is consistent with a kinetic entropy flux model derived from the collisionless Boltzmann equation, giving strong support that the solar wind's total entropy across the bow shock remains unchanged. As far as we know, our results are not explained by any existing shock models and should be of interest to theorists.

  2. Entropy-based financial asset pricing.

    PubMed

    Ormos, Mihály; Zibriczky, Dávid

    2014-01-01

    We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases as a function of the number of securities in a portfolio, in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy system. For the empirical investigation we use daily returns of 150 randomly selected securities over a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore, we show the time-varying behavior of the beta along with entropy. PMID:25545668

  3. Tropical climate at the last glacial maximum inferred from glacier mass-balance modeling

    USGS Publications Warehouse

    Hostetler, S.W.; Clark, P.U.

    2000-01-01

    Model-derived equilibrium line altitudes (ELAs) of former tropical glaciers support arguments, based on other paleoclimate data, for both the magnitude and spatial pattern of terrestrial cooling in the tropics at the last glacial maximum (LGM). Relative to the present, LGM ELAs were maintained by air temperatures that were 3.5 to 6.6 °C lower and precipitation that ranged from 63% wetter in Hawaii to 25% drier on Mt. Kenya, Africa. Our results imply the need for a ~3 °C cooling of LGM sea surface temperatures in the western Pacific warm pool. Sensitivity tests suggest that LGM ELAs could have persisted until 16,000 years before the present in the Peruvian Andes and on Papua, New Guinea.

  4. Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia

    SciTech Connect

    Nordin, Muhamad Asyraf bin Che; Hassan, Husna

    2015-10-22

    The Markov chain's first-order principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of daily maximum temperatures in Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range (TCR), based on the physiologically equivalent temperature (PET) index, is less than 34°C in Malaysia, so the data were classified into two states: a normal state (within the thermal comfort range) and a hot state (above the thermal comfort range). The long-run results show that the probability of the daily temperature exceeding the TCR is only 2.2%, while the probability of the daily temperature lying within the TCR is 97.8%.
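
    The long-run figures are the stationary distribution of the two-state chain. A minimal sketch, with a hypothetical transition matrix chosen only for illustration (not the fitted Bayan Lepas values):

      import numpy as np

      # State 0 = normal (within TCR), state 1 = hot (above TCR).
      # P[i, j] = probability of moving from state i to state j.
      P = np.array([[0.98, 0.02],
                    [0.90, 0.10]])

      # Stationary distribution: left eigenvector of P for eigenvalue 1.
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmax(np.real(w))])
      pi /= pi.sum()
      print(pi)   # long-run probabilities of (normal, hot), here ~ (0.978, 0.022)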

  5. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.

  6. Performance in population models for count data, part I: maximum likelihood approximations.

    PubMed

    Plan, Elodie L; Maloney, Alan; Trocóniz, Iñaki F; Karlsson, Mats O

    2009-08-01

    There has been little evaluation of maximum likelihood approximation methods for non-linear mixed effects modelling of count data. The aim of this study was to explore the estimation accuracy of population parameters from six count models, using two different methods and programs. Simulations of 100 data sets were performed in NONMEM for each probability distribution, with parameter values derived from a real case study on 551 epileptic patients. The models investigated were: Poisson (PS), Poisson with Markov elements (PMAK), Poisson with a mixture distribution for individual observations (PMIX), Zero-Inflated Poisson (ZIP), Generalized Poisson (GP) and Negative Binomial (NB). Estimations on the simulated datasets were completed with the Laplacian approximation (LAPLACE) in NONMEM and LAPLACE/Gaussian Quadrature (GQ) in SAS. With LAPLACE, the average absolute value of the bias (AVB) in all models was 1.02% for fixed effects, and ranged from 0.32 to 8.24% for the estimation of the random effect of the mean count (lambda). The random effect of the overdispersion parameter present in ZIP, GP and NB was underestimated (-25.87, -15.73 and -21.93% relative bias, respectively). Analysis with GQ at 9 points resulted in an improvement in these parameters (3.80% average AVB). The methods implemented in SAS had a lower fraction of successful minimizations, and GQ at 9 points was considerably slower than at 1 point. Simulations showed that parameter estimates, even when biased, resulted in data that were only marginally different from data simulated from the true model. Thus all methods investigated appear to provide useful results for the investigated count data models. PMID:19653080
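
    For a self-contained flavor of direct maximum likelihood estimation for one of these count distributions (without the random effects that make the Laplace and quadrature approximations necessary), here is a sketch fitting a negative binomial by minimizing the negative log-likelihood; the parameterization and data are illustrative:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import nbinom

      def fit_negative_binomial(y):
          """ML fit of NB(r, p) by minimizing the negative log-likelihood.

          r and p are log/logit-transformed so the optimizer is unconstrained.
          """
          def nll(theta):
              r = np.exp(theta[0])                   # r > 0
              p = 1.0 / (1.0 + np.exp(-theta[1]))    # 0 < p < 1
              return -nbinom.logpmf(y, r, p).sum()
          res = minimize(nll, x0=np.zeros(2), method='Nelder-Mead')
          return np.exp(res.x[0]), 1.0 / (1.0 + np.exp(-res.x[1]))

      # Hypothetical counts: 500 draws from NB(r=5, p=0.4)
      y = nbinom.rvs(5, 0.4, size=500, random_state=0)
      print(fit_negative_binomial(y))   # estimates should be close to (5, 0.4)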

  7. Molecular Sticker Model Simulation on Silicon for a Maximum Clique Problem.

    PubMed

    Ning, Jianguo; Li, Yanmei; Yu, Wen

    2015-01-01

    Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method of the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium--transistor chips rather than bio-molecules--the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing resources on the SOPC architecture, the DEM can solve moderate-size problems in polynomial time. PMID:26075867
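
    For scale, the eight-vertex instance solved on the chip can be checked by exhaustive enumeration on a conventional computer. A brute-force sketch (the example graph is hypothetical):

      from itertools import combinations

      def max_clique(n, edges):
          """Brute-force maximum clique for small n (feasible for n = 8)."""
          adj = {v: set() for v in range(n)}
          for u, v in edges:
              adj[u].add(v)
              adj[v].add(u)
          for size in range(n, 0, -1):                  # largest subsets first
              for subset in combinations(range(n), size):
                  if all(b in adj[a] for a, b in combinations(subset, 2)):
                      return list(subset)               # first maximum clique found
          return []

      # Hypothetical 8-vertex graph containing a 4-clique on {0, 1, 2, 3}
      edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5),
               (5, 6), (6, 7)]
      print(max_clique(8, edges))   # -> [0, 1, 2, 3]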

  8. Molecular Sticker Model Simulation on Silicon for a Maximum Clique Problem

    PubMed Central

    Ning, Jianguo; Li, Yanmei; Yu, Wen

    2015-01-01

    Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method of the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing resources on the SOPC architecture, the DEM can solve moderate-size problems in polynomial time. PMID:26075867

  9. Molecular Sticker Model Simulation on Silicon for a Maximum Clique Problem.

    PubMed

    Ning, Jianguo; Li, Yanmei; Yu, Wen

    2015-01-01

    Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method of the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium--transistor chips rather than bio-molecules--the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing resources on the SOPC architecture, the DEM can solve moderate-size problems in polynomial time.

  10. Climate change uncertainty for daily minimum and maximum temperatures: a model inter-comparison

    SciTech Connect

    Lobell, D; Bonfils, C; Duffy, P

    2006-11-09

    Several impacts of climate change may depend more on changes in mean daily minimum (T_min) or maximum (T_max) temperatures than daily averages. To evaluate uncertainties in these variables, we compared projections of T_min and T_max changes by 2046-2065 for 12 climate models under an A2 emission scenario. Average modeled changes in T_max were slightly lower in most locations than T_min, consistent with historical trends exhibiting a reduction in diurnal temperature ranges. However, while average changes in T_min and T_max were similar, the inter-model variability of T_min and T_max projections exhibited substantial differences. For example, inter-model standard deviations of June-August T_max changes were more than 50% greater than for T_min throughout much of North America, Europe, and Asia. Model differences in cloud changes, which exert relatively greater influence on T_max during summer and T_min during winter, were identified as the main source of uncertainty disparities. These results highlight the importance of considering projections for T_max and T_min separately when assessing climate change impacts, even in cases where average projected changes are similar. In addition, impacts that are most sensitive to summertime T_min or wintertime T_max may be more predictable than suggested by analyses using only projections of daily average temperatures.

  11. Global monsoon change during the Last Glacial Maximum: a multi-model study

    NASA Astrophysics Data System (ADS)

    Yan, Mi; Wang, Bin; Liu, Jian

    2016-07-01

    The change of the global monsoon (GM) during the Last Glacial Maximum (LGM) is investigated using results from a multi-model ensemble of seven coupled climate models participating in the Coupled Model Intercomparison Project Phase 5. The GM changes during the LGM are identified by comparing the pre-industrial control run and the LGM run. The results show that (1) the annual mean GM precipitation and GM domain are reduced by about 10 and 5%, respectively; (2) the monsoon intensity (measured by the local summer-minus-winter precipitation) is weakened over most monsoon regions except the Australian monsoon; (3) the monsoon precipitation is reduced more during the local summer than the local winter; and (4) distinct from all other regional monsoons, the Australian monsoon is strengthened and its area is enlarged. Four major factors contribute to these changes. The lower greenhouse gas concentration and the presence of the ice sheets decrease air temperature and water vapor content, resulting in a general weakening of the GM precipitation and a reduction of the GM domain. The reduced hemispheric difference in the seasonal variation of insolation may contribute to the weakened GM intensity. The changed land-ocean configuration in the vicinity of the Maritime Continent, along with the presence of the ice sheets and the lower greenhouse gas concentration, results in strengthened land-ocean and North-South hemispheric thermal contrasts, leading to the uniquely strengthened Australian monsoon. Although some of the results are consistent with proxy data, uncertainties remain across the different models. Further comparison between proxy data and model experiments is needed to better understand the changes of the GM during the LGM.

  12. Novel classification method for remote sensing images based on information entropy discretization algorithm and vector space model

    NASA Astrophysics Data System (ADS)

    Xie, Li; Li, Guangyao; Xiao, Mang; Peng, Lei

    2016-04-01

    Various kinds of remote sensing image classification algorithms have been developed to adapt to the rapid growth of remote sensing data. Conventional methods typically have restrictions in either classification accuracy or computational efficiency. To overcome these difficulties, a new solution for remote sensing image classification is presented in this study. A discretization algorithm based on information entropy is applied to extract features from the data set, and a vector space model (VSM) is employed as the feature representation algorithm. Because of the simple structure of the feature space, the training rate is accelerated. The performance of the proposed method is compared with two other algorithms: the back propagation neural network (BPNN) method and the ant colony optimization (ACO) method. Experimental results confirm that the proposed method is superior to the other algorithms in terms of classification accuracy and computational efficiency.
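
    As a sketch of the entropy-based discretization idea (the paper's exact algorithm is not spelled out in the abstract), a hypothetical single-split version: choose the cut point of a continuous feature that minimizes the weighted class-label entropy of the two induced intervals.

      import numpy as np

      def entropy(labels):
          """Shannon entropy (bits) of a class-label vector."""
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def best_cut(x, y):
          """Cut point on feature x minimizing the weighted entropy of the split."""
          order = np.argsort(x)
          x, y = x[order], y[order]
          best_t, best_h = None, np.inf
          for i in range(1, len(x)):
              if x[i] == x[i - 1]:
                  continue                      # no valid cut between equal values
              t = 0.5 * (x[i] + x[i - 1])
              h = (i * entropy(y[:i]) + (len(x) - i) * entropy(y[i:])) / len(x)
              if h < best_h:
                  best_t, best_h = t, h
          return best_t, best_h

      # Hypothetical usage on one spectral band with two class labels
      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0.2, 0.05, 100), rng.normal(0.6, 0.05, 100)])
      y = np.array([0] * 100 + [1] * 100)
      print(best_cut(x, y))   # cut near 0.4, residual entropy near 0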

  13. Bayesian, maximum parsimony and UPGMA models for inferring the phylogenies of antelopes using mitochondrial markers.

    PubMed

    Khan, Haseeb A; Arif, Ibrahim A; Bahkali, Ali H; Al Farhan, Ahmad H; Al Homaidan, Ali A

    2008-10-06

    This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP) and the unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to the BA, MP and UPGMA models to compare the topologies of the resulting phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%), followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA, compared with cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions), across the four taxa. All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical (Oryx dammah grouped with Oryx leucoryx) to the Bayesian trees, except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.

  14. Redefining the maximum sustainable yield for the Schaefer population model including multiplicative environmental noise.

    PubMed

    Bousquet, Nicolas; Duchesne, Thierry; Rivest, Louis-Paul

    2008-09-01

    The focus of this article is to investigate biological reference points, such as the maximum sustainable yield (MSY), in a common Schaefer (logistic) surplus production model in the presence of multiplicative environmental noise. This type of model is used in fisheries stock assessment as a first-hand tool for biomass modelling. Under the assumption that catches are proportional to the biomass, we derive new conditions on the environmental noise distribution such that stationarity exists and extinction is avoided. We then obtain new explicit results about the stationary behavior of the biomass distribution for a particular specification of the noise, namely the biomass distribution itself and a redefinition of the MSY and related quantities that now depend on the variance of the noise. Consequently, we obtain a more precise view of how much less optimistic the stochastic version of the MSY can be than the traditionally used (deterministic) MSY. In addition, we give empirical conditions on the error variance under which our specific noise can be approximated by a lognormal noise, the latter being more natural and leading to easier inference in this context. These conditions are mild enough to make the explicit results of this paper valid in a number of practical applications. The outcomes of two case studies on northwest Atlantic haddock [Spencer, P.D., Collie, J.S., 1997. Effect of nonlinear predation rates on rebuilding the Georges Bank haddock (Melanogrammus aeglefinus) stock. Can. J. Fish. Aquat. Sci. 54, 2920-2929] and South Atlantic albacore tuna [Millar, R.B., Meyer, R., 2000. Non-linear state space modelling of fisheries biomass dynamics by using Metropolis-Hastings within-Gibbs sampling. Appl. Stat. 49, 327-342] are used to illustrate the impact of our results in bioeconomic terms.
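
    For orientation, in the deterministic Schaefer model dB/dt = rB(1 - B/K) - C the MSY equals rK/4, attained at B = K/2. The sketch below simulates a discrete-time version with multiplicative lognormal noise and a constant catch fixed at the deterministic MSY; all parameter values are hypothetical, and the recursion is an illustration rather than the paper's exact specification.

      import numpy as np

      def simulate_schaefer(r=0.5, K=1000.0, harvest=None, sigma=0.1,
                            years=200, seed=0):
          """Discrete-time Schaefer dynamics with multiplicative lognormal noise."""
          rng = np.random.default_rng(seed)
          if harvest is None:
              harvest = r * K / 4.0                  # deterministic MSY
          b = K / 2.0                                # start at deterministic B_MSY
          path = []
          for _ in range(years):
              eps = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma)   # E[eps] = 1
              b = max((b + r * b * (1 - b / K) - harvest) * eps, 0.0)
              path.append(b)
              if b == 0.0:
                  break                              # stock collapse
          return np.array(path)

    With sigma = 0 the stock stays at K/2 indefinitely; as sigma grows, harvesting at the full deterministic MSY increasingly ends in collapse, which is the qualitative sense in which the stochastic MSY is less optimistic.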

  15. Bayesian, maximum parsimony and UPGMA models for inferring the phylogenies of antelopes using mitochondrial markers.

    PubMed

    Khan, Haseeb A; Arif, Ibrahim A; Bahkali, Ali H; Al Farhan, Ahmad H; Al Homaidan, Ali A

    2008-01-01

    This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP) and the unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to the BA, MP and UPGMA models to compare the topologies of the resulting phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%), followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA, compared with cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions), across the four taxa. All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical (Oryx dammah grouped with Oryx leucoryx) to the Bayesian trees, except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers. PMID:19204824

  16. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

    2016-04-01

    The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution, and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
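
    A minimal parametric-bootstrap sketch for a stationary GEV quantile, using scipy with the percentile method (note that scipy's shape parameter c equals -ξ in the usual GEV convention); the data are placeholders:

      import numpy as np
      from scipy.stats import genextreme

      def gev_quantile_ci(x, prob=0.99, n_boot=1000, alpha=0.05, seed=0):
          """Percentile-method parametric bootstrap CI for a GEV quantile."""
          rng = np.random.default_rng(seed)
          c, loc, scale = genextreme.fit(x)
          q_hat = genextreme.ppf(prob, c, loc=loc, scale=scale)
          boot = np.empty(n_boot)
          for i in range(n_boot):
              xb = genextreme.rvs(c, loc=loc, scale=scale, size=len(x),
                                  random_state=rng)
              cb, lb, sb = genextreme.fit(xb)
              boot[i] = genextreme.ppf(prob, cb, loc=lb, scale=sb)
          lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
          return q_hat, (lo, hi)

      # Hypothetical annual maxima (mm): 60 years drawn from a GEV
      x = genextreme.rvs(-0.1, loc=50, scale=10, size=60, random_state=1)
      print(gev_quantile_ci(x))

    As the abstract notes, intervals for extreme quantiles from such bootstraps tend to be very wide, which this sketch reproduces for small sample sizes.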

  17. Statistical bounds and maximum likelihood performance for shot noise limited knife-edge modeled stellar occultation

    NASA Astrophysics Data System (ADS)

    McNicholl, Patrick J.; Crabtree, Peter N.

    2014-09-01

    Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of sizes of asteroids and minor planets. More recently, extension of this last application has been proposed as a technique to provide information (if not complete shadow images) of geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models were subsequently developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.

  18. Beyond Maximum Independent Set: AN Extended Model for Point-Feature Label Placement

    NASA Astrophysics Data System (ADS)

    Haunert, Jan-Henrik; Wolff, Alexander

    2016-06-01

    Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in the map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label overlaps) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, interdependences between labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate for randomly generated instances.
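
    The unextended baseline can be written compactly as an integer linear program. With binary selection variables x_l over label candidates l, weights w_l, the candidate set L_i of feature i, and the set O of overlapping candidate pairs (a standard formulation stated here for context; the paper's extended model adds further terms):

      \max \sum_{l} w_l\, x_l
      \;\;\text{s.t.}\;\; x_l + x_{l'} \le 1 \;\; \forall\, (l, l') \in O, \qquad
      \sum_{l \in L_i} x_l \le 1 \;\; \forall\, i, \qquad
      x_l \in \{0, 1\}.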

  19. Segmenting pectoralis muscle on digital mammograms by a Markov random field-maximum a posteriori model.

    PubMed

    Ge, Mei; Mainprize, James G; Mawdsley, Gordon E; Yaffe, Martin J

    2014-10-01

    Accurate and automatic segmentation of the pectoralis muscle is essential in many breast image processing procedures, for example, in the computation of volumetric breast density from digital mammograms. Its segmentation is a difficult task due to the heterogeneity of the region, neighborhood complexities, and shape variability. The segmentation is achieved by pixel classification through a Markov random field (MRF) image model. Using the image intensity feature as observable data and local spatial information as the prior, the posterior distribution is estimated in a stochastic process. With a variable potential component in the energy function, the labeling image is obtained as the maximum a posteriori (MAP) estimate given the image intensity feature, which is assumed to follow a Gaussian distribution; convergence properties in an appropriate sense are achieved by Metropolis sampling of the posterior distribution of the selected energy function. By proposing an adjustable spatial constraint, the MRF-MAP model is able to embody the shape requirement and provide the required flexibility for the model parameter fitting process. We demonstrate that accurate and robust segmentation can be achieved for the curving-triangle-shaped pectoralis muscle in the medio-lateral-oblique (MLO) view, and the semielliptic-shaped muscle in cranio-caudal (CC) view digital mammograms. The applicable mammograms can be in either "For Processing" or "For Presentation" image formats. The algorithm was developed using 56 MLO-view and 79 CC-view FFDM "For Processing" images, and quantitatively evaluated against a random selection of 122 MLO-view and 173 CC-view FFDM images of both presentation intent types.
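
    A minimal sketch of the ingredients described: a two-label MRF with a Potts-style smoothness prior and a Gaussian data term, sampled by Metropolis updates. This is a simplified stand-in without the paper's adjustable shape constraint, and all constants are hypothetical.

      import numpy as np

      def metropolis_mrf(img, mu, sigma, beta=1.5, n_sweeps=20, seed=0):
          """Two-label MRF segmentation by Metropolis sampling.

          Energy = Gaussian data term + Potts smoothness term (weight beta).
          Annealing the acceptance temperature toward zero would push the
          sampler toward the MAP labeling.
          """
          rng = np.random.default_rng(seed)
          h, w = img.shape
          labels = (img > img.mean()).astype(int)   # crude initialization

          def local_energy(y, x, lab):
              data = (img[y, x] - mu[lab]) ** 2 / (2 * sigma[lab] ** 2)
              nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
              smooth = sum(beta for j, i in nbrs
                           if 0 <= j < h and 0 <= i < w and labels[j, i] != lab)
              return data + smooth

          for _ in range(n_sweeps):
              for y in range(h):
                  for x in range(w):
                      cur = labels[y, x]
                      dE = local_energy(y, x, 1 - cur) - local_energy(y, x, cur)
                      if dE < 0 or rng.random() < np.exp(-dE):
                          labels[y, x] = 1 - cur
          return labels

      # Hypothetical usage: seg = metropolis_mrf(img, mu=(0.3, 0.7), sigma=(0.1, 0.1))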

  20. Segmenting pectoralis muscle on digital mammograms by a Markov random field-maximum a posteriori model

    PubMed Central

    Ge, Mei; Mainprize, James G.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2014-01-01

    Accurate and automatic segmentation of the pectoralis muscle is essential in many breast image processing procedures, for example, in the computation of volumetric breast density from digital mammograms. Its segmentation is a difficult task due to the heterogeneity of the region, neighborhood complexities, and shape variability. The segmentation is achieved by pixel classification through a Markov random field (MRF) image model. Using the image intensity feature as observable data and local spatial information as the prior, the posterior distribution is estimated in a stochastic process. With a variable potential component in the energy function, the labeling image is obtained as the maximum a posteriori (MAP) estimate given the image intensity feature, which is assumed to follow a Gaussian distribution; convergence properties in an appropriate sense are achieved by Metropolis sampling of the posterior distribution of the selected energy function. By proposing an adjustable spatial constraint, the MRF-MAP model is able to embody the shape requirement and provide the required flexibility for the model parameter fitting process. We demonstrate that accurate and robust segmentation can be achieved for the curving-triangle-shaped pectoralis muscle in the medio-lateral-oblique (MLO) view, and the semielliptic-shaped muscle in cranio-caudal (CC) view digital mammograms. The applicable mammograms can be in either “For Processing” or “For Presentation” image formats. The algorithm was developed using 56 MLO-view and 79 CC-view FFDM “For Processing” images, and quantitatively evaluated against a random selection of 122 MLO-view and 173 CC-view FFDM images of both presentation intent types. PMID:26158068

  1. Time dependence of Hawking radiation entropy

    SciTech Connect

    Page, Don N.

    2013-09-01

    If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM_0^2, or about 7.509 M_0^2 ≈ 6.268 × 10^76 (M_0/M_sun)^2, using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018 M_0^2 ≈ 1.254 × 10^77 (M_0/M_sun)^2, and then decreases back down to 4πM_0^2 = 1.049 × 10^77 (M_0/M_sun)^2.

  2. Time dependence of Hawking radiation entropy

    NASA Astrophysics Data System (ADS)

    Page, Don N.

    2013-09-01

    If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM_0^2, or about 7.509 M_0^2 ≈ 6.268 × 10^76 (M_0/M_sun)^2, using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018 M_0^2 ≈ 1.254 × 10^77 (M_0/M_sun)^2, and then decreases back down to 4πM_0^2 = 1.049 × 10^77 (M_0/M_sun)^2.

  3. Enthalpy relaxation of polymers: comparing the predictive power of two configurational entropy models extending the AGV approach

    NASA Astrophysics Data System (ADS)

    Andreozzi, L.; Faetti, M.; Zulli, F.; Giordano, M.

    2004-10-01

    The Tool-Narayanaswamy-Moynihan (TNM) phenomenological model is widely accepted for describing the structural relaxation of glasses. However, several quantitative discrepancies can be found in the literature that cannot be entirely ascribed to experimental errors. In this work we compare the predictive power of two recently proposed configurational entropy approaches that extend the TNM formalism. Both change the treatment of nonlinearity by adding a free parameter. We use differential scanning calorimetry (DSC) experiments to test the models on two different polymers. One is a commercial PMMA sample; the other is a side-chain liquid crystal azo-benzene polymer synthesized for optical nanorecording purposes. Different results were found for the two systems. In the PMMA sample only one of the new models was able to improve the agreement between DSC experiments and theory with respect to the TNM model, whereas in the second polymer both approaches described the experiments better than the TNM model.

  4. Interacting price model and fluctuation behavior analysis from Lempel-Ziv complexity and multi-scale weighted-permutation entropy

    NASA Astrophysics Data System (ADS)

    Li, Rui; Wang, Jun

    2016-01-01

    In this work, a financial price model is developed based on the interacting voter system. The Lempel-Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored mainly by Lempel-Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent.
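
    For reference, a compact implementation of a simple LZ76-style complexity, counting the number of distinct phrases in an exhaustive parsing; returns are binarized by sign here, and the data are placeholders (the paper's preprocessing may differ):

      def lz76_complexity(s):
          """Number of phrases in a simple LZ76-style exhaustive parsing of s."""
          i, c, n = 0, 0, len(s)
          while i < n:
              l = 1
              # grow the phrase while it still occurs in the preceding text
              while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                  l += 1
              c += 1
              i += l
          return c

      # Hypothetical usage: binarize daily returns by sign, then parse
      returns = [0.012, -0.004, 0.007, -0.011, 0.003, 0.009, -0.002]
      seq = ''.join('1' if r > 0 else '0' for r in returns)
      print(lz76_complexity(seq))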

  5. The concept of global monsoon applied to the last glacial maximum: A multi-model analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Dabang; Tian, Zhiping; Lang, Xianmei; Kageyama, Masa; Ramstein, Gilles

    2015-10-01

    The last glacial maximum (LGM, ca. 21,000 years ago) has been extensively investigated for better understanding of past glacial climates. Global-scale monsoon changes, however, have not yet been determined. In this study, we examine the global monsoon area (GMA) and precipitation (GMP) as well as the GMP intensity (GMPI) at the LGM using experiments from 17 climate models of the Paleoclimate Modelling Intercomparison Project (PMIP), chosen according to their ability to reproduce the present global monsoon climate. Compared to the reference period (the present day, ca. 1985, for three atmospheric plus two atmosphere-slab-ocean models, and the pre-industrial period, ca. 1750, for 12 fully coupled atmosphere-ocean or atmosphere-ocean-vegetation models), the LGM monsoon area increased over land and decreased over the oceans. The boreal land monsoon areas generally shifted southward, while the northern boundary of the land monsoon areas retreated southward over southern Africa and South America. Both the LGM GMP and GMPI decreased in most of the models. The GMP decrease mainly resulted from the reduced monsoon precipitation over the oceans, while the GMPI decrease derived from the weakened intensity of monsoon precipitation over land and the boreal ocean. Quantitatively, the LGM GMP deficit was due, first, to the GMA reduction and, second, to the GMPI weakening. In response to the LGM large ice sheets and lower greenhouse gas concentrations in the atmosphere, the global surface and tropospheric temperatures cooled, the boreal summer meridional temperature gradient increased, and the summer land-sea thermal contrast at 40°S-70°N decreased. These are the underlying dynamic mechanisms for the LGM monsoon changes. Qualitatively, the simulations agree with reconstructions in all land monsoon areas except the western part of northern Australia, where disagreements occur, and South America and the southern part of southern Africa, where the reconstructions are uncertain.

  6. Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model.

    PubMed

    Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong

    2016-07-01

    We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T_1 and T_2, respectively. The efficiency at maximum power η_MP is given by η_MP = 1 - √(T_2/T_1). This universal form has been known as a characteristic of endoreversible heat engines. Our result extends the universal behavior of η_MP to nonendoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus when η ≤ η_L and η ≥ η_R, with model-dependent parameters η_L and η_R. PMID:27575096
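
    The quoted form is the Curzon-Ahlborn efficiency. Expanding it in the Carnot efficiency η_C makes the universality of the leading terms explicit (a textbook identity, stated here for context):

      \eta_{\mathrm{MP}} \;=\; 1 - \sqrt{\tfrac{T_2}{T_1}}
      \;=\; \frac{\eta_C}{2} + \frac{\eta_C^2}{8} + O(\eta_C^3),
      \qquad \eta_C \equiv 1 - \frac{T_2}{T_1}.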

  7. Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model

    NASA Astrophysics Data System (ADS)

    Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong

    2016-07-01

    We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T_1 and T_2, respectively. The efficiency at maximum power η_MP is given by η_MP = 1 - √(T_2/T_1). This universal form has been known as a characteristic of endoreversible heat engines. Our result extends the universal behavior of η_MP to nonendoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus when η ≤ η_L and η ≥ η_R, with model-dependent parameters η_L and η_R.

  8. Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model.

    PubMed

    Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong

    2016-07-01

    We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T_1 and T_2, respectively. The efficiency at maximum power η_MP is given by η_MP = 1 - √(T_2/T_1). This universal form has been known as a characteristic of endoreversible heat engines. Our result extends the universal behavior of η_MP to nonendoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus when η ≤ η_L and η ≥ η_R, with model-dependent parameters η_L and η_R.

  9. Entropy-stabilized oxides

    PubMed Central

    Rost, Christina M.; Sachet, Edward; Borman, Trent; Moballegh, Ali; Dickey, Elizabeth C.; Hou, Dong; Jones, Jacob L.; Curtarolo, Stefano; Maria, Jon-Paul

    2015-01-01

    Configurational disorder can be compositionally engineered into mixed oxides by populating a single sublattice with many distinct cations. The formulations promote novel and entropy-stabilized forms of crystalline matter where metal cations are incorporated in new ways. Here, through rigorous experiments, a simple thermodynamic model, and a five-component oxide formulation, we demonstrate beyond reasonable doubt that entropy predominates the thermodynamic landscape, and drives a reversible solid-state transformation between a multiphase and single-phase state. In the latter, cation distributions are proven to be random and homogeneous. The findings validate the hypothesis that deliberate configurational disorder provides an orthogonal strategy to imagine and discover new phases of crystalline matter and untapped opportunities for property engineering. PMID:26415623

  10. Modeled estimates of global reef habitat and carbonate production since the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Kleypas, J. A.

    1997-08-01

    Estimated changes in reef area and CaCO3 production since the last glacial maximum (LGM) are presented for the first time, based on a model (ReefHab) which uses measured environmental data to predict global distribution of reef habitat. Suitable reef habitat is defined by temperature, salinity, nutrients, and the depth-attenuated level of photosynthetically available radiation (PAR). CaCO3 production is calculated as a function of PAR. When minimum PAR levels were chosen to restrict reef growth to 30 m depth and less, modern reef area totaled 584-746 × 10³ km². Global carbonate production, which takes into account topographic relief as a control on carbonate accumulation, was 1.00 Gt yr⁻¹. These values are close to the most widely accepted estimates of reef area and carbonate production and demonstrate that basic environmental data can be used to define reef habitat and calcification. To simulate reef habitat changes since the LGM, the model was run at 1-kyr intervals, using appropriate sea level and temperature values. These runs show that at the LGM, reef area was restricted to 20% of that today and carbonate production to 27%, due primarily to a reduction in available space at the lower sea level and secondarily to lower sea surface temperatures. Nonetheless, these values suggest that reef growth prior to shelf flooding was more extensive than previously thought. A crude estimate of reef-released CO2 to the atmosphere since the LGM is of the same order of magnitude as the atmospheric CO2 change recorded in the Vostok ice core, which emphasizes the role of neritic carbonates within the global carbon cycle. This model currently addresses only the major physical and chemical controls on reef carbonate production, but it provides a template for estimating shallow tropical carbonate production both in the present and in the past. As such, the model highlights several long-standing issues regarding reef carbonates, particularly in terms of better defining the roles

  11. Low Streamflow Forecasting using Minimum Relative Entropy

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation structure in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different priors, such as uniform, exponential, and Gaussian assumptions, are used to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared with predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
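
    Since Burg-type maximum entropy spectral analysis is equivalent to fitting an autoregressive (AR) model, the forecasting step can be sketched as an AR fit plus recursive extrapolation. The sketch below uses Yule-Walker estimation for brevity (Burg's recursion would replace the coefficient step), and the order and horizon are arbitrary:

      import numpy as np

      def ar_forecast(x, order=4, steps=12):
          """Fit AR(order) by the Yule-Walker equations, extrapolate `steps` ahead."""
          x = np.asarray(x, float)
          xm = x - x.mean()
          n = len(xm)
          # biased autocovariance estimates at lags 0..order
          r = np.array([np.dot(xm[:n - k], xm[k:]) / n for k in range(order + 1)])
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          a = np.linalg.solve(R, r[1:])              # AR coefficients a_1..a_p
          hist = list(xm[-order:])
          out = []
          for _ in range(steps):
              nxt = float(np.dot(a, hist[::-1]))     # one-step-ahead prediction
              out.append(nxt)
              hist = hist[1:] + [nxt]
          return np.array(out) + x.mean()

      # Hypothetical usage on a monthly low-flow series `flows`:
      # forecast = ar_forecast(flows, order=6, steps=12)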

  12. Finger Vein Segmentation from Infrared Images Based on a Modified Separable Mumford Shah Model and Local Entropy Thresholding

    PubMed Central

    Vlachos, Marios; Dermatas, Evangelos

    2015-01-01

    A novel method for finger vein pattern extraction from infrared images is presented. This method involves four steps: preprocessing, which performs local normalization of the image intensity; image enhancement; image segmentation; and finally postprocessing for image cleaning. In the image enhancement step, an image is sought that is both smooth and similar to the original. The enhanced image is obtained by minimizing the objective function of a modified separable Mumford-Shah model. Since this minimization procedure is computationally intensive for large images, a local application of the Mumford-Shah model in small window neighborhoods is proposed. The finger veins are located in concave nonsmooth regions, so, to distinguish them from the other tissue parts, all the differences between the smooth neighborhoods obtained by the local application of the model and the corresponding windows of the original image are added. After this step, the veins in the enhanced image have been sufficiently emphasized, and an accurate segmentation can be obtained readily by a local entropy thresholding method. Finally, the resulting binary image may suffer from some misclassifications, so a postprocessing step is performed to extract a robust finger vein pattern. PMID:26120357
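
    A minimal sketch of the final thresholding step: compute the Shannon entropy of the gray-level histogram in a sliding window and threshold the result. The window size, bin count, and threshold are hypothetical, and the paper's exact local entropy criterion may differ.

      import numpy as np

      def local_entropy(img, win=15, bins=32):
          """Shannon entropy of the gray-level histogram in each win x win window."""
          h, w = img.shape
          pad = win // 2
          padded = np.pad(img, pad, mode='reflect')
          out = np.zeros((h, w))
          for y in range(h):
              for x in range(w):
                  patch = padded[y:y + win, x:x + win]
                  counts, _ = np.histogram(patch, bins=bins)
                  p = counts[counts > 0] / counts.sum()
                  out[y, x] = -np.sum(p * np.log2(p))
          return out

      # vein_mask = local_entropy(enhanced_img) > t   (t chosen empirically)

    This naive double loop is O(N·win²); a rank-filter implementation such as skimage.filters.rank.entropy computes an equivalent local entropy far more efficiently.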

  13. Coarse-grained models using local-density potentials optimized with the relative entropy: Application to implicit solvation

    NASA Astrophysics Data System (ADS)

    Sanyal, Tanmoy; Shell, M. Scott

    2016-07-01

    Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
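
    The two ingredients of the approach, a smooth local-density variable per site and a relative-entropy gradient for fitting the potential that acts on it, can be sketched as follows; all functional forms, cutoffs, and names here are illustrative assumptions rather than the paper's parameterization:

        import numpy as np

        def smooth_indicator(r, r_on=0.8, r_cut=1.2):
            """Cubic smoothstep switching from 1 to 0 between r_on and r_cut,
            so the local density is a differentiable function of coordinates."""
            x = np.clip((r - r_on) / (r_cut - r_on), 0.0, 1.0)
            return 1.0 - 3.0 * x**2 + 2.0 * x**3

        def local_densities(pos, box):
            """rho_i = sum_j phi(r_ij): a mean-field neighbor count around each
            site (minimum-image convention in a cubic box of side `box`)."""
            d = pos[:, None, :] - pos[None, :, :]
            d -= box * np.round(d / box)
            r = np.sqrt((d ** 2).sum(-1))
            phi = smooth_indicator(r)
            np.fill_diagonal(phi, 0.0)   # exclude self-contribution
            return phi.sum(axis=1)

        def local_density_energy(rho, coeffs):
            """U_LD = sum_i F(rho_i), with F a polynomial in the local density;
            the polynomial coefficients are the parameters to be learned."""
            return np.polynomial.polynomial.polyval(rho, coeffs).sum()

        def srel_gradient(mean_dU_aa, mean_dU_cg, beta=1.0):
            """Relative-entropy gradient w.r.t. a potential parameter lambda:
            dS_rel/dlambda = beta * (<dU/dlambda>_mapped-AA - <dU/dlambda>_CG)."""
            return beta * (mean_dU_aa - mean_dU_cg)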

  14. Technical evaluation of a total maximum daily load model for Upper Klamath and Agency Lakes, Oregon

    USGS Publications Warehouse

    Wood, Tamara M.; Wherry, Susan A.; Carter, James L.; Kuwabara, James S.; Simon, Nancy S.; Rounds, Stewart A.

    2013-01-01

    We reviewed a mass balance model developed in 2001 that guided establishment of the phosphorus total maximum daily load (TMDL) for Upper Klamath and Agency Lakes, Oregon. The purpose of the review was to evaluate the strengths and weaknesses of the model and to determine whether improvements could be made using information derived from studies since the model was first developed. The new data have contributed to the understanding of processes in the lakes, particularly internal loading of phosphorus from sediment, and include measurements of diffusive fluxes of phosphorus from the bottom sediments, groundwater advection, desorption from iron oxides at high pH in a laboratory setting, and estimates of fluxes of phosphorus bound to iron and aluminum oxides. None of these processes in isolation, however, is large enough to account for the episodically high values of whole-lake internal loading calculated from a mass balance, which can range from 10 to 20 milligrams per square meter per day for short periods. The possible role of benthic invertebrates in lake sediments in the internal loading of phosphorus in the lake has become apparent since the development of the TMDL model. Benthic invertebrates can increase diffusive fluxes several-fold through bioturbation and biodiffusion, and, if the invertebrates are bottom feeders, they can recycle phosphorus to the water column through metabolic excretion. These organisms have high densities (1,822–62,178 individuals per square meter) in Upper Klamath Lake. Conversion of the mean density of tubificid worms (Oligochaeta) and chironomid midges (Diptera), two of the dominant taxa, to an areal flux rate based on laboratory measurements of metabolic excretion of two abundant species suggested that excretion by benthic invertebrates is at least as important as any of the other identified processes for internal loading to the water column. Data from sediment cores collected around Upper Klamath Lake since the development of the
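
    The conversion from invertebrate density to an areal phosphorus flux is simple bookkeeping. In the sketch below, only the densities come from the abstract; the per-individual excretion rate is a hypothetical placeholder, not a value from the report:

        def excretion_flux_mg_m2_d(density_ind_m2, excretion_ug_p_ind_d):
            """Areal flux from density and per-individual P excretion:
            (ind m^-2) * (ug P ind^-1 d^-1) / 1000 = mg P m^-2 d^-1."""
            return density_ind_m2 * excretion_ug_p_ind_d / 1000.0

        # With a density inside the abstract's reported range and a hypothetical
        # excretion rate of 0.3 ug P per individual per day:
        # excretion_flux_mg_m2_d(30000, 0.3) -> 9.0 mg P m^-2 d^-1,
        # the same order as the 10-20 mg m^-2 d^-1 internal-loading episodes.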

  15. Cyclic entropy: An alternative to inflationary cosmology

    NASA Astrophysics Data System (ADS)

    Frampton, Paul Howard

    2015-07-01

    We address how to construct an infinitely cyclic universe model. A major consideration is to make the entropy cyclic, which requires the entropy to be reset to zero in each cycle (expansion → turnaround → contraction → bounce → …). Here, we reset the entropy at the turnaround by selecting the introverse (the visible universe) from the extroverse generated by the accelerated expansion. In the model, the observed homogeneity is explained by the low entropy at the bounce. The observed flatness arises from the contraction, together with the reduction in size of the contracting universe relative to the expanding one. The present flatness is predicted to hold to high precision.

  16. Reconstructions of f( T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    NASA Astrophysics Data System (ADS)

    Saha, Pameli; Debnath, Ujjal

    2016-09-01

    Here, we pursue the cosmological application of the most promising candidates of dark energy in the framework of f( T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct different f( T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models, with power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime, as indicated by recent observations, or can lie entirely in the phantom region. Using these models, we also investigate the regions of stability with the help of the squared speed of sound.
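
    For orientation, the entropy-corrected holographic dark energy densities most commonly used in such reconstructions take the following forms (a sketch; the paper's coefficients, notation, and choice of IR cutoff L may differ, and the new agegraphic case replaces L with the conformal age of the universe):

        % Logarithmic entropy-corrected holographic dark energy (IR cutoff L):
        \rho_\Lambda = 3c^2 M_p^2 L^{-2}
                     + \alpha L^{-4} \ln\!\left(M_p^2 L^2\right) + \beta L^{-4}

        % Power-law entropy-corrected version:
        \rho_\Lambda = 3c^2 M_p^2 L^{-2} - \beta M_p^2 L^{-\delta}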

  17. A derivation of the master equation from path entropy maximization

    PubMed Central

    Lee, Julian; Pressé, Steve

    2012-01-01

    The master equation and, more generally, Markov processes are routinely used as models for stochastic processes. They are often justified on the basis of randomization and coarse-graining assumptions. Here instead, we derive nth-order Markov processes and the master equation as unique solutions to an inverse problem. We find that when constraints are not enough to uniquely determine the stochastic model, an nth-order Markov process emerges as the unique maximum entropy solution to this otherwise underdetermined problem. This gives a rigorous alternative for justifying such models while providing a systematic recipe for generalizing widely accepted stochastic models usually assumed to follow from first principles. PMID:22920099
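
    Schematically, the maximum-caliber argument runs as follows (a sketch that suppresses the paper's specific constraint choices):

        % Maximize the path entropy over path probabilities p(\Gamma),
        % with \Gamma = (x_0, x_1, \dots, x_N):
        S = -\sum_{\Gamma} p(\Gamma) \ln p(\Gamma),
        % subject to normalization and constraints on two-time observables,
        \sum_{\Gamma} p(\Gamma)\, A_k(x_t, x_{t+1}) = a_k .
        % The Lagrange solution factorizes over single time steps,
        p(\Gamma) \propto \exp\Big[-\sum_{t}\sum_{k} \lambda_k A_k(x_t, x_{t+1})\Big]
                  = p(x_0) \prod_{t} W(x_{t+1} \mid x_t),
        % i.e., a first-order Markov process; constraints coupling n+1 successive
        % times yield an nth-order Markov process in the same way.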

  18. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087
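
    The core of such a model is the map from measured average strains to a peak bending stress. The sketch below illustrates that map under elastic beam theory; the modulus, peak-to-average factors, and inputs are hypothetical stand-ins for the relationships the paper derives analytically:

        E_STEEL_PA = 2.05e11   # Young's modulus of structural steel (assumption)

        def gauge_stress(avg_strain):
            """Elastic bending stress at a gauge location: sigma = E * epsilon."""
            return E_STEEL_PA * avg_strain

        def max_stress_estimate(avg_strains, peak_factors):
            """Scale each gauge's average strain to the peak strain of its span
            with a peak-to-average curvature ratio from beam theory; the paper
            derives these factors analytically for multi-span walers with
            support deflections, whereas the values here are user-supplied
            assumptions."""
            return max(E_STEEL_PA * k * eps
                       for eps, k in zip(avg_strains, peak_factors))

        # e.g. max_stress_estimate([120e-6, 180e-6], [1.3, 1.5])
        # -> about 5.5e7 Pa (55 MPa)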

  19. Side-chain entropy and packing in proteins.

    PubMed

    Bromberg, S; Dill, K A

    1994-07-01

    What role does side-chain packing play in protein stability and structure? To address this question, we compare a lattice model with side chains (SCM) to a linear lattice model without side chains (LCM). Self-avoiding configurations are enumerated in 2 and 3 dimensions exhaustively for short chains and by Monte Carlo sampling for chains up to 50 main-chain monomers long. This comparison shows that (1) side-chain degrees of freedom increase the entropy of open conformations, but side-chain steric exclusion decreases the entropy of compact conformations, thus producing a substantial entropy that opposes folding; (2) there is side-chain "freezing" or ordering, i.e., a sharp decrease in entropy, near maximum compactness; and (3) the different types of contacts among side chains (s) and main-chain elements (m) have different frequencies, and the frequencies have different dependencies on compactness. mm contacts contribute significantly only at high densities, suggesting that main-chain hydrogen bonding in proteins may be promoted by compactness. The distributions of mm, ms, and ss contacts in compact SCM configurations are similar to the distributions in protein structures in the Brookhaven Protein Data Bank. We propose that packing in proteins is more like the packing of nuts and bolts in a jar than like the pairwise matching of jigsaw puzzle pieces. PMID:7920265
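
    The exhaustive enumeration behind the short-chain results is easy to reproduce for the linear chain model; a minimal Python sketch on the 2D square lattice (the side-chain model would additionally place one side-chain site per monomer before counting):

        import math

        MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def count_saws(n_monomers):
            """Exhaustively count 2D self-avoiding conformations of a linear
            chain (LCM); feasible only for short chains, since the count grows
            roughly as 2.64^n on the square lattice."""
            def extend(path, occupied):
                if len(path) == n_monomers:
                    return 1
                x, y = path[-1]
                total = 0
                for dx, dy in MOVES:
                    site = (x + dx, y + dy)
                    if site not in occupied:
                        occupied.add(site)
                        path.append(site)
                        total += extend(path, occupied)
                        path.pop()
                        occupied.remove(site)
                return total
            start = (0, 0)
            return extend([start], {start})

        # Conformational entropy in units of k_B:
        omega = count_saws(10)
        entropy = math.log(omega)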

  20. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
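
    For context, the graded response model whose parameters are being recovered assigns ordered-category probabilities as differences of adjacent logistic curves; a minimal sketch (example values illustrative):

        import numpy as np

        def grm_category_probs(theta, a, b):
            """Samejima's graded response model:
            P(X >= k | theta) = 1 / (1 + exp(-a * (theta - b_k)))
            for ordered thresholds b_1 < ... < b_{K-1}; category probabilities
            are differences of adjacent cumulative curves."""
            b = np.asarray(b, dtype=float)
            cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(X >= 1..K-1)
            cum = np.concatenate(([1.0], cum, [0.0]))     # add P(X >= 0), P(X >= K)
            return cum[:-1] - cum[1:]                     # P(X = 0), ..., P(X = K-1)

        # e.g. grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
        # -> four category probabilities summing to 1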