Science.gov

Sample records for maximum entropy models

  1. Maximum entropy model for business cycle synchronization

    NASA Astrophysics Data System (ADS)

    Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui

    2014-11-01

    The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
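
    A minimal sketch of the kind of pairwise maximum entropy (Ising-type) fit described above, assuming binary expansion/recession states coded as +1/-1; the data, learning rate, and iteration count are illustrative, and exact enumeration of the 2^7 states is only feasible because the system is as small as the G7:

        import itertools
        import numpy as np

        # Toy data: rows are time points, columns are seven economies,
        # s = +1 (expansion) or -1 (recession). Synthetic for illustration.
        rng = np.random.default_rng(0)
        data = rng.choice([-1, 1], size=(200, 7))

        n = data.shape[1]
        states = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n states

        emp_m = data.mean(axis=0)            # empirical means <s_i>
        emp_C = (data.T @ data) / len(data)  # empirical pair correlations <s_i s_j>

        h = np.zeros(n)                      # local fields
        J = np.zeros((n, n))                 # pairwise couplings

        for step in range(2000):             # gradient ascent on the log-likelihood
            E = states @ h + np.einsum('ki,ij,kj->k', states, J, states) / 2
            p = np.exp(E - E.max())
            p /= p.sum()                     # model distribution over all states
            mod_m = p @ states
            mod_C = (states * p[:, None]).T @ states
            h += 0.1 * (emp_m - mod_m)       # match first moments
            J += 0.1 * (emp_C - mod_C)       # match second moments
            np.fill_diagonal(J, 0.0)

        print("max moment mismatch:", np.abs(emp_C - mod_C).max())

    For larger systems the partition sum over 2^n states becomes intractable and sampling-based moment estimates must be substituted, which is one practical face of the abstract's caveat about large economic systems.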

  2. Maximum entropy models of ecosystem functioning

    NASA Astrophysics Data System (ADS)

    Bertram, Jason

    2014-12-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  3. Maximum entropy models of ecosystem functioning

    SciTech Connect

    Bertram, Jason

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  4. Maximum-entropy principle as Galerkin modelling paradigm

    NASA Astrophysics Data System (ADS)

    Noack, Bernd R.; Niven, Robert K.; Rowley, Clarence W.

    2012-11-01

    We show how the empirical Galerkin method, leading e.g. to POD models, can be derived from maximum-entropy principles building on Noack & Niven 2012 JFM. In particular, principles are proposed (1) for the Galerkin expansion, (2) for the Galerkin system identification, and (3) for the probability distribution of the attractor. Examples will illustrate the advantages of the entropic modelling paradigm. Partially supported by the ANR Chair of Excellence TUCOROM and an ADFA/UNSW Visiting Fellowship.

  5. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
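
    In standard notation (chosen here, not copied from the paper), the SDME distribution over binary spike/silence patterns sigma given a stimulus s combines a stimulus-driven single-cell term with stimulus-independent pairwise couplings:

        \[
        P(\sigma \mid s) \;=\; \frac{1}{Z(s)}\,
        \exp\!\Big(\sum_i h_i(s)\,\sigma_i \;+\; \sum_{i<j} J_{ij}\,\sigma_i\sigma_j\Big),
        \]

    where h_i(s) plays the role of the linear-nonlinear stimulus filter of cell i and Z(s) renormalizes for each stimulus; setting all J_ij = 0 recovers the uncoupled model that the abstract reports the SDME model outperforms.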

  6. Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling

    PubMed Central

    Barnhart, Paul R.; Gillam, Erin H.

    2016-01-01

    Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, allowing regional managers to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of an update. Maximum-entropy modeling showed that temperature, not precipitation, was the most important variable for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species’ distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
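
    Maxent itself is distributed as a standalone application, but a presence-versus-background logistic model over environmental layers is a common lightweight analogue in the species distribution modelling literature; the sketch below uses synthetic data and hypothetical layer names, not the study's variables:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Environmental rasters flattened to (cells, features): temperature, precipitation.
        env = rng.normal(size=(10000, 2))
        suit_true = 1 / (1 + np.exp(-(2.0 * env[:, 0] - 0.2 * env[:, 1])))  # temperature-driven

        presence = rng.choice(10000, 200, p=suit_true / suit_true.sum())  # occurrence cells
        background = rng.choice(10000, 2000)                              # random background

        X = np.vstack([env[presence], env[background]])
        y = np.r_[np.ones(200), np.zeros(2000)]
        model = LogisticRegression().fit(X, y)
        print("layer weights:", model.coef_)   # temperature should dominate, as in the study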

  7. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC advanced Grant "Bridges"; BC: KEOPS ANR-CONICYT, Renvision; and CM: CONICYT-FONDECYT No. 3140572.

  8. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak is determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
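
    A brief illustration of this correspondence, assuming a Yule-Walker AR fit (the order and test signal are arbitrary): the z-plane poles carry the complex frequencies of Prony's relation, and the ME/AR spectrum on the unit circle follows from the same coefficients:

        import numpy as np
        from scipy.linalg import solve_toeplitz

        # Test series: a damped sinusoid (one "complex harmonic function") plus noise.
        t = np.arange(500)
        x = np.exp(-0.01 * t) * np.cos(2 * np.pi * 0.1 * t) \
            + 0.1 * np.random.default_rng(1).standard_normal(500)

        p = 4                                                  # AR order
        r = np.correlate(x, x, 'full')[len(x) - 1:] / len(x)   # autocorrelation estimates
        a = solve_toeplitz(r[:p], r[1:p + 1])                  # Yule-Walker equations

        # z-plane poles: angle gives frequency, radius gives damping (Prony's relation).
        poles = np.roots(np.r_[1.0, -a])
        print("pole moduli:", np.abs(poles))
        print("pole frequencies:", np.angle(poles) / (2 * np.pi))

        # ME/AR spectrum evaluated on the unit circle.
        f = np.linspace(0, 0.5, 256)
        E = np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1)))
        sigma2 = r[0] - a @ r[1:p + 1]                         # prediction-error variance
        spectrum = sigma2 / np.abs(1 - E @ a) ** 2
        print("spectral peak near f =", f[spectrum.argmax()])  # ~0.1, set by the pole angle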

  9. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and demonstrate its effectiveness using publicly available benchmark data sets.

  10. Galerkin POD Model Closure with Triadic Interactions by the Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Hérouard, Nicolas; Niven, Robert K.; Noack, Bernd R.; Abel, Markus W.; Schlegel, Michael

    2016-11-01

    The maximum entropy method of Jaynes provides a method to infer the expected or most probable state of a system, by maximizing the relative entropy subject to physical constraints such as conservation of mass, energy and power. A maximum entropy closure for reduced-order models of fluid flows based on proper orthogonal decomposition (POD) is developed, to infer the probability density function for the POD modal amplitudes. This closure takes into account energy transfers by triadic interactions between modes, by extension of a theoretical model of these interactions in incompressible flow. The framework is applied to several incompressible flow systems including the cylinder wake, both at low and high Reynolds number (oscillatory and turbulent flow conditions), with important implications for the triadic structure and power balance (energy cascade) in the system. Australian Research Council Discovery Projects Grant DP140104402.

  11. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
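
    A toy version of the distinction drawn here, for a six-sided die with a mean-value constraint (the constraint value and Gaussian width below are made up): classic MaxEnt treats the constraint as exact, while the generalized treatment pushes a density over the constraint value through the same MaxEnt map:

        import numpy as np
        from scipy.optimize import brentq

        # Classic MaxEnt for a die: maximize entropy of p_1..p_6 subject to
        # sum(p) = 1 and sum(k p_k) = m. The solution is p_k ~ exp(lam * k).
        k = np.arange(1, 7)

        def maxent_die(m):
            def mean_gap(lam):
                w = np.exp(lam * k)
                return (k * w).sum() / w.sum() - m
            lam = brentq(mean_gap, -50, 50)    # solve the constraint equation
            w = np.exp(lam * k)
            return w / w.sum()

        print(maxent_die(4.5))                 # point probabilities, constraint exact

        # Generalized variant: the constraint value is uncertain (Gaussian),
        # which induces a posterior density over the MaxEnt probabilities.
        rng = np.random.default_rng(0)
        samples = np.array([maxent_die(m) for m in rng.normal(4.5, 0.1, 1000)])
        print("p_6: mean", samples[:, 5].mean(), "std", samples[:, 5].std())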

  12. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models

    PubMed Central

    Stein, Richard R.; Marks, Debora S.; Sander, Chris

    2015-01-01

    Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene–gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design. PMID:26225866
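
    For the continuous-variable category reviewed here, the pairwise maximum-entropy model constrained by means and covariances is a multivariate Gaussian, and the direct couplings sit in the precision (inverse covariance) matrix; a minimal synthetic illustration of why correlation alone overstates direct interaction:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy chain x1 -> x2 -> x3: x1 and x3 are correlated only through x2.
        n = 5000
        x1 = rng.standard_normal(n)
        x2 = x1 + 0.5 * rng.standard_normal(n)
        x3 = x2 + 0.5 * rng.standard_normal(n)
        X = np.column_stack([x1, x2, x3])

        C = np.cov(X, rowvar=False)   # covariance: every pair looks coupled
        J = np.linalg.inv(C)          # precision: direct couplings only
        print(np.round(C, 2))
        print(np.round(J, 2))         # J[0, 2] ~ 0: no direct 1-3 interaction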

  13. Photosynthetic models with maximum entropy production in irreversible charge transfer steps.

    PubMed

    Juretić, Davor; Zupanović, Pasko

    2003-12-01

    Steady-state bacterial photosynthesis is modelled as a cyclic chemical reaction and is examined with respect to overall efficiency, power transfer efficiency, and entropy production. A nonlinear flux-force relationship is assumed. The simplest two-state kinetic model bears complete analogy with the performance of an ideal (zero ohmic resistance of the P-N junction) solar cell. In both cases power transfer to an external load is much higher than the 50% allowed by the impedance matching theorem for the linear flux-force relationship. When maximum entropy production is required in the transition with a load, one obtains a high optimal photochemical yield of 97% and a power transfer efficiency of 91%. In more complex photosynthetic models, entropy production is maximized in all irreversible electron/proton (non-slip) transitions in an iterative procedure. The resulting steady state is stable with respect to an extremely wide range of initial values for the forward rate constants. The optimal proton current increases proportionally to light intensity and decreases with an increase in the proton-motive force (the backpressure effect). Optimal affinity transfer efficiency is very high and nearly perfectly constant for different light absorption rates and for different electrochemical proton gradients. Optimal overall efficiency (of solar into proton-motive power) ranges from 10% (bacteriorhodopsin) to 19% (chlorophyll-based bacterial photosynthesis). Optimal time constants in a photocycle span a wide range from nanoseconds to milliseconds, just as the corresponding experimental constants do. We conclude that photosynthetic proton pumps operate close to the maximum entropy production mode, connecting biological to thermodynamic evolution in a coupled self-amplifying process.

  14. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  15. Modeling the Multiple-Antenna Wireless Channel Using Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Guillaud, M.; Debbah, M.; Moustakas, A. L.

    2007-11-01

    Analytical descriptions of the statistics of wireless channel models are desirable tools for communication systems engineering. When multiple antennas are available at the transmit and/or the receive side (the Multiple-Input Multiple-Output, or MIMO, case), the statistics of the matrix H representing the gains between the antennas of a transmit and a receive antenna array, and in particular the correlation between its coefficients, are known to be of paramount importance for the design of such systems. However, these characteristics depend on the operating environment, since the electromagnetic propagation paths are dictated by the surroundings of the antenna arrays, and little knowledge about these is available at the time of system design. An approach using the Maximum Entropy principle to derive probability density functions for the channel matrix, based on various degrees of knowledge about the environment, is presented. The general idea is to apply the maximum entropy principle to obtain the distribution of each parameter of interest (e.g. correlation), and then to marginalize them out to obtain the full channel distribution. It was shown in previous works, using sophisticated integrals from statistical physics, that by using the full spatial correlation matrix E{vec(H)vec(H)^H} as the intermediate modeling parameter, this method can yield surprisingly concise channel descriptions. In this case, the joint probability density function is shown to be merely a function of the Frobenius norm of the channel matrix, ||H||_F. In the present paper, we investigate the case where information about the average covariance matrix is available (e.g. through measurements). The maximum entropy distribution of the covariance is derived under this constraint. Furthermore, we consider also the doubly correlated case, where the intermediate modeling parameters are chosen as the transmit- and receive-side channel covariance matrices (respectively E{H^H H} and E{HH^H}). We compare the

  16. Maximum Entropy Guide for BSS

    NASA Astrophysics Data System (ADS)

    Górriz, J. M.; Puntonet, C. G.; Medialdea, E. G.; Rojas, F.

    2005-11-01

    This paper proposes a novel method for blindly separating unobservable independent component (IC) signals (BSS), based on the use of a maximum entropy guide (MEG). The paper also includes a formal proof of the convergence of the proposed algorithm using the guiding operator, a new concept in the genetic algorithm (GA) scenario. The Guiding GA (GGA) presented in this work is able to extract ICs at a faster rate than previous ICA algorithms based on maximum entropy contrast functions as the input space dimension increases. It also shows significantly better accuracy and robustness than previous approaches in all cases.

  17. Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2014-10-01

    By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation also constitutes a generalization of the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which the length of a trajectory in state space is measured. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits a spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium

  18. Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed

    NASA Astrophysics Data System (ADS)

    Pitchford, J.; Strager, M.; Riley, A.; Lin, L.; Anderson, J.

    2015-03-01

    We used maximum entropy to model streambank erosion potential (SEP) in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs), and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.

  19. Maximum entropy production in daisyworld

    NASA Astrophysics Data System (ADS)

    Maunu, Haley A.; Knuth, Kevin H.

    2012-05-01

    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow, thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions; depending on the specifics of the model, they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim of better understanding these systems, this paper discusses what is known about the role of MEP in daisyworld models.
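
    For reference, a minimal daisyworld integration in the spirit of Watson and Lovelock, using common textbook parameter values; this sketch illustrates the self-regulation discussed above, not the paper's MEP analysis:

        SIGMA = 5.67e-8                    # Stefan-Boltzmann constant
        S = 917.0                          # solar flux scale (W m^-2)
        A_G, A_W, A_B = 0.5, 0.75, 0.25    # albedos: bare ground, white, black
        GAMMA = 0.3                        # daisy death rate
        Q = 20.0                           # local temperature redistribution (K)

        def growth(T):                     # parabolic growth rate, optimum 295.5 K
            return max(0.0, 1.0 - 0.003265 * (295.5 - T) ** 2)

        def run(L, aw=0.01, ab=0.01, steps=4000, dt=0.05):
            for _ in range(steps):
                bare = 1.0 - aw - ab
                A = A_G * bare + A_W * aw + A_B * ab          # planetary albedo
                Te = (S * L * (1 - A) / SIGMA) ** 0.25        # emission temperature
                Tw, Tb = Te + Q * (A - A_W), Te + Q * (A - A_B)
                aw += dt * aw * (bare * growth(Tw) - GAMMA)
                ab += dt * ab * (bare * growth(Tb) - GAMMA)
                aw, ab = max(aw, 0.001), max(ab, 0.001)       # reseeding floor
            return Te, aw, ab

        for L in (0.7, 1.0, 1.3):          # sweep luminosity
            print(L, run(L))

    Sweeping the luminosity shows the characteristic regulation plateau in emission temperature while daisies persist, the behavior whose relation to MEP the paper examines.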

  20. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    NASA Astrophysics Data System (ADS)

    Colas, Francis; Bessière, Pierre; Girard, Benoît

    2011-03-01

    In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.

  21. Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett

    The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.

  22. Tissue Radiation Response with Maximum Tsallis Entropy

    SciTech Connect

    Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J. C.; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
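
    Schematically, and in notation chosen here rather than taken from the paper: a Boltzmann-Gibbs maximum entropy argument with a mean-dose constraint yields an exponential survival factor, while maximizing the Tsallis entropy introduces a power-law form with a finite cutoff dose,

        \[
        f_{\mathrm{BG}}(D) \;\propto\; e^{-D/D_c}
        \qquad\text{versus}\qquad
        f_q(D) \;=\; \Big(1 - \frac{D}{D_0}\Big)^{\gamma},
        \quad 0 \le D \le D_0,
        \]

    with f_q identically zero beyond D_0 and the exponent gamma fixed by the entropic index q; the finite cutoff D_0 is what encodes the clinically motivated hypothesis mentioned in the abstract.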

  23. Tissue radiation response with maximum Tsallis entropy.

    PubMed

    Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar

    2010-10-08

    The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.

  24. On the sufficiency of pairwise interactions in maximum entropy models of networks

    NASA Astrophysics Data System (ADS)

    Nemenman, Ilya; Merchan, Lina

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of some densely interacting networks in certain regimes, and not necessarily as a special property of living systems. This work was supported in part by James S. McDonnell Foundation Grant No. 220020321.

  25. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    SciTech Connect

    Urniezius, Renaldas

    2011-03-14

    The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing dependency between time-series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.

  26. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks

    NASA Astrophysics Data System (ADS)

    Merchan, Lina; Nemenman, Ilya

    2016-03-01

    Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p>2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of densely interacting networks in certain regimes, and not necessarily as a special property of living systems. By connecting our analysis to the theory of random constraint satisfaction problems, we suggest a reason for why some biological systems may operate in this regime.

  27. Maximum entropy principle for transportation

    SciTech Connect

    Bilich, F.; Da Silva, R.

    2008-11-06

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
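
    The standard entropy-maximizing formulation that this dependence model is contrasted with is the doubly constrained gravity model, solvable by iterative proportional fitting; a small sketch with made-up origins, destinations, and costs (this illustrates the constrained formulation, not the authors' dependence formulation):

        import numpy as np

        # Doubly constrained gravity model: T_ij = A_i O_i B_j D_j exp(-beta c_ij).
        rng = np.random.default_rng(0)
        O = np.array([400.0, 300.0, 300.0])   # trips produced at each origin
        D = np.array([250.0, 450.0, 300.0])   # trips attracted to each destination
        c = rng.uniform(1, 10, size=(3, 3))   # travel-cost matrix
        beta = 0.3

        F = np.exp(-beta * c)                 # deterrence function
        A, B = np.ones(3), np.ones(3)
        for _ in range(200):                  # iterative proportional fitting
            A = 1.0 / (F @ (B * D))
            B = 1.0 / (F.T @ (A * O))

        T = (A * O)[:, None] * (B * D)[None, :] * F
        print(np.round(T, 1))
        print(T.sum(axis=1), T.sum(axis=0))   # row/column sums reproduce O and D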

  28. A Maximum Entropy Model of the Bearded Capuchin Monkey Habitat Incorporating Topography and Spectral Unmixing Analysis

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.

    2012-07-01

    Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the study area was geometrically and atmospherically corrected and used as input in a Minimum Noise Fraction (MNF) procedure, and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our model's predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and facility of movement.

  29. Revisiting the global surface energy budgets with maximum-entropy-production model of surface heat fluxes

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng

    2016-10-01

    The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The MEP-model-predicted surface fluxes automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradients, wind speed, and surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation and temperature from the National Aeronautics and Space Administration-Clouds and the Earth's Radiant Energy System (NASA CERES) supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.

  30. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1987-01-01

    A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification, based on the Maximum Entropy (ME) principle adopted in Hyland's method, is presented. A computer program was implemented for this purpose using ORACLS. Four models were then tested and the results are presented.

  31. Computational design of hepatitis C vaccines using maximum entropy models and population dynamics

    NASA Astrophysics Data System (ADS)

    Hart, Gregory; Ferguson, Andrew

    Hepatitis C virus (HCV) afflicts 170 million people and kills 350,000 annually. Vaccination offers the most realistic and cost-effective hope of controlling this epidemic. Despite 20 years of research, no vaccine is available. A major obstacle is the virus' extreme genetic variability and rapid mutational escape from immune pressure. Improvements in the vaccine design process are urgently needed. Coupling data mining with spin glass models and maximum entropy inference, we have developed a computational approach to translate sequence databases into empirical fitness landscapes. These landscapes explicitly connect viral genotype to phenotypic fitness and reveal vulnerable targets that can be exploited to rationally design immunogens. Viewing these landscapes as the mutational "playing field" over which the virus is constrained to evolve, we have integrated them with agent-based models of the viral mutational and host immune response dynamics, establishing a data-driven immune simulator of HCV infection. We have employed this simulator to perform in silico screening of HCV immunogens. By systematically identifying a small number of promising vaccine candidates, these models can accelerate the search for a vaccine by massively reducing the experimental search space.

  32. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity

    PubMed Central

    Marcatili, Paolo; Pagnani, Andrea

    2016-01-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models. PMID:27074145

  33. A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition

    PubMed Central

    2016-01-01

    Research in video-based FER systems has exploded in the past decade. However, most of the previous methods work well only when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and physical characteristics of the people differ from one dataset to another. Considering a single dataset keeps the variance, which results from these differences, to a minimum. Having a robust FER system which can work across several datasets is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made at the recognition module, which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, by considering the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on such observations. Lastly, an algorithm is designed which predicts the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets. PMID:27635654
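
    A compact sketch of the decoding step described above, assuming the trained MEMM already supplies the conditionals P(s_t | s_{t-1}, o_t); all probabilities below are synthetic, and the paper's feature functions and Viterbi modification are not reproduced:

        import numpy as np

        def memm_viterbi(cond_prob):
            # cond_prob[t, i, j] = P(s_t = j | s_{t-1} = i, o_t), normalized over j.
            T, S, _ = cond_prob.shape
            logp = np.log(cond_prob + 1e-12)
            delta = logp[0, 0].copy()               # fixed dummy start state 0
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + logp[t]   # (prev state, next state)
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0)
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):           # trace the best path backwards
                path.append(back[t, path[-1]])
            return path[::-1]

        rng = np.random.default_rng(0)
        P = rng.random((6, 3, 3))
        P /= P.sum(axis=2, keepdims=True)
        print(memm_viterbi(P))                      # most probable expression-state path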

  34. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    PubMed

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapir's native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species, indeed, has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia.

  35. Zipf's law, power laws and maximum entropy

    NASA Astrophysics Data System (ADS)

    Visser, Matt

    2013-04-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
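
    The single-constraint derivation argued for here can be stated in two lines: maximize the Shannon entropy subject to normalization and a fixed mean logarithm,

        \[
        \max_{p}\; -\sum_{x} p(x)\ln p(x)
        \quad\text{subject to}\quad
        \sum_{x} p(x) = 1, \qquad \sum_{x} p(x)\ln x = \chi,
        \]

    and the Lagrange multiplier alpha conjugate to chi yields a pure power law,

        \[
        p(x) \;=\; \frac{x^{-\alpha}}{\zeta(\alpha)}, \qquad x = 1, 2, 3, \ldots,
        \]

    normalized by the Riemann zeta function and convergent for alpha > 1.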

  36. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km²) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km² cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  37. Application of Maximum Entropy principle to modeling torsion angle probability distribution in proteins

    NASA Astrophysics Data System (ADS)

    Rowicka, Małgorzata; Otwinowski, Zbyszek

    2004-04-01

    Using the Maximum Entropy principle, we find the probability distribution of torsion angles in proteins. We estimate the parameters of this distribution numerically, by implementing the conjugate gradient method in the Polak-Ribière variant. We investigate practical approximations of the theoretical distribution. We discuss the information content of these approximations and compare them with the standard histogram method. Our data are pairs of main-chain torsion angles for a selected subset of high-resolution non-homologous protein structures from the Protein Data Bank.

  38. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    NASA Astrophysics Data System (ADS)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  39. Maximum Entropy Production Modeling of Evapotranspiration Partitioning on Heterogeneous Terrain and Canopy Cover: advantages and limitations.

    NASA Astrophysics Data System (ADS)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.

    2015-12-01

    Quantification of evapotranspiration (ET) and its partition over regions of heterogeneous topography and canopy poses a challenge using traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. Performance of the MEP-ET model to quantify transpiration and soil evaporation was evaluated during wet and dry conditions with independently and directly measured transpiration from sapflow and soil evaporation using the Bowen Ratio Energy Balance (BREB). MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced to the original MEP-ET model accounting for higher stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows plot-scale and tree-scale quantification of evaporation and transpiration, respectively. This study confirms for the first time that the MEP-ET model originally developed for homogeneous open bare soil and closed canopy can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.

  40. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method can allow researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework can allow researchers to assimilate the site-specific secondary information where the observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  41. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has been widely used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: (a) the non-Gaussianity of the individual errors themselves, (b) the correlation between nonlinear functions of errors, and (c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds on the skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. An application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
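
    A numerical sketch of the moment-constrained construction used here: the maximum entropy pdf under constraints on the first four moments has the exponential-quartic form p(x) proportional to exp(-(l1 x + l2 x^2 + l3 x^3 + l4 x^4)), and the multipliers can be found by minimizing the dual function (the target moments and grid below are arbitrary):

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.optimize import minimize

        x = np.linspace(-6, 6, 2001)
        powers = np.vstack([x, x**2, x**3, x**4])     # constraint functions
        target = np.array([0.0, 1.0, 0.8, 4.0])       # prescribed moments (illustrative)

        def dual(lam):                                # log Z(lam) + lam . mu
            u = -(lam @ powers)
            m = u.max()
            logZ = m + np.log(trapezoid(np.exp(u - m), x))
            return logZ + lam @ target

        res = minimize(dual, x0=np.array([0.0, 0.5, 0.0, 0.01]), method='Nelder-Mead',
                       options={'maxiter': 20000, 'maxfev': 20000})

        u = -(res.x @ powers)
        p = np.exp(u - u.max())
        p /= trapezoid(p, x)                          # the MaxEnt density
        print([round(trapezoid(powers[k] * p, x), 3) for k in range(4)])  # ~ target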

  2. Role of adjacency-matrix degeneracy in maximum-entropy-weighted network models

    NASA Astrophysics Data System (ADS)

    Sagarra, O.; Pérez Vicente, C. J.; Díaz-Guilera, A.

    2015-11-01

    Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: a proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact, stressing how an accurate counting of configurations compatible with given constraints is fundamental to building good null models for networks with integer-valued adjacency matrices constructed from an aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when the same observables are constrained to match those of real data. We illustrate our findings by applying the formalism to three data sets using an open-source software package accompanying the present work, and demonstrate how such differences are clearly seen when measuring network observables.

  3. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    PubMed

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast on the extrasolar planet HD 189733b.

  4. Dynamical maximum entropy approach to flocking

    NASA Astrophysics Data System (ADS)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static inference relying only on the spatial correlations fails. When neighbors change slowly and detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  5. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies

    PubMed Central

    Lorenz, Ralph D.

    2010-01-01

    The ‘two-box model’ of planetary climate is discussed. This model has been used to demonstrate consistency of the equator–pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast on the extrasolar planet HD 189733b. PMID:20368253

  6. Statistical optimization for passive scalar transport: maximum entropy production versus maximum Kolmogorov-Sinai entropy

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Faranda, D.; Dubrulle, B.; Paillard, D.

    2015-03-01

    We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy for a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of a parameter f connected to the jump probability, admit a unique maximum, denoted fmaxEP and fmaxKS respectively. The behaviour of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this paper is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists working on maximum entropy production (N ≍ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
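
    The two functionals compared in the paper can be written down for any ergodic Markov chain. The sketch below evaluates them for a generic transition matrix; it does not reproduce the Zero Range Process itself, and the example matrix is an arbitrary choice.

        import numpy as np

        def stationary(P):
            # stationary distribution: left eigenvector of P for eigenvalue 1
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            return pi / pi.sum()

        def ks_entropy(P, pi):
            # h_KS = -sum_i pi_i sum_j P_ij log P_ij, with 0 log 0 = 0
            term = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
            return -np.sum(pi[:, None] * term)

        def entropy_production(P, pi):
            # sigma = 1/2 sum_ij (J_ij - J_ji) log(J_ij/J_ji), J_ij = pi_i P_ij
            J = pi[:, None] * P
            mask = (J > 0) & (J.T > 0)          # skip one-way transitions
            return 0.5 * np.sum((J - J.T)[mask] * np.log(J[mask] / J.T[mask]))

        P = np.array([[0.1, 0.6, 0.3],
                      [0.2, 0.2, 0.6],
                      [0.5, 0.3, 0.2]])
        pi = stationary(P)
        print("h_KS =", ks_entropy(P, pi), " sigma =", entropy_production(P, pi))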

  7. Maximum entropy production - Full steam ahead

    NASA Astrophysics Data System (ADS)

    Lorenz, Ralph D.

    2012-05-01

    The application of a principle of Maximum Entropy Production (MEP, or less ambiguously MaxEP) to planetary climate is discussed. This idea suggests that if sufficiently free of dynamical constraints, the atmospheric and oceanic heat flows across a planet may conspire to maximize the generation of mechanical work, or entropy. Thermodynamic and information-theoretic aspects of this idea are discussed. These issues are also discussed in the context of dust devils, convective vortices found in strongly-heated desert areas.

  8. Maximum Entropy-Based Ecological Niche Model and Bio-Climatic Determinants of Lone Star Tick (Amblyomma americanum) Niche

    PubMed Central

    Raghavan, Ram K.; Goodin, Douglas G.; Hanzlicek, Gregg A.; Zolnerowich, Gregory; Dryden, Michael W.; Anderson, Gary A.; Ganta, Roman R.

    2016-01-01

    The potential distribution of Amblyomma americanum ticks in Kansas was modeled using maximum entropy (MaxEnt) approaches based on museum and field-collected species occurrence data. Various bioclimatic variables were used in the model as potentially influential factors affecting the A. americanum niche. Following reduction of dimensionality among predictor variables using principal components analysis, which revealed that the first two principal axes explain over 87% of the variance, the model indicated that suitable conditions for this medically important tick species cover a larger area in Kansas than currently believed. Soil moisture, temperature, and precipitation were highly correlated with the first two principal components and were influential factors in the A. americanum ecological niche. Assuming that the niche estimated in this study covers the occupied distribution, which needs to be further confirmed by systematic surveys, human exposure to this known disease vector may be considerably under-appreciated in the state. PMID:26824880
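
    The core of the MaxEnt fit used by such niche models can be stated compactly: over background cells, find the Gibbs distribution p(cell) proportional to exp(lam . f(cell)) whose feature expectations match the means observed at presence cells. The sketch below uses synthetic features and presence records, not the tick data, and omits the regularization and feature classes of the Maxent software.

        import numpy as np

        rng = np.random.default_rng(0)
        F = rng.normal(size=(5000, 3))       # bioclimatic features per cell (toy)
        pres = rng.choice(5000, size=60)     # presence-cell indices (toy)
        target = F[pres].mean(axis=0)        # empirical feature means

        lam = np.zeros(3)
        for _ in range(2000):                # dual ascent on the Gibbs weights
            s = F @ lam
            w = np.exp(s - s.max())
            p = w / w.sum()                  # current MaxEnt distribution
            lam += 0.1 * (target - p @ F)    # move E_p[f] toward presence means
        print("lambda:", lam, " residual:", np.abs(target - p @ F).max())

    The fitted p is a relative habitat suitability surface; thresholding or rescaling it gives maps of the kind reported above.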

  9. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  10. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

    The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand, and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability for T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas.

  11. Maximum entropy spherical deconvolution for diffusion MRI.

    PubMed

    Alexander, Daniel C

    2005-01-01

    This paper proposes a maximum entropy method for spherical deconvolution. Spherical deconvolution arises in various inverse problems. This paper uses the method to reconstruct the distribution of microstructural fibre orientations from diffusion MRI measurements. Analysis shows that the PASMRI algorithm, one of the most accurate diffusion MRI reconstruction algorithms in the literature, is a special case of the maximum entropy spherical deconvolution. Experiments compare the new method to linear spherical deconvolution, used previously in diffusion MRI, and to the PASMRI algorithm. The new method compares favourably both in simulation and on standard brain-scan data.

  12. Deep-sea benthic megafaunal habitat suitability modelling: A global-scale maximum entropy model for xenophyophores

    NASA Astrophysics Data System (ADS)

    Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.

    2014-12-01

    Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of

  13. Soil Moisture and Vegetation Controls on Surface Energy Balance Using the Maximum Entropy Production Model of Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Wang, J.; Parolari, A.; Huang, S. Y.

    2014-12-01

    The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations, computing surface energy fluxes from net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve the MEP model's ET predictions under water stress conditions, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress functions also provide a means to infer ecosystem influence on the land surface state. Challenges associated with sampling model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.

  14. Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2017-04-01

    Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under constraints on the generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to the generalized Pareto family.
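
    A minimal numerical version of the optimization, with one simplification: the sketch below maximizes the Tsallis entropy of a discretized income distribution under a plain mean-income constraint rather than the generalized Gini or Gini mean difference constraints of the paper, so it recovers a q-exponential shape rather than the paper's exact family. The grid, q and the target mean are arbitrary choices.

        import numpy as np
        from scipy.optimize import minimize

        q = 1.5
        x = np.linspace(0.0, 10.0, 100)             # income grid
        m = 2.0                                      # target mean income (assumed)

        def neg_tsallis(p):
            # Tsallis entropy S_q = (1 - sum p^q) / (q - 1), negated
            p = np.clip(p, 0.0, None)               # guard against tiny negatives
            return -(1.0 - np.sum(p**q)) / (q - 1.0)

        cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "eq", "fun": lambda p: p @ x - m})
        res = minimize(neg_tsallis, np.full(x.size, 1.0 / x.size),
                       bounds=[(0.0, 1.0)] * x.size, constraints=cons,
                       method="SLSQP", options={"maxiter": 500})
        p = res.x
        print("mean:", p @ x, " S_q:", -neg_tsallis(p))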

  15. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    PubMed

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.

  16. Maximum entropy production rate in quantum thermodynamics

    NASA Astrophysics Data System (ADS)

    Beretta, Gian Paolo

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schrödinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible, well

  17. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Schick, L. H.

    This volume contains a selection of papers presented at the Tenth Annual Workshop on Maximum Entropy and Bayesian Methods. The thirty-six papers included cover a wide range of applications in areas such as economics and econometrics, astronomy and astrophysics, general physics, complex systems, image reconstruction, and probability and mathematics. Together they give an excellent state-of-the-art overview of fundamental methods of data analysis.

  18. Inferring global wind energetics from a simple Earth system model based on the principle of maximum entropy production

    NASA Astrophysics Data System (ADS)

    Karkar, S.; Paillard, D.

    2015-03-01

    The total available wind power in the atmosphere is highly debated, as is the effect large-scale wind farms would have on the climate. Bottom-up approaches, such as those proposed by wind turbine engineers, often lead to non-physical results (mostly non-conservation of energy), while top-down approaches have proven to give physically consistent results. This paper proposes an original method for the calculation of mean annual wind energetics in the atmosphere, without resorting to heavy numerical integration of the entire dynamics. The proposed method is derived from a model based on the Maximum Entropy Production (MEP) principle, which has proven to describe the annual mean temperature and energy fluxes efficiently, despite its simplicity. Because the atmosphere is represented with only one vertical layer and there is no vertical wind component, the model fails to represent general circulation patterns such as cells or trade winds. However, interestingly, global energetic diagnostics are well captured by the mere combination of a simple MEP model and a flux inversion method.

  19. Automatic salient object detection via maximum entropy estimation.

    PubMed

    Chen, Xiao; Zhao, Hongwei; Liu, Pingping; Zhou, Baoyu; Ren, Weiwu

    2013-05-15

    This Letter proposes a rapid method for automatic salient object detection inspired by the idea that an image consists of redundant information and novelty fluctuations. We believe object detection can be achieved by removing the nonsalient parts and focusing on the salient object. Considering the relation between the composition of the image and the aim of object detection, we constructed what we believe is a more reliable saliency map to evaluate the image composition. The local energy feature is combined with a simple biologically inspired model (color, intensity, orientation) to strengthen the integrity of the object in the saliency map. We estimate the entropy of the object via the maximum entropy method. Then, we remove pixels of minimal intensity from the original image and compute the entropy of the resulting images, correlating this entropy with the object entropy. Our experimental results show that the algorithm outperforms the state-of-the-art methods and is more suitable for real-time applications.

  20. Predicting Changes in Macrophyte Community Structure from Functional Traits in a Freshwater Lake: A Test of Maximum Entropy Model

    PubMed Central

    Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te

    2015-01-01

    Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. Study results showed that the maxent model can be highly robust (99.8%) in predicting the species relative abundance of macrophytes with observed community-weighted mean (CWM) traits as the constraints, but relatively weak (about 30%) with CWM traits fitted from the water depth gradient as the constraints. The measured traits showed notably distinct importance in predicting species abundances, lowest for perennial growth form and highest for leaf dry mass content. For tuber presence and leaf nitrogen content, there were significant shifts in their effects on species relative abundance from positive in shallow water to negative in deep water. This result suggests that macrophyte species with a tuber organ and greater leaf nitrogen content become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology. PMID:26167856
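
    The maxent step itself is small enough to sketch: species relative abundances are taken as the maximum-entropy distribution whose community-weighted mean (CWM) traits match the observed values. The trait matrix and target CWMs below are synthetic stand-ins for the study's species and measured traits.

        import numpy as np

        rng = np.random.default_rng(1)
        T = rng.normal(size=(17, 4))           # species-by-trait matrix (toy)
        p_true = rng.dirichlet(np.ones(17))    # a "true" community
        cwm = p_true @ T                       # observed community-weighted means

        lam = np.zeros(4)                      # one multiplier per trait
        for _ in range(5000):                  # dual ascent: p_i ~ exp(T_i . lam)
            s = T @ lam
            w = np.exp(s - s.max())
            p = w / w.sum()
            lam += 0.2 * (cwm - p @ T)         # push model CWM toward target
        print("max CWM error:", np.abs(cwm - p @ T).max())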

  1. Dynamics of the Anderson model for dilute magnetic alloys: A quantum Monte Carlo and maximum entropy study

    SciTech Connect

    Silver, R.N.; Gubernatis, J.E.; Sivia, D.S. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

    In this article we describe the results of a new method for calculating the dynamical properties of the Anderson model. Quantum Monte Carlo (QMC) simulation generates data about the Matsubara Green's functions in imaginary time. To obtain dynamical properties, one must analytically continue these data to real time. This is an extremely ill-posed inverse problem similar to the inversion of a Laplace transform from incomplete and noisy data. Our method is a general one, applicable to the calculation of dynamical properties from a wide variety of quantum simulations. We use Bayesian methods of statistical inference to determine the dynamical properties based on both the QMC data and any prior information we may have, such as sum rules, symmetry, high frequency limits, etc. This provides a natural means of combining perturbation theory and numerical simulations in order to understand dynamical many-body problems. Specifically, we use the well-established maximum entropy (ME) method for image reconstruction. We obtain the spectral density and transport coefficients over the entire range of model parameters accessible by QMC, with data having much larger statistical error than required by other proposed analytic continuation methods.

  2. Predicting the Current and Future Potential Distributions of Lymphatic Filariasis in Africa Using Maximum Entropy Ecological Niche Modelling

    PubMed Central

    Slater, Hannah; Michael, Edwin

    2012-01-01

    Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence records collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk of LF may range between 543 and 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future, depending on the climate scenario used and the thresholds applied to signify infection presence. PMID:22359670

  3. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects are mostly understated. This study develops a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, namely weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 continuous weeks on the variation in dengue cases, under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatiotemporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  4. Maximum entropy and Bayesian methods. Proceedings.

    NASA Astrophysics Data System (ADS)

    Fougère, P. F.

    Bayesian probability theory and maximum entropy are the twin foundations of consistent inductive reasoning about the physical world. This volume contains thirty-two papers which are devoted to both foundations and applications, and which combine tutorial presentations and more research-oriented contributions. Together these provide a state-of-the-art account of the latest developments in such diverse areas as coherent imaging, regression analysis, tomography, neural networks, plasma theory, quantum mechanics, and others. The methods described will be of great interest to mathematicians, physicists, astronomers, crystallographers, engineers and those involved in all aspects of signal processing.

  5. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
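
    For the bipartite binary case, the maximum-entropy ensemble with given expected row and column sums has the closed form p_ij = x_i y_j / (1 + x_i y_j), and the multipliers can be found by a simple fixed-point iteration. The sketch below fits such an ensemble to a random binary matrix; it is a generic illustration of the formalism, not the authors' code.

        import numpy as np

        rng = np.random.default_rng(2)
        A = (rng.random((30, 40)) < 0.2).astype(float)  # observed binary matrix
        r, c = A.sum(axis=1), A.sum(axis=0)             # margins to reproduce

        x, y = np.ones(30), np.ones(40)
        for _ in range(2000):
            # enforce <row sums> = r and <column sums> = c in turn
            x = r / (y[None, :] / (1.0 + np.outer(x, y))).sum(axis=1)
            y = c / (x[:, None] / (1.0 + np.outer(x, y))).sum(axis=0)
        P = np.outer(x, y) / (1.0 + np.outer(x, y))
        print("max margin error:", max(np.abs(P.sum(1) - r).max(),
                                       np.abs(P.sum(0) - c).max()))

    Sampling matrices from P then gives a null ensemble against which observed binary properties can be compared.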

  6. Maximum entropy method helps study multifractal processes

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2011-11-01

    Many natural phenomena exhibit scaling behavior, in which parts of the system resemble the whole. Topography is one example—in some landscapes, shapes seen on a small scale look similar to shapes seen at larger scales. Some processes with scaling behavior are multifractal processes, in which the scaling parameters are described by probability distributions. Nieves et al. show that a method known as the maximum entropy method, which has been applied in information theory and statistical mechanics, can be applied generally to study the statistics of multifractal processes. The authors note that the method, which could be applied to a wide variety of geophysical systems, makes it possible to infer information on multifractal processes even beyond scales where observations are available. (Geophysical Research Letters, doi:10.1029/2011GL048716, 2011)

  7. BIOSMILE: A semantic role labeling system for biomedical verbs using a maximum-entropy model with automatically generated template features

    PubMed Central

    Tsai, Richard Tzong-Han; Chou, Wen-Chi; Su, Ying-Shan; Lin, Yu-Chun; Sung, Cheng-Lung; Dai, Hong-Jie; Yeh, Irene Tzu-Hsuan; Ku, Wei; Sung, Ting-Yi; Hsu, Wen-Lian

    2007-01-01

    Background Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE) systems that incorporate natural language processing (NLP) techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL) is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME) machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. Results To evaluate the performance of BIOSMILE, we conducted two experiments to (1) compare the performance of SRL systems trained on newswire and biomedical corpora; and (2) examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. Conclusion We demonstrate the necessity of using a biomedical proposition bank for training SRL systems in the

  8. Maximum entropy principle and partial probability weighted moments

    NASA Astrophysics Data System (ADS)

    Deng, Jian; Pandey, M. D.; Xie, W. C.

    2012-05-01

    The maximum entropy principle (MaxEnt) is usually used for estimating the probability density function under specified moment constraints. The density function is then integrated to obtain the cumulative distribution function, which needs to be inverted to obtain a quantile corresponding to some specified probability. In such analysis, consideration of higher order moments is important for accurate modelling of the distribution tail. There are three drawbacks to this conventional methodology: (1) estimates of higher order (>2) moments from a small sample of data tend to be highly biased; (2) it can cope only with complete or non-censored samples; and (3) only probability weighted moments of integer orders have been utilized. These difficulties inevitably induce bias and inaccuracy in the resulting quantile estimates and have therefore been the main impediments to the application of the MaxEnt principle in extreme quantile estimation. This paper attempts to overcome these problems and presents a distribution-free method for estimating the quantile function of a non-negative random variable using the principle of maximum partial entropy subject to constraints on the partial probability weighted moments estimated from a censored sample. The main contributions include: (1) new concepts, i.e., partial entropy, fractional partial probability weighted moments, and the partial Kullback-Leibler measure, are elegantly defined; (2) the maximum entropy principle is re-formulated to be constrained by fractional partial probability weighted moments; (3) new distribution-free quantile functions are derived. Numerical analyses are performed to assess the accuracy of extreme value estimates computed from censored samples.
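
    For orientation, the classical complete-sample, integer-order probability weighted moments that the paper generalizes are easy to compute; the fractional and "partial" (censored-sample) versions are extensions of the same idea. The data below are simulated Gumbel values, chosen arbitrarily.

        import numpy as np

        def pwm(sample, r):
            # unbiased estimator of b_r = E[X F(X)^r] from order statistics:
            # b_r = mean over i of x_(i) * prod_{j=1..r} (i - j)/(n - j)
            x = np.sort(sample)
            n = x.size
            i = np.arange(1, n + 1)
            w = np.ones(n)
            for j in range(1, r + 1):
                w *= (i - j) / (n - j)
            return np.mean(w * x)

        rng = np.random.default_rng(3)
        data = rng.gumbel(loc=10.0, scale=2.0, size=500)
        print([round(pwm(data, r), 3) for r in range(4)])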

  9. Determining Dynamical Path Distributions using Maximum Relative Entropy

    DTIC Science & Technology

    2015-05-31

    MaxCal is just the Principle of Maximum Entropy (MaxEnt) where constraints are changing in time. This simply amounts to an additional...

  10. Maximum entropy distribution of stock price fluctuations

    NASA Astrophysics Data System (ADS)

    Bartiromo, Rosario

    2013-04-01

    In this paper we propose to use the principle of absence of arbitrage opportunities in its entropic interpretation to obtain the distribution of stock price fluctuations by maximizing its information entropy. We show that this approach leads to a physical description of the underlying dynamics as a random walk characterized by a stochastic diffusion coefficient and constrained to a given value of the expected volatility, in this way taking into account the information provided by the existence of an option market. The model is validated by a comprehensive comparison with observed distributions of both price return and diffusion coefficient. Expected volatility is the only parameter in the model and can be obtained by analysing option prices. We give an analytic formulation of the probability density function for price returns which can be used to extract expected volatility from stock option data.
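
    One toy reading of the model above: if only the expected value of the variance is constrained, its maximum-entropy law is exponential, and mixing Gaussian returns over that law produces the heavy (Laplace-like) tails seen in price data. The expected-variance level below stands in for the option-implied quantity and is an arbitrary number.

        import numpy as np

        rng = np.random.default_rng(6)
        v_expected = 0.04                           # assumed expected variance
        var = rng.exponential(v_expected, 100000)   # MaxEnt variance draws
        r = rng.normal(0.0, np.sqrt(var))           # scale-mixture "returns"

        def excess_kurtosis(z):
            z = z - z.mean()
            return np.mean(z**4) / np.mean(z**2) ** 2 - 3.0

        print("mixture:", excess_kurtosis(r))       # near 3, as for Laplace
        print("gaussian:", excess_kurtosis(
            rng.normal(0.0, np.sqrt(v_expected), 100000)))  # near 0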

  11. A multiscale maximum entropy moment closure for locally regulated space-time point process models of population dynamics.

    PubMed

    Raghib, Michael; Hill, Nicholas A; Dieckmann, Ulf

    2011-05-01

    The prevalence of structure in biological populations challenges fundamental assumptions at the heart of continuum models of population dynamics based only on mean densities (local or global). Individual-based models (IBMs) were introduced during the last decade in an attempt to overcome this limitation by following explicitly each individual in the population. Although the IBM approach has been quite useful, the capability to follow each individual usually comes at the expense of analytical tractability, which limits the generality of the statements that can be made. For the specific case of spatial structure in populations of sessile (and identical) organisms, space-time point processes with local regulation seem to cover the middle ground between analytical tractability and a higher degree of biological realism. This approach has shown that simplified representations of fecundity, local dispersal and density-dependent mortality weighted by the local competitive environment are sufficient to generate spatial patterns that mimic field observations. Continuum approximations of these stochastic processes try to distill their fundamental properties, and they keep track of not only mean densities, but also higher order spatial correlations. However, due to the non-linearities involved they result in infinite hierarchies of moment equations. This leads to the problem of finding a 'moment closure'; that is, an appropriate order of (lower order) truncation, together with a method of expressing the highest order density not explicitly modelled in the truncated hierarchy in terms of the lower order densities. We use the principle of constrained maximum entropy to derive a closure relationship for truncation at second order, using normalisation and the product densities of first and second orders as constraints, and apply it to one such hierarchy. The resulting 'maxent' closure is similar to the Kirkwood superposition approximation, or 'power-3' closure, but it is
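
    Since the abstract compares the maxent closure to the Kirkwood superposition ("power-3") closure, here is that reference closure in code: the third-order density is expressed through the first- and second-order densities. The arrays are toy discretized densities on a small grid, not the paper's point-process hierarchy.

        import numpy as np

        def kirkwood_closure(rho1, rho2):
            # rho3[i,j,k] ~ rho2[i,j] rho2[j,k] rho2[i,k] / (rho1[i] rho1[j] rho1[k])
            r3 = rho2[:, :, None] * rho2[None, :, :] * rho2[:, None, :]
            r3 /= rho1[:, None, None] * rho1[None, :, None] * rho1[None, None, :]
            return r3

        rho1 = np.full(10, 0.5)                    # uniform first-order density
        rho2 = np.outer(rho1, rho1) * 1.1          # weakly correlated pairs (toy)
        print(kirkwood_closure(rho1, rho2).shape)  # (10, 10, 10)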

  12. NOTE FROM THE EDITOR: Bayesian and Maximum Entropy Methods Bayesian and Maximum Entropy Methods

    NASA Astrophysics Data System (ADS)

    Dobrzynski, L.

    2008-10-01

    The Bayesian and Maximum Entropy Methods are now standard routines in various data analyses, irrespective of one's own preference for the more conventional approach based on the so-called frequentist understanding of the notion of probability. It is not the purpose of the Editor to show all achievements of these methods in various branches of science, technology and medicine. In the case of condensed matter physics, most of the oldest examples of Bayesian analysis can be found in the excellent tutorial textbooks by Sivia and Skilling [1] and Bretthorst [2], while applications of the Maximum Entropy Methods were described in 'Maximum Entropy in Action' [3]. On the list of questions addressed one finds such problems as deconvolution and reconstruction of complicated spectra (e.g. counting the number of lines hidden within a spectrum observed with finite resolution), reconstruction of charge, spin and momentum density distributions from incomplete sets of data, etc. On the theoretical side one finds problems like the estimation of interatomic potentials [4], application of the MEM to quantum Monte Carlo data [5], the Bayesian approach to inverse quantum statistics [6], and a very general approach to statistical mechanics [7]. Obviously, in spite of the power of the Bayesian and Maximum Entropy Methods, not everything can be solved in a unique way by application of these particular methods of analysis, and one of the problems often raised concerns not only the uniqueness of a reconstruction of a given distribution (map) but also its accuracy (error maps). In this 'Comments' section we present a few papers showing more recent advances and views, and highlighting some of the aforementioned problems. References [1] Sivia D S and Skilling J 2006 Data Analysis: A Bayesian Tutorial 2nd edn (Oxford: Oxford University Press) [2] Bretthorst G L 1988 Bayesian Spectrum Analysis and Parameter Estimation (Berlin: Springer) [3] Buck B and

  13. Combining experiments and simulations using the maximum entropy principle.

    PubMed

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-02-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights into the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
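
    In its simplest form, the reweighting version of the procedure described above takes samples from a simulation and finds the minimally perturbed weights, w_i proportional to exp(lam * f(x_i)), that reproduce one experimental average. The observable and target below are synthetic; real applications use many observables and account for experimental error.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(7)
        f = rng.normal(1.0, 0.5, size=2000)   # observable over simulation frames
        f_exp = 1.2                           # experimental target average (toy)

        def reweighted_mean(lam):
            w = np.exp(lam * (f - f.mean()))  # centred for numerical stability
            return (w @ f) / w.sum()

        lam = brentq(lambda l: reweighted_mean(l) - f_exp, -50.0, 50.0)
        w = np.exp(lam * (f - f.mean()))
        w /= w.sum()
        print("lam:", lam, " reweighted <f>:", w @ f)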

  14. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  15. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
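
    The comparison at the heart of the abstract can be reproduced exactly on a tiny instance by exhaustive enumeration instead of an annealer. The sketch below decodes a 12-bit signal from noisy fields and couplings, comparing the ground-state (maximum likelihood) answer with the sign of the finite-temperature marginals (maximum entropy); the noise model and sizes are toy choices, not the experimental setup.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(4)
        n = 12
        s_true = rng.choice([-1.0, 1.0], size=n)          # transmitted bits
        h = s_true + rng.normal(scale=1.0, size=n)        # noisy local fields
        J = 0.2 * np.triu(np.outer(s_true, s_true)
                          + rng.normal(size=(n, n)), k=1) # noisy couplings

        states = np.array(list(product([-1.0, 1.0], repeat=n)))
        E = -(states @ h) - np.einsum("si,ij,sj->s", states, J, states)

        s_ml = states[np.argmin(E)]                       # ground state (ML)
        w = np.exp(-(E - E.min()))                        # Boltzmann weights, T = 1
        mag = (w[:, None] * states).sum(axis=0) / w.sum() # thermal bit marginals
        s_me = np.sign(mag)                               # MaxEnt decoding

        print("ML errors:    ", int((s_ml != s_true).sum()))
        print("MaxEnt errors:", int((s_me != s_true).sum()))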

  16. Maximum-Entropy Inference with a Programmable Annealer

    NASA Astrophysics Data System (ADS)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  17. Maximum-Entropy Inference with a Programmable Annealer.

    PubMed

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A

    2016-03-03

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  18. Modeling loop entropy.

    PubMed

    Chirikjian, Gregory S

    2011-01-01

    Proteins fold from a highly disordered state into a highly ordered one. Traditionally, the folding problem has been stated as one of predicting "the" tertiary structure from sequential information. However, new evidence suggests that the ensemble of unfolded forms may not be as disordered as once believed, and that the native form of many proteins may not be described by a single conformation, but rather an ensemble of its own. Quantifying the relative disorder in the folded and unfolded ensembles as an entropy difference may therefore shed light on the folding process. One issue that clouds discussions of "entropy" is that many different kinds of entropy can be defined: entropy associated with overall translational and rotational Brownian motion, configurational entropy, vibrational entropy, conformational entropy computed in internal or Cartesian coordinates (which can even be different from each other), conformational entropy computed on a lattice, each of the above with different solvation and solvent models, thermodynamic entropy measured experimentally, etc. The focus of this work is the conformational entropy of coil/loop regions in proteins. New mathematical modeling tools for the approximation of changes in conformational entropy during transition from unfolded to folded ensembles are introduced. In particular, models for computing lower and upper bounds on entropy for polymer models of polypeptide coils both with and without end constraints are presented. The methods reviewed here include kinematics (the mathematics of rigid-body motions), classical statistical mechanics, and information theory.

  19. Maximum power entropy method for ecological data analysis

    NASA Astrophysics Data System (ADS)

    Komori, Osamu; Eguchi, Shinto

    2015-01-01

    In ecology, predictive models of the geographical distribution of certain species are widely used to capture spatial diversity. Recently, the Maxent method, based on the Gibbs distribution, has been frequently employed to estimate the distribution of a target species at a site with reasonable accuracy, using environmental features such as temperature, precipitation, elevation and so on. It requires only presence data, which is a big advantage when absence data are unavailable or unreliable. It also incorporates our limited knowledge about the target distribution into the model, such that the expected values of environmental features equal the empirical averages. Moreover, the visualization of the inhabiting probability of species is easily done with the aid of geographical coordinate information from the Global Biodiversity Information Facility (GBIF) in the statistical software R. However, the maximum entropy distribution in Maxent is derived from the Boltzmann-Gibbs-Shannon entropy, which causes unstable estimation of the parameters in the model when outliers are observed in the data. To overcome this weak point, and to gain a deeper understanding of the relation among the total number of species, the Boltzmann-Gibbs-Shannon entropy and Simpson's index, we propose a maximum power entropy method based on beta-divergence, which is a special case of U-divergence. It includes the Boltzmann-Gibbs-Shannon entropy as a special case, so it can achieve better estimation of the target distribution by appropriately choosing the value of the power index beta. We demonstrate the performance of the proposed method by simulation studies as well as publicly available real data.

  20. The equivalence of minimum entropy production and maximum thermal efficiency in endoreversible heat engines.

    PubMed

    Haseli, Y

    2016-05-01

    The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
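
    The endoreversible trade-off is easy to reproduce numerically for the Curzon-Ahlborn engine. The sketch below scans the hot-side working temperature, recovers the classic efficiency 1 - sqrt(Tc/Th) at maximum power, and shows that efficiency rises exactly as the entropy production rate falls; the temperatures and conductances are arbitrary illustrative values.

        import numpy as np

        Th, Tc, Kh, Kc = 600.0, 300.0, 1.0, 1.0
        Thw = np.linspace(Tc + 1.0, Th - 0.5, 20000)  # hot-side working temp.
        Qh = Kh * (Th - Thw)                          # heat drawn from hot bath
        # internal reversibility Qh/Thw = Qc/Tcw fixes the cold-side temperature
        Tcw = Kc * Tc / (Kc - Qh / Thw)
        Qc = Kc * (Tcw - Tc)                          # heat rejected to cold bath
        P = Qh - Qc                                   # power output
        eta = P / Qh                                  # thermal efficiency
        sigma = Qc / Tc - Qh / Th                     # entropy production rate

        i = np.argmax(P)
        print("eta at max power:", round(eta[i], 4),
              " CA value:", round(1.0 - np.sqrt(Tc / Th), 4))
        print("eta up while sigma down:",
              bool(np.all(np.diff(eta) > 0) and np.all(np.diff(sigma) < 0)))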

  1. Maximum Entropy for the International Division of Labor.

    PubMed

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to empirical trade flow data, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares over different products under a product complexity constraint, once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to the empirical data. The empirical data are consistent with maximum entropy subject to a constraint on the expected value of product complexity for each country. One country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
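
    The one-parameter model lends itself to a compact sketch: given the complexities of the products a country can export, shares take the Gibbs form p_i proportional to exp(-lam * c_i), with lam tuned so the expected complexity hits the country's value. The complexities and target below are invented numbers, not trade data.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(5)
        c = rng.gamma(2.0, 1.0, size=300)   # product complexities (toy)
        target = 1.5                        # country's mean exported complexity

        def mean_complexity(lam):
            w = np.exp(-lam * (c - c.min()))   # shifted for numerical stability
            p = w / w.sum()
            return p @ c

        lam = brentq(lambda l: mean_complexity(l) - target, -20.0, 20.0)
        w = np.exp(-lam * (c - c.min()))
        p = w / w.sum()
        print("lam:", lam, " mean complexity:", p @ c)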

  2. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data on trade flows, countries may spend a large fraction of their export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares over different products under a product-complexity constraint, once the international market structure (the country-product bipartite network) is given. A maximum entropy model therefore provides a good fit to the empirical data, which are consistent with maximum entropy subject to a constraint on the expected value of product complexity for each country. One country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052
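
    A sketch of the constrained maximization behind these fitted curves, in assumed notation (p_i is a country's export share on product i, c_i the product's complexity, and \bar{c} the country's average exported complexity):

        \max_{p}\; -\sum_i p_i \ln p_i
        \quad \text{s.t.} \quad \sum_i p_i = 1,\;\; \sum_i p_i c_i = \bar{c}
        \;\;\Longrightarrow\;\; p_i = \frac{e^{-\beta c_i}}{\sum_j e^{-\beta c_j}},

    a one-parameter family indexed by the Lagrange multiplier \beta attached to the complexity constraint, consistent with the single tunable parameter mentioned in the abstract.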

  3. Distribution of the Habitat Suitability of the Main Malaria Vector in French Guiana Using Maximum Entropy Modeling.

    PubMed

    Moua, Yi; Roux, Emmanuel; Girod, Romain; Dusfour, Isabelle; de Thoisy, Benoit; Seyler, Frédérique; Briolant, Sébastien

    2016-12-22

    Malaria is an important health issue in French Guiana. Its principal mosquito vector in this region is Anopheles darlingi Root. Knowledge of the spatial distribution of this species is still very incomplete due to the extent of French Guiana and the difficulty of accessing most of the territory. Species distribution modeling based on the maximum entropy procedure was used to predict the spatial distribution of An. darlingi using 39 presence sites. The resulting model provided significantly high prediction performance (mean 10-fold cross-validated partial area under the curve of 1.11, with an omission error level of 20%, and a continuous Boyce index of 0.42). The model also provided a habitat suitability map and environmental response curves in accordance with the known entomological situation. Several environmental characteristics were positively correlated with the presence of An. darlingi: nonpermanent anthropogenic changes to the natural environment, the presence of roads and tracks, and openings in the forest. Some geomorphological landforms and high-altitude landscapes appear to be unsuitable for An. darlingi. The species distribution modeling was able to reliably predict the distribution of suitable habitats for An. darlingi in French Guiana. The results complete our knowledge of the spatial distribution of the principal malaria vector in this Amazonian region and identify the main factors that favor its presence. They should contribute to the definition of a targeted vector control strategy in a malaria pre-elimination stage, and allow extrapolation of the acquired knowledge to other Amazonian or malaria-endemic contexts.

  4. Multi-site, multivariate weather generator using maximum entropy bootstrap

    NASA Astrophysics Data System (ADS)

    Srivastav, Roshan K.; Simonovic, Slobodan P.

    2014-05-01

    Weather generators are increasingly becoming viable alternative models for assessing the effects of future climate change scenarios on water resources systems. In this study, a new multisite, multivariate maximum entropy bootstrap weather generator (MEBWG) is proposed for generating daily weather variables, with the ability to mimic both the spatial and the temporal dependence structure in addition to other historical statistics. The maximum entropy bootstrap (MEB) involves two main steps: (1) random sampling from the empirical cumulative distribution function, with endpoints selected to allow limited extrapolation, and (2) reordering of the random series to respect the rank ordering of the original time series (the temporal dependence structure). To capture the multi-collinear structure between the weather variables and between the sites, we combine orthogonal linear transformation with MEB. Daily weather data, including precipitation, maximum temperature and minimum temperature from 27 years of record in the Upper Thames River Basin in Ontario, Canada, are used to analyze the ability of the MEBWG-based weather generator. Results indicate that the statistics of the synthetic replicates are not significantly different from those of the observed data and that the model preserves the 27 CLIMDEX indices very well. The MEBWG model shows better performance in terms of extrapolation and computational efficiency when compared to a multisite, multivariate K-nearest-neighbour model.
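
    A minimal sketch of one MEB replicate, following the two steps quoted above (the endpoint-stretching rule and the data are assumptions, not the authors' exact settings):

        import numpy as np

        def meb_replicate(x, rng, extend=0.1):
            """One maximum-entropy-bootstrap-style replicate of series x."""
            x = np.asarray(x, dtype=float)
            n = x.size
            q = np.sort(x)
            span = extend * (q[-1] - q[0])
            # Step 1: sample from the empirical CDF, with endpoints stretched
            # to allow limited extrapolation beyond the observed range.
            grid = np.concatenate([[q[0] - span], q, [q[-1] + span]])
            probs = np.concatenate([[0.0], (np.arange(1, n + 1) - 0.5) / n, [1.0]])
            draws = np.interp(rng.uniform(size=n), probs, grid)
            # Step 2: reorder the draws to follow the rank ordering of the
            # original series, preserving its temporal dependence structure.
            ranks = np.argsort(np.argsort(x))
            return np.sort(draws)[ranks]

        rng = np.random.default_rng(1)
        series = np.cumsum(rng.normal(size=365))     # toy daily series
        replicate = meb_replicate(series, rng)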

  5. Improving predictability of time series using maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.

    2015-04-01

    We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.

  6. The maximum entropy formalism and the idiosyncratic theory of biodiversity.

    PubMed

    Pueyo, Salvador; He, Fangliang; Zillio, Tommaso

    2007-11-01

    Why does the neutral theory, which is based on unrealistic assumptions, predict diversity patterns so accurately? Answering questions like this requires a radical change in the way we tackle them. The large number of degrees of freedom of ecosystems poses a fundamental obstacle to mechanistic modelling. However, there are tools of statistical physics, such as the maximum entropy formalism (MaxEnt), that allow one to transcend particular models and work simultaneously with immense families of models with different rules and parameters, sharing only well-established features. We applied MaxEnt allowing species to be ecologically idiosyncratic, instead of constraining them to be equivalent as the neutral theory does. The answer we found is that neutral models are just a subset of the majority of plausible models that lead to the same patterns. Small variations in these patterns naturally lead to the main classical species abundance distributions, which are thus unified in a single framework.

  7. The maximum entropy formalism and the idiosyncratic theory of biodiversity

    PubMed Central

    Pueyo, Salvador; He, Fangliang; Zillio, Tommaso

    2007-01-01

    Why does the neutral theory, which is based on unrealistic assumptions, predict diversity patterns so accurately? Answering questions like this requires a radical change in the way we tackle them. The large number of degrees of freedom of ecosystems poses a fundamental obstacle to mechanistic modelling. However, there are tools of statistical physics, such as the maximum entropy formalism (MaxEnt), that allow one to transcend particular models and work simultaneously with immense families of models with different rules and parameters, sharing only well-established features. We applied MaxEnt allowing species to be ecologically idiosyncratic, instead of constraining them to be equivalent as the neutral theory does. The answer we found is that neutral models are just a subset of the majority of plausible models that lead to the same patterns. Small variations in these patterns naturally lead to the main classical species abundance distributions, which are thus unified in a single framework. PMID:17692099

  8. Maximum entropy regularization of the geomagnetic core field inverse problem

    NASA Astrophysics Data System (ADS)

    Jackson, Andrew; Constable, Catherine; Gillet, Nicolas

    2007-12-01

    The maximum entropy technique is an accepted method of image reconstruction when the image is made up of pixels of unknown positive intensity (e.g. a grey-scale image). In the problem of reconstructing the magnetic field at the core-mantle boundary from surface data, however, the target image, the value of the radial field Br, can be of either sign. We adopt a known extension of the usual maximum entropy method that can be applied to images consisting of pixels of unconstrained sign. We find that we are able to construct images which have high dynamic range but still very simple structure. In the spherical harmonic domain they have smoothly decreasing power spectra. It is also noteworthy that these models have far less complex null-flux-curve topology (lines on which the radial field vanishes) than models which are quadratically regularized. Problems such as the one addressed here are ubiquitous in geophysics, and we suggest that the applications of the method could be much more widespread than is currently the case.

  9. Possible dynamical explanations for Paltridge's principle of maximum entropy production

    SciTech Connect

    Virgo, Nathaniel; Ikegami, Takashi

    2014-12-05

    Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.

  10. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  11. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems

    PubMed Central

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-01-01

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann–Gibbs–Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon–Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process. PMID:24782541

  12. Maximum entropy Eddington factors in flux-limited neutrino diffusion

    NASA Astrophysics Data System (ADS)

    Cernohorsky, Jan; Vandenhorn, L. J.; Cooperstein, J.

    A neutrino transport scheme for use in dense stellar environments and collapsing stars is constructed. The maximum entropy principle is used to establish the general form of the angular neutrino distribution functions. The two Lagrange multipliers introduced by this procedure are determined using the flux-limited diffusion theory (FDT) of Levermore and Pomraning. The anisotropic scattering contribution is taken into account; its inclusion leads to a modification of the Levermore-Pomraning approach. The transition from a multigroup to an energy-integrated transport scheme for FDT is investigated. The link to the two-fluid model of Cooperstein et al. is made. This extended two-fluid model parametrizes the thermal and chemical disequilibrium between matter and neutrinos. The variable Eddington factors are now self-consistently determined through a local dimensionless quantity, rather than by a macroscopic geometrical prescription.

  13. Approximate maximum-entropy moment closures for gas dynamics

    NASA Astrophysics Data System (ADS)

    McDonald, James G.

    2016-11-01

    Accurate prediction of flows between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.

  14. Maximum entropy, word-frequency, Chinese characters, and multiple meanings.

    PubMed

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k(max)). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters yields quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text into another language. Another consequence of the RGF-prediction is that taking a part of a long text changes the input parameters (M, N, k(max)) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction contains no system-specific information beyond the three a priori values (M, N, k(max)), any specific language characteristic has to be sought in systematic deviations between the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by the multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed.

  15. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    PubMed Central

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters yields quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text into another language. Another consequence of the RGF-prediction is that taking a part of a long text changes the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction contains no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations between the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by the multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed. PMID:25955175

  16. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    NASA Technical Reports Server (NTRS)

    Hsia, Wei Shen

    1989-01-01

    A validated technology database is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One important aspect of the GF is verifying the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved and tested against the test problem, producing a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed to obtain the required projection matrix, the program was successful up to finding the Linear Quadratic Gaussian solution but not beyond. Numerical results along with computer programs which employed ORACLS are presented.

  17. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  18. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
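
    A minimal sketch of the moment-matching ascent described above, on a system small enough that exact enumeration can stand in for Gibbs sampling (the binary patterns are synthetic placeholders, not retinal recordings, and the rectification of the parameter space is not included):

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(0)
        n = 5
        data = rng.choice([-1, 1], size=(1000, n))     # toy binary patterns
        emp = data.T @ data / len(data)                # empirical pairwise moments

        states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
        J = np.zeros((n, n))                           # pairwise couplings
        for _ in range(500):
            logits = 0.5 * np.einsum('si,ij,sj->s', states, J, states)
            p = np.exp(logits - logits.max())
            p /= p.sum()                               # model distribution P(s)
            model = np.einsum('s,si,sj->ij', p, states, states)
            J += 0.05 * (emp - model)                  # steepest ascent on the log-likelihood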

  19. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  20. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In crowd security research, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and to supervise crowd behavior. In physics, entropy is used to describe the macro state of a self-organizing system; a change in entropy indicates a change in the system's macro state. This paper provides a method for constructing crowd behavior microstates, and the corresponding probability distribution, from individuals' velocity information (magnitude and direction). An entropy model is then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. They verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd's macro state leads to a change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing the entropy mutation, it is possible to detect the crowd behavior macro state automatically using cameras. The results will provide data support for crowd emergency prevention and manual emergency intervention.
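
    A minimal sketch of the entropy measure described above (the bin counts and the simulated velocities are assumptions, not the authors' settings):

        import numpy as np

        def crowd_entropy(vx, vy, n_speed=5, n_angle=8):
            """Shannon entropy of discretized (speed, direction) microstates."""
            speed = np.hypot(vx, vy)
            angle = np.arctan2(vy, vx)
            s_edges = np.linspace(0.0, speed.max() + 1e-9, n_speed + 1)
            a_edges = np.linspace(-np.pi, np.pi, n_angle + 1)
            s_bin = np.clip(np.digitize(speed, s_edges) - 1, 0, n_speed - 1)
            a_bin = np.clip(np.digitize(angle, a_edges) - 1, 0, n_angle - 1)
            state = s_bin * n_angle + a_bin            # one microstate per person
            counts = np.bincount(state, minlength=n_speed * n_angle)
            p = counts / counts.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum(), np.log(n_speed * n_angle)

        rng = np.random.default_rng(0)
        H, H_max = crowd_entropy(rng.normal(size=200), rng.normal(size=200))
        # disordered motion: H approaches H_max; ordered motion: H falls well below it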

  1. Predicting the potential distribution of main malaria vectors Anopheles stephensi, An. culicifacies s.l. and An. fluviatilis s.l. in Iran based on maximum entropy model.

    PubMed

    Pakdad, Kamran; Hanafi-Bojd, Ahmad Ali; Vatandoost, Hassan; Sedaghat, Mohammad Mehdi; Raeisi, Ahmad; Moghaddam, Abdolreza Salahi; Foroushani, Abbas Rahimi

    2017-05-01

    Malaria is considered a major public health problem in southern areas of Iran. The goal of this study was to predict the best ecological niches of three main malaria vectors of Iran: Anopheles stephensi, Anopheles culicifacies s.l. and Anopheles fluviatilis s.l. A databank was created that includes all published data about Anopheles species of Iran from 1961 to 2015. The suitable environmental niches for the three above-mentioned Anopheles species were predicted using the maximum entropy model (MaxEnt). AUC (area under the ROC curve) values were 0.943, 0.974 and 0.956 for An. stephensi, An. culicifacies s.l. and An. fluviatilis s.l. respectively, indicating the model's high predictive power for species niches. The largest bioclimatic contributor for An. stephensi and An. fluviatilis s.l. was bio 15 (precipitation seasonality), at 25.5% and 36.1% respectively, followed by bio 1 (annual mean temperature) at 20.8% for An. stephensi; for An. culicifacies s.l. it was bio 4 (temperature seasonality), with a 49.4% contribution. This is the first step in mapping the country's malaria vectors; future climate conditions may change the dispersal maps of Anopheles. Iran is in the malaria elimination phase, so such spatio-temporal studies are essential and could provide guidelines for decision makers on IVM strategies in problematic areas.

  2. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated using the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero-filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
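
    A minimal sketch of the Burg recursion (applied here to a toy signal, not to interferometer data): fit an AR model of a given order by minimizing the summed forward and backward prediction error power, then evaluate the all-pole maximum entropy spectrum.

        import numpy as np

        def burg_psd(x, order, n_freq=512):
            x = np.asarray(x, dtype=float) - np.mean(x)
            f, b = x[1:].copy(), x[:-1].copy()          # forward/backward errors
            a = np.zeros(order)                         # AR coefficients
            E = x @ x / x.size                          # prediction error power
            for m in range(order):
                k = -2.0 * (f @ b) / (f @ f + b @ b)    # reflection coefficient
                a_prev = a[:m].copy()
                a[:m] = a_prev + k * a_prev[::-1]       # Levinson-type update
                a[m] = k
                E *= 1.0 - k * k
                f, b = (f + k * b)[1:], (b + k * f)[:-1]
            w = np.linspace(0.0, np.pi, n_freq)
            A = 1.0 + np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a
            return w, E / np.abs(A) ** 2                # MEM power spectral density

        t = np.arange(1024)
        sig = np.sin(0.3 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
        w, psd = burg_psd(sig, order=16)                # sharp peak expected near w = 0.3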

  3. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Inhaled 222Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater also contributes to both the inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation so that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model achieves a leave-one-out cross-validation score of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled 222Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock 238U defined on the basis of overlying stream-sediment 238U concentrations, which constitute widely distributed and consistently analyzed point-source data.

  4. Inverse Spin Glass and Related Maximum Entropy Problems

    NASA Astrophysics Data System (ADS)

    Castellana, Michele; Bialek, William

    2014-09-01

    If we have a system of binary variables and we measure the pairwise correlations among these variables, then the least structured or maximum entropy model for their joint distribution is an Ising model with pairwise interactions among the spins. Here we consider inhomogeneous systems in which we constrain, for example, not the full matrix of correlations, but only the distribution from which these correlations are drawn. In this sense, what we have constructed is an inverse spin glass: rather than choosing coupling constants at random from a distribution and calculating correlations, we choose the correlations from a distribution and infer the coupling constants. We argue that such models generate a block structure in the space of couplings, which provides an explicit solution of the inverse problem. This allows us to generate a phase diagram in the space of (measurable) moments of the distribution of correlations. We expect that these ideas will be most useful in building models for systems that are nonequilibrium statistical mechanics problems, such as networks of real neurons.

  5. Triadic conceptual structure of the maximum entropy approach to evolution.

    PubMed

    Herrmann-Pillath, Carsten; Salthe, Stanley N

    2011-03-01

    Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information-generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information-carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.

  6. A maximum entropy method for MEG source imaging

    SciTech Connect

    Khosla, D.; Singh, M.

    1996-12-31

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum-norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image that has the maximum entropy permitted by the information available to us. To account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques such as functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.

  7. Nonparametric supervised learning by linear interpolation with maximum entropy.

    PubMed

    Gupta, Maya R; Gray, Robert M; Olshen, Richard A

    2006-05-01

    Nonparametric neighborhood methods for learning entail estimation of class conditional probabilities based on relative frequencies of samples that are "near-neighbors" of a test point. We propose and explore the behavior of a learning algorithm that uses linear interpolation and the principle of maximum entropy (LIME). We consider some theoretical properties of the LIME algorithm: LIME weights have exponential form; the estimates are consistent; and the estimates are robust to additive noise. In relation to bias reduction, we show that near-neighbors contain a test point in their convex hull asymptotically. The common linear interpolation solution used for regression on grids or look-up-tables is shown to solve a related maximum entropy problem. LIME simulation results support use of the method, and performance on a pipeline integrity classification problem demonstrates that the proposed algorithm has practical value.
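
    A minimal sketch of maximum-entropy interpolation weights of the exponential form noted above (my paraphrase of the idea, not the authors' implementation; it assumes the test point lies inside the convex hull of the neighbors):

        import numpy as np
        from scipy.optimize import minimize

        def maxent_weights(X, x_test):
            """Max-entropy weights w >= 0, sum(w) = 1, with sum_i w_i X[i] = x_test."""
            def dual(theta):            # convex dual: logsumexp(X @ theta) - theta . x_test
                z = X @ theta
                m = z.max()
                return m + np.log(np.exp(z - m).sum()) - theta @ x_test
            theta = minimize(dual, np.zeros(X.shape[1])).x
            z = X @ theta
            w = np.exp(z - z.max())     # exponential-form weights
            return w / w.sum()

        X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # neighbors
        w = maxent_weights(X, np.array([0.25, 0.5]))
        # class-conditional estimate at x_test: w-weighted vote over neighbor labels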

  8. Maximum information entropy: a foundation for ecological theory.

    PubMed

    Harte, John; Newman, Erica A

    2014-07-01

    The maximum information entropy (MaxEnt) principle is a successful method of statistical inference that has recently been applied to ecology. Here, we show how MaxEnt can accurately predict patterns such as species-area relationships (SARs) and abundance distributions in macroecology and be a foundation for ecological theory. We discuss the conceptual foundation of the principle, why it often produces accurate predictions of probability distributions in science despite not incorporating explicit mechanisms, and how mismatches between predictions and data can shed light on driving mechanisms in ecology. We also review possible future extensions of the maximum entropy theory of ecology (METE), a potentially important foundation for future developments in ecological theory.

  9. Maximum entropy distributions of scale-invariant processes.

    PubMed

    Nieves, Veronica; Wang, Jingfeng; Bras, Rafael L; Wood, Elizabeth

    2010-09-10

    Many variables in nature, such as soil moisture and topography, are organized in patterns with no dominant scales. The maximum entropy (ME) principle is proposed to show how these variables can be statistically described using their scale-invariant properties and geometric mean. The ME principle predicts with great simplicity the probability distribution of a scale-invariant process in terms of macroscopic observables. The ME principle offers a universal and unified framework for characterizing such multiscaling processes.
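
    A sketch of the constrained maximization implied above: maximizing the entropy of a distribution whose geometric mean (i.e. the expectation of ln x) is fixed,

        \max_{p}\; -\int p(x)\ln p(x)\,dx
        \quad \text{s.t.} \quad \int p(x)\,dx = 1,\;\; \int p(x)\ln x\,dx = \mu,
        \;\;\Longrightarrow\;\; p(x) \propto x^{-\lambda},

    yields a power law, the canonical scale-invariant distribution, with the exponent \lambda set by the geometric-mean constraint.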

  10. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    SciTech Connect

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach, combined with a rich set of features, produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.

  11. Predicting the current potential and future world wide distribution of the onion maggot, Delia antiqua using maximum entropy ecological niche modeling.

    PubMed

    Ning, Shuoying; Wei, Jiufeng; Feng, Jinian

    2017-01-01

    Climate change will markedly impact the biology, population ecology, and spatial distribution patterns of insect pests because of the influence of future greenhouse effects on insect development and population dynamics. Onion maggot, Delia antiqua, larvae are subterranean pests with limited mobility that feed directly on bulbs of Allium sp. and render them completely unmarketable. Modeling the spatial distribution of such a widespread and damaging pest is crucial, not only to identify currently suitable climatic areas but also to predict where the pest is likely to spread in the future, so that appropriate monitoring and management programs can be developed. In this study, Maximum Entropy Niche Modeling was used to estimate the current potential distribution of D. antiqua and to predict the future distribution of this species in 2030, 2050, 2070 and 2080 under emission scenario A2 with 7 climate variables. The results of this study show that currently highly suitable habitats for D. antiqua occur throughout most of East Asia, some regions of North America, Western Europe, and West Asian countries near the Caspian Sea and Black Sea. In the future, we predict an even broader distribution of this pest, spreading more extensively throughout Asia, North America and Europe, particularly in most European countries, the central regions of the United States and much of East Asia. Our present-day and future predictions can enhance the strategic planning of agricultural organizations by identifying regions that will need to develop Integrated Pest Management programs to manage the onion maggot. The distribution forecasts will also help governments optimize economic investments in management programs for this pest by identifying regions that are, or will become, less suitable for current and future infestations.

  12. Predicting the current potential and future world wide distribution of the onion maggot, Delia antiqua using maximum entropy ecological niche modeling

    PubMed Central

    Ning, Shuoying; Wei, Jiufeng; Feng, Jinian

    2017-01-01

    Climate change will markedly impact the biology, population ecology, and spatial distribution patterns of insect pests because of the influence of future greenhouse effects on insect development and population dynamics. Onion maggot, Delia antiqua, larvae are subterranean pests with limited mobility that feed directly on bulbs of Allium sp. and render them completely unmarketable. Modeling the spatial distribution of such a widespread and damaging pest is crucial, not only to identify currently suitable climatic areas but also to predict where the pest is likely to spread in the future, so that appropriate monitoring and management programs can be developed. In this study, Maximum Entropy Niche Modeling was used to estimate the current potential distribution of D. antiqua and to predict the future distribution of this species in 2030, 2050, 2070 and 2080 under emission scenario A2 with 7 climate variables. The results of this study show that currently highly suitable habitats for D. antiqua occur throughout most of East Asia, some regions of North America, Western Europe, and West Asian countries near the Caspian Sea and Black Sea. In the future, we predict an even broader distribution of this pest, spreading more extensively throughout Asia, North America and Europe, particularly in most European countries, the central regions of the United States and much of East Asia. Our present-day and future predictions can enhance the strategic planning of agricultural organizations by identifying regions that will need to develop Integrated Pest Management programs to manage the onion maggot. The distribution forecasts will also help governments optimize economic investments in management programs for this pest by identifying regions that are, or will become, less suitable for current and future infestations. PMID:28158259

  13. Maximum entropy reconstruction of the configurational density of states from microcanonical simulations

    NASA Astrophysics Data System (ADS)

    Davis, Sergio

    2013-02-01

    In this work we develop a method for inferring the underlying configurational density of states of a molecular system by combining information from several microcanonical molecular dynamics or Monte Carlo simulations at different energies. This method is based on Jaynes' Maximum Entropy formalism (MaxEnt) for Bayesian statistical inference under known expectation values. We present results of its application to measure thermodynamic entropy and free energy differences in embedded-atom models of metals.

  14. Quasiparticle density of states by inversion with maximum entropy method

    NASA Astrophysics Data System (ADS)

    Sui, Xiao-Hong; Wang, Han-Ting; Tang, Hui; Su, Zhao-Bin

    2016-10-01

    We propose to extract the quasiparticle density of states (DOS) of a superconductor directly from experimentally measured superconductor-insulator-superconductor junction tunneling data by applying the maximum entropy method to these nonlinear systems. The approach has the advantage of model independence, with minimal a priori assumptions. Various components of the proposed method have been carefully investigated, including the meaning of the targeting function, the mock function, and the role and designation of the input parameters. The validity of the developed scheme is shown by two kinds of tests on systems with known DOS. As a preliminary application to a Bi2Sr2CaCu2O8+δ sample with critical temperature Tc = 89 K, we extract the DOS from intrinsic Josephson junction current data measured at temperatures of T = 4.2 K, 45 K, 55 K, 95 K, and 130 K. The energy gap decreases with increasing temperature below Tc, while above Tc a kind of energy gap survives, which offers an angle from which to investigate the pseudogap phenomenon in high-Tc superconductors. The developed method itself might be a useful tool for future applications in various fields.

  15. Estimating Thermal Inertia with a Maximum Entropy Boundary Condition

    NASA Astrophysics Data System (ADS)

    Nearing, G.; Moran, M. S.; Scott, R.; Ponce-Campos, G.

    2012-04-01

    Thermal inertia, P [J m^-2 s^-1/2 K^-1], is a physical property of the land surface which determines its resistance to temperature change under seasonal or diurnal heating. It is a function of the volumetric heat capacity, c [J m^-3 K^-1], and thermal conductivity, k [W m^-1 K^-1], of the soil near the surface: P = √(ck). The thermal inertia of soil varies with moisture content due to the difference between the thermal properties of water and air, and a number of studies have demonstrated that it is feasible to estimate soil moisture given thermal inertia (e.g. Lu et al., 2009; Murray and Verhoef, 2007). We take the common approach of estimating thermal inertia from measurements of surface temperature by modeling the Earth's surface as a one-dimensional homogeneous diffusive half-space. In this case, surface temperature is a function of the ground heat flux (G) boundary condition and thermal inertia, and a daily value of P was estimated by matching measured and modeled diurnal surface temperature fluctuations. The difficulty is in measuring G; we demonstrate that the new maximum entropy production (MEP) method for partitioning net radiation into surface energy fluxes (Wang and Bras, 2011) provides a suitable boundary condition for estimating P. Adding the diffusion representation of heat transfer in the soil reduces the number of free parameters in the MEP model from two to one, and a sensitivity analysis suggests that, for the purpose of estimating P, it is preferable to parameterize the coupled MEP-diffusion model by the ratio of the thermal inertia of the soil to the effective thermal inertia of convective heat transfer to the atmosphere. We used this technique to estimate thermal inertia at two semiarid, non-vegetated locations in the Walnut Gulch Experimental Watershed in southeast Arizona, USA, and compared these estimates to estimates of P made using the Xue and Cracknell (1995) solution for a linearized ground heat flux boundary condition, and we found that the MEP-diffusion model produced
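
    The standard homogeneous half-space relation underlying this kind of estimate (a textbook result, not necessarily the authors' exact formulation) shows where P enters: for a sinusoidal ground heat flux G(t) = G_0 cos(ωt) at the surface of a one-dimensional diffusive half-space, the surface temperature responds as

        T_s(t) = \bar{T} + \frac{G_0}{P\sqrt{\omega}}\,\cos\!\left(\omega t - \frac{\pi}{4}\right),

    so the amplitude of the diurnal temperature cycle is inversely proportional to P, which is why matching measured and modeled temperature fluctuations (given G from the MEP partitioning) pins down the thermal inertia.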

  16. Gravitational entropies in LTB dust models

    NASA Astrophysics Data System (ADS)

    Sussman, Roberto A.; Larena, Julien

    2014-04-01

    We consider generic Lemaître-Tolman-Bondi (LTB) dust models to probe the gravitational entropy proposals of Clifton, Ellis and Tavakol (CET) and of Hosoya and Buchert (HB). We also consider a variant of the HB proposal based on a suitable quasi-local scalar weighted average. We show that the conditions for entropy growth in all proposals are directly related to a negative correlation of similar fluctuations of the energy density and the Hubble scalar. While this correlation is evaluated locally for the CET proposal, it must be evaluated in a non-local, domain-dependent manner for the two HB proposals. By looking at the fulfilment of these conditions in the relevant asymptotic limits we are able to provide a well-grounded qualitative description of the full time evolution and radial asymptotic scaling of the three entropies in generic models. The following rigorous analytic results are obtained for the three proposals: (i) entropy grows when the density growing mode is dominant; (ii) all ever-expanding hyperbolic models reach a stable terminal equilibrium characterized by an inhomogeneous entropy maximum in their late-time evolution; (iii) regions with decaying modes and collapsing elliptic models exhibit unstable equilibria associated with an entropy minimum; (iv) near singularities the CET entropy diverges while the HB entropies converge; (v) the CET entropy converges for all models in the radial asymptotic range, whereas the HB entropies only converge for models asymptotic to a Friedmann-Lemaître-Robertson-Walker background. The fact that different independent proposals yield fairly similar conditions for entropy production, time evolution and radial scaling in generic LTB models suggests that their common notion of a ‘gravitational entropy’ may be a theoretically robust concept applicable to more general spacetimes.

  17. On estimating distributions with the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Zhigunov, V. P.; Kostkina, T. B.; Spiridonov, A. A.

    1988-12-01

    The possibility of using the maximum entropy principle to estimate distributions from measurements with known resolution functions is considered. The general analytical form of the distribution estimate is obtained, and its statistical properties, i.e. the error matrix and bias, are analyzed. The method is generalized to the case where the unknown distribution is considered to be close to a certain known one. The proposed method is illustrated by a number of numerical experiments, and the results are compared with those obtained by other methods.

  18. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.

  19. Nuclear-weighted X-ray maximum entropy method - NXMEM.

    PubMed

    Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens; Brummerstedt Iversen, Bo

    2015-01-01

    Subtle structural features such as disorder and anharmonic motion may be accurately characterized from nuclear density distributions (NDDs). As a viable alternative to neutron diffraction, this paper introduces a new approach named the nuclear-weighted X-ray maximum entropy method (NXMEM) for reconstructing pseudo-NDDs. It calculates an electron-weighted nuclear density distribution (eNDD), exploiting the fact that X-ray diffraction delivers data of superior quality, requires smaller sample volumes and has higher availability. NXMEM is tested on two widely different systems: PbTe and Ba(8)Ga(16)Sn(30). The first compound, PbTe, possesses a deceptively simple crystal structure on the macroscopic level that is unable to account for its excellent thermoelectric properties. The key mechanism involves local distortions, and the capability of NXMEM to probe this intriguing feature is established with simulated powder diffraction data. In the second compound, Ba(8)Ga(16)Sn(30), disorder among the Ba guest atoms is analysed with both experimental and simulated single-crystal diffraction data. In all cases, NXMEM outperforms the maximum entropy method by substantially enhancing the nuclear resolution. The improvements correlate with the amount of available data, rendering NXMEM especially powerful for powder and low-resolution single-crystal diffraction. The NXMEM procedure can be implemented in existing software and facilitates widespread characterization of disorder in functional materials.

  20. LIBOR troubles: Anomalous movements detection based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR's integrity, leading surveillance authorities to conduct investigations into banks' behavior. These procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the maximum entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  1. Application of maximum entropy method for earthquake signatures using GPSTEC

    NASA Astrophysics Data System (ADS)

    Revathi, R.; Lakshminarayana, S.; Koteswara Rao, S.; Ramesh, K. S.; Uday Kiran, K.

    2016-04-01

    Spectral analysis of ionospheric disturbances of seismic origin will aid the detection and prediction of unavoidable natural disasters such as earthquakes. Disturbances associated with an earthquake that occurred in Kawalu, West Java, Indonesia, with a magnitude of 4.3 on the Richter scale were analyzed. The earthquake occurred on 12 December 2013 at 7:02 UTC, i.e. at 12:32 local time. The maximum entropy method was applied to the ionospheric disturbances observed on the day of the earthquake. The enhancement in ionospheric energy is high at the beginning, shows a slow initial decrease, and then falls rapidly. The method may thus reveal the effect of an impending earthquake.

  2. Conjugate variables in continuous maximum-entropy inference.

    PubMed

    Davis, Sergio; Gutiérrez, Gonzalo

    2012-11-01

    For a continuous maximum-entropy distribution (obtained from an arbitrary number of simultaneous constraints), we derive a general relation connecting the Lagrange multipliers and the expectation values of certain particularly constructed functions of the states of the system. From this relation, an estimator for a given Lagrange multiplier can be constructed from derivatives of the corresponding constraining function. These estimators sometimes lead to the determination of the Lagrange multipliers by way of solving a linear system, and, in general, they provide another tool to widen the applicability of Jaynes's formalism. This general relation, especially well suited for computer simulation techniques, also provides some insight into the interpretation of the hypervirial relations known in statistical mechanics and the recently derived microcanonical dynamical temperature. We illustrate the usefulness of these new relations with several applications in statistics.
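
    As a concrete, hypothetical illustration of the general task the paper addresses, determining a Lagrange multiplier from a constraint expectation, the sketch below solves the one-constraint case numerically: a maximum-entropy density on [0, ∞) with a prescribed mean, for which the multiplier is known analytically to be 1/⟨x⟩.

```python
import numpy as np
from scipy import integrate, optimize

MU = 2.5  # prescribed mean <x> (illustrative value, not from the paper)

def model_mean(lam):
    # Mean of the maxent density p(x) = exp(-lam*x)/Z(lam) on [0, inf)
    Z, _ = integrate.quad(lambda x: np.exp(-lam * x), 0, np.inf)
    m, _ = integrate.quad(lambda x: x * np.exp(-lam * x), 0, np.inf)
    return m / Z

# Solve the constraint equation <x>(lam) = MU for the multiplier lam
lam = optimize.brentq(lambda l: model_mean(l) - MU, 0.01, 100.0)
print(lam, 1.0 / MU)  # the solver recovers the analytic result lam = 1/<x>
```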

  3. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical comparison of the computational performance of MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation with synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.

  4. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

    The maximum entropy production (MEP) principle holds that nonequilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the potential energy destruction rate. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and to orchestrate metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  5. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  6. Comparison between experiments and predictions based on maximum entropy for sprays from a pressure atomizer

    NASA Astrophysics Data System (ADS)

    Li, X.; Chin, L. P.; Tankin, R. S.; Jackson, T.; Stutrud, J.; Switzer, G.

    1991-07-01

    Measurements were made of the droplet size and velocity distributions in a hollow cone spray from a pressure atomizer using a phase/Doppler particle analyzer. The maximum entropy principle is used to predict these distributions. The constraints imposed in this model involve conservation of mass, momentum, and energy. Estimates of the source terms associated with these constraints are made based on physical reasoning. Agreement between the measurements and the predictions is very good.

  7. Application of the maximum relative entropy method to the physics of ferromagnetic materials

    NASA Astrophysics Data System (ADS)

    Giffin, Adom; Cafaro, Carlo; Ali, Sean Alan

    2016-08-01

    It is known that the Maximum relative Entropy (MrE) method can be used both to update and to approximate probability distribution functions in statistical inference problems. In this manuscript, we apply the MrE method to infer magnetic properties of ferromagnetic materials. In addition to comparing our approach to more traditional methodologies based upon the Ising model and Mean Field Theory, we also test the effectiveness of the MrE method on conventionally unexplored ferromagnetic materials with defects.

  8. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  9. Maximum entropy principle based estimation of performance distribution in queueing theory.

    PubMed

    He, Dayi; Li, Ran; Huang, Qi; Lei, Ping

    2014-01-01

    In research on queuing systems, it is widespread practice to assume, in order to determine the system state, that the system is stable and that the distributions of the customer arrival ratio and service ratio are known. In this study, the queuing system is treated as a black box: no assumptions are made on the distributions of the arrival and service ratios, and only the assumption of stability is kept. By applying the principle of maximum entropy, the performance distribution of a queuing system is derived from some easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness-of-fit test, the accuracy and practical generality of the maximum entropy approach are demonstrated.
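
    A minimal sketch of the idea, under assumed inputs: for a system of capacity K with a known mean occupancy, maximizing entropy over the state probabilities subject to normalization and the mean yields a truncated geometric distribution whose multiplier can be found by root finding. K and N_BAR below are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import brentq

K = 20       # assumed system capacity (illustrative)
N_BAR = 4.0  # assumed mean number of customers in the system

def mean_of(lam):
    # Maxent distribution over n = 0..K with a mean constraint has the
    # truncated-geometric form p_n = exp(-lam*n)/Z.
    n = np.arange(K + 1)
    w = np.exp(-lam * n)
    return np.dot(n, w / w.sum())

lam = brentq(lambda l: mean_of(l) - N_BAR, -5.0, 5.0)
p = np.exp(-lam * np.arange(K + 1))
p /= p.sum()
print(p[:5], p @ np.arange(K + 1))  # probabilities consistent with K and N_BAR
```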

  10. Multi-dimensional validation of a maximum-entropy-based interpolative moment closure

    NASA Astrophysics Data System (ADS)

    Tensuda, Boone R.; McDonald, James G.; Groth, Clinton P. T.

    2016-11-01

    The performance of a novel maximum-entropy-based 14-moment interpolative closure is examined for multi-dimensional flows via validation of the closure for several established benchmark problems. Despite its consideration of heat transfer, this 14-moment closure contains closed-form expressions for the closing fluxes, unlike the maximum-entropy models on which it is based. While still retaining singular behaviour in some regions of realizable moment space, the interpolative closure proves to have a large region of hyperbolicity while remaining computationally tractable. Furthermore, the singular nature has been shown to be advantageous for practical simulations. The multi-dimensional cases considered here include Couette flow, heat transfer between infinite parallel plates, subsonic flow past a circular cylinder, and lid-driven cavity flow. The 14-moment predictions are compared to analytical, DSMC, and experimental results as well as the results of other closures. For each case, a range of Knudsen numbers is explored in order to assess the validity and accuracy of the closure in different regimes. For Couette flow and heat transfer between flat plates, it is shown that the closure predictions are consistent with the expected analytical solutions in all regimes. In the cases of flow past a circular cylinder and lid-driven cavity flow, the closure is found to give more accurate results than the related lower-order maximum-entropy Gaussian and maximum-entropy-based regularized Gaussian closures. The ability to predict important non-equilibrium phenomena, such as a counter-gradient heat flux, is also established.

  11. Maximum entropy analytic continuation for frequency-dependent transport coefficients with nonpositive spectral weight

    NASA Astrophysics Data System (ADS)

    Reymbaut, A.; Gagnon, A.-M.; Bergeron, D.; Tremblay, A.-M. S.

    2017-03-01

    The computation of transport coefficients, even in linear response, is a major challenge for theoretical methods that rely on analytic continuation of correlation functions obtained numerically in Matsubara space. While maximum entropy methods can be used for certain correlation functions, this is not possible in general, important examples being the Seebeck, Hall, Nernst, and Righi-Leduc coefficients. Indeed, positivity of the spectral weight on the positive real-frequency axis is not guaranteed in these cases. The spectral weight can even be complex in the presence of broken time-reversal symmetry. Various workarounds, such as the neglect of vertex corrections or the study of the infinite frequency or Kelvin limits, have been proposed. Here, we show that one can define auxiliary response functions that allow one to extract the desired real-frequency susceptibilities from maximum entropy methods in the most general multiorbital cases with no particular symmetry. As a benchmark case, we study the longitudinal thermoelectric response and corresponding Onsager coefficient in the single-band two-dimensional Hubbard model treated with dynamical mean-field theory and continuous-time quantum Monte Carlo. We thereby extend the maximum entropy analytic continuation with auxiliary functions (MaxEntAux method), developed for the study of the superconducting pairing dynamics of correlated materials, to transport coefficients.

  12. Maximum entropy principle for stationary states underpinned by stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Ford, Ian J.

    2015-11-01

    The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.

  13. Application of maximum entropy method for droplet size distribution prediction using instability analysis of liquid sheet

    NASA Astrophysics Data System (ADS)

    Movahednejad, E.; Ommi, F.; Hosseinalipour, S. M.; Chen, C. P.; Mahdavi, S. A.

    2011-12-01

    This paper describes the implementation of an instability analysis of wave growth on a liquid jet surface, combined with the maximum entropy principle (MEP), for prediction of the droplet diameter distribution in the primary breakup region. The early stage of primary breakup, which involves the growth of waves on the liquid-gas interface, is deterministic, whereas the droplet formation stage at the end of primary breakup is random and stochastic. The droplet formation stage after the liquid bulk breakup can be modeled by statistical means based on the maximum entropy principle. The MEP provides a formulation that predicts the atomization process while satisfying constraint equations based on conservation of mass, momentum and energy. The deterministic aspect considers the instability of wave motion on the jet surface before the liquid bulk breakup using linear instability analysis, which provides the maximum growth rate and corresponding wavelength of instabilities in the breakup zone. The two sub-models are coupled together using a momentum source term and the mean diameter of droplets. The model is also capable of accounting for drag force on droplets through gas-liquid interaction. The predicted results compare favorably with experimentally measured droplet size distributions for hollow-cone sprays.

  14. Application of the method of maximum entropy in the mean to classification problems

    NASA Astrophysics Data System (ADS)

    Gzyl, Henryk; ter Horst, Enrique; Molina, German

    2015-11-01

    In this note we propose an application of the method of maximum entropy in the mean to solve a class of inverse problems comprising classification problems and feasibility problems appearing in optimization. Such problems may be thought of as linear inverse problems with convex constraints imposed on the solution as well as on the data. The method of maximum entropy in the mean proves to be a very useful tool for dealing with this type of problem.

  15. Exploration of the Maximum Entropy/Optimal Projection Approach to Control Design Synthesis for Large Space Structures.

    DTIC Science & Technology

    1985-02-01

    Energy Analysis, a branch of dynamic modal analysis developed for analyzing acoustic vibration problems, its present stage of development embodies a... Maximum Entropy Stochastic Modelling and Reduced-Order Design Synthesis is a rigorous new approach to this class of problems. Inspired by Statistical...

  16. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  17. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main driver of natural disasters in Taiwan, causing significant losses of human life and property. On average, 3.5 typhoons strike Taiwan every year; Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, characterizing the typhoon rainfall pattern is advantageous when estimating rainfall amounts. This study develops a rainfall prediction model in three parts. First, extended empirical orthogonal functions (EEOF) are used to classify typhoon events, decomposing the standardized rainfall pattern of all stations for each event into EOFs and principal components (PCs); typhoon events that vary similarly in time and space are thereby grouped into similar typhoon types. Next, based on this classification, probability density functions (PDFs) are constructed in space and time by means of multivariate maximum entropy using the first to fourth statistical moments, yielding a probability for each station at each time. Finally, the Bayesian maximum entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and suitable for government typhoon disaster prevention.

  18. Entropy-based portfolio models: Practical issues

    NASA Astrophysics Data System (ADS)

    Shirazi, Yasaman Izadparast; Sabiruzzaman, Md.; Hamzah, Nor Aishah

    2015-10-01

    Entropy is a nonparametric alternative to variance and has been used as a measure of risk in portfolio analysis. In this paper, the computation of entropy risk for a given set of data is discussed with illustration. A comparison between entropy-based portfolio models is made. We propose a natural extension of the mean-entropy portfolio to make it more general and diversified. In terms of performance, the new model is similar to the mean-entropy portfolio when applied to real and simulated data, offers higher return if no constraint is set on the desired return, and is found to be the most diversified portfolio model.
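
    The paper's portfolio models are not reproduced here; the sketch below only illustrates the basic ingredient, entropy as a nonparametric risk measure, on synthetic return series (all numbers are made up).

```python
import numpy as np

def entropy_risk(returns, edges):
    # Shannon entropy of the return distribution estimated from a histogram:
    # a nonparametric counterpart to variance as a measure of risk.
    counts, _ = np.histogram(returns, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
edges = np.linspace(-0.2, 0.2, 81)      # common bins so entropies are comparable
calm = rng.normal(0.0, 0.01, 1000)      # synthetic, tightly clustered returns
volatile = rng.normal(0.0, 0.05, 1000)  # synthetic, widely dispersed returns
print(entropy_risk(calm, edges), entropy_risk(volatile, edges))
# the more dispersed series carries the larger entropy risk
```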

  19. A novel impact identification algorithm based on a linear approximation with maximum entropy

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Meruane, V.; Ortiz-Bernardin, A.

    2016-09-01

    This article presents a novel impact identification algorithm that uses a linear approximation handled by a statistical inference model based on the maximum-entropy principle, termed linear approximation with maximum entropy (LME). Unlike other regression algorithms such as artificial neural networks (ANNs) and support vector machines, the proposed algorithm requires only one parameter to be selected, and the impact is identified after solving a convex optimization problem that has a unique solution. In addition, LME processes data in a period of time comparable to that of the other algorithms. The performance of the proposed methodology is validated on an experimental aluminum plate. Time-varying strain data are measured using four piezoceramic sensors bonded to the plate. To demonstrate the potential of the proposed approach over existing ones, results obtained via LME are compared with those of ANN and least-squares support vector machines. The results demonstrate that with a low number of sensors it is possible to accurately locate and quantify impacts on a structure, and that LME outperforms the other impact identification algorithms.
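
    The authors' implementation is not given in this record; the following 1-D sketch shows the core of a max-ent linear approximation: weights of the form w_i ∝ exp(-beta*(x_i-x)^2 + lam*(x_i-x)), with the single multiplier lam fixed by a convex scalar minimization so that the weights reproduce the evaluation point exactly. The node layout and locality parameter beta are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lme_weights(nodes, x, beta=4.0):
    # Local maximum-entropy weights in 1-D. The dual objective log Z(lam)
    # is convex; its minimizer enforces sum_i w_i*(x_i - x) = 0.
    d = nodes - x
    logZ = lambda lam: np.log(np.sum(np.exp(-beta * d**2 + lam * d)))
    lam = minimize_scalar(logZ).x
    w = np.exp(-beta * d**2 + lam * d)
    return w / w.sum()

nodes = np.linspace(0.0, 1.0, 6)
w = lme_weights(nodes, 0.37)
print(w.sum(), w @ nodes)  # -> 1.0 and 0.37: zeroth- and first-order consistency
```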

  20. Generalized maximum entropy approach to quasistationary states in long-range systems

    NASA Astrophysics Data System (ADS)

    Martelloni, Gabriele; Martelloni, Gianluca; de Buyl, Pierre; Fanelli, Duccio

    2016-02-01

    Systems with long-range interactions display a short-time relaxation towards quasistationary states (QSSs) whose lifetime increases with the system size. In the paradigmatic Hamiltonian mean-field model (HMF), out-of-equilibrium phase transitions are predicted and numerically detected which separate homogeneous (zero magnetization) and inhomogeneous (nonzero magnetization) QSSs. In the former regime, the velocity distribution presents (at least) two large, symmetric bumps, which cannot be self-consistently explained by resorting to the conventional Lynden-Bell maximum entropy approach. We propose a generalized maximum entropy scheme which accounts for the pseudoconservation of additional charges, the even momenta of the single-particle distribution. The latter are set to the asymptotic values, as estimated by direct integration of the underlying Vlasov equation, which formally holds in the thermodynamic limit. Methodologically, we operate in the framework of a generalized Gibbs ensemble, as sometimes defined in statistical quantum mechanics, which contains an infinite number of conserved charges. The agreement between theory and simulations is satisfying, both above and below the out-of-equilibrium transition threshold. A previously inaccessible feature of the QSSs, the multiple bumps in the velocity profile, is resolved by our approach.

  1. Maximum Likelihood Fusion Model

    DTIC Science & Technology

    2014-08-09

    Keywords: data fusion, hypothesis testing, maximum likelihood estimation, mobile robot navigation. [Only report front matter survives in this record, including a figure captioned "Illustration of mobile robotic agents. Land rovers such as (left) Pioneer robots, (center) Segways..." and references to simultaneous localization and mapping.]

  2. Most likely maximum entropy for population analysis: A case study in decompression sickness prevention

    NASA Astrophysics Data System (ADS)

    Bennani, Youssef; Pronzato, Luc; Rendas, Maria João

    2015-01-01

    We estimate the density of a set of biophysical parameters from region-censored observations. We propose a new Maximum Entropy (maxent) estimator formulated as finding the most likely constrained maxent density. By using the Rényi entropy of order two instead of the Shannon entropy, we are led to a quadratic optimization problem with linear inequality constraints that has an efficient numerical solution. We compare the proposed estimator to the NPMLE and to the best-fitting maxent solutions on real data from hyperbaric diving, showing that the resulting distribution has better generalization performance than NPMLE or maxent alone.
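
    The censored-data machinery is beyond a short snippet, but the key reduction is easy to show: maximizing the Rényi entropy of order two (i.e. minimizing the sum of squared probabilities) under linear constraints is a quadratic program. The toy below, with made-up region constraints standing in for censored observations, solves it with SciPy's SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

m = 50                    # cells of a discretized density
A = np.zeros((2, m))
A[0, 10:20] = 1.0         # assumed censoring region 1
A[1, 25:40] = 1.0         # assumed censoring region 2
c = np.array([0.3, 0.5])  # assumed minimum probability mass in each region

# Renyi entropy of order 2 is -log(sum p_i^2): maximizing it is the QP
# "minimize sum p_i^2" under the linear constraints.
res = minimize(
    lambda p: np.sum(p**2), x0=np.full(m, 1.0 / m), jac=lambda p: 2.0 * p,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                 {"type": "ineq", "fun": lambda p: A @ p - c}],
    bounds=[(0.0, 1.0)] * m, method="SLSQP")
p_hat = res.x  # the flattest density consistent with the region constraints
print(round(p_hat.sum(), 6), A @ p_hat)
```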

  3. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
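
    As a schematic of the central step, not the paper's geoacoustic code: given error-function values over a grid of candidate parameter sets and a target average error, the maximum entropy distribution is p_j ∝ exp(-beta*E_j), with the sensitivity factor beta fixed by the constraint; marginals then follow by summation. The error surface and target below are random stand-ins.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
E = rng.uniform(0.0, 1.0, size=(40, 40))  # stand-in error surface over a 2-D
                                          # grid of (sound speed, attenuation)
E_BAR = 0.2                               # stand-in expected error from data

def avg_error(beta):
    w = np.exp(-beta * (E - E.min()))     # shifted for numerical stability
    p = w / w.sum()
    return np.sum(p * E)

beta = brentq(lambda b: avg_error(b) - E_BAR, 1e-8, 1e4)  # sensitivity factor
p = np.exp(-beta * (E - E.min()))
p /= p.sum()
marginal = p.sum(axis=1)  # marginal for one parameter, integrating out the other
print(beta, marginal.sum())
```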

  4. A Bayes-Maximum Entropy method for multi-sensor data fusion

    SciTech Connect

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  5. Maximum entropy restoration of blurred and oversaturated Hubble Space Telescope imagery.

    PubMed

    Bonavito, N L; Dorband, J E; Busse, T

    1993-10-10

    A brief introduction to image reconstruction is made and the basic concepts of the maximum entropy method are outlined. A statistical inference algorithm based on this method is presented. The algorithm is tested on simulated data and applied to real data. The latter is from a 1024 × 1024 Hubble Space Telescope image of the binary stellar system R Aquarii, which suffers from both spherical aberration and detector saturation. Under these constraints the maximum entropy method produces an image that agrees closely with observed results. The calculations were performed on the MasPar MP-1 single-instruction/multiple-data computer.

  6. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of the different positions of pixels with the same gray level, obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The novel method not only gives better segmentation results but also has a faster computation time than traditional 2D histogram-based segmentation methods.
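
    The spatial coherence weighting is specific to the paper, but the final step, 1D maximum entropy thresholding, is the classical Kapur criterion: choose the threshold that maximizes the summed entropies of the foreground and background gray-level distributions. A minimal sketch on a synthetic bimodal histogram:

```python
import numpy as np

def max_entropy_threshold(hist):
    # Kapur's criterion: maximize H(background) + H(foreground).
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        p0, p1 = p[:t][p[:t] > 0] / w0, p[t:][p[t:] > 0] / w1
        h = -np.sum(p0 * np.log(p0)) - np.sum(p1 * np.log(p1))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

g = np.arange(256.0)
hist = np.exp(-(g - 60) ** 2 / 200) + 0.5 * np.exp(-(g - 180) ** 2 / 400)
print(max_entropy_threshold(hist))  # lands between the two synthetic modes
```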

  7. Maximum joint entropy and information-based collaboration of automated learning machines

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.; Lary, D. J.

    2012-05-01

    We are working to develop automated intelligent agents, which can act and react as learning machines with minimal human intervention. To accomplish this, an intelligent agent is viewed as a question-asking machine, which is designed by coupling the processes of inference and inquiry to form a model-based learning unit. In order to select maximally-informative queries, the intelligent agent needs to be able to compute the relevance of a question. This is accomplished by employing the inquiry calculus, which is dual to the probability calculus, and extends information theory by explicitly requiring context. Here, we consider the interaction between two question-asking intelligent agents, and note that there is a potential information redundancy with respect to the two questions that the agents may choose to pose. We show that the information redundancy is minimized by maximizing the joint entropy of the questions, which simultaneously maximizes the relevance of each question while minimizing the mutual information between them. Maximum joint entropy is therefore an important principle of information-based collaboration, which enables intelligent agents to efficiently learn together.
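
    The inquiry calculus itself is not reproduced here; the snippet below merely checks, on an assumed joint distribution of two agents' question outcomes, the identity that drives the argument: H(X,Y) = H(X) + H(Y) - I(X;Y), so that for fixed individual relevances, maximizing joint entropy minimizes the redundancy I(X;Y).

```python
import numpy as np

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Assumed joint distribution over the outcomes of two binary questions
pxy = np.array([[0.30, 0.10],
                [0.05, 0.55]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)
joint = H(pxy.ravel())
mutual = H(px) + H(py) - joint  # redundancy between the two questions
print(joint, mutual)            # H(X,Y) rises exactly as I(X;Y) falls
```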

  8. Determination of zero-coupon and spot rates from treasury data by maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Gzyl, Henryk; Mayoral, Silvia

    2016-08-01

    An interesting and important inverse problem in finance consists of determining spot rates or prices of zero-coupon bonds when the only information available consists of the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method to treat such problems, which neither imposes an analytic form on the rates or bond prices, nor imposes a model for the (random) evolution of the yields. The procedure consists of transforming the determination of the zero-coupon bond prices into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
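
    Maximum entropy in the mean involves more structure than fits in a snippet; the toy below keeps only the skeleton: the coupon-bond prices define an underdetermined linear system for the discount factors, which is closed by minimizing an entropy distance to a flat prior curve. Cash flows, prices and the prior are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Three coupon bonds paying annual cash flows over four years (invented data)
C = np.array([[5.0, 105.0, 0.0, 0.0],
              [4.0, 4.0, 104.0, 0.0],
              [6.0, 6.0, 6.0, 106.0]])
d_true = 0.97 ** np.arange(1, 5)   # hidden discount curve used to make prices
prices = C @ d_true

# Underdetermined inverse problem: pick the discount factors d that reproduce
# the prices while minimizing a relative-entropy penalty to a flat prior.
prior = np.full(4, 0.9)
obj = lambda d: np.sum(d * np.log(d / prior) - d + prior)
res = minimize(obj, x0=prior.copy(),
               constraints=[{"type": "eq", "fun": lambda d: C @ d - prices}],
               bounds=[(1e-6, 1.0)] * 4, method="SLSQP")
print(res.x)   # reproduces the three prices; compare with d_true
print(d_true)
```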

  9. Frequency-domain localization of alpha rhythm in humans via a maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Patel, Pankaj; Khosla, Deepak; Al-Dayeh, Louai; Singh, Manbir

    1997-05-01

    Generators of spontaneous human brain activity such as the alpha rhythm may be easier and more accurate to localize in the frequency domain than in the time domain, since these generators are characterized by a specific frequency range. We carried out a frequency-domain analysis of synchronous alpha sources by generating equivalent potential maps using the Fourier transform of each channel of electroencephalographic (EEG) recordings. Since the alpha rhythm recorded by EEG scalp measurements is probably produced by several independent generators, a distributed source imaging approach was considered more appropriate than a model based on a single equivalent current dipole. We used an imaging approach based on a Bayesian maximum entropy technique. Reconstructed sources were superposed on the corresponding anatomy from magnetic resonance imaging. Results from human studies suggest that the reconstructed sources responsible for the alpha rhythm are mainly located in the occipital and parieto-occipital lobes.

  10. Maximum-entropy expectation-maximization algorithm for image reconstruction and sensor field estimation.

    PubMed

    Hong, Hunsop; Schonfeld, Dan

    2008-06-01

    In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm and use it for density estimation. The maximum-entropy constraint is imposed to ensure smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We therefore derive the MEEM algorithm by optimizing a lower bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We propose to extend the use of the classical EM algorithm to image recovery from randomly sampled data and to sensor field estimation from randomly scattered sensor networks. Computer simulation experiments demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods in density estimation, image recovery and sensor field estimation.

  11. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
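
    The original code is not available in this record; the sketch below mimics the stated setup (chromatogram = peak-shape matrix times concentration vector) and recovers the concentrations with an entropy-regularized, non-negative least-squares fit. The peak width, noise level and regularization weight are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

n = 120
t = np.arange(n, dtype=float)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 4.0) ** 2)  # peak-shape matrix

c_true = np.zeros(n)
c_true[[40, 50]] = [1.0, 0.8]                                 # overlapped peaks
y = A @ c_true + np.random.default_rng(2).normal(0, 0.01, n)  # noisy signal

# Least-squares data fit plus the Skilling entropy relative to a flat
# default m, which enforces positivity and suppresses spurious structure.
alpha, eps, m = 1e-3, 1e-12, 0.1
obj = lambda c: np.sum((A @ c - y) ** 2) \
    + alpha * np.sum(c * np.log((c + eps) / m) - c + m)
res = minimize(obj, x0=np.full(n, m), bounds=[(0.0, None)] * n,
               method="L-BFGS-B")
print(res.x[[40, 50]])  # concentration estimates at the true peak positions
```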

  12. Maximum entropy, fractal dimension and lacunarity in quantification of cellular rejection in myocardial biopsy of patients submitted to heart transplantation

    NASA Astrophysics Data System (ADS)

    Neves, L. A.; Oliveira, F. R.; Peres, F. A.; Moreira, R. D.; Moriel, A. R.; de Godoy, M. F.; Murta Junior, L. O.

    2011-03-01

    This paper presents a method for the quantification of cellular rejection in endomyocardial biopsies of patients submitted to heart transplant. The model is based on automatic multilevel thresholding, which employs histogram quantification techniques, histogram slope percentage analysis and the calculation of maximum entropy. The structures were quantified with the aid of the multi-scale fractal dimension and lacunarity for the identification of behavior patterns in myocardial cellular rejection in order to determine the most adequate treatment for each case.

  13. Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods

    SciTech Connect

    Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias

    2007-11-13

    We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of complex phenomena represented by such data. The procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information.

  14. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q(l)(v) in an expansion of the velocity distribution in Legendre polynomials P(l)(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  15. Estimation of Groundwater Radon in North Carolina Using Land Use Regression and Bayesian Maximum Entropy.

    PubMed

    Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L

    2015-08-18

    Radon ((222)Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium ((238)U), which is ubiquitous in rocks and soils worldwide. Exposure to (222)Rn via inhalation is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater also contributes to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater (222)Rn with anisotropic geological and (238)U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated (222)Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater (222)Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater (222)Rn results in a leave-one-out cross-validation r(2) of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled (222)Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock (238)U defined on the basis of overlying stream-sediment (238)U concentrations, a widely distributed and consistently analyzed point-source dataset.

  16. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    PubMed

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  17. Evaluation of Maximum Entropy Moment Closure for Solution To Radiative Heat Transfer Equation

    NASA Astrophysics Data System (ADS)

    Fan, Doreen

    The maximum entropy moment closure for the two-moment approximation of the radiative transfer equation is presented. The resulting moment equations, known as the M1 model, are solved using a finite-volume method with adaptive mesh refinement (AMR) and two Riemann-solver-based flux function solvers: a Roe-type and a Harten-Lax-van Leer (HLL) solver. Three different boundary schemes are also presented and discussed. When compared to the discrete ordinates method (DOM) in several representative one- and two-dimensional radiation transport problems, the results indicate that while the M1 model cannot accurately resolve multi-directional radiation transport occurring in low-absorption media, it does provide reasonably accurate solutions, both qualitatively and quantitatively, when compared to the DOM predictions in most of the test cases involving either absorbing-emitting or scattering media. The results also show that the M1 model is computationally less expensive than DOM for more realistic radiation transport problems involving scattering and complex geometries.

  18. Structural damage assessment using linear approximation with maximum entropy and transmissibility data

    NASA Astrophysics Data System (ADS)

    Meruane, V.; Ortiz-Bernardin, A.

    2015-03-01

    Supervised learning algorithms have been proposed as a suitable alternative to model updating methods in structural damage assessment, with artificial neural networks being the most frequently used. Notwithstanding, the slow learning speed and the large number of parameters that need to be tuned during the training stage have been a major bottleneck in their application. This article presents a new algorithm for real-time damage assessment that uses a linear approximation method in conjunction with antiresonant frequencies identified from transmissibility functions. The linear approximation is handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data are processed in a period of time comparable to that of neural networks. The performance of the proposed methodology is validated with three experimental structures: an eight-degree-of-freedom (DOF) mass-spring system, a beam, and the exhaust system of a car. To demonstrate the potential of the proposed algorithm over existing ones, the results are compared with those of a model updating method based on parallel genetic algorithms and a multilayer feedforward neural network approach.

  19. Entanglement entropy in top-down models

    NASA Astrophysics Data System (ADS)

    Jones, Peter A. R.; Taylor, Marika

    2016-08-01

    We explore holographic entanglement entropy in ten-dimensional supergravity solutions. It has been proposed that entanglement entropy can be computed in such top-down models using minimal surfaces which asymptotically wrap the compact part of the geometry. We show explicitly in a wide range of examples that the holographic entanglement entropy thus computed agrees with the entanglement entropy computed using the Ryu-Takayanagi formula from the lower-dimensional Einstein metric obtained from reduction over the compact space. Our examples include not only consistent truncations but also cases in which no consistent truncation exists and Kaluza-Klein holography is used to identify the lower-dimensional Einstein metric. We then give a general proof, based on the Lewkowycz-Maldacena approach, of the top-down entanglement entropy formula.

  20. In-medium dispersion relations of charmonia studied by the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsuro; Asakawa, Masayuki; Kitazawa, Masakiyo

    2017-01-01

    We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. We also find that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at T ≃1.6 Tc for all the channels we analyze.

  1. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatiotemporal dependence among PM observations, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  2. Charmonium spectra and dispersion relations with maximum entropy method in extended vector space

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsuro

    2014-09-01

    We study charmonium properties at finite temperature and finite momentum in quenched lattice QCD with an extended maximum entropy method. We analyze the spectral functions and the dispersion relations of charmonia in an extended vector space, which is a product space of two different lattice correlators. We find a mass shift of charmonium in the pseudoscalar and vector channels at finite temperature. Our results show that the dispersion relations are nevertheless consistent with the Lorentz-invariant form even near the dissociation temperature.

  3. Reconstruction of motional states of neutral atoms via maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Drobný, Gabriel; Bužek, Vladimír

    2002-05-01

    We present a scheme for the reconstruction of states of quantum systems from incomplete tomographic-like data. The proposed scheme is based on the Jaynes principle of maximum entropy. We apply our algorithm to the reconstruction of motional quantum states of neutral atoms. As an example, we analyze the experimental data obtained by Salomon and co-workers and reconstruct Wigner functions of motional quantum states of Cs atoms trapped in an optical lattice.

  4. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    SciTech Connect

    Barletti, Luigi

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  5. REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.

    SciTech Connect

    UMEDA, T.; MATSUFURU, H.

    2005-07-25

    We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests that one should examine to ensure the reliability of the results, and also apply them using mock and lattice QCD data.

  6. Scour development around submarine pipelines due to current based on the maximum entropy theory

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Shi, Bing; Guo, Yakun; Xu, Weilin; Yang, Kejun; Zhao, Enjin

    2016-10-01

    This paper presents results from laboratory experiments and theoretical analysis investigating the development of scour around a submarine pipeline under steady current conditions. Experiments show that the scour process takes place in two stages: an initial rapid scour stage and a subsequent gradual development stage. An empirical formula for calculating the equilibrium (maximum) scour depth is developed using regression. Combined with the maximum entropy theory, this formula can be used to predict the scour process for a given water depth, pipeline diameter and flow velocity. Good agreement between the predicted and measured scour depths is obtained.

  7. From maximum power to a trade-off optimization of low-dissipation heat engines: Influence of control parameters and the role of entropy generation

    NASA Astrophysics Data System (ADS)

    Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.

    2017-02-01

    For a low-dissipation heat engine model we present the role of the partial contact times and the total operational time as control parameters to switch from the maximum power state to the maximum Ω trade-off state. The symmetry of the dissipation coefficients may be used in the design of the heat engine to offer, in such switching, a suitable compromise between efficiency gain, power losses, and entropy change. Bounds for entropy production, efficiency, and power output are presented for transitions between both regimes. In the maximum power and maximum Ω trade-off cases the relevant space of parameters is analyzed together with the configuration of minimum entropy production. A detailed analysis of the parameter space shows physically prohibited regions, in which there is no longer a heat engine, and another region that is physically well behaved but is not suitable for possible optimization criteria.


  9. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of the SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate, with a bias of 0.15°C and a root-mean-square error of 0.72°C.

  10. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    PubMed

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employs the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences that are insensitive to the time span as well as the lag length used. The empirical results show a unidirectional causality running from energy consumption to carbon emission in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.

  11. Maximum entropy inference of seabed attenuation parameters using ship radiated broadband noise.

    PubMed

    Knobles, D P

    2015-12-01

    The received acoustic field generated by a single passage of a research vessel on the New Jersey continental shelf is employed to infer probability distributions for the parameter values representing the frequency dependence of the seabed attenuation and the source levels of the ship. The statistical inference approach employed in the analysis is a maximum entropy methodology. The average value of the error function, needed to uniquely specify a conditional posterior probability distribution, is estimated with data samples from time periods in which the ship-receiver geometry is dominated by either the stern or bow aspect. The existence of ambiguities between the source levels and the environmental parameter values motivates an attempt to partially decouple these parameter values. The main result is the demonstration that parameter values for the attenuation (α and the frequency exponent), the sediment sound speed, and the source levels can be resolved through a model space reduction technique. The results of this multi-step statistical inference, developed for ship-radiated noise, are then tested by processing towed source data over the same bandwidth and source track to estimate continuous wave source levels that were measured independently with a reference hydrophone on the tow body.

  12. Two dimensional IR-FID-CPMG acquisition and adaptation of a maximum entropy reconstruction

    NASA Astrophysics Data System (ADS)

    Rondeau-Mouro, C.; Kovrlija, R.; Van Steenberge, E.; Moussaoui, S.

    2016-04-01

    By acquiring the FID signal in two-dimensional TD-NMR spectroscopy, it is possible to characterize mixtures or complex samples composed of solid and liquid phases. We have developed a new sequence for this purpose, called IR-FID-CPMG, making it possible to correlate spin-lattice T1 and spin-spin T2 relaxation times, including both liquid and solid phases in samples. We demonstrate here the potential of a new algorithm for the 2D inverse Laplace transformation of IR-FID-CPMG data based on an adapted reconstruction of the maximum entropy method, combining the standard decreasing exponential decay function with an additional term drawn from Abragam's FID function. The results show that the proposed IR-FID-CPMG sequence and its related inversion model allow accurate characterization and quantification of both solid and liquid phases in multiphasic and compartmentalized systems. Moreover, it makes it possible to distinguish between solid phases having different T1 relaxation times or to highlight cross-relaxation phenomena.
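
    The separable forward model behind such a 2D inversion can be sketched in a few lines of Python. The grids and values below are illustrative assumptions, and only the liquid-phase exponential kernel is built; the paper's reconstruction adds an Abragam FID term for the solid phase and inverts with a maximum entropy criterion rather than the bare forward model shown here.

        import numpy as np

        # assumed acquisition and inversion grids (illustrative values only)
        t1 = np.logspace(-3, 1, 25)          # inversion-recovery delays (s)
        t2 = np.linspace(1e-4, 0.5, 100)     # CPMG echo times (s)
        T1 = np.logspace(-3, 1, 20)          # candidate T1 values (s)
        T2 = np.logspace(-4, 0, 20)          # candidate T2 values (s)

        K1 = 1.0 - 2.0 * np.exp(-np.outer(t1, 1.0 / T1))   # IR kernel
        K2 = np.exp(-np.outer(t2, 1.0 / T2))                # CPMG kernel (liquid term)
        K = np.kron(K1, K2)                  # separable 2D kernel, (25*100) x (20*20)

        # forward model: data = K @ f + noise, where f >= 0 is the joint T1-T2
        # amplitude map that the (maximum entropy) inversion then recovers
        f_true = np.zeros((20, 20)); f_true[12, 9] = 1.0    # one liquid component
        data = K @ f_true.ravel()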

  13. Understanding frequency distributions of path-dependent processes with non-multinomial maximum entropy approaches

    NASA Astrophysics Data System (ADS)

    Hanel, Rudolf; Corominas-Murtra, Bernat; Thurner, Stefan

    2017-03-01

    Path-dependent stochastic processes are often non-ergodic and observables can no longer be computed within the ensemble picture. The resulting mathematical difficulties pose severe limits to the analytical understanding of path-dependent processes. Their statistics are typically non-multinomial in the sense that the multiplicities of the occurrence of states are not given by a multinomial factor. The maximum entropy principle is tightly related to multinomial processes, non-interacting systems, and to the ensemble picture; it loses its meaning for path-dependent processes. Here we show that an equivalent to the ensemble picture exists for path-dependent processes, such that the non-multinomial statistics of the underlying dynamical process, by construction, is captured correctly in a functional that plays the role of a relative entropy. We demonstrate this for self-reinforcing Pólya urn processes, which explicitly generalize multinomial statistics. We demonstrate the adequacy of this constructive approach towards non-multinomial entropies by computing frequency and rank distributions of Pólya urn processes. We show how microscopic update rules of a path-dependent process allow us to explicitly construct a non-multinomial entropy functional that, when maximized, predicts the time-dependent distribution function.
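
    As a concrete miniature of such a path-dependent process, the Python sketch below (not from the paper) simulates a two-colour self-reinforcing Pólya urn: each draw of a colour increases that colour's future probability, so early draws bias the whole trajectory and the usual multinomial ensemble picture breaks down.

        import numpy as np

        def polya_urn(n_draws, init=(1, 1), reinforcement=1, seed=0):
            # Each drawn ball is returned together with `reinforcement` extra
            # balls of the same colour, producing self-reinforcement.
            rng = np.random.default_rng(seed)
            counts = np.array(init, dtype=float)
            history = []
            for _ in range(n_draws):
                colour = rng.choice(2, p=counts / counts.sum())
                counts[colour] += reinforcement
                history.append(colour)
            return history, counts

        # the limiting colour fraction is itself random (Beta-distributed for
        # this urn), so repeated runs give widely different final frequencies
        _, final_counts = polya_urn(10_000)
        print(final_counts / final_counts.sum())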

  14. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production.

    PubMed

    Kleidon, A

    2010-05-12

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction to the thermodynamic basis for understanding why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.

  15. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    SciTech Connect

    Trovato, M.; Reggiani, L.

    2011-12-15

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  16. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism.

    PubMed

    Trovato, M; Reggiani, L

    2011-12-01

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  17. Application of a multiscale maximum entropy image restoration algorithm to HXMT observations

    NASA Astrophysics Data System (ADS)

    Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi

    2016-08-01

    This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), which is a collimated scan X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is adapted to the deconvolution task of HXMT for diffuse source detection and that the improved method can suppress noise and improve the correlation and signal-to-noise ratio, thus proving itself a better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014).

  18. A Maximum-Entropy approach for accurate document annotation in the biomedical domain

    PubMed Central

    2012-01-01

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for relevant information and takes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to term ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) on the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-measure in both ambiguous and monosemous MeSH terms. PMID:22541593

  19. A Maximum-Entropy approach for accurate document annotation in the biomedical domain.

    PubMed

    Tsatsaronis, George; Macari, Natalia; Torge, Sunna; Dietze, Heiko; Schroeder, Michael

    2012-04-24

    The increasing amount of scientific literature on the Web and the absence of efficient tools for classifying and searching the documents are the two most important factors that influence the speed of the search and the quality of the results. Previous studies have shown that the usage of ontologies makes it possible to process document and query information at the semantic level, which greatly improves the search for relevant information and takes one step further towards the Semantic Web. A fundamental step in these approaches is the annotation of documents with ontology concepts, which can also be seen as a classification task. In this paper we address this issue for the biomedical domain and present a new automated and robust method, based on a Maximum Entropy approach, for annotating biomedical literature documents with terms from the Medical Subject Headings (MeSH). The experimental evaluation shows that the suggested Maximum Entropy approach for annotating biomedical documents with MeSH terms is highly accurate, robust to the ambiguity of terms, and can provide very good performance even when a very small number of training documents is used. More precisely, we show that the proposed algorithm obtained an average F-measure of 92.4% (precision 99.41%, recall 86.77%) for the full range of the explored terms (4,078 MeSH terms), and that the algorithm's performance is resilient to term ambiguity, achieving an average F-measure of 92.42% (precision 99.32%, recall 86.87%) on the explored MeSH terms which were found to be ambiguous according to the Unified Medical Language System (UMLS) thesaurus. Finally, we compared the results of the suggested methodology with a Naive Bayes and a Decision Trees classification approach, and we show that the Maximum Entropy based approach performed with higher F-measure in both ambiguous and monosemous MeSH terms.
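
    Since a conditional maximum entropy classifier over bag-of-words features is mathematically equivalent to multinomial logistic regression, a minimal stand-in for this kind of MeSH annotator can be sketched with scikit-learn. The corpus and labels below are hypothetical toy placeholders, not the paper's data or code.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # hypothetical toy corpus; the paper trains on MeSH-annotated abstracts
        docs = [
            "myocardial infarction with elevated troponin",
            "beta blocker therapy after heart attack",
            "glioma cell migration in the brain",
            "tumour growth in glial tissue",
        ]
        labels = ["Myocardial Infarction", "Myocardial Infarction",
                  "Glioma", "Glioma"]

        # multinomial logistic regression == conditional maximum entropy model
        annotator = make_pipeline(TfidfVectorizer(),
                                  LogisticRegression(max_iter=1000))
        annotator.fit(docs, labels)
        print(annotator.predict(["troponin rise after chest pain"]))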

  20. A study of the maximum entropy technique for phase space tomography

    NASA Astrophysics Data System (ADS)

    Hock, K. M.; Ibison, M. G.

    2013-02-01

    We study a problem with the Maximum Entropy Technique (MENT) when applied to tomographic measurements of the transverse phase space of electron beams, and suggest some ways to improve its reliability. We show that the outcome of a phase space reconstruction can be highly sensitive to the choice of projection angles: it is quite possible to obtain reconstructed phase space distributions that are obviously different from the actual distributions. We propose a method to obtain a ``good'' choice of projection angles using a normalised phase space. We demonstrate that the resulting reconstructions of the phase space can be significantly improved.

  1. The industrial use of filtered back projection and maximum entropy reconstruction algorithms

    SciTech Connect

    Kruger, R.P.; London, J.R.

    1982-11-01

    Industrial tomography involves applications where experimental conditions may vary greatly. Some applications resemble more conventional medical tomography because a large number of projections are available. However, in other situations, scan time restrictions, object accessibility, or equipment limitations will reduce the number and/or angular range of the projections. This paper presents results from studies where both experimental conditions exist. Two algorithms, the more conventional filtered back projection (FBP) and the maximum entropy (MENT), are discussed and applied to several examples.

  2. Reply to ``Comment on `Mobility spectrum computational analysis using a maximum entropy approach' ''

    NASA Astrophysics Data System (ADS)

    Mironov, O. A.; Myronov, M.; Kiatgamolchai, S.; Kantser, V. G.

    2004-03-01

    In their Comment [J. Antoszewski, D. D. Redfern, L. Faraone, J. R. Meyer, I. Vurgaftman, and J. Lindemuth, Phys. Rev. E 69, 038701 (2004)] on our paper [S. Kiatgamolchai, M. Myronov, O. A. Mironov, V. G. Kantser, E. H. C. Parker, and T. E. Whall, Phys. Rev. E 66, 036705 (2002)] the authors present computational results obtained with the improved quantitative mobility spectrum analysis technique implemented in the commercial software of Lake Shore Cryotronics. We suggest that this is just information additional to the mobility spectrum analysis (MSA) in general without any direct relation to our maximum entropy MSA (ME-MSA) algorithm.

  3. An Entropy Model for Artificial Grammar Learning

    PubMed Central

    Pothos, Emmanuel M.

    2010-01-01

    A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning which also postulate that cognitive processing is geared towards the reduction of entropy. PMID:21607072
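
    A hedged Python sketch of the kind of quantity involved (not the paper's code): score a test string by its average per-bigram surprisal under the training strings' smoothed bigram distribution, so that low surprisal marks items that are predictable from, and hence compatible with, the training grammar.

        import math
        from collections import Counter

        def train_bigrams(strings, alpha=1.0):
            # bigram counts over the training strings, plus add-alpha smoothing data
            counts = Counter(g for s in strings for g in zip(s, s[1:]))
            alphabet = {c for s in strings for c in s}
            return counts, sum(counts.values()), alpha, len(alphabet) ** 2

        def surprisal(item, model):
            # mean surprisal (bits per bigram) of `item` under the training model
            counts, total, alpha, vocab = model
            grams = list(zip(item, item[1:]))
            return sum(-math.log2((counts[g] + alpha) / (total + alpha * vocab))
                       for g in grams) / len(grams)

        training = ["MTV", "MTTV", "MVT"]       # toy artificial-grammar strings
        model = train_bigrams(training)
        print(surprisal("MTV", model), surprisal("VVM", model))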

  4. High resolution VLBI polarization imaging of AGN with the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Coughlan, Colm P.; Gabuzda, Denise C.

    2016-12-01

    Radio polarization images of the jets of Active Galactic Nuclei (AGN) can provide a deep insight into the launching and collimation mechanisms of relativistic jets. However, even at VLBI scales, resolution is often a limiting factor in the conclusions that can be drawn from observations. The maximum entropy method (MEM) is a deconvolution algorithm that can outperform the more common CLEAN algorithm in many cases, particularly when investigating structures present on scales comparable to or smaller than the nominal beam size with `super-resolution'. A new implementation of the MEM suitable for single- or multiple-wavelength VLBI polarization observations has been developed and is described here. Monte Carlo simulations comparing the performances of CLEAN and MEM at reconstructing the properties of model images are presented; these demonstrate the enhanced reliability of MEM over CLEAN when images of the fractional polarization and polarization angle are constructed using convolving beams that are appreciably smaller than the full CLEAN beam. The results of using this new MEM software to image VLBA observations of the AGN 0716+714 at six different wavelengths are presented, and compared to corresponding maps obtained with CLEAN. MEM and CLEAN maps of Stokes I, the polarized flux, the fractional polarization and the polarization angle are compared for convolving beams ranging from the full CLEAN beam down to a beam one-third of this size. MEM's ability to provide more trustworthy polarization imaging than a standard CLEAN-based deconvolution when convolving beams appreciably smaller than the full CLEAN beam are used is discussed.

  5. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  6. Multifrequency synthesis algorithm based on the generalized maximum entropy method: application to 0954+658

    NASA Astrophysics Data System (ADS)

    Bajkova, Anisa T.; Pushkarev, Alexander B.

    2011-10-01

    We propose the multifrequency synthesis (MFS) algorithm with the spectral correction of frequency-dependent source brightness distribution based on the maximum entropy method. In order to take into account the spectral terms of nth order in the Taylor expansion for the frequency-dependent brightness distribution, we use a generalized form of the maximum entropy method. This is suitable for the reconstruction of not only positive-definite functions, but also sign-variable functions. With the proposed algorithm, we aim to produce both an improved total intensity image and a two-dimensional spectral index distribution over the source. We also consider the problem of the frequency-dependent variation of the radio-core positions of self-absorbed active galactic nuclei, which should be taken into account in a correct MFS. The proposed MFS algorithm has first been tested on simulated data and then applied to the four-frequency synthesis imaging of the radio source 0954+658 using Very Large Baseline Array observational data obtained quasi-simultaneously at 5, 8, 15 and 22 GHz.

  7. Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo

    2015-11-01

    In every X-ray spectroscopy measurement the influence of the detection system causes loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the energy resolution, and, in solid state detectors (SSD), the charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of the known a priori information and preserving the positive-defined character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. In the past, UMESTRAT proved capable of resolving characteristic peaks that appeared overlapped in the response of a Si SSD, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons of the spectrum, which can be easily determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding from three commonly used SSDs such as Si, Ge, and CdTe. The quantitative unfolding can be considered as a software improvement of the detector resolution.

  8. Maximum entropy reconstruction method for moment-based solution of the BGK equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, D. I.

    2016-11-01

    We describe a method for a moment-based solution of the BGK equation. The starting point is a set of equations for a moment representation which must have even-ordered highest moments. The partial-differential equations for these moments are unclosed, containing higher-order moments in the flux terms. These are evaluated using a maximum-entropy reconstruction of the one-particle velocity distribution function f (x , t) , using the known moments. An analytic, asymptotic solution describing the singular behavior of the maximum-entropy construction near to the local equilibrium velocity distribution is presented, and is used to construct a complete hybrid closure scheme for the case of fourth-order and lower moments. For the steady-flow normal shock wave, this produces a set of 9 ordinary differential equations describing the shock structure. For a variable hard-sphere gas these can be solved numerically. Comparisons with results using the direct-simulation Monte-Carlo method will be presented. Supported partially by NSF award DMS 1418903.
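
    The reconstruction step itself can be sketched generically: given a few velocity moments, the maximum-entropy distribution takes the exponential-polynomial form f(v) ∝ exp(-Σ λk v^k), and the multipliers follow from minimizing the standard convex dual. The Python below is a schematic discrete stand-in under that formulation, not the authors' hybrid closure scheme.

        import numpy as np
        from scipy.optimize import minimize

        def maxent_from_moments(v, mu):
            # Discrete maximum-entropy density on grid v matching the moments
            # mu[k] = <v**(k+1)>. Dual: minimize log Z(lam) + lam.mu; at the
            # optimum, f(v) is proportional to exp(-sum_k lam[k] * v**(k+1)).
            powers = np.vstack([v ** (k + 1) for k in range(len(mu))])

            def dual(lam):
                logw = -lam @ powers
                c = logw.max()                   # stabilize the exponentials
                return np.log(np.exp(logw - c).sum()) + c + lam @ mu

            lam = minimize(dual, np.zeros(len(mu)), method="Nelder-Mead").x
            logw = -lam @ powers
            w = np.exp(logw - logw.max())
            return w / w.sum()

        v = np.linspace(-6.0, 6.0, 401)
        f = maxent_from_moments(v, np.array([0.0, 1.0]))  # zero mean, unit variance
        # with only these two moment constraints the result is the discretized
        # Maxwellian; higher even-ordered moments extend it as in the paper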

  9. Non-destructive depth profiling using variable kinetic energy- x-ray photoelectron spectroscopy with maximum entropy regularization

    NASA Astrophysics Data System (ADS)

    Krajewski, James J.

    This study will describe a nondestructive method to determine compositional depth profiles of thicker films using Variable Kinetic Energy X-ray Photoelectron Spectroscopy (VKE-XPS) data by applying proven regularization methods successfully used in Angle-Resolved X-ray Photoelectron Spectroscopy (AR-XPS). To demonstrate the applicability of various regularization procedures to the experimental VKE-XPS data, simulated TiO2/Si film structures of two different thicknesses and known compositional profiles were "created" and then analyzed. It is found that superior results are attained when using a maximum entropy-like method with an initial model/prior knowledge of thickness is similar to the simulated film thickness. Other regularization functions, Slopes, Curvature and Total Variance Analysis (TVA) give acceptable results when there is no prior knowledge since they do not depend on an accurate initial model. The maximum entropy algorithm is then applied to two actual films of TiO2 deposited on silicon substrate. These results will show the applicability of generating compositional depth profiles with experimental VKE-XPS data. Accuracy of the profiles is confirmed by subjecting these actual films to a variety of "alternate" analytical thin film techniques including Sputtered Angle Resolved Photoelectron Spectroscopy, Auger Electron Spectroscopy, Rutherford Backscattering Spectroscopy, Focused Ion Beam Spectroscopy, Transmission and Scanning Electron Spectroscopy and Variable Angle Spectroscopic Ellipsometry. Future work will include applying different regularizations functions to better fit the MaxEnt composition depth profile other than those described in this study.

  10. Maximum entropy production, carbon assimilation, and the spatial organization of vegetation in river basins

    PubMed Central

    del Jesus, Manuel; Foti, Romano; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2012-01-01

    The spatial organization of functional vegetation types in river basins is a major determinant of their runoff production, biodiversity, and ecosystem services. The optimization of different objective functions has been suggested to control the adaptive behavior of plants and ecosystems, often without a compelling justification. Maximum entropy production (MEP), rooted in thermodynamics principles, provides a tool to justify the choice of the objective function controlling vegetation organization. The application of MEP at the ecosystem scale results in maximum productivity (i.e., maximum canopy photosynthesis) as the thermodynamic limit toward which the organization of vegetation appears to evolve. Maximum productivity, which incorporates complex hydrologic feedbacks, allows us to reproduce the spatial macroscopic organization of functional types of vegetation in a thoroughly monitored river basin, without the need for a reductionist description of the underlying microscopic dynamics. The methodology incorporates the stochastic characteristics of precipitation and the associated soil moisture on a spatially disaggregated framework. Our results suggest that the spatial organization of functional vegetation types in river basins naturally evolves toward configurations corresponding to dynamically accessible local maxima of the maximum productivity of the ecosystem. PMID:23213227

  11. Estimation of Wild Fire Risk Area based on Climate and Maximum Entropy in Korean Peninsular

    NASA Astrophysics Data System (ADS)

    Kim, T.; Lim, C. H.; Song, C.; Lee, W. K.

    2015-12-01

    The number of forest fires, and the accompanying human injuries and physical damage, has increased with frequent drought. In this study, the forest fire danger zones of Korea are estimated in order to predict and prepare for future forest fire hazard regions. The MaxEnt (Maximum Entropy) model, which estimates the probability distribution of occurrence, is used to delineate the forest fire hazard regions. The MaxEnt model was designed primarily for the analysis of species distributions, but its applicability to various natural disasters is gaining recognition. Detailed forest fire occurrence data collected by MODIS over the past 5 years (2010-2014) are used as occurrence data for the model, and meteorology, topography, and vegetation data are used as environmental variables. In particular, various meteorological variables are used to assess the impact of climate, such as annual average temperature, annual precipitation, precipitation in the dry season, annual effective humidity, effective humidity in the dry season, and an aridity index. The result was judged valid on the basis of the AUC (Area Under the Curve) value of 0.805, which is used to assess predictive accuracy in the MaxEnt model, and the predicted forest fire locations corresponded closely with the actual forest fire distribution map. Meteorological variables such as effective humidity showed the greatest contribution, and topographic variables such as TWI (Topographic Wetness Index) and slope also contributed to the forest fire. As a result, the east coast and the southern part of the Korean peninsula were predicted to have a high risk of forest fire. In contrast, high-altitude mountain areas and the west coast appeared to be safe from forest fire. These results are similar to those of former studies, indicating high risks of forest fire in accessible areas and reflecting the climatic characteristics of the east and south in the dry season. To sum up, we estimated the forest fire hazard zone with existing forest fire locations and environment variables and had

  12. Comparison of maximum entropy and quadrature-based moment closures for shock transitions prediction in one-dimensional gaskinetic theory

    NASA Astrophysics Data System (ADS)

    Laplante, Jérémie; Groth, Clinton P. T.

    2016-11-01

    The Navier-Stokes-Fourier (NSF) equations are conventionally used to model continuum flow near local thermodynamic equilibrium. In the presence of more rarefied flows, there exists a transitional regime in which the NSF equations no longer hold, and where particle-based methods become too expensive for practical problems. To close this gap, moment closure techniques having the potential of being both valid and computationally tractable for these applications are sought. In this study, a number of five-moment closures for a model one-dimensional kinetic equation are assessed and compared. In particular, four different moment closures are applied to the solution of stationary shocks. The first of these is a Grad-type moment closure, which is known to fail for moderate departures from equilibrium. The second is an interpolative closure based on maximization of thermodynamic entropy, which has previously been shown to provide excellent results for 1D gaskinetic theory. Additionally, two quadrature methods of moments (QMOM) are considered. One method is based on the representation of the distribution function in terms of a combination of three Dirac delta functions. The second method, an extended QMOM (EQMOM), extends the quadrature-based approach by assuming a bi-Maxwellian representation of the distribution function. The closing fluxes are analyzed in each case and the region of physical realizability is examined for the closures. Numerical simulations of stationary shock structures as predicted by each moment closure are compared to reference kinetic and corresponding NSF-like equation solutions. It is shown that the bi-Maxwellian and interpolative maximum-entropy-based moment closures reproduce the results of the true maximum-entropy distribution closure for this case very well, whereas the other methods do not. For moderate departures from local thermodynamic equilibrium, the Grad-type and QMOM closures produced unphysical subshocks and were

  13. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  14. Computation of short periodical variations of pole coordinates using maximum entropy spectral analysis and an ormsby filter

    NASA Astrophysics Data System (ADS)

    Kosek, W.

    1987-06-01

    The main purpose of this paper is the search for the optimum method of detecting weak short periodical variations of pole coordinates determined by different techniques in the MERIT Campaign. The optimum filter length of Maximum Entropy Spectral Analysis (MESA) for these analyses was investigated on the basis of the Rovelli-Vulpiani formula. The unbiased autocovariance estimation, multiplied by different lag windows, was introduced into this formula for a better estimation of the optimum filter length. The optimum filter length in MESA is discussed on the basis of model data similar to the observed data. The model data were disturbed by white and red noises with standard deviations greater than the average amplitude of an oscillation in the model. Each short periodical oscillation in pole coordinates was calculated by a properly defined Ormsby band-pass filter. Their sum creates a short periodical signal part which, subtracted from smoothed pole coordinates, diminishes their standard deviations and their autocovariance estimations.
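
    For readers unfamiliar with MESA: the estimator fits an autoregressive model of a chosen order (the "filter length" discussed above) and reads the spectrum off the fitted all-pole model. A compact, hedged Python sketch of the standard Burg recursion, unrelated to the paper's specific code:

        import numpy as np

        def burg_mem_spectrum(x, order, nfreq=512):
            # Burg / maximum entropy spectral estimate: fit AR(order) by
            # minimizing forward+backward prediction error, then evaluate
            # the all-pole spectrum P(w) = E / |A(exp(-iw))|**2.
            x = np.asarray(x, dtype=float) - np.mean(x)
            A = np.array([1.0])                  # AR polynomial [1, a1, ..., ap]
            E = np.dot(x, x) / len(x)            # prediction error power
            f, b = x.copy(), x.copy()            # forward / backward errors
            for _ in range(order):
                fp, bp = f[1:], b[:-1]
                k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
                A = np.concatenate((A, [0.0])) + k * np.concatenate(([0.0], A[::-1]))
                E *= 1.0 - k * k
                f, b = fp + k * bp, bp + k * fp
            w = np.linspace(0.0, np.pi, nfreq)   # radians per sample
            denom = np.exp(-1j * np.outer(w, np.arange(len(A)))) @ A
            return w, E / np.abs(denom) ** 2

        # a noisy oscillation: the MEM peak sharpens as `order` grows, which is
        # why the choice of filter length matters so much in practice
        t = np.arange(600)
        x = np.sin(2 * np.pi * t / 14.0) + 0.5 * np.random.default_rng(1).normal(size=600)
        w, p = burg_mem_spectrum(x, order=20)
        print(w[np.argmax(p)] / (2 * np.pi))     # ~ 1/14 cycles per sample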

  15. Process-conditioned investing with incomplete information using maximum causal entropy

    NASA Astrophysics Data System (ADS)

    Ziebart, Brian D.

    2012-05-01

    Investing to optimally maximize the growth rate of wealth based on sequences of event outcomes has many information-theoretic interpretations. Namely, the mutual information characterizes the benefit of additional side information being available when making investment decisions [1] in settings where the probabilistic relationships between side information and event outcomes are known. Additionally, the relative variant of the principle of maximum entropy [2] provides the optimal investment allocation in the more general setting where the relationships between side information and event outcomes are only partially known [3]. In this paper, we build upon recent work characterizing the growth rates of investment in settings with inter-dependent side information and event outcome sequences [4]. We consider the extension to settings with inter-dependent event outcomes and side information where the probabilistic relationships between side information and event outcomes are only partially known. We introduce the principle of minimum relative causal entropy to obtain the optimal worst-case investment allocations for this setting. We present efficient algorithms for obtaining these investment allocations using convex optimization techniques and dynamic programming that illustrates a close connection to optimal control theory.

  16. A maximum (non-extensive) entropy approach to equity options bid-ask spread

    NASA Astrophysics Data System (ADS)

    Tapiero, Oren J.

    2013-07-01

    The cross-section of options bid-ask spreads with their strikes is modelled by maximising the Kaniadakis entropy. A theoretical model results in which the bid-ask spread depends explicitly on the implied volatility, the probability of expiring at-the-money, and an asymmetric information parameter (κ). Considering AIG as a test case for the period between January 2006 and October 2008, we find that information flows uniquely from the trading activity in the underlying asset to its derivatives, suggesting that κ is possibly an option-implied measure of the current state of trading liquidity in the underlying asset.

  17. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

    A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.

  18. On the stability of the moments of the maximum entropy wind wave spectrum

    SciTech Connect

    Pena, H.G.

    1983-03-01

    The stability of some current wind wave parameters as a function of high-frequency cut-off and degrees of freedom of the spectrum has been numerically investigated when the parameters are computed in terms of the moments of the wave energy spectrum. From the Pierson-Moskowitz wave spectrum type, a sea surface profile is simulated and its wave energy spectrum is estimated by the Maximum Entropy Method (MEM). As the degrees of freedom of the MEM spectral estimation are varied, the results show a much better stability of the wave parameters as compared to the classical periodogram and correlogram spectral approaches. The stability of the wave parameters as a function of high-frequency cut-off shows the same result as that obtained by the classical techniques.

  19. Background adjustment of cDNA microarray images by Maximum Entropy distributions.

    PubMed

    Argyropoulos, Christos; Daskalakis, Antonis; Nikiforidis, George C; Sakellaropoulos, George C

    2010-08-01

    Many empirical studies have demonstrated the exquisite sensitivity of both traditional and novel statistical and machine intelligence algorithms to the method of background adjustment used to analyze microarray datasets. In this paper we develop a statistical framework that approaches background adjustment as a classic stochastic inverse problem, whose noise characteristics are given in terms of Maximum Entropy distributions. We derive analytic closed form approximations to the combined problem of estimating the magnitude of the background in microarray images and adjusting for its presence. The proposed method reduces standardized measures of log expression variability across replicates in situations of known differential and non-differential gene expression without increasing the bias. Additionally, it results in computationally efficient procedures for estimation and learning based on sufficient statistics and can filter out spot measures with intensities that are numerically close to the background level resulting in a noise reduction of about 7%.

  20. Sampling properties of the maximum entropy estimators for the extreme-value type-1 distribution

    NASA Astrophysics Data System (ADS)

    Phien, Huynh Ngoc

    1986-10-01

    The extreme-value type-1 (EV1) distribution can be viewed as the distribution that satisfies two specified expected values. These expected values give rise to a method of parameter estimation referred to as the method of maximum entropy (MME). The main purpose of this note is to provide a scheme to estimate the variances and covariance of the MME estimators. As a by-product of the simulation runs used, some useful sampling properties of the MME estimators are obtained. These clearly show that the MME is a good method for fitting the EV1 distribution, and the approximations obtained analytically for the variance of estimates of the T-year event are of sufficient accuracy.
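
    For the EV1 (Gumbel) distribution with location u and scale a, the two expected values are E[(x-u)/a] = γ (Euler's constant) and E[exp(-(x-u)/a)] = 1, so the MME amounts to solving their sample analogues. The Python sketch below does exactly that and adds the kind of Monte Carlo loop used to study sampling variances; it is an illustration under those stated constraints, not the note's original code.

        import numpy as np
        from scipy.optimize import fsolve

        GAMMA = 0.5772156649015329               # Euler's constant

        def ev1_mme(x):
            # Solve the sample analogues of E[(x-u)/a] = gamma and
            # E[exp(-(x-u)/a)] = 1 for the EV1 parameters (u, a).
            x = np.asarray(x, dtype=float)

            def equations(p):
                u, a = p
                y = (x - u) / a
                return [np.mean(y) - GAMMA, np.mean(np.exp(-y)) - 1.0]

            a0 = np.std(x) * np.sqrt(6.0) / np.pi    # method-of-moments start
            u0 = np.mean(x) - GAMMA * a0
            return fsolve(equations, [u0, a0])

        # Monte Carlo estimate of the estimators' sampling mean and variance
        rng = np.random.default_rng(7)
        est = np.array([ev1_mme(rng.gumbel(loc=10.0, scale=3.0, size=50))
                        for _ in range(500)])
        print(est.mean(axis=0), est.var(axis=0))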

  1. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, which may have been ignored in previous studies.
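
    The stated form is what maximum entropy yields under simultaneous constraints on ⟨ln τ⟩ and ⟨τ⟩, namely p(τ) ∝ τ^(-β) exp(-τ/τ0). A hedged Python sketch (not the authors' code) that fits the two parameters by maximum likelihood, normalizing the density numerically on a grid and using stand-in data rather than the smart card records:

        import numpy as np
        from scipy.optimize import minimize

        def fit_powerlaw_cutoff(t, t_min):
            # MLE for p(t) = t**(-beta) * exp(-t/tau) / Z on [t_min, inf),
            # with Z approximated by quadrature (sketch-level accuracy).
            t = np.asarray(t, dtype=float)
            grid = np.linspace(t_min, 50.0 * t.max(), 200_000)

            def nll(params):
                beta, tau = params
                if tau <= 0.0:
                    return np.inf
                log_z = np.log(np.trapz(grid ** -beta * np.exp(-grid / tau), grid))
                return -np.sum(-beta * np.log(t) - t / tau - log_z)

            return minimize(nll, x0=[0.8, t.mean()], method="Nelder-Mead").x

        rng = np.random.default_rng(3)
        sample = rng.gamma(shape=0.5, scale=60.0, size=5_000) + 1.0  # stand-in data
        print(fit_powerlaw_cutoff(sample, t_min=1.0))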

  2. Maximum entropy image reconstruction - A practical non-information-theoretic approach

    NASA Astrophysics Data System (ADS)

    Nityananda, R.; Narayan, R.

    1982-12-01

    An alternative motivation for the maximum entropy method (MEM) is given and its practical implementation discussed. The need for nonlinear restoration methods in general is considered, arguing in favor of nonclassical techniques such as MEM. Earlier work on MEM is summarized and the present approach is introduced. The whole family of restoration methods based on maximizing the integral of some function of the brightness is addressed. Criteria for the choice of the function are given and their properties are discussed. A parameter for measuring the resolution of the restored map is identified, and a scheme for controlling it by adding a constant to the zero-spacing correlation is introduced. Numerical schemes for implementing MEM are discussed and restorations obtained with various choices of the brightness function are compared. Data noise is discussed, showing that the standard least squares approach leads to a bias in the restoration.

  3. Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach

    SciTech Connect

    Rodman, Leiba; Spitkovsky, Ilya M. E-mail: ilya@math.wm.edu; Szkoła, Arleta; Weis, Stephan

    2016-01-15

    We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.

  4. Deconvolution of complex echo signals by the maximum entropy method in ultrasonic nondestructive inspection

    NASA Astrophysics Data System (ADS)

    Bazulin, A. E.; Bazulin, E. G.

    2009-11-01

    The problem of inverting the convolution of the echo signal with the point source function is considered with the use of regularization and the maximum entropy method, followed by reconstruction of two-dimensional images by the method of projection in the spectral domain. The inverse convolution problem is solved for the complex-valued signal that is obtained from the real-valued signal through the Hilbert transform. Numerical and experimental simulation is performed. A possibility of enhancing the along-ray resolution of echo signals and of lowering the noise level of the spectrum with the use of complex signals (pseudo-random sequences) is demonstrated. The results are compared with those obtained using the autoregression method and the reference hologram method.

  5. Forest Tree Species Distribution Mapping Using Landsat Satellite Imagery and Topographic Variables with the Maximum Entropy Method in Mongolia

    NASA Astrophysics Data System (ADS)

    Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn

    2016-06-01

    Forests are a very important ecosystem and natural resource for living things. Based on forest inventories, the government is able to make decisions to conserve, improve, and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially for surveying in remote mountainous regions. A reliable forest inventory can give us more accurate and timely information to develop new and efficient approaches to forest management. Remote sensing technology has recently been used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species, unavoidably need to be considered. In this study, the aim is to classify forest tree species in Erdenebulgan County, Huwsgul province in Mongolia, using the Maximum Entropy method. The study area is covered by dense forest, comprising almost 70% of the total territorial extension of Erdenebulgan County, and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth to perform the accuracy assessment of the tree species classification. The Landsat images and DEM were processed for maximum entropy modeling, and this study applied the model in two experiments. The first uses Landsat surface reflectance for tree species classification; the second incorporates terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second one which uses Landsat surface reflectance coupled

  6. Block entropy and quantum phase transition in the anisotropic Kondo necklace model

    NASA Astrophysics Data System (ADS)

    Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.

    2010-06-01

    We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins using the density matrix renormalization group method. It was found that the block entropy presents a maximum for each η considered, and, comparing it with the results of the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for giving information about where a quantum phase transition occurs in this model. We observe that the block entropy also presents a maximum at the quantum critical points that are obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; when Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.
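
    The quantity itself is easy to compute for a pure state: the block entropy of a bipartition is the Shannon entropy of the squared Schmidt coefficients. A short, generic Python illustration (not the DMRG machinery used in the paper):

        import numpy as np

        def block_entropy(psi, dim_left, dim_right):
            # von Neumann entropy of the left block of a pure state |psi>:
            # reshape into a (left, right) matrix; its singular values are
            # the Schmidt coefficients, and S = -sum s^2 ln s^2.
            m = np.reshape(np.asarray(psi), (dim_left, dim_right))
            s = np.linalg.svd(m, compute_uv=False)
            p = s ** 2
            p = p[p > 1e-12]                     # drop numerical zeros
            return float(-np.sum(p * np.log(p)))

        # two-qubit singlet: maximally entangled, so S = ln 2 ~ 0.693
        singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
        print(block_entropy(singlet, 2, 2))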

  7. Application of maximum-entropy spectral estimation to deconvolution of XPS data. [X-ray Photoelectron Spectroscopy

    NASA Technical Reports Server (NTRS)

    Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.

    1981-01-01

    A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.

  8. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    NASA Astrophysics Data System (ADS)

    Toogoshi, M.; Kano, S. S.; Zempo, Y.

    2015-09-01

    The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low-frequency part of a spectrum can be described from short time-series data. Thus, we applied MEM to analyse the spectrum obtained from the time-dependent dipole moment in real-time time-dependent density functional theory (TDDFT) calculations, which are intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data points. As an improved MEM analysis, we proposed using a concatenated data set made from the raw data repeated several times. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and oligo-fluorene with n = 8. As a result, a higher resolution can be obtained, closer to that of a Fourier transform of practically time-evolved data with the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper.

  9. The effect of the shape function on small-angle scattering analysis by the maximum entropy method

    SciTech Connect

    Jemian, P.R.; Allen, A.J.

    1992-09-15

    Analysis of small-angle scattering data to obtain a particle size distribution depends upon the shape function used to model the scattering. Using a maximum entropy analysis of small-angle scattering data, the effect of shape function selection on the obtained size distribution is demonstrated using three different shape functions to describe the same scattering data from each of two steels. The alloys have been revealed by electron microscopy to contain a distribution of randomly oriented and mainly non-interacting, irregular, ellipsoidal precipitates. Comparison is made between the different forms of the shape function. The effect of an incident wavelength distribution is also shown. The importance of testing appropriate shape functions and validating these against other microstructural studies is discussed.

  10. Spectral analysis of the Chandler wobble: comparison of the discrete Fourier analysis and the maximum entropy method

    NASA Astrophysics Data System (ADS)

    Brzezinski, A.

    2014-12-01

    The methods of spectral analysis are applied to solve the following two problems concerning the free Chandler wobble (CW): 1) to estimate the CW resonance parameters, the period T and the quality factor Q, and 2) to perform the excitation balance of the observed free wobble. It appears, however, that the results depend on the algorithm of spectral analysis applied. Here we compare the following two algorithms, which are frequently applied in the analysis of polar motion data: the classical discrete Fourier analysis and the maximum entropy method, corresponding to autoregressive modeling of the input time series. We start from a general description of both methods and of their application to the analysis of Earth orientation observations. Then we compare the results of the analysis of the polar motion and the related excitation data.

  11. Non-equilibrium thermodynamics, maximum entropy production and Earth-system evolution.

    PubMed

    Kleidon, Axel

    2010-01-13

    The present-day atmosphere is in a unique state far from thermodynamic equilibrium. This uniqueness is for instance reflected in the high concentration of molecular oxygen and the low relative humidity in the atmosphere. Given that the concentration of atmospheric oxygen has likely increased throughout Earth-system history, we can ask whether this trend can be generalized to a trend of Earth-system evolution that is directed away from thermodynamic equilibrium, why we would expect such a trend to take place and what it would imply for Earth-system evolution as a whole. The justification for such a trend could be found in the proposed general principle of maximum entropy production (MEP), which states that non-equilibrium thermodynamic systems maintain steady states at which entropy production is maximized. Here, I justify and demonstrate this application of MEP to the Earth at the planetary scale. I first describe the non-equilibrium thermodynamic nature of Earth-system processes and distinguish processes that drive the system's state away from equilibrium from those that are directed towards equilibrium. I formulate the interactions among these processes from a thermodynamic perspective and then connect them to a holistic view of the planetary thermodynamic state of the Earth system. In conclusion, non-equilibrium thermodynamics and MEP have the potential to provide a simple and holistic theory of Earth-system functioning. This theory can be used to derive overall evolutionary trends of the Earth's past, identify the role that life plays in driving thermodynamic states far from equilibrium, identify habitability in other planetary environments and evaluate human impacts on Earth-system functioning.

  12. Entropy Based Modelling for Estimating Demographic Trends.

    PubMed

    Li, Guoqi; Zhao, Daxuan; Xu, Yi; Kuo, Shyh-Hao; Xu, Hai-Yan; Hu, Nan; Zhao, Guangshe; Monterola, Christopher

    2015-01-01

    In this paper, an entropy-based method is proposed to forecast the demographic changes of countries. We formulate the estimation of future demographic profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases over time. The procedure involves three stages: 1) prediction of the age distribution of a country's population based on an "age-structured population model"; 2) estimation of the age distribution within each household size, using an entropy-based formulation built on an "individual household size model"; and 3) estimation of the number of households of each size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated on real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables.

  13. An Instructive Model of Entropy

    ERIC Educational Resources Information Center

    Zimmerman, Seth

    2010-01-01

    This article first notes the misinterpretation of a common thought experiment, and the misleading comment that "systems tend to flow from less probable to more probable macrostates". It analyses the experiment, generalizes it and introduces a new tool of investigation, the simplectic structure. A time-symmetric model is built upon this structure,…

  14. Maximum-entropy calculation of free energy distributions for two forms of myoglobin.

    PubMed

    Poland, Douglas

    2002-03-01

    The temperature dependence of the heat capacity of myoglobin depends dramatically on pH. At low pH (near 4.5), there are two weak maxima in the heat capacity at low and intermediate temperatures, respectively, whereas at high pH (near 10.7), there is one strong maximum at high temperature. Using literature data for the low-pH form (Hallerbach and Hinz, 1999) and for the high-pH form (Makhatadze and Privalov, 1995), we applied a recently developed technique (Poland, 2001d) to calculate the free energy distributions for the two forms of the protein. In this method, the temperature dependence of the heat capacity is used to calculate moments of the protein enthalpy distribution function, which in turn, using the maximum-entropy method, are used to construct the actual distribution function. The enthalpy distribution function for a protein gives the fraction of protein molecules in solution having a given value of the enthalpy, which can be interpreted as the probability that a molecule picked at random has a given enthalpy value. Given the enthalpy distribution functions at several temperatures, one can then construct a master free energy function from which the probability distributions at all temperatures can be calculated. For the high-pH form of myoglobin, the enthalpy distribution function that is obtained exhibits bimodal behavior at the temperature corresponding to the maximum in the heat capacity (Poland, 2001a), reflecting the presence of two populations of molecules (native and unfolded). For this form of myoglobin, the temperature evolution of the relative probabilities of the two populations can be obtained in detail from the master free energy function. In contrast, the enthalpy distribution function for the low-pH form of myoglobin does not show any special structure at any temperature. In this form of myoglobin the enthalpy distribution function simply exhibits a single maximum at all temperatures, with the position of the maximum increasing to higher
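
    A generic sketch of the moments-to-distribution step (not Poland's actual algorithm): the maximum entropy density consistent with a set of power moments has the exponential-polynomial form p(h) ∝ exp(-Σk λk h^k), and the multipliers λk minimize a convex dual function. The grid and moment values below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def maxent_from_moments(moments, grid):
        """MaxEnt density p(h) ∝ exp(-sum_k lam[k] * h**(k+1)) on a uniform grid,
        matching the supplied moments <h>, <h^2>, ..., <h^K>."""
        K = len(moments)
        powers = np.vander(grid, K + 1, increasing=True)[:, 1:]  # h, h^2, ..., h^K
        dh = grid[1] - grid[0]

        def dual(lam):
            # Convex dual: log Z(lam) + lam . mu; its minimizer matches the moments.
            logp = -powers @ lam
            logZ = np.log(np.sum(np.exp(logp - logp.max())) * dh) + logp.max()
            return logZ + lam @ np.asarray(moments)

        lam = minimize(dual, np.zeros(K), method='BFGS').x
        p = np.exp(-powers @ lam)
        return p / (p.sum() * dh)

    # Example: reconstruct a density on [0, 1] from its first three moments.
    grid = np.linspace(0.0, 1.0, 400)
    p = maxent_from_moments([0.5, 0.30, 0.21], grid)
    ```

    With heat-capacity-derived enthalpy moments in place of these toy values, the same dual construction yields enthalpy distributions of the kind discussed above; resolving a bimodal (native/unfolded) distribution typically requires more than three moments.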

  15. Recovery of lifetime distributions from frequency-domain fluorometry data by means of the quantified maximum entropy method.

    PubMed

    Brochon, J C; Pouget, J; Valeur, B

    1995-06-01

    The new quantified version of the maximum entropy method allows one to recover lifetime distributions with a precise statement of the accuracy of position, surface, and broadness of peaks in the distribution. Applications to real data (2,6-ANS in aqueous solutions of sodium dodecyl sulfate micelles of β-cyclodextrin) are presented.

  16. Analysis of the Velocity Distribution in Partially-Filled Circular Pipe Employing the Principle of Maximum Entropy.

    PubMed

    Jiang, Yulin; Li, Bin; Chen, Jie

    2016-01-01

    The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile differs from that of full-pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum velocity is below the water surface and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by wall friction and the pipe bottom slope, and the variation of velocity is similar to that in a full pipe. But near the free water surface, the velocity distribution is mainly affected by the contracting tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature search shows that relatively little research has addressed practical expressions for the velocity distribution in partially-filled circular pipes. An expression for the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared in light of the fluid mechanics, and non-extensive (Tsallis) entropy was chosen. A new cumulative distribution function (CDF) of the velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of the maximum velocity was analyzed. The experimental results show that velocity values estimated from the principle of maximum Tsallis wavelet entropy are in good agreement with measured values.
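
    For orientation, the classical one-dimensional Shannon-entropy (POME) velocity profile of Chiu, which Tsallis-entropy treatments such as this one generalize, is easy to state. The sketch below uses illustrative values and is not the authors' 2D partially-filled-pipe expression.

    ```python
    import numpy as np

    def chiu_velocity(xi, u_max, M):
        """Chiu's POME velocity profile: xi in [0, 1] is the cumulative
        distribution of flow area (normalized depth in the 1D case)."""
        return (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * xi)

    def mean_to_max_ratio(M):
        """The entropy parameter M fixes the mean-to-maximum velocity ratio."""
        return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

    xi = np.linspace(0.0, 1.0, 50)
    u = chiu_velocity(xi, u_max=1.2, M=3.0)   # m/s, illustrative values
    print(mean_to_max_ratio(3.0))             # ≈ 0.72 for M = 3
    ```

    The single parameter M summarizes how uniform the flow is: large M gives a nearly flat profile, small M a strongly sheared one, which is why entropy-based profiles need so few calibration constants.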

  17. The quantum dynamics of interfacial hydrogen: Path integral maximum entropy calculation of adsorbate vibrational line shapes for the H/Ni(111) system

    NASA Astrophysics Data System (ADS)

    Kim, Dongsup; Doll, J. D.; Gubernatis, J. E.

    1997-01-01

    Vibrational line shapes for a hydrogen atom on an embedded atom model (EAM) of the Ni(111) surface are extracted from path integral Monte Carlo data. Maximum entropy methods are utilized to stabilize this inversion. Our results indicate that anharmonic effects are significant, particularly for vibrational motion parallel to the surface. Unlike their normal mode analogs, calculated quantum line shapes for the EAM potential predict the correct ordering of vibrational features corresponding to parallel and perpendicular adsorbate motion.

  18. Reconstruction of an atmospheric tracer source using the principle of maximum entropy. I: Theory

    NASA Astrophysics Data System (ADS)

    Bocquet, Marc

    2005-07-01

    Over recent years, tracing back sources of chemical species dispersed through the atmosphere has been of considerable importance, with an emphasis on increasing the precision of the source resolution. This need stems from many problems: being able to estimate the emissions of pollutants; spotting the source of radionuclides; evaluating diffuse gas fluxes; etc. We study the high-resolution retrieval on a continental scale of the source of a passive atmospheric tracer, given a set of concentration measurements. In the first of this two-part paper, we lay out and develop theoretical grounds for the reconstruction. Our approach is based on the principle of maximum entropy on the mean. It offers a general framework in which the information input prior to the inversion is used in a flexible and controlled way. The inversion is shown to be equivalent to the minimization of an optimal cost function, expressed in the dual space of observations. Examples of such cost functions are given for different priors of interest to the retrieval of an atmospheric tracer. In this respect, variational assimilation (4D-Var), as well as projection techniques, are obtained as by-products of the method. The framework is enlarged to incorporate noisy data in the inversion scheme. Part II of this paper is devoted to the application and testing of these methods.

  19. Maximum entropy estimation of glutamate and glutamine in MR spectroscopic imaging.

    PubMed

    Rathi, Yogesh; Ning, Lipeng; Michailovich, Oleg; Liao, HuiJun; Gagoski, Borjan; Grant, P Ellen; Shenton, Martha E; Stern, Robert; Westin, Carl-Fredrik; Lin, Alexander

    2014-01-01

    Magnetic resonance spectroscopic imaging (MRSI) is often used to estimate the concentration of several brain metabolites. Abnormalities in these concentrations can indicate specific pathology, which can be quite useful in understanding the disease mechanisms underlying those changes. Due to their higher concentrations, metabolites such as N-acetylaspartate (NAA), creatine (Cr) and choline (Cho) can be readily estimated using standard Fourier transform techniques. However, metabolites such as glutamate (Glu) and glutamine (Gln) occur in significantly lower concentrations, and their resonance peaks are very close to each other, making it difficult to estimate their concentrations accurately (and separately). In this work, we propose to use the theory of 'spectral zooming', or high-resolution spectral analysis, to separate the glutamate and glutamine peaks and accurately estimate their concentrations. The method works by estimating a unique power spectral density, which corresponds to the maximum entropy solution of a zero-mean stationary Gaussian process. We demonstrate our estimation technique on several physical phantom data sets as well as on in-vivo brain spectroscopic imaging data. The proposed technique is quite general and can be used to estimate the concentration of any other metabolite of interest.

  20. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data.

    PubMed

    Das, Jayajit; Mukherjee, Sayak; Hodge, Susan E

    2015-07-01

    A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.

  1. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. those based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition of Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand.

  2. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparison with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.

  3. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    PubMed

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread chloride contamination of surface water is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.

  4. Initial system-bath state via the maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Dai, Jibo; Len, Yink Loong; Ng, Hui Khoon

    2016-11-01

    The initial state of a system-bath composite is needed as the input to any quantum evolution equation for predicting the subsequent system-only reduced dynamics or the noise on the system from the joint evolution of the system and the bath. The conventional wisdom is to write down an uncorrelated state, as if the system and the bath were prepared in the absence of each other; yet such a factorized state cannot be the exact description in the presence of system-bath interactions. Here, we show how to go beyond the simplistic factorized-state prescription using ideas from quantum tomography: we employ the maximum-entropy principle to deduce an initial system-bath state consistent with the available information. For the generic case of weak interactions, we obtain an explicit formula for the correction to the factorized state. Such a state turns out to have little correlation between the system and the bath, which we can quantify using our formula. This has implications, in particular, for the subject of subsequent non-completely-positive dynamics of the system. Deviation from predictions based on such an almost uncorrelated state is indicative of accidental control of hidden degrees of freedom in the bath.

  5. Computational Amide I Spectroscopy for Refinement of Disordered Peptide Ensembles: Maximum Entropy and Related Approaches

    NASA Astrophysics Data System (ADS)

    Reppert, Michael; Tokmakoff, Andrei

    The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly reports on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
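
    A minimal sketch of the maximum entropy reweighting step described here, under simplifying assumptions (a single scalar observable and a hypothetical target value): snapshot weights are perturbed as little as possible in the relative-entropy sense, which yields exponential weights with one multiplier fixed by the experimental average.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def maxent_reweight(f_calc, f_exp):
        """Minimum-relative-entropy (MaxEnt) reweighting: w_i ∝ exp(-lam * f_i),
        with lam chosen so the reweighted mean of the observable equals f_exp."""
        f = np.asarray(f_calc, dtype=float) - np.mean(f_calc)  # centre for stability

        def gap(lam):
            w = np.exp(-lam * f)
            return np.sum(w * f_calc) / np.sum(w) - f_exp

        lam = brentq(gap, -5.0, 5.0)      # bracket adequate for this toy example
        w = np.exp(-lam * f)
        return w / w.sum()

    # Per-snapshot amide I frequencies (cm^-1, made up) vs a target ensemble mean.
    f_calc = np.random.default_rng(1).normal(1645.0, 8.0, size=1000)
    w = maxent_reweight(f_calc, f_exp=1642.0)
    print(np.sum(w * f_calc))             # ≈ 1642.0
    ```

    Real refinements handle several isotope-labeled observables at once (one multiplier each) and propagate experimental uncertainty, but the exponential-weight structure is the same.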

  6. Time-dependent radiative transfer through thin films: Chapman Enskog-maximum entropy method

    NASA Astrophysics Data System (ADS)

    Abulwafa, E. M.; Hassan, T.; El-Wakil, S. A.; Razi Naqvi, K.

    2005-09-01

    Approximate solutions to the time-dependent radiative transfer equation, also called the phonon radiative transfer equation, for a plane-parallel system have been obtained by combining the flux-limited Chapman-Enskog approximation with the maximum entropy method. For problems involving heat transfer at small scales (short times and/or thin films), the results found by this combined approach are closer to the outcome of the more labour-intensive Laguerre-Galerkin technique (a moment method described recently by the authors) than the results obtained by using the diffusion equation (Fourier's law) or the telegraph equation (Cattaneo's law). The results for heat flux and temperature are presented in graphical form for xL = 0.01, 0.1, 1 and 10, and at τ = 0.01, 0.1, 1.0 and 10, where xL is the film thickness in mean free paths, and τ is the value of time in mean free times.

  7. Entropy-based consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław Jerzy

    2016-09-01

    A description of software architecture is a plan for the construction of an IT system; therefore, any gaps in the architecture affect the overall success of an entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent. Software architecture completeness is also often described in an ambiguous way. As a result, most methods of building IT systems contain many gaps and ambiguities, presenting obstacles to the automation of software construction. In this article the consistency and completeness of software architecture are mathematically defined based on the calculated entropy of the architecture description. Following this approach, we also propose a method for automatic verification of the consistency and completeness of the software architecture development method presented in our previous article, Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of the modelling work under way for the start of IT system construction, and even assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of such an approach is that it facilitates the preparation of complete and consistent software architectures more effectively, and it enables assessment and monitoring of the status of ongoing modelling work. We demonstrate this with a few industry examples of IT system designs.

  8. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low-resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a disaggregation approach based on a soil evaporation efficiency index. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR-based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil-dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated by land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data-scarce regions.

  9. Cluster Prototypes and Fuzzy Memberships Jointly Leveraged Cross-Domain Maximum Entropy Clustering

    PubMed Central

    Qian, Pengjiang; Jiang, Yizhang; Deng, Zhaohong; Hu, Lingzhi; Sun, Shouwei; Wang, Shitong; Muzic, Raymond F.

    2016-01-01

    The classical maximum entropy clustering (MEC) algorithm usually cannot achieve satisfactory results in situations where the data is insufficient, incomplete, or distorted. To address this problem, inspired by transfer learning, the cluster prototypes and fuzzy memberships jointly leveraged (CPM-JL) framework for cross-domain MEC (CDMEC) is first devised in this paper, and then the corresponding algorithm, referred to as CPM-JL-CDMEC, and the dedicated validity index named fuzzy memberships-based cross-domain difference measurement (FM-CDDM) are concurrently proposed. In general, the contributions of this paper are fourfold: 1) benefiting from the delicate CPM-JL framework, CPM-JL-CDMEC features high clustering effectiveness and robustness even in some complex data situations; 2) the reliability of FM-CDDM has been demonstrated to be close to well-established external criteria, e.g., normalized mutual information and the Rand index, and it does not require additional label information. Hence, using FM-CDDM as a dedicated validity index significantly enhances the applicability of CPM-JL-CDMEC under realistic scenarios; 3) the performance of CPM-JL-CDMEC is generally better than, or at least equal to, that of MEC because CPM-JL-CDMEC can degenerate into the standard MEC algorithm after adopting proper parameters, which avoids the issue of negative transfer; and 4) in order to maximize privacy protection, CPM-JL-CDMEC employs the known cluster prototypes and their associated fuzzy memberships rather than the raw data in the source domain as prior knowledge. The experimental studies thoroughly evaluate and demonstrate these advantages on both synthetic and real-life transfer datasets. PMID:26684257
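
    For reference, the classical single-domain MEC iteration that CPM-JL-CDMEC builds on can be stated compactly. This is a sketch with made-up data, not the CPM-JL variant: β acts as an inverse-temperature/fuzziness parameter, and the cross-domain transfer terms are omitted.

    ```python
    import numpy as np

    def mec(X, n_clusters, beta=1.0, n_iter=100, seed=0):
        """Classical maximum entropy clustering: memberships are the MaxEnt
        distribution given the current prototypes; prototypes are weighted means."""
        rng = np.random.default_rng(seed)
        V = X[rng.choice(len(X), n_clusters, replace=False)]     # initial prototypes
        for _ in range(n_iter):
            d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)  # squared distances
            U = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
            U /= U.sum(axis=1, keepdims=True)                    # fuzzy memberships
            V = (U.T @ X) / U.sum(axis=0)[:, None]               # weighted means
        return V, U

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal((0, 0), 0.3, (50, 2)),
                   rng.normal((3, 3), 0.3, (50, 2))])
    V, U = mec(X, n_clusters=2, beta=5.0)
    ```

    The transfer-learning extension replaces part of this update with terms that pull the target-domain prototypes and memberships toward those learned in the source domain, which is why only prototypes and memberships, not raw data, need to be shared.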

  10. A maximum entropy approach to detect close-in giant planets around active stars

    NASA Astrophysics Data System (ADS)

    Petit, P.; Donati, J.-F.; Hébrard, E.; Morin, J.; Folsom, C. P.; Böhm, T.; Boisse, I.; Borgniet, S.; Bouvier, J.; Delfosse, X.; Hussain, G.; Jeffers, S. V.; Marsden, S. C.; Barnes, J. R.

    2015-12-01

    Context. The high spot coverage of young active stars is responsible for distortions of spectral lines that hamper the detection of close-in planets through radial velocity methods. Aims: We aim to progress towards more efficient exoplanet detection around active stars by optimizing the use of Doppler imaging in radial velocity measurements. Methods: We propose a simple method to simultaneously extract a brightness map and a set of orbital parameters through a tomographic inversion technique derived from classical Doppler mapping. Based on the maximum entropy principle, the underlying idea is to determine the set of orbital parameters that minimizes the information content of the resulting Doppler map. We carry out a set of numerical simulations to perform a preliminary assessment of the robustness of our method, using an actual Doppler map of the very active star HR 1099 to produce a realistic synthetic data set for various sets of orbital parameters of a single planet in a circular orbit. Results: Using a simulated time series of 50 line profiles affected by a peak-to-peak activity jitter of 2.5 km s-1, in most cases we are able to recover the radial velocity amplitude, orbital phase, and orbital period of an artificial planet down to a radial velocity semi-amplitude of the order of the radial velocity scatter due to the photon noise alone (about 50 m s-1 in our case). One noticeable exception occurs when the planetary orbit is close to co-rotation, in which case significant biases are observed in the reconstructed radial velocity amplitude, while the orbital period and phase remain robustly recovered. Conclusions: The present method constitutes a very simple way to extract orbital parameters from heavily distorted line profiles of active stars, when more classical radial velocity detection methods generally fail. It is easily adaptable to most existing Doppler imaging codes, paving the way towards a systematic search for close-in planets orbiting young, rapidly

  11. Comparison of cosmology and seabed acoustics measurements using statistical inference from maximum entropy

    NASA Astrophysics Data System (ADS)

    Knobles, David; Stotts, Steven; Sagers, Jason

    2012-03-01

    Why can one obtain from similar measurements a greater amount of information about cosmological parameters than seabed parameters in ocean waveguides? The cosmological measurements are in the form of a power spectrum constructed from spatial correlations of temperature fluctuations within the microwave background radiation. The seabed acoustic measurements are in the form of spatial correlations along the length of a spatial aperture. This study explores the above question from the perspective of posterior probability distributions obtained from maximizing a relative entropy functional. An answer is in part that the seabed in shallow ocean environments generally has large temporal and spatial inhomogeneities, whereas the early universe was a nearly homogeneous cosmological soup with small but important fluctuations. Acoustic propagation models used in shallow water acoustics generally do not capture spatial and temporal variability sufficiently well, which leads to model error dominating the statistical inference problem. This is not the case in cosmology. Further, the physics of the acoustic modes in cosmology is that of a standing wave with simple initial conditions, whereas for underwater acoustics it is a traveling wave in a strongly inhomogeneous bounded medium.

  12. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  13. Using the Maximum Entropy Principle as a Unifying Theory for Characterization and Sampling of Multi-scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2010-02-25

    [Report documentation page (SF 298) boilerplate omitted. Recoverable details: report titled "Using the Maximum Entropy Principle as a Unifying Theory for Characterization and Sampling of Multi-scaling Processes in Hydrometeorology", by Rafael L. Bras and Jingfeng Wang, dated 25 February 2010.]

  14. A Software Technology Transition Entropy Based Engineering Model

    DTIC Science & Technology

    2002-03-01

    [Garbled excerpt from the thesis front matter and table of contents. Recoverable details: the TechTx Entropy Feedback model is developed using Shannon's statistical approach to entropy, and the treatment of technology transition draws on Piaget (1977), quoting "instruments used as necessary between the subject and the object to be reached" (p. 72).]

  15. Location of Cu2+ in CHA zeolite investigated by X-ray diffraction using the Rietveld/maximum entropy method

    PubMed Central

    Andersen, Casper Welzel; Bremholm, Martin; Vennestrøm, Peter Nicolai Ravnborg; Blichfeld, Anders Bank; Lundegaard, Lars Fahl; Iversen, Bo Brummerstedt

    2014-01-01

    Accurate structural models of reaction centres in zeolite catalysts are a prerequisite for mechanistic studies and further improvements to the catalytic performance. The Rietveld/maximum entropy method is applied to synchrotron powder X-ray diffraction data on fully dehydrated CHA-type zeolites with and without loading of catalytically active Cu2+ for the selective catalytic reduction of NOx with NH3. The method identifies the known Cu2+ sites in the six-membered ring and a not previously observed site in the eight-membered ring. The sum of the refined Cu occupancies for these two sites matches the chemical analysis and thus all the Cu is accounted for. It is furthermore shown that approximately 80% of the Cu2+ is located in the new 8-ring site for an industrially relevant CHA zeolite with Si/Al = 15.5 and Cu/Al = 0.45. Density functional theory calculations are used to corroborate the positions and identity of the two Cu sites, leading to the most complete structural description of dehydrated silicoaluminate CHA loaded with catalytically active Cu2+ cations. PMID:25485118

  16. Location of Cu(2+) in CHA zeolite investigated by X-ray diffraction using the Rietveld/maximum entropy method.

    PubMed

    Andersen, Casper Welzel; Bremholm, Martin; Vennestrøm, Peter Nicolai Ravnborg; Blichfeld, Anders Bank; Lundegaard, Lars Fahl; Iversen, Bo Brummerstedt

    2014-11-01

    Accurate structural models of reaction centres in zeolite catalysts are a prerequisite for mechanistic studies and further improvements to the catalytic performance. The Rietveld/maximum entropy method is applied to synchrotron powder X-ray diffraction data on fully dehydrated CHA-type zeolites with and without loading of catalytically active Cu(2+) for the selective catalytic reduction of NOx with NH3. The method identifies the known Cu(2+) sites in the six-membered ring and a not previously observed site in the eight-membered ring. The sum of the refined Cu occupancies for these two sites matches the chemical analysis and thus all the Cu is accounted for. It is furthermore shown that approximately 80% of the Cu(2+) is located in the new 8-ring site for an industrially relevant CHA zeolite with Si/Al = 15.5 and Cu/Al = 0.45. Density functional theory calculations are used to corroborate the positions and identity of the two Cu sites, leading to the most complete structural description of dehydrated silicoaluminate CHA loaded with catalytically active Cu(2+) cations.

  17. Using maximum entropy to predict suitable habitat for the endangered dwarf wedgemussel in the Maryland Coastal Plain

    USGS Publications Warehouse

    Campbell, Cara; Hilderbrand, Robert H.

    2017-01-01

    Species distribution modelling can be useful for the conservation of rare and endangered species. Freshwater mussel declines have thinned species ranges producing spatially fragmented distributions across large areas. Spatial fragmentation in combination with a complex life history and heterogeneous environment makes predictive modelling difficult. A machine learning approach (maximum entropy) was used to model occurrences and suitable habitat for the federally endangered dwarf wedgemussel, Alasmidonta heterodon, in Maryland's Coastal Plain catchments. Landscape-scale predictors (e.g. land cover, land use, soil characteristics, geology, flow characteristics, and climate) were used to predict the suitability of individual stream segments for A. heterodon. The best model contained variables at three scales: minimum elevation (segment scale), percentage Tertiary deposits, low intensity development, and woody wetlands (sub-catchment), and percentage low intensity development, pasture/hay agriculture, and average depth to the water table (catchment). Despite a very small sample size owing to the rarity of A. heterodon, cross-validated prediction accuracy was 91%. Most predicted suitable segments occur in catchments not known to contain A. heterodon, which provides opportunities for new discoveries or population restoration. These model predictions can guide surveys toward the streams with the best chance of containing the species or, alternatively, away from those streams with little chance of containing A. heterodon. Developed reaches had low predicted suitability for A. heterodon in the Coastal Plain. Urban and exurban sprawl continues to modify stream ecosystems in the region, underscoring the need to preserve existing populations and to discover and protect new populations.
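
    A bare-bones sketch of the MaxEnt species-distribution idea used here: suitability is a Gibbs distribution over stream segments whose feature expectations match the means at occurrence sites. Real analyses use the dedicated Maxent software with regularization against overfitting; the predictors and data below are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def maxent_sdm(features, presence_idx):
        """MaxEnt habitat suitability: p(cell) ∝ exp(f(cell) . lam), with lam fit
        by maximizing the presence-cell log-likelihood (no regularization here)."""
        F = np.asarray(features, dtype=float)           # (n_cells, n_features)
        mu = F[presence_idx].mean(axis=0)               # empirical feature means

        def neg_loglik(lam):
            eta = F @ lam
            logZ = np.log(np.sum(np.exp(eta - eta.max()))) + eta.max()
            return logZ - mu @ lam                      # convex in lam

        lam = minimize(neg_loglik, np.zeros(F.shape[1]), method='BFGS').x
        p = np.exp(F @ lam)
        return p / p.sum()

    # Toy landscape: 200 stream segments, 3 standardized predictors
    rng = np.random.default_rng(2)
    F = rng.standard_normal((200, 3))
    presence = rng.choice(200, size=12, replace=False)  # small sample, as in the study
    suitability = maxent_sdm(F, presence)
    ```

    Because the fitted distribution only constrains feature means, it is the least committed model consistent with the occurrence data, which is what makes the approach workable at the very small sample sizes mentioned above.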

  18. THE LICK AGN MONITORING PROJECT: VELOCITY-DELAY MAPS FROM THE MAXIMUM-ENTROPY METHOD FOR Arp 151

    SciTech Connect

    Bentz, Misty C.; Barth, Aaron J.; Walsh, Jonelle L.; Horne, Keith; Bennert, Vardha Nicola; Treu, Tommaso; Canalizo, Gabriela; Filippenko, Alexei V.; Gates, Elinor L.; Malkan, Matthew A.; Minezaki, Takeo; Woo, Jong-Hak

    2010-09-01

    We present velocity-delay maps for optical H I, He I, and He II recombination lines in Arp 151, recovered by fitting a reverberation model to spectrophotometric monitoring data using the maximum-entropy method. H I response is detected over the range 0-15 days, with the response confined within the virial envelope. The Balmer-line maps have similar morphologies but exhibit radial stratification, with progressively longer delays for Hγ to Hβ to Hα. The He I and He II response is confined within 1-2 days. There is a deficit of prompt response in the Balmer-line cores but strong prompt response in the red wings. Comparison with simple models identifies two classes that reproduce these features: free-falling gas and a half-illuminated disk with a hot spot at small radius on the receding lune. Symmetrically illuminated models with gas orbiting in an inclined disk or an isotropic distribution of randomly inclined circular orbits can reproduce the virial structure but not the observed asymmetry. Radial outflows are also largely ruled out by the observed asymmetry. A warped-disk geometry provides a physically plausible mechanism for the asymmetric illumination and hot spot features. Simple estimates show that a disk in the broad-line region of Arp 151 could be unstable to warping induced by radiation pressure. Our results demonstrate the potential power of detailed modeling combined with monitoring campaigns at higher cadence to characterize the gas kinematics and physical processes that give rise to the broad emission lines in active galactic nuclei.

  19. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  20. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
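
    The core quantities, the steady-state distribution of the transition matrix and its entropy, are straightforward to compute for any such model. The three-site matrix below is invented for illustration, standing in for the paper's autopsy-derived anatomical-site matrices.

    ```python
    import numpy as np

    def steady_state(P):
        """Stationary distribution of a row-stochastic Markov matrix
        (the left eigenvector for eigenvalue 1)."""
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        return pi / pi.sum()

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))    # bits

    # Toy 3-site progression model: primary, lung, liver (illustrative numbers)
    P = np.array([[0.2, 0.5, 0.3],
                  [0.1, 0.6, 0.3],
                  [0.1, 0.3, 0.6]])
    pi = steady_state(P)
    print(pi, entropy(pi))   # steady state stands in for an autopsy distribution
    ```

    In the paper's setting the steady state is matched to the observed autopsy distribution, so the entropy of that distribution becomes the complexity score used to rank the 12 cancers.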

  1. Oseen vortex as a maximum entropy state of a two dimensional fluid

    NASA Astrophysics Data System (ADS)

    Montgomery, D. C.; Matthaeus, W. H.

    2011-07-01

    During the last four decades, a considerable number of investigations have been carried out into the evolution of turbulence in two-dimensional Navier-Stokes flows. Much of the information has come from numerical solution of the (otherwise insoluble) dynamical equations and thus has necessarily required some kind of boundary conditions: spatially periodic, no-slip, stress-free, or free-slip. The theoretical framework that has proved to be of the most predictive value has been one employing an entropy functional (sometimes called the Boltzmann entropy) whose maximization has been correlated well in several cases with the late-time configurations into which the computed turbulence has relaxed. More recently, flow in the unbounded domain has been addressed by Gallay and Wayne, who have shown a late-time relaxation to the classical Oseen vortex (also sometimes called the Lamb-Oseen vortex) for situations involving a finite net circulation or non-zero total integrated vorticity. Their proof involves powerful but difficult mathematics that might be thought to be beyond the preparation of many practicing fluid dynamicists. The purpose of this present paper is to remark that relaxation to the Oseen vortex can also be predicted in the more intuitive framework that has previously proved useful in predicting computational results with boundary conditions: that of an appropriate entropy maximization. The results make no assumption about the size of the Reynolds numbers, as long as they are finite, and the viscosity is treated as finite throughout.

  2. Coupling diffusion and maximum entropy models to estimate thermal inertia

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  3. Adaptive Statistical Language Modeling; A Maximum Entropy Approach

    DTIC Science & Technology

    1994-04-19

    [Excerpt from Appendix C, "Best Triggers by the MI-3g Measure": lists of trigger words for vocabulary items, e.g. ACCUTANE is triggered by ACCUTANE, ACNE, DEFECTS, HOFFMANN, ROCHE, BIRTH, DRUG'S, DRUG, PREGNANT; ACME by ACME, STEEL; ACNE by ACNE, RETIN, DRUG, ACCUTANE, HOFFMANN, ROCHE, PRESCRIPTION, DRUG'S, SKIN.]

  4. The Study on Business Growth Process Management Entropy Model

    NASA Astrophysics Data System (ADS)

    Jing, Duan

    Enterprise growth is a dynamic process, and the factors of enterprise development change all the time; for this reason, it is difficult to study the management entropy of growth-oriented enterprises from a static view. This paper characterizes the stages of business enterprise growth and puts forward a measuring and calculating model, based on enterprise management entropy, for business scale, enterprise capability and development speed. According to the entropy measured by the model, an enterprise can adopt transformative measures at the critical moment, enabling it to avoid crises and take the road of sustainable development.

  5. Maximum entropy approach for batch-arrival queue under N policy with an un-reliable server and single vacation

    NASA Astrophysics Data System (ADS)

    Ke, Jau-Chuan; Lin, Chuen-Horng

    2008-11-01

    We consider the M[x]/G/1 queueing system, in which the server operates under an N policy with a single vacation. As soon as the system becomes empty, the server leaves for a vacation of random length V. When he returns from the vacation and the system size is greater than or equal to a threshold value N, he starts to serve the waiting customers. If he finds fewer customers than N, he waits in the system until the system size reaches or exceeds N. The server is subject to breakdowns according to a Poisson process and his repair time obeys an arbitrary distribution. We use the maximum entropy principle to derive approximate formulas for the steady-state probability distributions of the queue length. We perform a comparative analysis between the approximate results and established exact results for various batch-size, vacation-time, service-time and repair-time distributions. We demonstrate that the maximum entropy approach is efficient enough for practical purposes and is a feasible method for approximating the solution of complex queueing systems.
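
    The flavor of the approach can be seen in the simplest case: with only the mean queue length as a constraint, the maximum entropy queue-length distribution is geometric. Ke and Lin's formulas carry additional constraints (server idle, vacation, breakdown and repair states) but retain this exponential-family form; the sketch below is the one-constraint toy case, not their full solution.

    ```python
    import numpy as np

    def maxent_queue_dist(mean_L, n_max=200):
        """MaxEnt queue-length pmf given only the mean: p(n) ∝ x**n (geometric),
        with x chosen so the mean matches. Truncated at n_max for convenience."""
        x = mean_L / (1.0 + mean_L)      # geometric on n = 0, 1, ... has mean x/(1-x)
        n = np.arange(n_max)
        return (1.0 - x) * x ** n

    p = maxent_queue_dist(mean_L=4.0)
    print(p @ np.arange(p.size))         # ≈ 4.0
    ```

    Each additional expectation constraint multiplies the pmf by another exponential factor with its own Lagrange multiplier, which is why the full N-policy/vacation formulas stay analytically tractable.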

  6. Estimating Prior Model Probabilities Using an Entropy Principle

    NASA Astrophysics Data System (ADS)

    Ye, M.; Meyer, P. D.; Neuman, S. P.; Pohlmann, K.

    2004-12-01

    Considering conceptual model uncertainty is an important process in environmental uncertainty/risk analyses. Bayesian Model Averaging (BMA) (Hoeting et al., 1999) and its Maximum Likelihood version, MLBMA (Neuman, 2003), jointly assess the predictive uncertainty of competing alternative models to avoid the bias and underestimation of uncertainty caused by relying on one single model. These methods provide the posterior distribution (or, equivalently, leading moments) of quantities of interest for decision-making. One important step of these methods is to specify prior probabilities of alternative models for the calculation of posterior model probabilities. This problem, however, has not been satisfactorily resolved, and equally likely prior model probabilities are usually accepted as a neutral choice. Ye et al. (2004) have shown that whereas using equally likely prior model probabilities has led to acceptable geostatistical estimates of log air permeability data from fractured unsaturated tuff at the Apache Leap Research Site (ALRS) in Arizona, identifying more accurate prior probabilities can improve these estimates. In this paper we present a new methodology to evaluate prior model probabilities by maximizing Shannon's entropy with restrictions postulated a priori based on model plausibility relationships. It yields optimum prior model probabilities conditional on the prior information used to postulate the restrictions. The restrictions and corresponding prior probabilities can be modified as more information becomes available. The proposed method is relatively easy to use in practice as it is generally less difficult for experts to postulate relationships between models than to specify numerical prior model probability values. Log score, mean square prediction error (MSPE) and mean absolute predictive error (MAPE) criteria consistently show that applying our new method to the ALRS data reduces geostatistical estimation errors provided relationships between models are
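
    A small sketch of the proposed construction under assumed plausibility relationships (the four models and two inequality relations below are invented, not those postulated for the ALRS data): maximize Shannon entropy over the prior model probabilities subject to the expert-postulated constraints.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return np.sum(p * np.log(p))    # minimize negative entropy

    # Four alternative models; expert-postulated relationships (illustrative):
    #   p0 >= 2*p1  (model 0 at least twice as plausible as model 1)
    #   p2 >= p3    (model 2 at least as plausible as model 3)
    cons = [{'type': 'eq',   'fun': lambda p: p.sum() - 1.0},
            {'type': 'ineq', 'fun': lambda p: p[0] - 2.0 * p[1]},
            {'type': 'ineq', 'fun': lambda p: p[2] - p[3]}]

    res = minimize(neg_entropy, np.full(4, 0.25), bounds=[(0, 1)] * 4,
                   constraints=cons, method='SLSQP')
    print(res.x)   # maximum entropy priors consistent with the postulated relations
    ```

    Non-binding relations leave the corresponding probabilities at their uniform values, so the priors depart from the neutral choice only as far as the postulated relationships force them to, which is the point of the method.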

  7. Maximum caliber inference and the stochastic Ising model.

    PubMed

    Cafaro, Carlo; Ali, Sean Alan

    2016-11-01

    We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.

  8. Maximum caliber inference and the stochastic Ising model

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo; Ali, Sean Alan

    2016-11-01

    We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.

  9. Maximum Entropy and the Inference of Pattern and Dynamics in Ecology

    NASA Astrophysics Data System (ADS)

    Harte, John

    Constrained maximization of information entropy yields least biased probability distributions. From physics to economics, from forensics to medicine, this powerful inference method has enriched science. Here I apply this method to ecology, using constraints derived from ratios of ecological state variables, and infer functional forms for the ecological metrics describing patterns in the abundance, distribution, and energetics of species. I show that a static version of the theory describes remarkably well observed patterns in quasi-steady-state ecosystems across a wide range of habitats, spatial scales, and taxonomic groups. A systematic pattern of failure is observed, however, for ecosystems either losing species following disturbance or diversifying in evolutionary time; I show that this problem may be remedied with a stochastic-dynamic extension of the theory.

  10. Irreversible entropy model for damage diagnosis in resistors

    SciTech Connect

    Cuadras, Angel Crisóstomo, Javier; Ovejas, Victoria J.; Quilez, Marcos

    2015-10-28

    We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage because they are essentially positive by virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4–8 W, well above the 0.25 W nominal power, in order to initiate failure. Entropy was inferred from the added power and temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
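
    The inference step described, accumulating irreversible entropy as dissipated power over absolute temperature, amounts to a single integral; the pulse amplitude and heating curve below are illustrative, not the measured traces.

    ```python
    import numpy as np

    def irreversible_entropy(t, power, temperature):
        """S_irr = ∫ P(t) / T(t) dt, with T in kelvin (trapezoidal integration)."""
        return np.trapz(power / temperature, t)

    # Illustrative stress test: 6 W applied while the resistor heats 300 K -> 450 K
    t = np.linspace(0.0, 120.0, 600)                  # s
    P = np.full_like(t, 6.0)                          # W
    T = 300.0 + 150.0 * (1.0 - np.exp(-t / 30.0))     # K
    print(irreversible_entropy(t, P, T), "J/K")
    ```

    Because P/T is always positive, the accumulated entropy grows monotonically regardless of whether the degradation raises or lowers the resistance, which is the advantage over resistance-based damage metrics noted above.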

  11. Poisson-gap sampling and forward maximum entropy reconstruction for enhancing the resolution and sensitivity of protein NMR data.

    PubMed

    Hyberts, Sven G; Takeuchi, Koh; Wagner, Gerhard

    2010-02-24

    The Fourier transform has been the gold standard for transforming data from the time domain to the frequency domain in many spectroscopic methods, including NMR spectroscopy. While reliable, it has the drawback that it requires a grid of uniformly sampled data points, which is not efficient for decaying signals, and it also suffers from artifacts when dealing with nondecaying signals. Over several decades, many alternative sampling and transformation schemes have been proposed. Their common problem is that relative signal amplitudes are not well-preserved. Here we demonstrate the superior performance of a sine-weighted Poisson-gap distribution sparse-sampling scheme combined with forward maximum entropy (FM) reconstruction. While the relative signal amplitudes are well-preserved, we also find that the signal-to-noise ratio is enhanced up to 4-fold per unit of data acquisition time relative to traditional linear sampling.
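
    A simplified sketch of a sine-weighted Poisson-gap schedule, in the spirit of the scheme described (not the authors' reference implementation; the budget-adjustment loop in particular is a crude stand-in): gaps between sampled increments are Poisson-distributed with a sine-modulated mean, so sampling is denser early in the decaying signal.

    ```python
    import numpy as np

    def poisson_gap_schedule(n_total, n_keep, seed=0):
        """Sine-weighted Poisson-gap sparse sampling (a simplified sketch)."""
        rng = np.random.default_rng(seed)
        w = 2.0 * (n_total / n_keep - 1.0)       # initial average-gap weight
        for _ in range(200):                     # tune w until the budget is met
            points, i = [], 0
            while i < n_total:
                points.append(i)
                # small gaps early (strong signal), larger gaps late
                gap = rng.poisson(w * np.sin(np.pi * i / (2.0 * n_total)))
                i += 1 + gap
            if len(points) == n_keep:
                return np.array(points)
            w *= len(points) / n_keep            # too many points -> larger gaps
        return np.array(points)

    sched = poisson_gap_schedule(n_total=512, n_keep=128)
    ```

    The Poisson draw avoids the large accidental holes that plain random sampling produces, which is one reason gap-based schedules pair well with maximum entropy reconstruction.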

  12. Maximum entropy principle for predicting response to multiple-drug exposure in bacteria and human cancer cells

    NASA Astrophysics Data System (ADS)

    Wood, Kevin; Nishida, Satoshi; Sontag, Eduardo; Cluzel, Philippe

    2012-02-01

    Drugs are commonly used in combinations larger than two for treating infectious disease. However, it is generally impossible to infer the net effect of a multi-drug combination on cell growth directly from the effects of the individual drugs. We combined experiments with maximum entropy methods to develop a mechanism-independent framework for calculating the response of both bacteria and human cancer cells to a large variety of drug combinations composed of anti-microbial or anti-cancer drugs. We experimentally show that the cellular responses to drug pairs are sufficient to infer the effects of larger drug combinations in the gram-negative bacterium Escherichia coli, the gram-positive bacterium Staphylococcus aureus, and human breast cancer and melanoma cell lines. Remarkably, the accurate predictions of this framework suggest that the multi-drug response obeys statistical rather than chemical laws for combinations larger than two. Consequently, these findings offer a new strategy for the rational design of therapies using large drug combinations.

  13. A generalized model on the evaluation of entropy and entropy of mixing of liquid Na-Sn alloys

    NASA Astrophysics Data System (ADS)

    Satpathy, Alok; Sengupta, Saumendu

    2017-01-01

    The recently proposed theory of the entropy of mixing of structurally inhomogeneous binary liquid alloys of alkali metals and group-IV elements is applied successfully to the liquid Na-Sn alloy. This alloy exhibits chemical short-range ordering (CSRO), i.e. partially salt-like characteristics, owing to strong tendencies toward compound formation in the solid as well as the liquid state. The generalized model for the entropy of a charged-hard-sphere mixture of arbitrary charge and size is therefore employed to evaluate entropies of mixing, treating the sample as a partial charge-transfer system. The computed entropies of mixing are in excellent agreement with the experimental data.

  14. Entanglement entropy of Wilson loops: Holography and matrix models

    NASA Astrophysics Data System (ADS)

    Gentle, Simon A.; Gutperle, Michael

    2014-09-01

    A half-Bogomol'nyi-Prasad-Sommerfeld circular Wilson loop in N=4 SU(N) supersymmetric Yang-Mills theory in an arbitrary representation is described by a Gaussian matrix model with a particular insertion. The additional entanglement entropy of a spherical region in the presence of such a loop was recently computed by Lewkowycz and Maldacena using exact matrix model results. In this paper we utilize the supergravity solutions that are dual to such Wilson loops in a representation with order N² boxes to calculate this entropy holographically. Employing the matrix model results of Gomis, Matsuura, Okuda and Trancanelli we express this holographic entanglement entropy in a form that can be compared with the calculation of Lewkowycz and Maldacena. We find complete agreement between the matrix model and holographic calculations.

  15. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    SciTech Connect

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A.W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  16. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys.

    PubMed

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A W; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K; Dahmen, Karin A

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly-equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots' healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  17. Thermospheric density model biases at sunspot maximum

    NASA Astrophysics Data System (ADS)

    Pardini, Carmen; Moe, Kenneth; Anselmo, Luciano

    A previous study (Pardini C., Anselmo L., Moe K., Moe M.M., Drag and energy accommodation coefficients during sunspot maximum, Adv. Space Res., 2009, doi:10.1016/j.asr.2009.08.034), including ten satellites with altitudes between 200 and 630 km, has yielded values for the energy accommodation coefficient as well as for the physical drag coefficient as a function of height during solar maximum conditions. The results are consistent with the altitude and solar cycle variation of atomic oxygen, which is known to be adsorbed on satellite surfaces, affecting both the energy accommodation and the angular distribution of the reemitted molecules. Taking advantage of these results, an investigation of thermospheric density model biases at sunspot maximum became possible using the recently upgraded CDFIT software code. Specifically developed at ISTI/CNR, CDFIT is used to fit the observed satellite semi-major axis decay. All the relevant orbital perturbations are considered and several atmospheric density models have been implemented over the years, including JR-71, MSISE-90, NRLMSISE-00, GOST2004 and JB2006. For this analysis we reused the satellites Cosmos 2265 and Cosmos 2332 (altitude: 275 km), SNOE (altitude: 480 km), and Clementine (altitude: 630 km), spanning the last solar cycle maximum (October 1999 – January 2003). For each satellite, and for each of the above mentioned atmospheric density models, the fitted drag coefficient was obtained with CDFIT, using the observed orbital decay, and then compared with the corresponding physical drag coefficient estimated in the previous study (Pardini et al., 2009). It was consequently possible to derive the average density biases of the thermospheric models during the considered time span. The average results obtained for the last sunspot maximum can be summarized as follows (the sign "+" means that the atmospheric density is overestimated by the model, while the sign "-" means that the atmospheric density is underestimated
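
    The bias-extraction step described here reduces to a simple ratio. Since the observed orbital decay constrains the product of the drag coefficient and the density, comparing the fitted and physical drag coefficients yields the model density bias. Below is a minimal sketch of that arithmetic with illustrative numbers, not values from the study.

      def density_bias_percent(cd_fitted, cd_physical):
          """Orbital decay fixes the product Cd * rho, so
          rho_model / rho_true = Cd_physical / Cd_fitted.
          A positive return value means the model overestimates density."""
          return 100.0 * (cd_physical / cd_fitted - 1.0)

      # hypothetical example: a fitted Cd below the physical value implies
      # the density model is biased high at this altitude
      print(density_bias_percent(cd_fitted=2.0, cd_physical=2.2))  # +10%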

  18. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
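
    The marginalization step mentioned here is a direct sum over the reconstructed joint array. A minimal numpy sketch follows; the joint distribution used is a random stand-in, not MEM output.

      import numpy as np

      # Stand-in joint distribution P(F, E) over photon counts F and apparent
      # FRET efficiency E; a real P would come from the MEM reconstruction.
      rng = np.random.default_rng(1)
      P = rng.random((60, 40))
      P /= P.sum()

      P_F = P.sum(axis=1)  # marginal: total fluorescence photon distribution
      P_E = P.sum(axis=0)  # marginal: apparent FRET efficiency distribution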

  19. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, may also be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually the result of a specific "coarse graining" of this linear evolution, in which one variable is integrated away after the collision. This coarse graining is of the same type as the real-space renormalization group transformation and leads to additional entropy growth. The combination of these two factors produces the required result, which is obtained only by means of information-theoretic inequalities.

  20. An improved model for the transit entropy of monatomic liquids

    SciTech Connect

    Wallace, Duane C; Chisolm, Eric D; Bock, Nicolas

    2009-01-01

    In the original formulation of V-T theory for monatomic liquid dynamics, the transit contribution to entropy was taken to be a universal constant, calibrated to the constant-volume entropy of melting. This model suffers two deficiencies: (a) it does not account for experimental entropy differences of ±2% among elemental liquids, and (b) it implies a value of zero for the transit contribution to internal energy. The purpose of this paper is to correct these deficiencies. To this end, the V-T equation for entropy is fitted to an overall accuracy of ±0.1% to the available experimental high temperature entropy data for elemental liquids. The theory contains two nuclear motion contributions: (a) the dominant vibrational contribution S_vib(T/θ_0), where T is temperature and θ_0 is the vibrational characteristic temperature, and (b) the transit contribution S_tr(T/θ_tr), where θ_tr is a scaling temperature for each liquid. The appearance of a common functional form of S_tr for all the liquids studied is a property of the experimental data, when analyzed via the V-T formula. The resulting S_tr implies the correct transit contribution to internal energy. The theoretical entropy of melting is derived, in a single formula applying to normal and anomalous melting alike. An ab initio calculation of θ_0, based on density functional theory, is reported for liquid Na and Cu. Comparison of these calculations with the above analysis of experimental entropy data provides verification of V-T theory. In view of the present results, techniques currently being applied in ab initio simulations of liquid properties can be employed to advantage in the further testing and development of V-T theory.

  1. Entanglement entropy of fermionic quadratic band touching model

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Cho, Gil Young; Fradkin, Eduardo

    2014-03-01

    The entanglement entropy has been proven to be a useful tool to diagnose and characterize strongly correlated systems such as topologically ordered phases and some critical points. Motivated by these successes, we study the entanglement entropy (EE) of a fermionic quadratic band touching model in (2+1) dimensions. This is a fermionic "spinor" model with a finite density of states (DOS) at k=0 and infinitesimal instabilities. Calculation of the two-point correlation functions shows that a Dirac fermion model and the quadratic band touching model have asymptotically identical behavior in the long-distance limit. This implies that the EE of the quadratic band touching model also obeys an area law, as for the Dirac fermion. This contradicts the expectation that dense Fermi systems with a finite DOS should exhibit L log L violations of the area law of entanglement entropy (L is the length of the boundary of the subregion), by analogy with the Fermi surface. We performed numerical calculations of entanglement entropies on a torus for lattice models of the quadratic band touching point and the Dirac fermion to confirm this. The numerical calculations show that the EE in both cases satisfies the area law. We further verify this result by an analytic calculation on the torus geometry. This work was supported in part by NSF grant DMR-1064319.
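
    Lattice calculations of this kind are commonly done with the free-fermion correlation-matrix method of Peschel; the sketch below shows that standard technique, not the authors' specific code. One restricts the correlation matrix C_ij = <c_i† c_j> to the subregion, diagonalizes it, and sums the binary entropies of its eigenvalues.

      import numpy as np

      def free_fermion_entanglement_entropy(C, region):
          """Entanglement entropy of a free-fermion state via Peschel's
          correlation-matrix method: restrict C to the subregion sites,
          diagonalize, and sum binary entropies of the eigenvalues."""
          Cr = C[np.ix_(region, region)]          # subregion block of C
          nu = np.linalg.eigvalsh(Cr)             # occupation eigenvalues
          nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]  # drop trivial modes
          return float(-(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)).sum())

      # usage: C is the ground-state correlation matrix of a lattice model;
      # `region` is a list of site indices defining the entanglement cut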

  2. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-12-31

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.

  3. Existence of the Entropy Solution for a Viscoelastic Model

    NASA Astrophysics Data System (ADS)

    Zhu, Changjiang

    1998-06-01

    In this paper, we consider the Cauchy problem for a viscoelastic model with relaxation, u_t + σ_x = 0, (σ − f(u))_t + (1/δ)(σ − μf(u)) = 0, with discontinuous, large initial data, where 0 < μ < 1 and δ > 0 are constants. When the system is nonstrictly hyperbolic, under the additional assumption v_{0x} ∈ L^∞, the system is reduced to an inhomogeneous scalar balance law by employing the special form of the system itself. After introducing a definition of entropy solutions to the system, we prove the existence, uniqueness, and continuous dependence of the global entropy solution for the system. When the system is strictly hyperbolic, some special entropy pairs of the Lax type are constructed, in which the progression terms are functions of a single variable, and the necessary estimates for the major terms are obtained by using the theory of singular perturbation of ordinary differential equations. The special entropy pairs are used to prove the existence of global entropy solutions for the corresponding Cauchy problem by applying the method of compensated compactness.

  4. Fluctuations and entropy in models of quantum optical resonance

    NASA Astrophysics Data System (ADS)

    Phoenix, S. J. D.; Knight, P. L.

    1988-09-01

    We use variances, entropy, and the Shannon entropy to analyse the fluctuations and quantum evolution of various simple models of quantum optical resonance. We discuss at length the properties of the single-mode radiation field coupled to a single two-level atom, and then extend our analysis to describe the micromaser in which a cavity mode is repeatedly pumped by a succession of atoms passing through the cavity. We also discuss the fluctuations in the single-mode laser theory of Scully and Lamb.

  5. Maximum-entropy large-scale structures of Boolean networks optimized for criticality

    NASA Astrophysics Data System (ADS)

    Möller, Marco; Peixoto, Tiago P.

    2015-04-01

    We construct statistical ensembles of modular Boolean networks that are constrained to lie at the critical line between frozen and chaotic dynamic regimes. The ensembles are maximally random given the imposed constraints, and thus represent null models of critical networks. By varying the network density and the entropic cost associated with biased Boolean functions, the ensembles undergo several phase transitions. The observed structures range from fully random to several ordered ones, including a prominent core-periphery-like structure, and an 'attenuated' two-group structure, where the network is divided in two groups of nodes, and one of them has Boolean functions with very low sensitivity. This shows that such simple large-scale structures are the most likely to occur when optimizing for criticality, in the absence of any other constraint or competing optimization criteria.

  6. Factor Analysis of Wildfire and Risk Area Estimation in Korean Peninsula Using Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Kim, Teayeon; Lim, Chul-Hee; Lee, Woo-Kyun; Kim, YouSeung; Heo, Seongbong; Cha, Sung Eun; Kim, Seajin

    2016-04-01

    The number of wildfires, and the accompanying human injuries and physical damage, has increased with more frequent droughts. Korea in particular experienced severe drought this year, and a number of wildfires followed. We used the MaxEnt model to identify the major environmental factors for wildfire, and RCP scenarios to predict future wildfire risk areas. In this study, environmental variables including topographic, anthropogenic, and meteorological data were used to identify the contributing variables of wildfire in South and North Korea, which were then compared. As occurrence data, we used MODIS fire data after verification. In North Korea, the AUC (Area Under the ROC Curve) value was 0.890, high enough to explain the distribution of wildfires. South Korea had a lower AUC value than North Korea and a high mean standard deviation, meaning that fires there are harder to predict with the same environmental variables. The AUC value for South Korea is expected to improve with additional environmental variables such as distance from trails and wildfire management systems. For instance, fires occurring within the DMZ (demilitarized zone, a 4 km boundary along the 38th parallel) have a decisive influence on fire risk areas in South Korea, but not in North Korea. The contribution of each environmental variable was more evenly distributed in North Korea than in South Korea. This means South Korea is dependent on a few particular variables, whereas North Korea can be explained by a number of variables with evenly distributed contributions. Although the AUC value and standard deviation for South Korea were not high enough to predict wildfire, the result carries significant meaning: by examining the response curves, it identifies which environmental variables carry great weight in scientific and social terms. We also produced a future wildfire risk-area map for the whole Korean peninsula using the same model. In all four RCP scenarios, it was found that severe climate change would shift wildfire risk areas northward. Especially North

  7. Relative entropy and covariance type constraints yielding ARMA models

    NASA Astrophysics Data System (ADS)

    Girardin, Valérie

    2001-05-01

    We consider zero-mean, weakly stationary multidimensional scalar time series. We determine the form of the spectral density which minimizes a relative entropy under trigonometric moment constraints, such as covariance, impulse-response or cepstral coefficients. This often yields autoregressive moving average models, giving one more justification for their widespread use.

  8. A Unified Theory of Turbulence: Maximum Entropy Increase Due To Turbulent Dissipation In Fluid Systems From Laboratory-scale Turbulence To Global-scale Circulations

    NASA Astrophysics Data System (ADS)

    Ozawa, Hisashi; Shimokawa, Shinya; Sakuma, Hirofumi

    Turbulence is ubiquitous in nature, yet remains an enigma in many respects. Here we investigate dissipative properties of turbulence so as to find out a statistical "law" of turbulence. Two general expressions are derived for a rate of entropy increase due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is found with these equations that phenomenological properties of turbulence such as Malkus's suggestion on maximum heat transport in thermal convection as well as Busse's suggestion on maximum momentum transport in shear turbulence can rigorously be explained by a unique state in which the rate of entropy increase due to the turbulent dissipation is at a maximum (dS/dt = Max.). It is also shown that the same state corresponds to the maximum entropy climate suggested by Paltridge. The tendency to increase the rate of entropy increase has also been confirmed by our recent GCM experiments. These results suggest the existence of a universal law that manifests itself in the long-term statistics of turbulent fluid systems from laboratory-scale turbulence to planetary-scale circulations. Ref.) Ozawa, H., Shimokawa, S., and Sakuma, H., Phys. Rev. E 64, 026303, 2001.

  9. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    PubMed

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
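
    For concreteness, the following is a minimal fuzzy sample entropy sketch in the spirit of the n-order similarity entropy described here; parameter names and defaults are ours, not from the paper. A fuzzy exponential kernel exp(−dⁿ/r) replaces the hard tolerance of classical sample entropy.

      import numpy as np

      def fuzzy_similarity_entropy(x, m=2, r=0.2, n=2):
          """Fuzzy sample entropy sketch: template vectors of length m are
          compared with a fuzzy (exponential) similarity kernel instead of
          a hard tolerance; n is the order of the kernel. O(N^2) memory,
          intended for short series."""
          x = np.asarray(x, float)
          r = r * x.std()
          def phi(m):
              # all overlapping templates of length m, baseline-removed
              tmpl = np.array([x[i:i + m] for i in range(len(x) - m)])
              tmpl = tmpl - tmpl.mean(axis=1, keepdims=True)
              # Chebyshev distances between all template pairs
              d = np.abs(tmpl[:, None, :] - tmpl[None, :, :]).max(axis=2)
              sim = np.exp(-(d ** n) / r)
              iu = np.triu_indices(len(tmpl), k=1)  # exclude self-matches
              return sim[iu].mean()
          return -np.log(phi(m + 1) / phi(m))

      # usage: larger values indicate a less regular (more complex) series
      rng = np.random.default_rng(0)
      print(fuzzy_similarity_entropy(rng.standard_normal(500)))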

  10. Stable discretization of the Boltzmann equation based on spherical harmonics, box integration, and a maximum entropy dissipation principle

    NASA Astrophysics Data System (ADS)

    Jungemann, C.; Pham, A. T.; Meinerzhagen, B.; Ringhofer, C.; Bollhöfer, M.

    2006-07-01

    The Boltzmann equation for transport in semiconductors is projected onto spherical harmonics in such a way that the resultant balance equations for the coefficients of the distribution function times the generalized density of states can be discretized over energy and real spaces by box integration. This ensures exact current continuity for the discrete equations. Spurious oscillations of the distribution function are suppressed by stabilization based on a maximum entropy dissipation principle avoiding the H transformation. The derived formulation can be used on arbitrary grids as long as box integration is possible. The approach works not only with analytical bands but also with full band structures in the case of holes. Results are presented for holes in bulk silicon based on a full band structure and electrons in a Si NPN bipolar junction transistor. The convergence of the spherical harmonics expansion is shown for a device, and it is found that the quasiballistic transport in nanoscale devices requires an expansion of considerably higher order than the usual first one. The stability of the discretization is demonstrated for a range of grid spacings in the real space and bias points which produce huge gradients in the electron density and electric field. It is shown that the resultant large linear system of equations can be solved in a memory efficient way by the numerically robust package ILUPACK.

  11. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun- Gui-Guang region, Mainland China

    NASA Astrophysics Data System (ADS)

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi

    2016-11-01

    Precipitation interpolation has been a hot area of research for many years, and it is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous region), Mainland China, was considered for spatial interpolation. A multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method, which incorporates both hard and soft data through a probability density function, was precise. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.

  12. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    PubMed Central

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; Yeh, Jien-Wei; Antonaglia, James; Brinkman, Braden A. W.; LeBlanc, Michael; Xie, Xie; Chen, Shuying; Liaw, Peter K.; Dahmen, Karin A.

    2015-01-01

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly-equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design. PMID:26593056

  13. Experiments and Model for Serration Statistics in Low-Entropy, Medium-Entropy, and High-Entropy Alloys

    DOE PAGES

    Carroll, Robert; Lee, Chi; Tsai, Che-Wei; ...

    2015-11-23

    High-entropy alloys (HEAs) are new alloys that contain five or more elements in roughly equal proportion. We present new experiments and theory on the deformation behavior of HEAs under slow stretching (straining), and observe differences, compared to conventional alloys with fewer elements. For a specific range of temperatures and strain-rates, HEAs deform in a jerky way, with sudden slips that make it difficult to precisely control the deformation. An analytic model explains these slips as avalanches of slipping weak spots and predicts the observed slip statistics, stress-strain curves, and their dependence on temperature, strain-rate, and material composition. The ratio of the weak spots’ healing rate to the strain-rate is the main tuning parameter, reminiscent of the Portevin-LeChatellier effect and time-temperature superposition in polymers. Our model predictions agree with the experimental results. The proposed widely-applicable deformation mechanism is useful for deformation control and alloy design.

  14. Does the soil's effective hydraulic conductivity adapt in order to obey the Maximum Entropy Production principle? A lab experiment

    NASA Astrophysics Data System (ADS)

    Westhoff, Martijn; Zehe, Erwin; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Dewals, Benjamin

    2015-04-01

    The Maximum Entropy Production (MEP) principle is a conjecture that a medium organizes itself in such a way that maximum power is extracted from the gradient driving a flux (power being a flux times its driving gradient). This maximum power is also known as the Carnot limit. It has already been shown that the atmosphere operates close to this Carnot limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state close to the Carnot limit, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells (e.g. wind). The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts itself in such a way that it operates close to the Carnot limit. The big difference between the atmosphere and the soil is the way each adapts its resistance. The soil's hydraulic conductivity is changed either by weathering processes, which are very slow, or by the creation of preferential flow paths. In this study the latter process is simulated in a lab experiment, where we focus on the preferential flow paths created by piping: the backward erosion of sand particles subject to a large pressure gradient. Since this is a relatively fast process, it is suitable for testing in the lab. In the lab setup a horizontal sand bed connects two reservoirs that both drain freely at a level high enough to keep the sand bed saturated at all times. By adding water to only one reservoir, a horizontal pressure gradient is maintained. If the flow resistance is small, a large gradient develops, leading to piping. When pipes are being formed, the effective flow resistance decreases; the flow through the sand bed increases and the pressure gradient decreases. At a certain point, the flow velocity is small enough to stop the pipes from growing any further. In this steady state, the effective flow resistance of
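
    The expected steady state can be illustrated with the electrical analogue of maximum power transfer, under the (hedged) assumption that the upstream supply acts like a fixed series resistance: power dissipated in the adaptable bed peaks when its resistance matches the fixed one. This is an illustration of the principle, not the experiment's model.

      import numpy as np

      # Electrical analogue of the MEP / maximum-power idea: a fixed
      # "source" resistance R0 in series with the adaptable sand-bed
      # resistance R, driven by a fixed head H. Power dissipated in the
      # bed, P = H^2 * R / (R0 + R)^2, peaks at R = R0.
      H, R0 = 1.0, 1.0
      R = np.linspace(0.01, 5, 500)
      P = H**2 * R / (R0 + R)**2
      print("optimal R/R0 =", R[np.argmax(P)] / R0)  # ~1.0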

  15. Probabilistic modelling of flood events using the entropy copula

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zheng, Qian

    2016-11-01

    The estimation of flood frequency is vital for flood control strategies and hydraulic structure design. Generating synthetic flood events according to the statistical properties of observations is one plausible method of analyzing flood frequency. Due to the statistical dependence among the flood event variables (i.e. the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, the copula method has been widely used for constructing multivariate dependence structures; however, the copula family must be chosen before application, and the choice is sometimes rather subjective. The entropy copula, a new copula family employed in this research, offers a way to avoid this relatively subjective choice by combining the theories of copula and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling of the flood events at two hydrological gauges, and a comparison of accuracy with the popular copulas was made. The Gibbs sampling technique was applied for trivariate flood event simulation in order to mitigate the calculation difficulties of extending to three dimensions directly. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.

  16. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  17. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  18. Inflation via logarithmic entropy-corrected holographic dark energy model

    NASA Astrophysics Data System (ADS)

    Darabi, F.; Felegary, F.; Setare, M. R.

    2016-12-01

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation with the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and the Hubble parameter through one parameter of the holographic dark energy and two new parameters of the logarithmic corrections.

  19. Entropy corrected holographic dark energy models in modified gravity

    NASA Astrophysics Data System (ADS)

    Jawad, Abdul; Azhar, Nadeem; Rani, Shamaila

    We consider the power-law and the entropy-corrected holographic dark energy (HDE) models with the Hubble horizon in the dynamical Chern-Simons modified gravity. We explore various cosmological parameters and planes in this framework. The Hubble parameter lies within the consistent range at the present and later epochs for both entropy-corrected models. The deceleration parameter explains the accelerated expansion of the universe. The equation of state (EoS) parameter corresponds to quintessence and the cold dark matter (ΛCDM) limit. The ω_Λ-ω_Λ′ plane approaches the ΛCDM limit and the freezing region in both entropy-corrected models. The statefinder parameters are consistent with the ΛCDM limit and dark energy (DE) models. The generalized second law of thermodynamics remains valid in all cases of the interacting parameter. It is interesting to mention here that our results for the Hubble parameter, the EoS parameter and the ω_Λ-ω_Λ′ plane show consistency with present observations such as Planck, WP, BAO, H0, SNLS and nine-year WMAP.

  20. ATR applications of minimax entropy models of texture and shape

    NASA Astrophysics Data System (ADS)

    Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.

    2001-10-01

    Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory has permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.

  1. Holography and entropy bounds in the plane wave matrix model

    SciTech Connect

    Bousso, Raphael; Mints, Aleksey L.

    2006-06-15

    As a quantum theory of gravity, matrix theory should provide a realization of the holographic principle, in the sense that a holographic theory should contain one binary degree of freedom per Planck area. We present evidence that Bekenstein's entropy bound, which is related to area differences, is manifest in the plane wave matrix model. If holography is implemented in this way, we predict crossover behavior at strong coupling when the energy exceeds N² in units of the mass scale.

  2. Modeling of high entropy alloys of refractory elements

    NASA Astrophysics Data System (ADS)

    del Grosso, M. F.; Bozzolo, G.; Mosca, H. O.

    2012-08-01

    Inverting the traditional process of developing new alloys based on one or two main elements with minority additions, the study of high-entropy alloys (HEAs; equimolar combinations of many elements) has become a relevant and interesting new field of research due to their tendency to form solid solutions with particular properties in the absence of intermetallic phases. Theoretical or modeling studies at the atomic level on specific HEAs, describing the formation, structure, and properties of these alloys, are limited due to the large number of constituents involved. In this work we focus on HEAs with refractory elements, showing atomistic modeling results for the W-Nb-Mo-Ta and W-Nb-Mo-Ta-V HEAs, for which experimental background exists. An atomistic modeling approach is applied to determine the role of each element and to identify the interactions and features responsible for the transition to the high-entropy regime. Results for equimolar alloys of 4 and 5 refractory elements, for which experimental results exist, are shown. A straightforward algorithm is introduced to interpret the transition to the high-entropy regime.

  3. Stability of ecological industry chain: an entropy model approach.

    PubMed

    Wang, Qingsong; Qiu, Shishou; Yuan, Xueliang; Zuo, Jian; Cao, Dayong; Hong, Jinglan; Zhang, Jian; Dong, Yong; Zheng, Ying

    2016-07-01

    A novel methodology is proposed in this study to examine the stability of an ecological industry chain network based on entropy theory. The methodology is developed according to the associated dissipative-structure characteristics, i.e., complexity, openness, and nonlinearity. As defined in the methodology, the network organization is the object of study, while the main focus is the identification of core enterprises and core industry chains. It is proposed that the chain network should be established around the core enterprise, while supplementation of the core industry chain helps to improve system stability, which is verified quantitatively. The relational entropy model can be used to identify the core enterprise and the core eco-industry chain: it determines the core of the network organization and the core eco-industry chain through the link form and direction of the node enterprises. Similarly, the conductive mechanism of different node enterprises can be examined quantitatively despite the absence of key data. The structural entropy model can be employed to solve the problem of the order degree of the network organization. Results showed that the stability of the entire system could be enhanced by supplementing chains around the core enterprise in the eco-industry chain network organization. As a result, the sustainability of the entire system could be further improved.

  4. A Bayesian Maximum Entropy approach to address the change of support problem in the spatial analysis of childhood asthma prevalence across North Carolina

    PubMed Central

    LEE, SEUNG-JAE; YEATTS, KARIN; SERRE, MARC L.

    2009-01-01

    The spatial analysis of data observed at different spatial observation scales leads to the change of support problem (COSP). A solution to the COSP widely used in linear spatial statistics consists in explicitly modeling the spatial autocorrelation of the variable observed at different spatial scales. We present a novel approach that takes advantage of the non-linear Bayesian Maximum Entropy (BME) extension of linear spatial statistics to address the COSP directly without relying on the classical linear approach. Our procedure consists in modeling data observed over large areas as soft data for the process at the local scale. We demonstrate the application of our approach to obtain spatially detailed maps of childhood asthma prevalence across North Carolina (NC). Because of the high prevalence of childhood asthma in NC, the small number problem is not an issue, so we can focus our attention solely to the COSP of integrating prevalence data observed at the county-level together with data observed at a targeted local scale equivalent to the scale of school-districts. Our spatially detailed maps can be used for different applications ranging from exploratory and hypothesis generating analyses to targeting intervention and exposure mitigation efforts. PMID:20300553

  5. On the possibility of obtaining non-diffused proximity functions from cloud-chamber data: II. Maximum entropy and Bayesian methods.

    PubMed

    Zaider, M; Minerbo, G N

    1988-11-01

    Maximum entropy and Bayesian methods are applied to an inversion problem which consists of unfolding diffusion from proximity functions calculated from cloud-chamber data. The solution appears to be relatively insensitive to statistical errors in the data (an important feature) given the limited number of tracks normally available from cloud-chamber measurements. It is the first time, to our knowledge, that such methods are applied to microdosimetry.

  6. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
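
    A bare-bones version of moment-constrained maximum entropy density estimation, without the paper's fractal-scaling enhancement, is sketched below; the function names and the optimizer choice are ours. The MaxEnt density p(x) ∝ exp(−Σₖ λₖ xᵏ) is found by minimizing the convex dual, log Z(λ) + λ·m, over the Lagrange multipliers.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import logsumexp

      def maxent_pdf(moments, grid):
          """Maximum-entropy density on a grid matching the first K power
          moments: p(x) = exp(-sum_k lam_k x^k) / Z, with lam found by
          minimizing the convex dual log Z(lam) + lam . m."""
          K = len(moments)
          powers = np.vstack([grid ** (k + 1) for k in range(K)])  # (K, n)
          dx = grid[1] - grid[0]

          def dual(lam):
              logp = -(lam @ powers)
              return logsumexp(logp) + np.log(dx) + lam @ np.asarray(moments)

          lam = minimize(dual, np.zeros(K)).x
          p = np.exp(-(lam @ powers))
          return p / (p.sum() * dx)

      # usage: matching mean 0 and second moment 1 recovers a Gaussian shape
      grid = np.linspace(-6, 6, 1201)
      p = maxent_pdf([0.0, 1.0], grid)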

  7. Singularities and Entropy in Bulk Viscosity Dark Energy Model

    NASA Astrophysics Data System (ADS)

    Meng, Xin-He; Dou, Xu

    2011-11-01

    In this paper bulk viscosity is introduced to describe the effects of a cosmic non-perfect fluid on the evolution of the cosmos and to build unified dark energy (DE) and (dark) matter models. We also derive a general relation between the bulk viscosity form and the Hubble parameter that provides a procedure for viscosity DE model building. In particular, a redshift-dependent viscosity parameter ζ ∝ λ₀ + λ₁(1 + z)ⁿ, proposed in previous work [X.H. Meng and X. Dou, Commun. Theor. Phys. 52 (2009) 377], is investigated extensively in the present work. Furthermore, we use the recently released supernova dataset (the Constitution dataset) to constrain the model parameters. In order to differentiate the proposed concrete dark energy models from the well-known ΛCDM model, the statefinder diagnostic method is applied to this bulk viscosity model, as a complement to the Om parameter diagnostic and the deceleration parameter analysis we performed earlier. The DE model's evolution behavior and tendency are shown in the plane of the statefinder diagnostic parameter pair {r, s}, where the fixed point represents the ΛCDM model. The possible singularity properties of this bulk viscosity cosmology are also discussed; we conclude that, in suitably chosen parameter regions, this concrete viscosity DE model can exhibit various late-time evolution behaviors and the late-time singularity can be avoided. We also calculate the cosmic entropy in the bulk viscosity dark energy framework and find that the total entropy in the viscosity DE model increases monotonically with the evolution of the scale factor; this monotonic increase can indicate an arrow of time in the evolution of the universe, though the quantum version of the arrow of time remains puzzling.

  8. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

    In economics and the social sciences, inequality measures such as the Gini index and the Pietra index are commonly used to measure statistical dispersion. There is a generalization of the Gini index that includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income. The most widely known of these models are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their relative performances with respect to the generalized beta of the second kind family are compared.
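
    The dispersion constraint at the heart of this construction is simple to evaluate on a sample. Below is the ordinary (non-generalized) Gini index for reference, as a sketch kept O(n²) for clarity.

      import numpy as np

      def gini(x):
          """Sample Gini index, G = mean(|x_i - x_j|) / (2 * mean(x)) --
          the dispersion constraint used alongside the mean in the
          maximum-entropy income model (ordinary, non-generalized form)."""
          x = np.asarray(x, float)
          return np.abs(x[:, None] - x[None, :]).mean() / (2 * x.mean())

      # usage: incomes drawn from a lognormal stand-in distribution
      rng = np.random.default_rng(0)
      print(gini(rng.lognormal(mean=10.5, sigma=0.8, size=2000)))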

  9. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    NASA Astrophysics Data System (ADS)

    Furbish, David Jon; Schmeeckle, Mark W.; Schumer, Rina; Fathel, Siobhan L.

    2016-07-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
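
    A one-line check of the exponential claim (standard MaxEnt algebra, not taken from the paper): maximizing the information entropy of a positive variable subject only to normalization and a fixed mean gives

        \max_{p}\; -\int_0^\infty p(u)\,\ln p(u)\,du
        \quad\text{s.t.}\quad \int_0^\infty p(u)\,du = 1,\qquad
        \int_0^\infty u\,p(u)\,du = \bar{u}
        \;\Longrightarrow\;
        p(u) = \frac{1}{\bar{u}}\,e^{-u/\bar{u}},

    so exponential velocities and travel times follow directly from the mean-constrained case; the Laplace form arises analogously from a zero-mean constraint with fixed mean absolute value, and the Weibull form from the hop-distance/travel-time covariance structure discussed in the paper.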

  10. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.

  11. Maximum Entropy Method and Charge Flipping, a Powerful Combination to Visualize the True Nature of Structural Disorder from in situ X-ray Powder Diffraction Data

    SciTech Connect

    Samy, A.; Dinnebier, R; van Smaalen, S; Jansen, M

    2010-01-01

    In a systematic approach, the ability of the Maximum Entropy Method (MEM) to reconstruct the most probable electron density of highly disordered crystal structures from X-ray powder diffraction data was evaluated. As a case study, the ambient temperature crystal structures of disordered α-Rb₂[C₂O₄] and α-Rb₂[CO₃] and ordered δ-K₂[C₂O₄] were investigated in detail with the aim of revealing the 'true' nature of the apparent disorder. Different combinations of F (based on phased structure factors) and G constraints (based on structure-factor amplitudes) from different sources were applied in MEM calculations. In particular, a new combination of the MEM with the recently developed charge-flipping algorithm with histogram matching for powder diffraction data (pCF) was successfully introduced to avoid the inevitable bias of the phases of the structure-factor amplitudes by the Rietveld model. Completely ab initio electron-density distributions have been obtained with the MEM applied to a combination of structure-factor amplitudes from Le Bail fits with phases derived from pCF. All features of the crystal structures, in particular the disorder of the oxalate and carbonate anions, and the displacements of the cations, are clearly obtained. This approach bears the potential of a fast method of electron-density determination, even for highly disordered materials. All the MEM maps obtained in this work were compared with the MEM map derived from the best Rietveld refined model. In general, the phased observed structure factors obtained from Rietveld refinement (applying F and G constraints) were found to give the closest description of the experimental data and thus lead to the most accurate image of the actual disorder.

  12. Reprint of: Connection between wave transport through disordered 1D waveguides and energy density inside the sample: A maximum-entropy approach

    NASA Astrophysics Data System (ADS)

    Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.

    2016-08-01

    We study the average energy - or particle - density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.

  13. Single-particle spectral density of the unitary Fermi gas: Novel approach based on the operator product expansion, sum rules and the maximum entropy method

    SciTech Connect

    Gubler, Philipp; Yamamoto, Naoki; Hatsuda, Tetsuo; Nishida, Yusuke

    2015-05-15

    Making use of the operator product expansion, we derive a general class of sum rules for the imaginary part of the single-particle self-energy of the unitary Fermi gas. The sum rules are analyzed numerically with the help of the maximum entropy method, which allows us to extract the single-particle spectral density as a function of both energy and momentum. These spectral densities contain basic information on the properties of the unitary Fermi gas, such as the dispersion relation and the superfluid pairing gap, for which we obtain reasonable agreement with the available results based on quantum Monte-Carlo simulations.

  14. Emergence of spacetime dynamics in entropy corrected and braneworld models

    SciTech Connect

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E. E-mail: mhd@shirazu.ac.ir

    2013-04-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p²) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to the RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.

  15. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    PubMed

    Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A

    2013-01-01

    Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.
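
    A minimal MaxEnt sketch in the spirit of MaxEnt species-distribution modeling: over grid cells with environmental features f(i), fit cell probabilities p_i ∝ exp(λ·f_i) so that feature expectations under the model match their empirical means at presence locations. The two features, the grid, and the telemetry "detections" are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 500
features = np.column_stack([rng.normal(size=n_cells),   # e.g. a substrate index
                            rng.normal(size=n_cells)])  # e.g. distance to salt front
true_lam = np.array([1.5, -1.0])
p_true = np.exp(features @ true_lam)
p_true /= p_true.sum()
presence = rng.choice(n_cells, size=200, p=p_true)      # simulated "detections"

emp_mean = features[presence].mean(axis=0)              # empirical feature means
lam = np.zeros(2)
for _ in range(500):                                    # gradient ascent on log-likelihood
    p = np.exp(features @ lam)
    p /= p.sum()
    model_mean = p @ features
    lam += 0.5 * (emp_mean - model_mean)                # gradient of the Gibbs likelihood

print("fitted coefficients:", lam)                      # should approach true_lam
```

    Maximizing the likelihood of this exponential-family (Gibbs) model is equivalent to choosing the maximum entropy distribution whose feature expectations match the presence data, which is the core of the MaxEnt approach used in species-distribution modeling.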

  16. Midnight Temperature Maximum (MTM) in Whole Atmosphere Model (WAM) Simulations

    DTIC Science & Technology

    2016-04-14

    Akmaev, R. A.; Wu, F.; Fuller-Rowell, T. J.; Wang, H. Received 13 February 2009; accepted 18 March 2009; published 14 April 2009. Only a garbled first-page extract is available; the abstract opens: "Discovered almost four decades ago, the midnight temperature maximum (MTM)..." [remainder truncated in source, including fragments of the reference list].

  17. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
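
    A hedged sketch of joint maximum likelihood (JML) for the plain dichotomous Rasch model, which the paper's mixture partial credit model generalizes: alternate Newton steps for person abilities (theta) and item difficulties (beta) on simulated 0/1 responses. All sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 300, 10
theta_true = rng.normal(0, 1, n_persons)
beta_true = np.linspace(-1.5, 1.5, n_items)
P = 1 / (1 + np.exp(-(theta_true[:, None] - beta_true[None, :])))
X = (rng.random(P.shape) < P).astype(float)         # simulated responses

theta = np.zeros(n_persons)
beta = np.zeros(n_items)
for _ in range(50):
    E = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
    W = E * (1 - E)
    theta += (X - E).sum(axis=1) / W.sum(axis=1)    # Newton step, persons
    theta = np.clip(theta, -5, 5)                   # guard against perfect scores
    E = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
    W = E * (1 - E)
    beta -= (X - E).sum(axis=0) / W.sum(axis=0)     # Newton step, items
    beta -= beta.mean()                             # fix the scale origin

print("item difficulty RMSE:", np.sqrt(np.mean((beta - beta_true) ** 2)))
```

    JML estimates are known to be biased for short tests; the usual (n-1)/n correction and the formal handling of perfect response patterns are omitted from this sketch.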

  18. Modeling the Overalternating Bias with an Asymmetric Entropy Measure

    PubMed Central

    Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea

    2016-01-01

    Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criterion of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that is well suited to account for this bias and to quantify subjective randomness. Using the Differential Evolution algorithm, we fitted Marcellin's entropy and Renyi's entropy (a generalized family of uncertainty measures comprising many different kinds of entropies) to experimental data found in the literature. We observed a better fit for Marcellin's entropy than for Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We conclude that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness, useful in psychological research about randomness perception. PMID:27458418
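
    A minimal sketch of the fitting idea using SciPy's differential_evolution: perceived randomness is modeled as a scaled Renyi entropy of a sequence's alternation probability, and the order alpha is fitted to ratings. The ratings here are synthetic, with a peak at p = 0.6 encoding the overalternating bias; note that a symmetric entropy such as Renyi's cannot peak away from p = 0.5, which is exactly why the asymmetric measure fits such data better.

```python
import numpy as np
from scipy.optimize import differential_evolution

def renyi(p, alpha):
    # Renyi entropy of a Bernoulli(p) distribution; -> Shannon as alpha -> 1
    probs = np.array([p, 1 - p])
    return np.log2((probs ** alpha).sum()) / (1 - alpha)

p_alt = np.linspace(0.1, 0.9, 17)                 # alternation rates of stimuli
ratings = np.exp(-((p_alt - 0.6) ** 2) / 0.05)    # synthetic biased ratings

def loss(params):
    alpha, scale = params
    pred = scale * np.array([renyi(p, alpha) for p in p_alt])
    return ((pred - ratings) ** 2).sum()

res = differential_evolution(loss, bounds=[(0.1, 5.0), (0.1, 2.0)], seed=0)
print("fitted alpha, scale:", res.x)
```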

  19. Models, Entropy and Information of Temporal Social Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Karsai, Márton; Bianconi, Ginestra

    Temporal social networks are characterized by heterogeneous duration of contacts, which can either follow a power-law distribution, such as in face-to-face interactions, or a Weibull distribution, such as in mobile-phone communication. Here we model the dynamics of face-to-face interaction and mobile phone communication by a reinforcement dynamics, which explains the data observed in these different types of social interactions. We quantify the information encoded in the dynamics of these networks by the entropy of temporal networks. Finally, we show evidence that human dynamics is able to modulate the information present in social network dynamics when it follows circadian rhythms and when it is interfacing with a new technology such as the mobile-phone communication technology.

  20. A stochastic model for the analysis of maximum daily temperature

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Caloiero, T.; Coscarelli, R.; Ferrari, E.

    2016-08-01

    In this paper, a stochastic model for the analysis of daily maximum temperature is proposed. First, a deseasonalization procedure based on a truncated Fourier expansion is adopted. Then, Johnson transformation functions are applied for data normalization. Finally, an autoregressive fractionally integrated moving average (ARFIMA) model is used to reproduce both the short- and long-memory behavior of the temperature series. The model was applied to the data of the Cosenza gauge (Calabria region) and verified on four other gauges of southern Italy. Through a Monte Carlo simulation procedure based on the proposed model, 10^5 years of daily maximum temperature have been generated. Among the possible applications of the model, the occurrence probabilities of the annual maximum values have been evaluated. Moreover, the procedure was applied to estimate the return periods of long sequences of days with maximum temperature above prefixed thresholds.
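
    A compact sketch of the first two modeling steps on synthetic data, assuming a 2-harmonic truncated Fourier fit and binomial-series fractional differencing (the "FI" part of ARFIMA); the Johnson transformation and the final ARMA fit are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days = 365 * 30
t = np.arange(n_days)
season = 20 + 8 * np.sin(2 * np.pi * t / 365.25)
temps = season + rng.normal(0, 2, n_days)          # synthetic Tmax series

# (i) truncated Fourier deseasonalization (2 harmonics) via least squares
H = 2
cols = [np.ones(n_days)]
for k in range(1, H + 1):
    cols += [np.sin(2 * np.pi * k * t / 365.25),
             np.cos(2 * np.pi * k * t / 365.25)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
resid = temps - A @ coef

# (ii) fractional differencing (1 - B)^d via its binomial-series weights
def frac_diff(x, d, n_weights=100):
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)         # pi_k = -pi_{k-1}(d-k+1)/k
    return np.convolve(x, np.array(w), mode="full")[: len(x)]

resid_fd = frac_diff(resid, d=0.3)
print("lag-1 autocorr before/after:",
      np.corrcoef(resid[:-1], resid[1:])[0, 1],
      np.corrcoef(resid_fd[:-1], resid_fd[1:])[0, 1])
```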

  1. Scaling of Entanglement Entropy for the Heisenberg Model on Clusters Joined by Point Contacts

    NASA Astrophysics Data System (ADS)

    Friedman, B. A.; Levine, G. C.

    2016-11-01

    The scaling of entanglement entropy for the nearest-neighbor antiferromagnetic Heisenberg spin model is studied computationally for clusters joined by a single bond. Bisecting the balanced three-legged Bethe cluster gives a second Rényi entropy and a valence bond entropy that scale as the number of sites in the cluster. For the analogous situation with square clusters, i.e. two L × L clusters joined by a single bond, numerical results suggest that the second Rényi entropy and the valence bond entropy scale as L. For both systems, the environment and the system are connected by the single bond and the interaction is short range. The entropy is therefore not constant with system size, as an area law would suggest.

  2. Assessing Bayesian model averaging uncertainty of groundwater modeling based on information entropy method

    NASA Astrophysics Data System (ADS)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong; Zhu, Xiaobin; Long, Yuqiao

    2016-07-01

    Because of groundwater conceptualization uncertainty, multi-model methods are usually used, and the corresponding uncertainties are estimated by integrating Markov chain Monte Carlo (MCMC) and Bayesian model averaging (BMA) methods. Generally, the variance method is used to measure the uncertainties of the BMA prediction. The total variance of the ensemble prediction is decomposed into within-model and between-model variances, which represent the uncertainties derived from the parameters and the conceptual model, respectively. However, the uncertainty of a probability distribution cannot be comprehensively quantified by variance alone. A new measuring method based on information entropy theory is proposed in this study. Because the actual BMA process can hardly meet the ideal mutually-exclusive, collectively-exhaustive condition, BMA predictive uncertainty can be decomposed into parameter, conceptual-model, and overlapped uncertainties. Overlapped uncertainty is induced by the combination of predictions from correlated model structures. In this paper, five simple analytical functions are first used to illustrate the feasibility of the variance and information entropy methods. A discrete distribution example shows that information entropy can be more appropriate than variance for describing between-model uncertainty. Two continuous distribution examples show that the two methods are consistent in measuring a normal distribution, and that information entropy is more appropriate than variance for describing a bimodal distribution (a numeric illustration follows). The two examples of BMA uncertainty decomposition demonstrate that the two methods are relatively consistent in assessing the uncertainty of a unimodal BMA prediction, while information entropy is more informative in describing the uncertainty decomposition of a bimodal BMA prediction. Finally, based on a synthetic groundwater model, the variance and information entropy methods are used to assess the BMA uncertainty of groundwater modeling.
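
    A tiny numeric illustration of the bimodality point made above: a broad unimodal density and a well-separated bimodal density can have nearly the same variance while their Shannon (differential) entropies differ substantially, so variance alone can mis-state between-model uncertainty. The densities are invented for illustration.

```python
import numpy as np

x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]

def normal(mu, sd):
    return np.exp(-((x - mu) ** 2) / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi))

unimodal = normal(0.0, 2.0)                          # one broad mode
bimodal = 0.5 * normal(-2.0, 0.5) + 0.5 * normal(2.0, 0.5)

for name, p in [("unimodal", unimodal), ("bimodal", bimodal)]:
    var = (x**2 * p).sum() * dx - ((x * p).sum() * dx) ** 2
    H = -(p * np.log(p + 1e-300)).sum() * dx         # differential entropy
    print(f"{name:9s} variance={var:5.2f} entropy={H:5.2f}")
```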

  3. Two aspects of black hole entropy in Lanczos-Lovelock models of gravity

    NASA Astrophysics Data System (ADS)

    Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.

    2012-03-01

    We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.

  4. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects based on an improved entropy value method is proposed. First, a new weight assignment model is established, combining compatibility matrix analysis from the analytic hierarchy process (AHP) with the entropy value method: when the compatibility matrix analysis achieves the consistency requirement and differences remain between the subjective and objective weights, both proportions are moderately adjusted; on this basis, a fuzzy evaluation matrix is then constructed for the performance evaluation (the baseline entropy weighting is sketched below). Simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
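
    For concreteness, a sketch of the standard entropy value weighting that the paper takes as its starting point, plus a simple subjective/objective blending step; the blending rule and all numbers are assumptions, not the paper's exact adjustment.

```python
import numpy as np

# Decision matrix: rows = mining projects, columns = performance indicators.
X = np.array([[0.8, 120.0, 3.2],
              [0.6, 150.0, 2.8],
              [0.9,  90.0, 3.9]])          # illustrative benefit-type data

P = X / X.sum(axis=0)                      # normalize each indicator column
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)       # entropy of each indicator
w = (1 - E) / (1 - E).sum()                # objective entropy weights
print("entropy weights:", w)

# A simple subjective/objective blend (assumed form, not the paper's rule):
w_ahp = np.array([0.5, 0.3, 0.2])          # hypothetical AHP weights
alpha = 0.5
w_comb = alpha * w_ahp + (1 - alpha) * w
print("combined weights:", w_comb / w_comb.sum())
```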

  5. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and with experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length and is only marginally affected by the asymmetric sizes of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.

  6. Entropy-Based Model for Interpreting Life Systems in Traditional Chinese Medicine

    PubMed Central

    Kang, Guo-lian; Zhang, Ji-feng

    2008-01-01

    Traditional Chinese medicine (TCM) treats qi as the core of the human life systems. Starting with a hypothetical correlation between TCM qi and entropy theory, we address in this article a holistic model for evaluating and unveiling the rule of TCM life systems. Several new concepts, such as acquired life entropy (ALE), acquired life entropy flow (ALEF) and acquired life entropy production (ALEP), are propounded to interpret TCM life systems. Using entropy theory, mathematical models are established for ALE, ALEF and ALEP, which reflect the evolution of life systems. Some criteria are given on physiological activities and pathological changes of the body in different stages of life. Moreover, a simulation based on real data shows that the life entropies of human bodies of different ages, with Cold and Hot constitutions, and in different seasons in North China coincide with the manifestations of qi as well as with the life evolution in TCM descriptions. Especially, based on the comparative and quantitative analysis, the entropy-based model can nicely describe the evolution of life entropies in Cold and Hot individuals, thereby fitting the Yin-Yang theory in TCM. Thus, this work establishes a novel approach to interpret the fundamental principles in TCM, and provides an alternative understanding for the complex life systems. PMID:18830452

  7. A maximum entropy approach to the study of residue-specific backbone angle distributions in α-synuclein, an intrinsically disordered protein

    PubMed Central

    Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad

    2014-01-01

    α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN–Hα and HN–HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield best agreement to the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20–30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20–40%) than seen in the database. A generally lower population of the αR region (10–20%) is found. Analysis of 1H–1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein. PMID:24976112

  8. Entanglement entropy production in gravitational collapse: covariant regularization and solvable models

    NASA Astrophysics Data System (ADS)

    Bianchi, Eugenio; De Lorenzo, Tommaso; Smerlak, Matteo

    2015-06-01

    We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole "exterior entropy" and "radiation entropy." For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the "black hole fireworks" model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that (i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, (ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the "purifying" phase, (iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero, leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.

  9. A new assessment method for urbanization environmental impact: urban environment entropy model and its application.

    PubMed

    Ouyang, Tingping; Fu, Shuqing; Zhu, Zhaoyu; Kuang, Yaoqiu; Huang, Ningsheng; Wu, Zhifeng

    2008-11-01

    The thermodynamic law is one of the most widely used scientific principles. The comparability between the environmental impact of urbanization and thermodynamic entropy was systematically analyzed. Consequently, the concept of "Urban Environment Entropy" was put forward, and an Urban Environment Entropy model was established for urbanization environmental impact assessment in this study. The model was then utilized in a case study assessing river water quality in the Pearl River Delta Economic Zone. The results indicate that the assessment results of the model are consistent with those of the equalized synthetic pollution index method. It can therefore be concluded that the Urban Environment Entropy model has high reliability and can be applied widely in urbanization environmental assessment research using many different environmental parameters.

  10. The viscosity of planetary tholeiitic melts: A configurational entropy model

    NASA Astrophysics Data System (ADS)

    Sehlke, Alexander; Whittington, Alan G.

    2016-10-01

    The viscosity (η) of silicate melts is a fundamental physical property controlling mass transfer in magmatic systems. Viscosity can span many orders of magnitude, depending strongly on temperature and composition. Several models are available that describe this dependency quite well for terrestrial melts. Planetary basaltic lavas, however, are distinctly different in composition, being dominantly alkali-poor, iron-rich and/or highly magnesian. We measured the viscosity of 20 anhydrous tholeiitic melts, of which 15 represent known or estimated surface compositions of Mars, Mercury, the Moon, Io and Vesta, by concentric cylinder and parallel plate viscometry. The planetary basalts span a viscosity range of 2 orders of magnitude at liquidus temperatures and 4 orders of magnitude near the glass transition, and can be more or less viscous than terrestrial lavas. We find that current models under- and overestimate superliquidus viscosities by up to 2 orders of magnitude for these compositions, and deviate even more strongly from measured viscosities toward the glass transition. We used the Adam-Gibbs theory (A-G) to relate viscosity (η) to absolute temperature (T) and the configurational entropy of the system at that temperature (S_conf), in the form log η = A_e + B_e/(T·S_conf). Heat capacities (C_P) for glasses and liquids of the investigated compositions were calculated via available literature models. We show that the A-G theory is applicable to model the viscosity of individual complex tholeiitic melts containing 10 or more major oxides as well as or better than the commonly used empirical equations. We successfully modeled the global viscosity data set using a constant A_e of -3.34 ± 0.22 log units and 12 adjustable sub-parameters, which capture the compositional and temperature dependence of melt viscosity. Seven sub-parameters account for the compositional dependence of B_e and 5 for S_conf. Our model reproduces the 496 measured viscosity data points [remainder truncated in source].
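
    A minimal evaluation of the Adam-Gibbs form quoted above, log η = A_e + B_e/(T·S_conf), with S_conf(T) built from a constant configurational heat capacity. Only A_e = -3.34 comes from the abstract; B_e, S_conf(T_g), T_g and Cp_conf below are placeholders, not fitted values from the paper.

```python
import numpy as np

Ae = -3.34                    # log10 Pa s, the constant quoted in the abstract
Be = 2.0e5                    # J/mol, hypothetical composition-dependent term
S_Tg, Tg = 8.0, 950.0         # J/(mol K) and K, hypothetical
Cp_conf = 15.0                # J/(mol K), assumed T-independent for the sketch

def log10_eta(T):
    # S_conf(T) = S_conf(Tg) + integral_Tg^T (Cp_conf / T') dT'
    Sconf = S_Tg + Cp_conf * np.log(T / Tg)
    return Ae + Be / (T * Sconf)

for T in (1000.0, 1200.0, 1400.0, 1600.0):
    print(f"T={T:6.0f} K   log10(eta/Pa s) = {log10_eta(T):6.2f}")
```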

  11. Using the Maximum Entropy Principle as a Unifying Theory Characterization and Sampling of Multi-Scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2015-08-20

    Only a fragmentary report extract is available. It describes the Maximum Entropy Principle as a unifying framework for characterizing and sampling multi-scaling processes in hydrometeorology, with applications to monitoring and modeling water-energy-carbon cycles of the Earth system at scales ranging from local to global. A new conceptual model has been developed to express freshwater flux; other mentioned applications include greenhouse-gas (carbon dioxide and methane) fluxes, ocean freshwater fluxes, and regional crop yield. An on-going study suggests that the global annual... [remainder truncated in source].

  12. Cluster-size entropy in the Axelrod model of social influence: small-world networks and mass media.

    PubMed

    Gandica, Y; Charmell, A; Villegas-Febres, J; Bonalde, I

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy S(c), which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the S(c)(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait q(c) and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.

  13. Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.

  14. Intuitionistic Fuzzy Weighted Linear Regression Model with Fuzzy Entropy under Linear Restrictions.

    PubMed

    Kumar, Gaurav; Bajaj, Rakesh Kumar

    2014-01-01

    In fuzzy set theory, it is well known that a triangular fuzzy number can be uniquely determined through its position and entropies. In the present communication, we extend this concept to triangular intuitionistic fuzzy numbers, establishing a one-to-one correspondence between such a number and its position and entropies. Using the concept of fuzzy entropy, estimators of the intuitionistic fuzzy regression coefficients are derived for the unrestricted regression model. An intuitionistic fuzzy weighted linear regression (IFWLR) model with some restrictions in the form of prior information is then considered. Further, the estimators of the regression coefficients are obtained with the help of fuzzy entropy for the restricted/unrestricted IFWLR model by assigning weights in the distance function.

  15. Where and how long ago was water in the western North Atlantic ventilated? Maximum entropy inversions of bottle data from WOCE line A20

    NASA Astrophysics Data System (ADS)

    Holzer, Mark; Primeau, François W.; Smethie, William M.; Khatiwala, Samar

    2010-07-01

    A maximum entropy (ME) method is used to deconvolve tracer data for the joint distribution of locations and times since last ventilation. The deconvolutions utilize World Ocean Circulation Experiment line A20 repeat hydrography for CFC-11, potential temperature, salinity, oxygen, and phosphate, as well as Global Ocean Data Analysis Project (GLODAP) radiocarbon data, combined with surface boundary conditions derived from the atmospheric history of CFC-11 and the World Ocean Atlas 2005 and GLODAP databases. Because of the limited number of available tracers the deconvolutions are highly underdetermined, leading to large entropic uncertainties, which are quantified using the information entropy of the deconvolved distribution relative to a prior distribution. Additional uncertainties resulting from data sparsity are estimated using a Monte Carlo approach and found to be of secondary importance. The ME deconvolutions objectively identify key water mass formation regions and quantify the local fraction of water of age τ or older last ventilated in each region. Ideal mean age and radiocarbon age are also estimated but found to have large entropic uncertainties that can be attributed to uncertainties in the partitioning of a given water parcel according to where it was last ventilated. Labrador/Irminger seawater (L water) is determined to be mostly less than ~40 a old in the vicinity of the deep western boundary current (DWBC) at the northern end of A20, but several decades older where the DWBC recrosses the section further south, pointing to the importance of mixing via a multitude of eddy-diffusive paths. Overflow water lies primarily below L water, with young waters (τ ≲ 40 a) at middepth in the northern part of A20 and waters as old as ~600 a below ~3500 m.

  16. Entropy, chaos, and excited-state quantum phase transitions in the Dicke model.

    PubMed

    Lóbez, C M; Relaño, A

    2016-07-01

    We study nonequilibrium processes in an isolated quantum system, the Dicke model, focusing on the role played by the transition from integrability to chaos and the presence of excited-state quantum phase transitions. We show that both diagonal and entanglement entropies are abruptly increased by the onset of chaos. Also, this increase ends in both cases just after the system crosses the critical energy of the excited-state quantum phase transition. The link between entropy production, the development of chaos, and the excited-state quantum phase transition is clearer for the entanglement entropy.

  17. Dynamic approximate entropy electroanatomic maps detect rotors in a simulated atrial fibrillation model.

    PubMed

    Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips of stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
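
    A textbook implementation of approximate entropy ApEn(m, r), the per-signal quantity from which the dynamic maps above are built; the embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults, and the test signals here are synthetic rather than electrograms.

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) with r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])       # embedded vectors
        # Chebyshev distance between all pairs of m-length templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = (d <= r).mean(axis=1)                            # match fractions
        return np.log(C).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)
print("sine  ApEn:", apen(np.sin(2 * np.pi * t)))            # regular -> low
print("noise ApEn:", apen(rng.normal(size=500)))             # irregular -> high
```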

  18. Backward transfer entropy: Informational measure for detecting hidden Markov models and its interpretations in thermodynamics, gambling and causality

    NASA Astrophysics Data System (ADS)

    Ito, Sosuke

    2016-11-01

    The transfer entropy is a well-established measure of information flow, which quantifies directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that it quantifies how far the dynamics is from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in the completely different settings of thermodynamics for information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of some benefit, where the conventional transfer entropy characterizes a possible benefit. Our result implies a deep connection between thermodynamics and gambling in the presence of information flow, and suggests that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, biochemical sciences, economics and statistics.

  19. Entropy analysis on non-equilibrium two-phase flow models

    SciTech Connect

    Karwat, H.; Ruan, Y.Q.

    1995-09-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships.

  20. An Integrated Modeling Framework for Probable Maximum Precipitation and Flood

    NASA Astrophysics Data System (ADS)

    Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.

    2015-12-01

    With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the effects of climate change on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at the initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects of PMP and PMF.

  1. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
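
    A minimal sketch of the k = 1 case with a binary alphabet, assuming symmetric Dirichlet priors on each row of the transition matrix: estimate the posterior-mean chain from simulated data and evaluate its entropy rate. The paper's evidence-based model-order comparison is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
T_true = np.array([[0.9, 0.1], [0.3, 0.7]])
x = [0]
for _ in range(2000):                       # simulate the chain
    x.append(rng.choice(2, p=T_true[x[-1]]))
x = np.array(x)

counts = np.zeros((2, 2))
np.add.at(counts, (x[:-1], x[1:]), 1)       # transition counts
alpha = 1.0                                 # symmetric Dirichlet prior
T_post = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)

# stationary distribution of the posterior-mean chain
evals, evecs = np.linalg.eig(T_post.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# entropy rate h = -sum_s pi(s) sum_t T[s,t] log2 T[s,t]
h = -(pi[:, None] * T_post * np.log2(T_post)).sum()
print("posterior-mean T:\n", T_post)
print("entropy rate (bits/step):", h)
```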

  2. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  3. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model

    PubMed Central

    Chao, Anne; Jost, Lou; Hsieh, T. C.; Ma, K. H.; Sherwin, William B.; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information (“Shannon differentiation”) between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.

  4. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.

  5. Constant Entropy Properties for an Approximate Model of Equilibrium Air

    NASA Technical Reports Server (NTRS)

    Hansen, C. Frederick; Hodge, Marion E.

    1961-01-01

    Approximate analytic solutions for properties of equilibrium air up to 15,000 K have been programmed for machine computation. Temperature, compressibility, enthalpy, specific heats, and speed of sound are tabulated as constant entropy functions of temperature. The reciprocal of acoustic impedance and its integral with respect to pressure are also given for the purpose of evaluating the Riemann constants for one-dimensional, isentropic flow.

  6. A Theoretical Study of Ene Reactions in Solution: A Solution-Phase Translational Entropy Model.

    PubMed

    Zhao, Liu; Li, Shi-Jun; Fang, De-Cai

    2015-12-01

    Several density functional theory (DFT) methods, such as CAM-B3LYP, M06, ωB97x, and ωB97xD, are used to characterize a range of ene reactions. The Gibbs free energy, activation enthalpy, and entropy are calculated with both the gas- and solution-phase translational entropy; the results obtained from the solution-phase translational entropies are quite close to the experimental measurements, whereas the gas-phase translational entropies do not perform well. For ene reactions between the enophile propanedioic acid (2-oxo-1,3-dimethyl ester) and π donors, the two-solvent-involved explicit+implicit model can be employed to obtain accurate activation entropies and free-energy barriers, because the interaction between the carbonyl oxygen atom and the solvent in the transition state is strengthened with the formation of C-C and O-H bonds. In contrast, an implicit solvent model is adequate to calculate activation entropies and free-energy barriers for the corresponding reactions of the enophile 4-phenyl-1,2,4-triazoline-3,5-dione.

  7. Rényi entropy perspective on topological order in classical toric code models

    NASA Astrophysics Data System (ADS)

    Helmes, Johannes; Stéphan, Jean-Marie; Trebst, Simon

    2015-09-01

    Concepts of information theory are increasingly used to characterize collective phenomena in condensed matter systems, such as the use of entanglement entropies to identify emergent topological order in interacting quantum many-body systems. Here, we employ classical variants of these concepts, in particular Rényi entropies and their associated mutual information, to identify topological order in classical systems. Like for their quantum counterparts, the presence of topological order can be identified in such classical systems via a universal, subleading contribution to the prevalent volume and boundary laws of the classical Rényi entropies. We demonstrate that an additional subleading O(1) contribution generically arises for all Rényi entropies S(n) with n ≥ 2 when driving the system towards a phase transition, e.g., into a conventionally ordered phase. This additional subleading term, which we dub connectivity contribution, tracks back to partial subsystem ordering and is proportional to the number of connected parts in a given bipartition. Notably, the Levin-Wen summation scheme, typically used to extract the topological contribution to the Rényi entropies, does not fully eliminate this additional connectivity contribution in this classical context. This indicates that the distillation of topological order from Rényi entropies requires an additional level of scrutiny to distinguish topological from nontopological O(1) contributions. This is also the case for quantum systems, for which we discuss which entropies are sensitive to these connectivity contributions. We showcase these findings by extensive numerical simulations of a classical variant of the toric code model, for which we study the stability of topological order in the presence of a magnetic field and at finite temperatures from a Rényi entropy perspective.

  8. Tracking instantaneous entropy in heartbeat dynamics through inhomogeneous point-process nonlinear models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-01-01

    Measures of entropy have been proved as powerful quantifiers of complex nonlinear systems, particularly when applied to stochastic series of heartbeat dynamics. Despite the remarkable achievements obtained through standard definitions of approximate and sample entropy, a time-varying definition of entropy characterizing the physiological dynamics at each moment in time is still missing. To this extent, we propose two novel measures of entropy based on the inhomogeneous point-process theory. The RR interval series is modeled through probability density functions (pdfs) which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through such probability functions, the proposed indices are able to provide instantaneous tracking of autonomic nervous system complexity. Of note, the distance between the time-varying phase-space vectors is calculated through the Kolmogorov-Smirnov distance of two pdfs. Experimental results, obtained from the analysis of RR interval series extracted from ten healthy subjects during stand-up tasks, suggest that the proposed entropy indices provide instantaneous tracking of the heartbeat complexity, also allowing for the definition of complexity variability indices.

  9. Neuronal Entropy-Rate Feature of Entopeduncular Nucleus in Rat Model of Parkinson's Disease.

    PubMed

    Darbin, Olivier; Jin, Xingxing; Von Wrangel, Christof; Schwabe, Kerstin; Nambu, Atsushi; Naritoku, Dean K; Krauss, Joachim K; Alam, Mesbah

    2016-03-01

    The function of the nigro-striatal pathway on neuronal entropy in the basal ganglia (BG) output nucleus, i.e. the entopeduncular nucleus (EPN), was investigated in the unilaterally 6-hydroxydopamine (6-OHDA)-lesioned rat model of Parkinson's disease (PD). In both control subjects and subjects with a 6-OHDA lesion of the dopamine (DA) nigro-striatal pathway, a histological hallmark of parkinsonism, neuronal entropy in the EPN was maximal in neurons with firing rates ranging between 15 and 25 Hz. In 6-OHDA lesioned rats, neuronal entropy in the EPN was specifically higher in neurons with firing rates above 25 Hz. Our data establish that the nigro-striatal pathway controls neuronal entropy in the motor circuitry and that the parkinsonian condition is associated with an abnormal relationship between firing rate and neuronal entropy in BG output nuclei. The neuronal firing rate and entropy relationship provides putatively relevant electrophysiological information for investigating sensory-motor processing in the normal condition and in conditions such as movement disorders.

  10. Interface tension and interface entropy in the 2+1 flavor Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Ke, Wei-yao; Liu, Yu-xin

    2014-04-01

    We study the QCD phases and their transitions in the 2+1 flavor Nambu-Jona-Lasinio model, with a focus on interface effects such as the interface tension, the interface entropy, and the critical bubble size in the coexistence region of the first-order phase transitions. Our results show that under the thin-wall approximation, the interface contribution to the total entropy density changes its discontinuity scale in the first-order phase transition. However, the entropy density of the dynamical chiral symmetry (DCS) phase is always greater than that of the dynamical chiral symmetry broken (DCSB) phase in both the heating and hadronization processes. To address this entropy puzzle, the thin-wall approximation is evaluated in the present work. We find that the puzzle can be attributed to an overestimate of the critical bubble size at low temperature in the hadronization process. With an improvement on the thin-wall approximation, the entropy puzzle is resolved, with the total entropy density of the hadron-DCSB phase clearly exceeding that of the DCS-quark phase at low temperature.

  11. Bayesian and maximum likelihood estimation of hierarchical response time models

    PubMed Central

    Farrell, Simon; Ludwig, Casimir

    2008-01-01

    Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. We consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. Single-level estimation is compared with hierarchical estimation of parameters of the ex-Gaussian distribution. Additionally, for each approach maximum likelihood (ML) estimation is compared with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian by reducing the variability in recovered parameters. At each level, little overall difference was observed between the ML and Bayesian methods. PMID:19001592
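
    A single-level ML sketch using scipy.stats.exponnorm, which parameterizes the ex-Gaussian by the shape K = tau/sigma together with loc (mu) and scale (sigma); the simulated response times and parameter values are illustrative. The hierarchical versions studied in the paper add group-level distributions over these parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mu, sigma, tau = 0.40, 0.05, 0.15         # seconds; illustrative values
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K, loc, scale = stats.exponnorm.fit(rt)   # maximum likelihood estimation
print(f"mu={loc:.3f}  sigma={scale:.3f}  tau={K * scale:.3f}")
```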

  12. Maximum sustainable yields from a spatially-explicit harvest model.

    PubMed

    Takashina, Nao; Mougi, Akihiko

    2015-10-21

    Spatial heterogeneity plays an important role in complex ecosystem dynamics, and therefore is also an important consideration in sustainable resource management. However, little is known about how spatial effects can influence management targets derived from a non-spatial harvest model. Here, we extended the Schaefer model, a conventional non-spatial harvest model that is widely used in resource management, to a spatially-explicit harvest model by integrating environmental heterogeneities, as well as species exchange between patches. By comparing the maximum sustainable yield (MSY), one of the central management targets in resource management, obtained from the spatially extended model with that of the conventional model, we examined the effect of spatial heterogeneity. When spatial heterogeneity exists, we found that the Schaefer model tends to overestimate the MSY, implying a potential for overharvesting. In addition, by assuming a well-mixed population in the heterogeneous environment, we showed analytically that the Schaefer model always overestimates the MSY, regardless of the number of patches. The degree of overestimation becomes significant when spatial heterogeneity is marked. Collectively, these results highlight the importance of integrating the spatial structure to conduct sustainable resource management.
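
    A small numerical illustration of the comparison described above, under assumed parameter values: the non-spatial Schaefer MSY is rK/4, while a two-patch logistic model with unequal carrying capacities, symmetric dispersal, and effort-proportional harvest yields a smaller maximum sustainable yield.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K1, K2, D = 1.0, 1.0, 3.0, 0.2            # growth, patch capacities, exchange
K_total = K1 + K2

def yield_at(E):
    """Equilibrium yield under constant effort E in both patches."""
    def rhs(t, x):
        growth = r * x * (1 - x / np.array([K1, K2]))
        mix = D * np.array([x[1] - x[0], x[0] - x[1]])
        return growth + mix - E * x          # effort-proportional harvest
    sol = solve_ivp(rhs, (0, 500), [K1, K2], rtol=1e-8)
    return float(E * sol.y[:, -1].sum())

msy_schaefer = r * K_total / 4               # non-spatial benchmark
msy_spatial = max(yield_at(E) for E in np.linspace(0.05, 0.95, 19))
print(f"Schaefer MSY: {msy_schaefer:.3f}   two-patch MSY: {msy_spatial:.3f}")
```

    In the strongly mixed limit the effective carrying capacity is harmonic-mean-like and therefore below K1 + K2, which is one way to see why the aggregated Schaefer model overestimates the MSY.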

  13. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    NASA Astrophysics Data System (ADS)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective to understand the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of the meridian. Using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be resolved to assess properties of meridians and clinically diagnose the health characteristics of patients. Finally, through some cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is shown that this model not only has significant implications for revealing the essence of the meridian in TCM, but also may play a guiding role in the clinical assessment of patients based on the holographic field of meridians.

  14. Modeling of groundwater productivity in northeastern Wasit Governorate, Iraq using frequency ratio and Shannon's entropy models

    NASA Astrophysics Data System (ADS)

    Al-Abadi, Alaa M.

    2015-04-01

    In recent years, delineation of groundwater productivity zones has played an increasingly important role in the sustainable management of groundwater resources throughout the world. In this study, the groundwater productivity index (GWPI) of northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in the framework of GIS. Eight factors believed to influence groundwater occurrence in the study area were selected and used as input data: elevation (m), slope angle (degree), geology, soil, aquifer transmissivity (m²/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/s) was prepared; 47 boreholes (70%) were used as training data and the remaining 21 (30%) for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results indicate that the FR model, with a success rate of 87.4% and a prediction rate of 86.9%, performed slightly better than Shannon's entropy model, with a success rate of 84.4% and a prediction rate of 82.4%. The resulting groundwater productivity index was classified into five classes using the natural break classification scheme: very low, low, moderate, high, and very high. The high-very high classes for the FR and Shannon's entropy models occupied 30% (217 km²) and 31% (220 km²) of the area, respectively, indicating low productivity conditions of the aquifer system. Both models were capable of prospecting the GWPI with very good results, but FR was better in terms of success and prediction rates. The results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.

  15. High flow-resolution for mobility estimation in 2D-ENMR of proteins using maximum entropy method (MEM-ENMR).

    PubMed

    Thakur, Sunitha B; He, Qiuhong

    2006-11-01

    Multidimensional electrophoretic NMR (nD-ENMR) is a potentially powerful tool for the structural characterization of co-existing proteins and protein conformations. By applying a DC electric field pulse, the electrophoretic migration rates of different proteins were detected experimentally in a new dimension of electrophoretic flow. The electrophoretic mobilities were employed to differentiate protein signals. In U-shaped ENMR sample chambers, each individual protein component in a solution mixture follows a cosinusoidal electrophoretic interferogram as a function of its unique electrophoretic migration rate. After Fourier transformation in the electrophoretic flow dimension, the protein signals were resolved at different resonant frequencies proportional to their electrophoretic mobilities. Currently, the mobility resolution of the proteins in the electrophoretic flow dimension is limited by severe truncation of the electrophoretic interferograms, due to the finite electric field strength available before the onset of heat-induced convection. In this article, we present a successful signal processing method, Burg's maximum entropy method (MEM), to analyze the truncated ENMR signals (MEM-ENMR). Significant enhancement in flow resolution was demonstrated using two-dimensional ENMR of two protein samples: a lysozyme solution and a solution mixture of bovine serum albumin (BSA) and ubiquitin. The electrophoretic mobilities of lysozyme, BSA and ubiquitin were measured from the MEM analysis as 7.5 × 10⁻⁵, 1.9 × 10⁻⁴ and 8.7 × 10⁻⁵ cm² V⁻¹ s⁻¹, respectively. Results from computer simulations confirmed a complete removal of truncation artifacts in the MEM-ENMR spectra, with a 3- to 6-fold resolution enhancement.
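
    Burg's method fits an autoregressive model to the measured points and extrapolates the signal rather than zero-padding it, which is what suppresses truncation artifacts. The sketch below is a generic Burg recursion applied to a synthetic truncated two-component "interferogram"; it is not the authors' processing pipeline, and the signal frequencies and AR order are invented:

      import numpy as np

      def burg_ar(x, order):
          """Burg recursion: AR coefficients a (leading 1 implied)."""
          f, b = np.array(x[1:], float), np.array(x[:-1], float)
          a = np.zeros(0)
          for _ in range(order):
              k = -2.0 * f.dot(b) / (f.dot(f) + b.dot(b))   # reflection coefficient
              a = np.concatenate([a + k * a[::-1], [k]])    # Levinson-style update
              f, b = (f + k * b)[1:], (b + k * f)[:-1]      # updated prediction errors
          return a

      def mem_spectrum(a, freqs):
          """Maximum entropy power spectrum 1 / |A(f)|^2 (unit innovation variance)."""
          m = np.arange(1, len(a) + 1)
          A = 1 + np.exp(-2j * np.pi * np.outer(freqs, m)) @ a
          return 1.0 / np.abs(A) ** 2

      t = np.arange(64)                                  # severely truncated record
      sig = np.cos(2 * np.pi * 0.11 * t) + np.cos(2 * np.pi * 0.13 * t)
      freqs = np.linspace(0.0, 0.5, 512)
      spec = mem_spectrum(burg_ar(sig, order=12), freqs)
      peaks = (spec > np.roll(spec, 1)) & (spec > np.roll(spec, -1))
      peaks[[0, -1]] = False
      print(freqs[peaks])    # sharp peaks expected near 0.11 and 0.13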

  16. Experimental tests of the von Karman self-preservation hypothesis: decay of an electron plasma to a near-maximum entropy state

    NASA Astrophysics Data System (ADS)

    Rodgers, D.; Servidio, S.; Matthaeus, W. H.; Montgomery, D.; Mitchell, T.; Aziz, T.

    2009-12-01

    The self-preservation hypothesis of von Karman [1] implies that in three-dimensional turbulence the energy E decays as dE/dt = -a Z^3/L, where a is a constant, Z is the turbulence amplitude and L is a similarity length scale. Extensions of this idea to MHD [2] have been of great utility in solar wind and coronal heating studies. Here we conduct an experimental study of this idea in the context of two-dimensional electron plasma turbulence. In particular, we examine the time evolution that leads to dynamical relaxation of a pure electron plasma in a Malmberg-Penning (MP) trap, comparing experiments and statistical theories of weakly dissipative two-dimensional (2D) turbulence [3]. A formulation of von Karman-Howarth (vKH) self-preserving decay is presented for a 2D positive-vorticity fluid, a system that corresponds closely to a 2D electron ExB drift plasma. When the enstrophy of the meta-stable equilibrium is accounted for, the enstrophy decay follows the predicted vKH decay for a variety of initial conditions in the MP experiment. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state, evidently driven by a self-preserving decay of enstrophy. [1] T. de Karman and L. Howarth, Proc. Roy. Soc. Lond. A, 164, 192, 1938. [2] W. H. Matthaeus, G. P. Zank, and S. Oughton, J. Plas. Phys., 56:659, 1996. [3] D. J. Rodgers, S. Servidio, W. H. Matthaeus, D. C. Montgomery, T. B. Mitchell, and T. Aziz, Phys. Rev. Lett., 102(24):244501, 2009.
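
    The quoted decay law can be integrated directly. A minimal sketch (placeholder constants, forward Euler, Z = sqrt(E), fixed L) showing the late-time power-law decay implied by dE/dt = -a Z^3/L:

      import numpy as np

      a, L, E, dt = 0.5, 1.0, 1.0, 1e-3   # illustrative constants, not experimental values
      ts, Es = [dt], [E]
      for _ in range(20000):
          E += -a * E**1.5 / L * dt        # dE/dt = -a Z^3 / L with Z = sqrt(E)
          ts.append(ts[-1] + dt)
          Es.append(E)

      # with L held fixed the closed-form solution gives E ~ t^-2 at late times
      slope = np.polyfit(np.log(ts[10000:]), np.log(Es[10000:]), 1)[0]
      print(f"late-time decay exponent ~ {slope:.2f}")   # approaches -2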

  17. Maximum likelihood estimation in meta-analytic structural equation modeling.

    PubMed

    Oort, Frans J; Jak, Suzanne

    2016-06-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical properties is two-stage structural equation modeling, in which maximum likelihood analysis is used to estimate the common correlation matrix in the first stage, and weighted least squares analysis is used to fit structural equation models to the common correlation matrix in the second stage. In the present paper, we propose an alternative method, ML MASEM, that uses ML estimation throughout. In a simulation study, we use both methods and compare chi-square distributions, bias in parameter estimates, false positive rates, and true positive rates. Both methods appear to yield unbiased parameter estimates and false and true positive rates that are close to the expected values. ML MASEM parameter estimates are found to be significantly less biased than two-stage structural equation modeling estimates, but the differences are very small. The choice between the two methods may therefore be based on other fundamental or practical arguments. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Configurational Information as Potentially Negative Entropy: The Triple Helix Model

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2008-12-01

    Configurational information is generated when three or more sources of variance interact. The variations not only disturb each other relationally, but by selecting upon each other, they are also positioned in a configuration. A configuration can be stabilized and/or globalized. Different stabilizations can be considered as second-order variation, and globalization as a second-order selection. The positive manifestations and the negative selections operate upon one another by adding and reducing uncertainty, respectively. Reduction of uncertainty in a configuration can be measured in bits of information. The variables can also be considered as dimensions of the probabilistic entropy in the system(s) under study. The configurational information then provides us with a measure of synergy within a complex system. For example, the knowledge base of an economy can be considered as such a synergy in the otherwise virtual (that is, fourth) dimension of a regime

  19. Joint modelling of annual maximum drought severity and corresponding duration

    NASA Astrophysics Data System (ADS)

    Tosunoglu, Fatih; Kisi, Ozgur

    2016-12-01

    In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, the history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD), extracted from daily streamflow of the seven gauge stations located in Çoruh Basin, Turkey. After evaluating various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model the joint distribution of the AMS and CD series. With respect to the Anderson-Darling and Cramér-von Mises statistical tests and the tail dependence assessment, the Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series, which can be useful for the design and management of reservoirs in the basin.
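
    For a flavour of how such a fitted copula is used, the sketch below evaluates a Gumbel-Hougaard copula and the joint "AND" return period for exceeding a severity and a duration threshold simultaneously; the dependence parameter and marginal probabilities are invented, not the fitted station values:

      import numpy as np

      def gumbel_copula(u, v, theta):
          """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))"""
          return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

      theta = 2.5    # dependence parameter (assumed)
      u = 0.9        # P(AMS <= s) from the fitted severity margin (assumed)
      v = 0.9        # P(CD <= d) from the fitted duration margin (assumed)

      # annual series: joint return period of exceeding both thresholds
      p_and = 1 - u - v + gumbel_copula(u, v, theta)
      print(f"T_and = {1.0 / p_and:.1f} years")   # ~14 years for these inputs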

  20. The Entropy Solutions for the Lighthill-Whitham-Richards Traffic Flow Model with a Discontinuous Flow-Density Relationship

    DTIC Science & Technology

    2007-01-01

    The entropy solutions for the Lighthill-Whitham-Richards traffic flow model with a discontinuous flow-density relationship. Yadong Lu, S.C. Wong, Mengping Zhang, Chi-Wang Shu. Abstract: In this paper we explicitly construct the entropy solutions for the Lighthill-Whitham-Richards (LWR) traffic ... polynomials meet, and with piecewise linear initial condition and piecewise constant boundary conditions. The existence and uniqueness of entropy solutions ...
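
    The paper's construction is analytic; as a numerical companion, the sketch below computes the entropy solution of the standard LWR model with a smooth Greenshields flux using a Godunov scheme (the discontinuous flow-density case treated in the paper needs a more careful Riemann solver; everything here is illustrative):

      import numpy as np

      vf, rho_max = 1.0, 1.0                        # Greenshields parameters
      flux = lambda r: vf * r * (1 - r / rho_max)

      def godunov_flux(rl, rr):
          # exact Riemann flux for a concave flux with maximum at rho_max / 2
          if rl <= rr:
              return min(flux(rl), flux(rr))
          rc = rho_max / 2
          return flux(rc) if rl > rc > rr else max(flux(rl), flux(rr))

      nx = 200
      dx, dt = 1.0 / nx, 0.4 / nx                   # CFL number 0.4
      rho = np.where(np.arange(nx) * dx < 0.5, 0.8, 0.2)   # Riemann initial data
      for _ in range(100):
          F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(nx - 1)])
          rho[1:-1] -= dt / dx * (F[1:] - F[:-1])   # boundary cells held constant
      print(rho[::20])                              # a rarefaction fan spreads out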

  1. Fine structure of the entanglement entropy in the O(2) model

    NASA Astrophysics Data System (ADS)

    Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z. Y.; Meurice, Y.

    2016-01-01

    We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture.

  2. Entity Relation Detection with Factorial Hidden Markov Models and Maximum Entropy Discriminant Latent Dirichlet Allocations

    ERIC Educational Resources Information Center

    Li, Dingcheng

    2011-01-01

    Coreference resolution (CR) and entity relation detection (ERD) aim at finding predefined relations between pairs of entities in text. CR focuses on resolving identity relations while ERD focuses on detecting non-identity relations. Both CR and ERD are important as they can potentially improve other natural language processing (NLP) related tasks…

  3. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put differently, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  4. Bridging a paradigmatic financial model and nonextensive entropy

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.; Tsallis, C.

    2005-03-01

    Engle's ARCH algorithm is a generator of stochastic time series for financial returns (and similar quantities) characterised by a time-dependent variance. It involves a memory parameter b (b = 0 corresponds to no memory), and a noise currently chosen to be Gaussian. We assume here a generalised noise, namely qn-Gaussian, characterised by an index qn ∈ ℝ (qn = 1 recovers the Gaussian case, and qn > 1 corresponds to tailed distributions). Supported by the recently introduced concept of superstatistics, we match the second and fourth moments of the ARCH return distribution with those associated with the q-Gaussian distribution obtained through optimisation of the entropy Sq = (1 − ∑i pi^q)/(q − 1), the basis of nonextensive statistical mechanics. The outcome is an analytic distribution for returns, where a unique q ≥ qn corresponds to each pair (b, qn) (q = qn if b = 0). This distribution is compared with numerical results and appears to be remarkably precise. This system constitutes a simple, low-dimensional, dynamical mechanism which fits well within the current nonextensive framework.

  5. Entropy production analysis for hump characteristics of a pump turbine model

    NASA Astrophysics Data System (ADS)

    Li, Deyou; Gong, Ruzhi; Wang, Hongjie; Xiang, Gaoming; Wei, Xianzhu; Qin, Daqing

    2016-07-01

    The hump characteristic is one of the main problems for the stable operation of pump turbines in pump mode. However, traditional methods cannot directly reflect the energy dissipation in the hump region. In this paper, 3D simulations are carried out using the SST k-ω turbulence model in pump mode under different guide vane openings. The numerical results agree with the experimental data. The entropy production theory is introduced to determine the flow losses in the whole passage, based on the numerical simulation. The variation of entropy production under different guide vane openings is presented. The results show that the entropy production varies in a wave-like manner, with peaks under different guide vane openings that correspond to troughs in the external characteristic curves. Entropy production mainly occurs in the runner, guide vanes and stay vanes for a pump turbine in pump mode. Finally, the entropy production rate distribution in the runner, guide vanes and stay vanes is analyzed for four points under the 18 mm guide vane opening in the hump region. The analysis indicates that the losses of the runner and guide vanes lead to the hump characteristics. In addition, the losses mainly occur at the runner inlet near the band and on the suction surface of the blades. In the guide vanes and stay vanes, the losses come from the pressure surface of the guide vanes and the wake effects of the vanes. A new approach, entropy production analysis, is applied in this paper to find the causes of hump characteristics in a pump turbine, and it could provide some basic theoretical guidance for the loss analysis of hydraulic machinery.

  6. On the Maximum Storage Capacity of the Hopfield Model

    PubMed Central

    Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo

    2017-01-01

    Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the trade-off between efficiency and effectiveness is governed by the number of patterns (P) stored in the network through an appropriate choice of the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity, able to retrieve the desired pattern without distortions. PMID:28119595
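
    A minimal sketch of the kind of experiment described above (assumed, not the authors' code): Hebbian couplings with the diagonal retained, zero-temperature updates, and retrieval errors counted at finite N in the P >> N regime:

      import numpy as np

      rng = np.random.default_rng(1)
      N, P = 100, 500                        # P >> N regime discussed above
      xi = rng.choice([-1, 1], size=(P, N))  # stored patterns
      W = xi.T @ xi / N                      # Hebbian couplings, diagonal kept

      def retrieval_errors(pattern, steps=20):
          s = pattern.copy()
          for _ in range(steps):             # synchronous zero-temperature updates
              s = np.where(W @ s >= 0, 1, -1)
          return int(np.sum(s != pattern))

      errors = [retrieval_errors(xi[mu]) for mu in range(20)]
      print(f"mean retrieval errors over 20 probed patterns: {np.mean(errors):.1f}")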

  7. A simple modelling approach for prediction of standard state real gas entropy of pure materials.

    PubMed

    Bagheri, M; Borhani, T N G; Gandomi, A H; Manan, Z A

    2014-01-01

    The performance of an energy conversion system depends on exergy analysis and entropy generation minimisation. A new simple four-parameter equation is presented in this paper to predict the standard state absolute entropy of real gases (SSTD). The model development and validation were accomplished using the Linear Genetic Programming (LGP) method and a comprehensive dataset of 1727 widely used materials. The proposed model was compared with the results obtained using a three-layer feed forward neural network model (FFNN model). The root-mean-square error (RMSE) and the coefficient of determination (r²) of all data obtained for the LGP model were 52.24 J/(mol K) and 0.885, respectively. Several statistical assessments were used to evaluate the predictive power of the model. In addition, this study provides an appropriate understanding of the most important molecular variables for exergy analysis. Compared with the LGP based model, the application of FFNN improved the r² to 0.914. The developed model is useful in the design of materials to achieve a desired entropy value.

  8. The Entropy Estimation of the Physics’ Course Content on the Basis of Intradisciplinary Connections’ Information Model

    NASA Astrophysics Data System (ADS)

    Tatyana, Gnitetskaya

    2016-08-01

    In this paper, the information model of intradisciplinary connections and the semantic structures method are described. The information parameters used in the information model are introduced. The question we would like to answer in this paper is how to optimize the content of a physics course. As an example, we show the differences between entropy values in the content of a physics lecture with a single topic but different logics of explanation.

  9. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    PubMed

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored with no personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms.
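
    The uniformity measure at the heart of the method can be illustrated in a few lines (assumed readings, not the authors' system): normalise the ring of exhaust thermocouple readings into a distribution and take its Shannon entropy, which drops when one burner path runs hot or cold:

      import numpy as np

      def temperature_entropy(temps):
          p = np.asarray(temps, float)
          p = p / p.sum()                   # normalise readings to a distribution
          return float(-np.sum(p * np.log(p)))

      healthy = [610, 605, 612, 608, 606, 611]   # uniform exhaust ring
      faulty = [610, 605, 500, 608, 606, 611]    # one path running cold
      for name, t in [("healthy", healthy), ("faulty", faulty)]:
          print(name, temperature_entropy(t) / np.log(len(t)))   # 1.0 = perfectly uniform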

  10. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    PubMed Central

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored with no personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce the information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms. PMID:25258726

  11. Assessment Model of Ecoenvironmental Vulnerability Based on Improved Entropy Weight Method

    PubMed Central

    Zhang, Xianqi; Wang, Chenbo; Li, Enkuan; Xu, Cundong

    2014-01-01

    Assessment of ecoenvironmental vulnerability plays an important role in the guidance of regional planning and the construction and protection of the ecological environment, and it requires comprehensive consideration of regional resources, environment, ecology, society and other factors. Based on the driving mechanism and evolution characteristics of ecoenvironmental vulnerability in the cold and arid regions of China, a novel evaluation index system for ecoenvironmental vulnerability is proposed in this paper. To address the disadvantages of the conventional entropy weight method, an improved entropy weight assessment model of ecoenvironmental vulnerability is developed and applied to evaluate the ecoenvironmental vulnerability in western Jilin Province of China. The assessment results indicate that the model is suitable for ecoenvironmental vulnerability assessment, and that it yields a more reasonable evaluation criterion, more distinct insights and satisfactory results consistent with practical conditions. The model can provide a new method for regional ecoenvironmental vulnerability evaluation. PMID:25133260
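
    For reference, the conventional entropy weight method that the paper improves upon can be sketched as follows (illustrative evaluation matrix; the paper's improvement is not reproduced here):

      import numpy as np

      X = np.array([[0.6, 0.2, 0.9],    # rows: regions, columns: indicators
                    [0.4, 0.7, 0.3],    # (illustrative, already positive)
                    [0.8, 0.5, 0.6]])

      P = X / X.sum(axis=0)                                  # column-normalised shares
      E = -np.sum(P * np.log(P), axis=0) / np.log(len(X))    # entropy of each indicator
      w = (1 - E) / np.sum(1 - E)                            # low entropy -> high weight
      print("entropy:", E.round(3), "weights:", w.round(3))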

  12. Backward transfer entropy: Informational measure for detecting hidden Markov models and its interpretations in thermodynamics, gambling and causality

    PubMed Central

    Ito, Sosuke

    2016-01-01

    The transfer entropy is a well-established measure of information flow, which quantifies directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that the backward transfer entropy quantifies how far the dynamics are from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in completely different settings: thermodynamics for information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of some benefit, where the conventional transfer entropy characterizes a possible benefit. Our result implies a deep connection between thermodynamics and gambling in the presence of information flow, and suggests that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, biochemical sciences, economics and statistics. PMID:27833120
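
    A minimal plug-in estimate of lag-1 transfer entropy for binary series, with the backward variant obtained by reversing time (an illustrative sketch, not the paper's formalism):

      import numpy as np
      from collections import Counter

      def transfer_entropy(x, y):
          """T(X->Y) = sum p(y', y, x) log[ p(y'|y, x) / p(y'|y) ], lag 1."""
          triples = Counter(zip(y[1:], y[:-1], x[:-1]))
          pairs_yx = Counter(zip(y[:-1], x[:-1]))
          pairs_yy = Counter(zip(y[1:], y[:-1]))
          singles = Counter(y[:-1].tolist())
          n = len(y) - 1
          te = 0.0
          for (y1, y0, x0), c in triples.items():
              p_cond_x = c / pairs_yx[(y0, x0)]
              p_cond = pairs_yy[(y1, y0)] / singles[y0]
              te += (c / n) * np.log(p_cond_x / p_cond)
          return te

      rng = np.random.default_rng(3)
      x = rng.integers(0, 2, 5000)
      y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)        # y copies x with 10% noise
      print("forward :", transfer_entropy(x, y))           # clearly positive
      print("backward:", transfer_entropy(x[::-1], y[::-1]))   # near zero here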

  13. Critical behavior of entropy production and learning rate: Ising model with an oscillating field

    NASA Astrophysics Data System (ADS)

    Zhang, Yirui; Barato, Andre C.

    2016-11-01

    We study the critical behavior of the entropy production of the Ising model subject to a magnetic field that oscillates in time. The mean-field model displays a phase transition that can be either first or second-order, depending on the amplitude of the field and on the frequency of oscillation. Within this approximation the entropy production rate is shown to have a discontinuity when the transition is first-order and to be continuous, with a jump in its first derivative, if the transition is second-order. In two dimensions, we find with numerical simulations that the critical behavior of the entropy production rate is the same, independent of the frequency and amplitude of the field. Its first derivative has a logarithmic divergence at the critical point. This result is in agreement with the lack of a first-order phase transition in two dimensions. We analyze a model with a field that changes at stochastic time-intervals between two values. This model allows for an information-theoretic interpretation, with the system acting as a sensor that follows the external field. We calculate numerically a lower bound on the learning rate, which quantifies how much information the system obtains about the field. Its first derivative with respect to temperature is found to have a jump at the critical point.

  14. Quantum and Ecosystem Entropies

    NASA Astrophysics Data System (ADS)

    Kirwan, A. D.

    2008-06-01

    Ecosystems and quantum gases share a number of superficial similarities, including enormous numbers of interacting elements and the fundamental role of energy in such interactions. A theory for the synthesis of data and prediction of new phenomena is well established in quantum statistical mechanics. The premise of this paper is that the reason a comparable unifying theory has not emerged in ecology is that a proper role for entropy has yet to be assigned. To this end, a phase space entropy model of ecosystems is developed. Specification of an ecosystem phase space cell size based on microbial mass, length, and time scales gives an ecosystem uncertainty parameter only about three orders of magnitude larger than Planck's constant. Ecosystem equilibria are specified by conservation of biomass and total metabolic energy, along with the principle of maximum entropy at equilibrium. Both Bose-Einstein and Fermi-Dirac equilibrium conditions arise in ecosystem applications. The paper concludes with a discussion of some broader aspects of an ecosystem phase space.

  15. One- and two-dimensional quantum models: Quenches and the scaling of irreversible entropy.

    PubMed

    Sharma, Shraddha; Dutta, Amit

    2015-08-01

    Using the scaling relation of the ground state quantum fidelity, we propose the most generic scaling relations of the irreversible work (the residual energy) of a closed quantum system at absolute zero temperature when one of the parameters of its Hamiltonian is suddenly changed. We consider two extreme limits: the heat susceptibility limit and the thermodynamic limit. It is argued that the irreversible entropy generated for a thermal quench at low enough temperatures when the system is initially in a Gibbs state is likely to show a similar scaling behavior. To illustrate this proposition, we consider zero-temperature and thermal quenches in one-dimensional (1D) and 2D Dirac Hamiltonians where the exact estimation of the irreversible work and the irreversible entropy is possible. Exploiting these exact results, we then establish the following. (i) The irreversible work at zero temperature shows an appropriate scaling in the thermodynamic limit. (ii) The scaling of the irreversible work in the 1D Dirac model at zero temperature shows logarithmic corrections to the scaling, which is a signature of a marginal situation. (iii) Remarkably, the logarithmic corrections do indeed appear in the scaling of the entropy generated if the temperature is low enough while they disappear for high temperatures. For the 2D model, no such logarithmic correction is found to appear.

  16. Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.

    2013-01-01

    Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…

  17. Computing minimal entropy production trajectories: An approach to model reduction in chemical kinetics

    NASA Astrophysics Data System (ADS)

    Lebiedz, D.

    2004-04-01

    Advanced experimental techniques in chemistry and physics provide increasing access to detailed deterministic mass action models for chemical reaction kinetics. Especially in complex technical or biochemical systems, the huge number of species and reaction pathways involved in a detailed modeling approach calls for efficient methods of model reduction. These should be automatic and based on a firm mathematical analysis of the ordinary differential equations underlying the chemical kinetics in deterministic models. A main purpose of model reduction is to enable accurate numerical simulations of even high dimensional and spatially extended reaction systems. The latter include physical transport mechanisms and are modeled by partial differential equations. Their numerical solution for hundreds or thousands of species within a reasonable time will exceed computer capacities available now and in the foreseeable future. The central idea of model reduction is to replace the high dimensional dynamics by a low dimensional approximation with an appropriate degree of accuracy. Here I present a global approach to model reduction based on the concept of minimal entropy production, together with its numerical implementation. For given values of a single species concentration in a chemical system, all other species concentrations are computed under the assumption that the system is as close as possible to its attractor, the thermodynamic equilibrium, in the sense that all modes of thermodynamic forces are maximally relaxed except the one that drives the remaining system dynamics. This relaxation is expressed in terms of minimal entropy production for single reaction steps along phase space trajectories.

  18. Absolute Entropy and Energy of Carbon Dioxide Using the Two-Phase Thermodynamic Model.

    PubMed

    Huang, Shao-Nung; Pascal, Tod A; Goddard, William A; Maiti, Prabal K; Lin, Shiang-Tai

    2011-06-14

    The two-phase thermodynamic (2PT) model is used to determine the absolute entropy and energy of carbon dioxide over a wide range of conditions from molecular dynamics trajectories. The 2PT method determines the thermodynamic properties by applying the proper statistical mechanical partition function to the normal modes of a fluid. The vibrational density of states (DoS), obtained from the Fourier transform of the velocity autocorrelation function, converges quickly, allowing the free energy, entropy, and other thermodynamic properties to be determined from short 20-ps MD trajectories. The anharmonic effects in the vibrations are accounted for by the broadening of the normal modes into bands from sampling the velocities over the trajectory. The low frequency diffusive modes, which lead to a finite DoS at zero frequency, are accounted for by considering the DoS as a superposition of gas-phase and solid-phase components (two phases). The analytical decomposition of the DoS allows for an evaluation of the properties contributed by different types of molecular motion. We show that this 2PT analysis leads to accurate predictions of the entropy and energy of CO2 over a wide range of conditions (from the triple point to the critical point of both the vapor and the liquid phases along the saturation line). This allows the equation of state of CO2 to be determined, limited only by the accuracy of the force field. We also validated that the 2PT entropy agrees with that determined from thermodynamic integration, while 2PT requires only a fraction of the time. A complication for CO2 is that its equilibrium configuration is linear, which would give only two rotational modes; during the dynamics, however, it is never exactly linear, so that a third mode arises from rotation about the molecular axis. In this work, we show how to treat such linear molecules in the 2PT framework.
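
    The first step of the method, the density of states from the velocity autocorrelation function, can be sketched generically (white-noise placeholder velocities; 2PT prefactors and the gas/solid decomposition are omitted):

      import numpy as np

      def density_of_states(velocities, mass, dt):
          """velocities: (n_frames, n_atoms, 3) array from a short MD trajectory."""
          nf = velocities.shape[0]
          vacf = np.zeros(nf)
          for atom in range(velocities.shape[1]):
              for k in range(3):
                  v = velocities[:, atom, k]
                  vacf += np.correlate(v, v, mode="full")[nf - 1:] / nf
          dos = 2 * mass * dt * np.abs(np.fft.rfft(vacf))   # FT of mass-weighted VACF
          return np.fft.rfftfreq(nf, d=dt), dos             # (2/kT factor omitted)

      rng = np.random.default_rng(0)
      v_traj = rng.normal(size=(1024, 10, 3))   # placeholder, not real MD output
      freqs, dos = density_of_states(v_traj, mass=44.0, dt=1e-15)
      print(dos[0] > 0)   # finite DoS at zero frequency flags the diffusive component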

  19. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  20. RNA Thermodynamic Structural Entropy.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three-dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element; we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy; and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner'99 and Turner'04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http

  1. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    ERIC Educational Resources Information Center

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consist of two stages. The method that has been found to perform best in terms of statistical…

  2. Finite-size scaling of entanglement entropy in one-dimensional topological models

    NASA Astrophysics Data System (ADS)

    Wang, Yuting; Gulden, Tobias; Kamenev, Alex

    2017-02-01

    We consider the scaling of the entanglement entropy across a topological quantum phase transition for the Kitaev chain model. The change of the topology manifests itself in a subleading term, which scales as L^(-1/α) with the size of the subsystem L, where α is the Rényi index. This term reveals the scaling function hα(L/ξ), where ξ is the correlation length, which is sensitive to the topological index. The scaling function hα(L/ξ) is independent of model parameters, suggesting some degree of universality.

  3. Global models of Ne and Te at solar maximum based on DE-2 measurements

    NASA Technical Reports Server (NTRS)

    Brace, L. H.; Theis, R. F.

    1990-01-01

    Newly developed global models of the electron density (Ne) and electron temperature (Te) in the F-region at solar maximum, based on Langmuir probe measurements from the Dynamics Explorer-2 satellite, are compared with solar minimum models that were developed earlier from Atmosphere Explorer data. Spherical harmonics are used in both models to describe the variations with geomagnetic latitude and local time, but the solar maximum model also includes longitudinal variations. The solar minimum models were for the fixed altitudes of 300 km and 400 km, while the solar maximum model covers all altitudes between 300 and 1000 km. In this paper, the global patterns of Ne and Te at 400 km at solar maximum and minimum are compared with the IRI model for the corresponding parts of the solar cycle. In most respects, the IRI model describes the empirical models quite well at both solar maximum and solar minimum.

  4. Efficiently Evaluating Heavy Metal Urban Soil Pollution Using an Improved Entropy-Method-Based Topsis Model.

    PubMed

    Liu, Jie; Liu, Chun; Han, Wei

    2016-10-01

    Urban soil pollution is evaluated utilizing an efficient and simple algorithmic model referred to as the entropy-method-based Topsis (EMBT) model. The model focuses on pollution source position to enhance the ability to analyze sources of pollution accurately; the model is here applied to urban soil pollution analysis for the first time. The pollution degree of a sampling point can be efficiently calculated by the model using the pollution degree coefficient, which is obtained by first utilizing the Topsis method to determine an evaluation value and then dividing the evaluation value of the sample point by a background value. The Kriging interpolation method combines the coordinates of sampling points with the corresponding coefficients and facilitates the formation of heavy metal distribution profiles. A case study is completed, with modeling results in accordance with actual heavy metal pollution, demonstrating the accuracy and practicality of the EMBT model.
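
    A minimal sketch of the EMBT chain as described (entropy weights, Topsis evaluation value, division by a background value); the evaluation matrix and background value are invented:

      import numpy as np

      X = np.array([[35.0, 80.0, 0.30],   # rows: sampling points
                    [60.0, 95.0, 0.55],   # columns: Cd, Pb, Hg (illustrative)
                    [20.0, 40.0, 0.10]])
      background = 0.25                    # background evaluation value (assumed)

      P = X / X.sum(axis=0)
      E = -np.sum(P * np.log(P), axis=0) / np.log(len(X))
      w = (1 - E) / np.sum(1 - E)                      # entropy weights

      V = w * X / np.linalg.norm(X, axis=0)            # weighted normalised matrix
      ideal, anti = V.max(axis=0), V.min(axis=0)       # most/least polluted points
      d_ideal = np.linalg.norm(V - ideal, axis=1)
      d_anti = np.linalg.norm(V - anti, axis=1)
      evaluation = d_anti / (d_ideal + d_anti)         # Topsis evaluation value
      print("pollution degree coefficients:", (evaluation / background).round(2))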

  5. Theoretical estimation of kinetic parameters for nucleophilic substitution reactions in solution: an application of a solution translational entropy model.

    PubMed

    Han, Ling-Li; Li, Shi-Jun; Fang, De-Cai

    2016-02-17

    The kinetic parameters, such as the activation entropy, activation enthalpy, activation free energy, and reaction rate constant, for a series of nucleophilic substitution (SN) reactions in solution are investigated using both a solution-phase translational entropy model and an ideal gas-phase translational entropy model. The results obtained from the solution translational entropy model are in excellent agreement with the experimental values, while the overestimation of the activation free energy from the ideal gas-phase translational entropy model is as large as 6.9 kcal mol(-1). For some of the reactions studied in methanol and in aqueous solution, the explicit + implicit model, namely a cluster-continuum type model, should be employed to account for the strong solvent-solute interactions. In addition, the explicit + implicit models have also been applied to DMSO-H2O mixtures, which opens the door to investigating reactions in mixed solvents using density functional theory (DFT) methods.
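
    The size of the effect being corrected can be seen from the ideal-gas translational entropy alone. A minimal Sackur-Tetrode sketch (not the paper's solution-phase model; the "liquid" free volume below is a crude placeholder):

      import numpy as np

      kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618

      def s_trans(mass_amu, T, volume_per_molecule):
          """Sackur-Tetrode translational entropy in J/(mol K)."""
          m = mass_amu * 1.66053907e-27
          lam3 = (h**2 / (2 * np.pi * m * kB * T)) ** 1.5   # thermal wavelength cubed
          return R * (np.log(volume_per_molecule / lam3) + 2.5)

      T = 298.15
      v_gas = kB * T / 101325.0          # ideal gas at 1 atm, volume per molecule
      v_liq = v_gas / 1000.0             # placeholder solution-phase free volume
      dS = s_trans(32.04, T, v_gas) - s_trans(32.04, T, v_liq)   # methanol-like mass
      print(f"T*dS at 298 K: {T * dS / 4184:.1f} kcal/mol")   # kcal/mol scale of the correction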

  6. Maximum Likelihood Reconstruction for Ising Models with Asynchronous Updates

    NASA Astrophysics Data System (ADS)

    Zeng, Hong-Li; Alava, Mikko; Aurell, Erik; Hertz, John; Roudi, Yasser

    2013-05-01

    We describe how the couplings in an asynchronous kinetic Ising model can be inferred. We consider two cases: one in which we know both the spin history and the update times and one in which we know only the spin history. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and can also be derived from the equations of motion for the correlations. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectations.

  7. Modeling specific heat and entropy change in La(Fe-Mn-Si)13-H compounds

    NASA Astrophysics Data System (ADS)

    Piazzi, Marco; Bennati, Cecilia; Curcio, Carmen; Kuepferling, Michaela; Basso, Vittorio

    2016-02-01

    In this paper we model the magnetocaloric effect of the LaFexMnySiz-H1.65 compound (x + y + z = 13), a system showing a transition temperature finely tunable around room temperature by Mn substitution. The thermodynamic model takes into account the coupling between magnetism and specific volume as introduced by Bean and Rodbell. We find good qualitative agreement between the experimental and modeled entropy change -Δs(H, T). The main result is that the magnetoelastic coupling drives the phase transition of the system, changing it from second to first order as a model parameter η is varied. It is also responsible for a decrease of -Δs at the transition, due to a small lattice contribution to the entropy counteracting the effect of the magnetic one. The role of Mn is reflected exclusively in a decrease of the strength of the exchange interaction, while the value of the coefficient β, responsible for the coupling between volume and exchange energy, is independent of the Mn content and appears to be an intrinsic property of the La(Fe-Si)13 structure.

  8. An entropy spring model for the Young's modulus change of biodegradable polymers during biodegradation.

    PubMed

    Wang, Ying; Han, Xiaoxiao; Pan, Jingzhe; Sinka, Csaba

    2010-01-01

    This paper presents a model for the change in Young's modulus of biodegradable polymers due to hydrolysis cleavage of the polymer chains. The model is based on the entropy spring theory for amorphous polymers. It is assumed that isolated polymer chain cleavages and very short polymer chains do not affect the entropy change in a linear biodegradable polymer during its deformation. It is then possible to relate the Young's modulus to the average molecular weight in a computer-simulated hydrolysis process of polymer chain scissions. The experimental data obtained by Tsuji [Tsuji, H., 2002. Autocatalytic hydrolysis of amorphous-made polylactides: Effects of L-lactide content, tacticity, and enantiomeric polymer blending. Polymer 43, 1789-1796] for poly(L-lactic acid) and poly(D-lactic acid) are examined using the model. It is shown that the model can provide a common thread through Tsuji's experimental data. A further numerical case study demonstrates that the Young's modulus obtained using very thin samples, such as those obtained by Tsuji, cannot be directly used to calculate the load carried by a device made of the same polymer but of various thicknesses. This is because the Young's modulus varies significantly in a biodegradable device due to the heterogeneous nature of the hydrolysis reaction. The governing equations for biodegradation and the relation between the Young's modulus and average molecular weight can be combined to calculate the load transfer from a degrading device to a healing bone.

  9. Entanglement entropy and massless phase in the antiferromagnetic three-state quantum chiral clock model

    NASA Astrophysics Data System (ADS)

    Dai, Yan-Wei; Cho, Sam Young; Batchelor, Murray T.; Zhou, Huan-Qiang

    2017-01-01

    The von Neumann entanglement entropy is used to estimate the critical point hc/J ≃ 0.143(3) of the mixed ferro-antiferromagnetic three-state quantum Potts model H = ∑i [J(XiXi+1² + Xi²Xi+1) − hRi], where Xi and Ri are standard three-state Potts spin operators and J > 0 is the antiferromagnetic coupling parameter. This critical point value gives improved estimates for two Kosterlitz-Thouless transition points in the antiferromagnetic (β < 0) region of the Δ-β phase diagram of the three-state quantum chiral clock model, where Δ and β are, respectively, the chirality and coupling parameters in the clock model. These are the transition points βc ≃ −0.143(3) at Δ = 1/2 between incommensurate and commensurate phases, and βc ≃ −7.0(1) at Δ = 0 between disordered and incommensurate phases. The von Neumann entropy is also used to calculate the central charge c of the underlying conformal field theory in the massless phase h ≤ hc. The estimate c ≃ 1 in this phase is consistent with the known exact value at the particular point h/J = −1 corresponding to the purely antiferromagnetic three-state quantum Potts model. The algebraic decay of the Potts spin-spin correlation in the massless phase is used to estimate the continuously varying critical exponent η.

  10. Periodic matrix population models: growth rate, basic reproduction number, and entropy.

    PubMed

    Bacaër, Nicolas

    2009-10-01

    This article considers three different aspects of periodic matrix population models. First, a formula for the sensitivity analysis of the growth rate lambda is obtained that is simpler than the one obtained by Caswell and Trevisan. Secondly, the formula for the basic reproduction number R0 in a constant environment is generalized to the case of a periodic environment. Some inequalities between lambda and R0 proved by Cushing and Zhou are also generalized to the periodic case. Finally, we add some remarks on Demetrius' notion of evolutionary entropy H and its relationship to the growth rate lambda in the periodic case.
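
    The growth rate in the periodic case is the dominant eigenvalue of the product of the seasonal projection matrices over one period; a minimal sketch with two invented seasons:

      import numpy as np

      A_summer = np.array([[0.0, 2.0],   # fecundity season (illustrative)
                           [0.5, 0.6]])
      A_winter = np.array([[0.0, 0.3],   # harsh season (illustrative)
                           [0.2, 0.7]])

      annual = A_winter @ A_summer       # one full period; season order matters
      lam = max(abs(np.linalg.eigvals(annual)))
      print(f"periodic growth rate lambda = {lam:.3f}")   # > 1 means growth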

  11. Entropy and econophysics

    NASA Astrophysics Data System (ADS)

    Rosser, J. Barkley

    2016-12-01

    Entropy is a central concept of statistical mechanics, which is the main branch of physics underlying econophysics, the application of physics concepts to understand economic phenomena. It enters econophysics in an ontological way through the Second Law of Thermodynamics, which drives the world economy from its ecological foundations: solar energy passes through food chains in a dissipative process of rising entropy, and production fundamentally involves the replacement of lower-entropy energy states with higher-entropy ones. In contrast, the mathematics of entropy as it appears in information theory becomes the basis for modeling financial market dynamics as well as income and wealth distribution dynamics. It also provides the basis for an alternative view of stochastic price equilibria in economics, as well as a crucial link between econophysics and sociophysics, keeping in mind the essential unity of the various concepts of entropy.

  12. Efficient entropy estimation based on doubly stochastic models for quantized wavelet image data.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-04-01

    Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients, the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits-per-pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several of state-of-the-art coders.

  13. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune the model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.

  14. Dynamics of Entropy in Quantum-like Model of Decision Making

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei; Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu

    2011-03-01

    We present a quantum-like model of decision making in games of the Prisoner's Dilemma type. In this model the brain processes information by using representations of mental states in a complex Hilbert space. Driven by the master equation, the mental state of a player, say Alice, approaches an equilibrium point in the space of density matrices. By using this equilibrium point Alice determines her mixed (i.e., probabilistic) strategy with respect to Bob. Thus our model is a model of thinking through decoherence of an initially pure mental state. Decoherence is induced by interaction with memory and the external environment. In this paper we study (numerically) the dynamics of the quantum entropy of Alice's state in the process of decision making. Our analysis demonstrates that these dynamics depend nontrivially on the initial state of Alice's mind concerning her own actions and her prediction state (for possible actions of Bob).
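
    The quantity tracked numerically in the paper is the von Neumann entropy of the evolving state. A toy sketch (invented relaxation dynamics, not the paper's master equation) of a pure state decohering toward a maximally mixed equilibrium:

      import numpy as np

      def von_neumann_entropy(rho):
          ev = np.linalg.eigvalsh(rho)
          ev = ev[ev > 1e-12]
          return float(-np.sum(ev * np.log(ev)))

      rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # initially pure state, S = 0
      eq = np.eye(2) / 2                         # equilibrium point (assumed)
      for step in range(5):
          rho = 0.7 * rho + 0.3 * eq             # toy decoherence step
          print(step, round(von_neumann_entropy(rho), 4))   # rises toward ln 2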

  15. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, as well as to maintain a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of the rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results prove the precision of our models.

  16. The impact of resolution upon entropy and information in coarse-grained models

    SciTech Connect

    Foley, Thomas T.; Shell, M. Scott; Noid, W. G.

    2015-12-28

    By eliminating unnecessary degrees of freedom, coarse-grained (CG) models tremendously facilitate numerical calculations and theoretical analyses of complex phenomena. However, their success critically depends upon the representation of the system and the effective potential that governs the CG degrees of freedom. This work investigates the relationship between the CG representation and the many-body potential of mean force (PMF), W, which is the appropriate effective potential for a CG model that exactly preserves the structural and thermodynamic properties of a given high resolution model. In particular, we investigate the entropic component of the PMF and its dependence upon the CG resolution. This entropic component, S_W, is a configuration-dependent relative entropy that determines the temperature dependence of W. As a direct consequence of eliminating high resolution details from the CG model, the coarsening process transfers configurational entropy and information from the configuration space into S_W. In order to further investigate these general results, we consider the popular Gaussian Network Model (GNM) for protein conformational fluctuations. We analytically derive the exact PMF for the GNM as a function of the CG representation. In the case of the GNM, −TS_W is a positive, configuration-independent term that depends upon the temperature, the complexity of the protein interaction network, and the details of the CG representation. This entropic term demonstrates similar behavior for seven model proteins and also suggests, in each case, that certain resolutions provide a more efficient description of protein fluctuations. These results may provide general insight into the role of resolution for determining the information content, thermodynamic properties, and transferability of CG models. Ultimately, they may lead to a rigorous and systematic framework for optimizing the representation of CG models.
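
    The GNM piece of the analysis is compact enough to sketch (random placeholder coordinates; prefactors and additive constants dropped): build the Kirchhoff matrix from the contact map and read the configurational entropy off its nonzero spectrum:

      import numpy as np

      rng = np.random.default_rng(2)
      coords = 5.0 * rng.normal(size=(30, 3))   # placeholder C-alpha coordinates
      cutoff = 7.0                              # contact cutoff in the same units

      d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
      contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
      kirchhoff = np.diag(contact.sum(1)) - contact.astype(float)

      ev = np.linalg.eigvalsh(kirchhoff)
      nonzero = ev[ev > 1e-8]                   # drop zero (global translation) modes
      entropy = -0.5 * np.sum(np.log(nonzero))  # Gaussian entropy, in kB, up to constants
      print(f"GNM configurational entropy term: {entropy:.2f} kB")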

  17. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    SciTech Connect

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally, limitations of

  18. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    DOE PAGES

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-08-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Finally

  19. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    USGS Publications Warehouse

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the

  20. Transfer entropy--a model-free measure of effective connectivity for the neurosciences.

    PubMed

    Vicente, Raul; Wibral, Michael; Lindner, Michael; Pipa, Gordon

    2011-02-01

    Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain's activity is thought to be internally generated and, hence, quantifying stimulus-response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model-based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity to electrophysiological data based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.
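
    For readers who want to experiment with the measure, a minimal plug-in estimator of TE in bits is sketched below, assuming first-order histories and simple histogram binning; the function name and binning choice are illustrative, and serious MEG analyses such as this study's require longer embeddings and bias corrections well beyond this sketch.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of TE(X -> Y) in bits, using first-order
    histories and equal-width histogram discretization."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    # Joint counts over (y_{t+1}, y_t, x_t) and the needed marginals.
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    singles_y = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        num = c / pairs_yx[(y0, x0)]              # p(y1 | y0, x0)
        den = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p_joint * np.log2(num / den)
    return te
```

    For a pair of series where x drives y with a lag, transfer_entropy(x, y) will typically exceed transfer_entropy(y, x), which is the asymmetry the test exploits.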

  1. Combined Population Dynamics and Entropy Modelling Supports Patient Stratification in Chronic Myeloid Leukemia

    NASA Astrophysics Data System (ADS)

    Brehme, Marc; Koschmieder, Steffen; Montazeri, Maryam; Copland, Mhairi; Oehler, Vivian G.; Radich, Jerald P.; Brümmendorf, Tim H.; Schuppert, Andreas

    2016-04-01

    Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer progression, biomarker identification and the design of individualized therapies. Using chronic myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification at unprecedented resolution. Linking CD34+ similarity as a disease progression marker to patient-derived gene expression entropy separated established CML progression stages and uncovered additional heterogeneity within disease stages. Importantly, our patient-data-informed model enables quantitative approximation of individual patients’ disease history within chronic phase (CP) and significantly separates “early” from “late” CP. Our findings provide a novel rationale for personalized and genome-informed disease progression risk assessment that is independent of and complementary to conventional measures of CML disease burden and prognosis.

  2. Generalized second law of thermodynamics for non-canonical scalar field model with corrected-entropy

    NASA Astrophysics Data System (ADS)

    Das, Sudipta; Debnath, Ujjal; Mamon, Abdulla Al

    2015-10-01

    In this work, we have considered a non-canonical scalar field dark energy model in the framework of a flat FRW background. It has also been assumed that the dark matter sector interacts with the non-canonical dark energy sector through some interaction term. Using the solutions for this interacting non-canonical scalar field dark energy model, we have investigated the validity of the generalized second law (GSL) of thermodynamics in various scenarios using the first law and the area law of thermodynamics. For this purpose, we have considered two types of horizons for the universe, viz. the apparent horizon and the event horizon, and using the first law of thermodynamics we have examined the validity of the GSL on both. Next, we have considered two types of entropy corrections on the apparent and event horizons. Using the modified area law, we have examined the validity of the GSL of thermodynamics on the apparent and event horizons under some restrictions on the model parameters.

  3. Moving object detection using a background modeling based on entropy theory and quad-tree decomposition

    NASA Astrophysics Data System (ADS)

    Elharrouss, Omar; Moujahid, Driss; Elkah, Samah; Tairi, Hamid

    2016-11-01

    A particular algorithm for moving object detection using a background subtraction approach is proposed. We generate the background model by combining quad-tree decomposition with entropy theory. In general, many background subtraction approaches are sensitive to sudden illumination changes in the scene and cannot update the background image accordingly. The proposed background modeling approach analyzes the illumination change problem. After performing background subtraction based on the proposed background model, the moving targets can be accurately detected at each frame of the image sequence. In order to achieve high accuracy in the motion detection, the binary motion mask is computed by the proposed threshold function. The experimental analysis, based on statistical measurements, demonstrates the efficiency of the proposed method in terms of quality and quantity, and it even substantially outperforms existing methods in perceptual evaluation.
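
    One plausible way to combine the two ingredients named above is sketched here: compute a Shannon entropy per image block and split blocks recursively until they are homogeneous. This is an illustrative reading of the approach, not the authors' exact algorithm; the function names, threshold, and minimum block size are assumptions.

```python
import numpy as np

def block_entropy(block, levels=256):
    """Shannon entropy (bits) of a grayscale block's intensity histogram."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def quadtree_blocks(img, min_size=8, threshold=4.0):
    """Recursively split img into quadrants until a block is homogeneous
    (entropy below threshold) or the minimum size is reached."""
    h, w = img.shape
    if h <= min_size or w <= min_size or block_entropy(img) < threshold:
        return [img]
    h2, w2 = h // 2, w // 2
    out = []
    for rows in (slice(0, h2), slice(h2, h)):
        for cols in (slice(0, w2), slice(w2, w)):
            out.extend(quadtree_blocks(img[rows, cols], min_size, threshold))
    return out
```

    Per-block statistics of the resulting decomposition can then serve as the background model against which each new frame is differenced.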

  4. Entropy Production during Fatigue as a Criterion for Failure. The Critical Entropy Threshold: A Mathematical Model for Fatigue.

    DTIC Science & Technology

    1983-08-15

    …normally be expected just after a crack appears, the local maximum stress would normally be greater than the ultimate strength available from… crack has appeared, random yielding would still occur but would be significantly affected by the local stress concentration. Damping and fatigue could… The stress just before a crack appears would be greater than the stress just after the crack appears due to stress relief. The local increase in stress…

  5. Entropy generation analysis for film boiling: A simple model of quenching

    NASA Astrophysics Data System (ADS)

    Lotfi, Ali; Lakzian, Esmail

    2016-04-01

    In this paper, quenching in high-temperature materials processing is modeled as a superheated isothermal flat plate. In such processes a liquid flows over a highly superheated surface to cool it, and the surface and the liquid are separated by a vapor layer that forms where the liquid contacts the superheated surface; this is known as forced film boiling. As an objective, the distribution of the entropy generation in laminar forced film boiling is obtained by similarity solution, for the first time in the context of quenching processes. The governing PDEs of the laminar film boiling, including continuity, momentum, and energy, are reduced to ODEs, and a dimensionless equation for entropy generation inside the liquid boundary layer and vapor layer is obtained. The ODEs are then solved by applying the 4th-order Runge-Kutta method with a shooting procedure. Moreover, the Bejan number is used as a design criterion parameter for a qualitative study of the rate of cooling, and the effects of plate speed on the quenching process are studied. It is observed that for high plate speeds the rate of cooling (heat transfer) is higher.
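
    The similarity-reduction-plus-shooting solution described above follows a standard pattern. Since the film-boiling ODE system itself is not given in the abstract, the sketch below applies the same Runge-Kutta-plus-shooting procedure to the classical Blasius boundary-layer equation as a stand-in; SciPy's adaptive Runge-Kutta replaces a hand-rolled 4th-order scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius similarity ODE: f''' + (1/2) f f'' = 0,
# with f(0) = f'(0) = 0 and the far-field condition f'(inf) = 1.
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s, eta_max=10.0):
    """Shoot on the unknown curvature f''(0) = s; the residual is the
    mismatch in the far-field boundary condition f'(inf) = 1."""
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s], rtol=1e-8)
    return sol.y[1, -1] - 1.0

s_star = brentq(residual, 0.1, 1.0)   # root-find the shooting parameter
print(f"f''(0) = {s_star:.5f}")       # ~0.33206 for the Blasius problem
```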

  6. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  7. A positive and entropy-satisfying finite volume scheme for the Baer-Nunziato model

    NASA Astrophysics Data System (ADS)

    Coquel, Frédéric; Hérard, Jean-Marc; Saleh, Khaled

    2017-02-01

    We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer-Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer-Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer-Nunziato model, namely Schwendeman-Wahle-Kapila's Godunov-type scheme [39] and Tokareva-Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.

  8. Cross entropy: a new solver for Markov random field modeling and applications to medical image segmentation.

    PubMed

    Wu, Jue; Chung, Albert C S

    2005-01-01

    This paper introduces a novel solver, namely cross entropy (CE), into MRF theory for medical image segmentation. The solver, which is based on the theory of rare event simulation, is general and stochastic. Unlike some popular optimization methods such as belief propagation and graph cuts, CE makes no assumption on the form of the objective function and thus can be applied to any type of MRF model. Furthermore, owing to its stochastic nature, it performs better at finding global optima. In addition, it is more efficient than other stochastic methods like simulated annealing. We tested the new solver in four series of segmentation experiments on synthetic and clinical, vascular and cerebral images. The experiments show that CE can give more accurate segmentation results.
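
    The core of the CE method, independent of the MRF details, is a sample/select/update loop over a parameterized sampling distribution. The sketch below shows this loop for a binary labeling with independent Bernoulli parameters; the energy function, sample sizes, and smoothing constant are placeholders, not values from the paper.

```python
import numpy as np

def cross_entropy_minimize(energy, n_vars, n_samples=200, elite_frac=0.1,
                           smooth=0.7, iters=50):
    """Cross-entropy method for a binary labeling: sample labelings from
    independent Bernoulli distributions, keep the lowest-energy (elite)
    samples, and move the Bernoulli parameters toward elite frequencies."""
    p = np.full(n_vars, 0.5)                 # one Bernoulli parameter per site
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = (np.random.rand(n_samples, n_vars) < p).astype(int)
        scores = np.array([energy(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]
        p = smooth * elite.mean(axis=0) + (1 - smooth) * p
    return (p > 0.5).astype(int)

# Toy usage: an Ising-like energy on a 1-D chain that favors equal neighbors.
labels = cross_entropy_minimize(lambda s: np.sum(s[:-1] != s[1:]), n_vars=20)
```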

  9. Exact valence bond entanglement entropy and probability distribution in the XXX spin chain and the Potts model.

    PubMed

    Jacobsen, J L; Saleur, H

    2008-02-29

    We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L ≫ 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy, S_VB = ⟨N_c⟩ ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this quantity should be related to the von Neumann entropy and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.

  10. Application of maximum entropy method for the study of electron density distribution in SrS, BaS and PuS using powder X-ray data

    NASA Astrophysics Data System (ADS)

    Saravanan, R.

    2006-06-01

    A study of the electronic structure of three sulphides, SrS, BaS and PuS, has been carried out in this work, using powder X-ray intensity data from the JCPDS powder diffraction database. A statistical approach, the maximum entropy method (MEM), is used to analyse the electron density distribution in these materials, and an attempt has been made to understand the bonding between the metal atom and the sulphur atom. The mid-bond electron density is found to be highest for PuS among the three sulphides, being 0.584 e/Å^3 at 2.397 Å. SrS is found to have the lowest electron density at the mid-bond (0.003 e/Å^3, at 2.118 Å from the origin), leaving it more ionic than the other two sulphides studied in this work. The two-dimensional electron density maps on the (1 0 0) and (1 1 0) planes and the one-dimensional profiles along the bonding direction [1 1 1] are used for these analyses. The overall and individual Debye-Waller factors of the atoms in these systems have also been studied and analyzed. The refinements of the observed X-ray data were carried out using standard software packages and also a routine written by the author.

  11. EEG entropy measures in anesthesia

    PubMed Central

    Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli

    2015-01-01

    Highlights: (i) Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression. (ii) Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states. (iii) Approximate entropy and sample entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing the effect of anesthetic drugs is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP) in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State Entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets, from sevoflurane-induced and isoflurane-induced anesthesia respectively, were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. Multifractal detrended fluctuation analysis (MDFA), a non-entropy measure, was included for comparison. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. Three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R2) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, these entropy measures showed an advantage in computation

  12. Hierarchical minimax entropy modeling and probabilistic principal component visualization for data exploration

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Luo, Lan; Li, Haifeng; Freedman, Matthew T.

    1999-05-01

    As a step toward understanding the complex information from data and relationships, structural and discriminative knowledge reveals insight that may prove useful in data interpretation and exploration. This paper reports the development of an automated and intelligent procedure for generating a hierarchy of minimax entropy models and principal component visualization spaces for improved data explanation. The proposed hierarchical minimax entropy modeling and probabilistic principal component projection are both statistically principled and visually effective at revealing all of the interesting aspects of the data set. The methods involve multiple use of standard finite normal mixture models and probabilistic principal component projections. The strategy is that the top-level model and projection should explain the entire data set, best revealing the presence of clusters and relationships, while lower-level models and projections should display internal structure within individual clusters, such as the presence of subclusters and attribute trends, which might not be apparent in the higher-level models and projections. With many complementary mixture models and visualization projections, each level will be relatively simple while the complete hierarchy maintains overall flexibility yet still conveys considerable structural information. In particular, a model identification procedure is developed to select the optimal number and kernel shapes of local clusters from a class of data, resulting in a standard finite normal mixture with minimum conditional bias and variance, and a probabilistic principal component neural network is advanced to generate optimal projections, leading to a hierarchical visualization algorithm allowing the complete data set to be analyzed at the top level, with best-separated subclusters of data points analyzed at deeper levels. Hierarchical probabilistic principal component visualization involves (1) evaluation of posterior probabilities for

  13. The Holographic Entropy Cone

    SciTech Connect

    Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; Stoica, Bogdan; Sully, James; Walter, Michael

    2015-09-21

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.

  14. The Holographic Entropy Cone

    DOE PAGES

    Bao, Ning; Nezami, Sepehr; Ooguri, Hirosi; ...

    2015-09-21

    We initiate a systematic enumeration and classification of entropy inequalities satisfied by the Ryu-Takayanagi formula for conformal field theory states with smooth holographic dual geometries. For 2, 3, and 4 regions, we prove that the strong subadditivity and the monogamy of mutual information give the complete set of inequalities. This is in contrast to the situation for generic quantum systems, where a complete set of entropy inequalities is not known for 4 or more regions. We also find an infinite new family of inequalities applicable to 5 or more regions. The set of all holographic entropy inequalities bounds the phase space of Ryu-Takayanagi entropies, defining the holographic entropy cone. We characterize this entropy cone by reducing geometries to minimal graph models that encode the possible cutting and gluing relations of minimal surfaces. We find that, for a fixed number of regions, there are only finitely many independent entropy inequalities. To establish new holographic entropy inequalities, we introduce a combinatorial proof technique that may also be of independent interest in Riemannian geometry and graph theory.

  15. Integrating Entropy and Closed Frequent Pattern Mining for Social Network Modelling and Analysis

    NASA Astrophysics Data System (ADS)

    Adnan, Muhaimenul; Alhajj, Reda; Rokne, Jon

    The recent increase in explicitly available social networks has attracted the attention of the research community to investigate how it would be possible to benefit from such a powerful model in producing effective solutions for problems in other domains where the social network is implicit; we argue that social networks do exist around us but the key issue is how to realize and analyze them. This chapter presents a novel approach for constructing a social network model by an integrated framework that first prepares the data to be analyzed and then applies entropy and frequent closed pattern mining for network construction. For a given problem, we first prepare the data by identifying items and transactions, which are the basic ingredients for frequent closed pattern mining. Items are the main objects in the problem and a transaction is a set of items that could exist together at one time (e.g., items purchased in one visit to the supermarket). Transactions can be analyzed to discover frequent closed patterns using any of the well-known techniques. Frequent closed patterns have the advantage that they successfully capture the inherent information content of the dataset and are applicable to a broader set of domains. Entropies of the frequent closed patterns are used to keep the dimensionality of the feature vectors to a reasonable size; it is a kind of feature reduction process. Finally, we analyze the dynamic behavior of the constructed social network. Experiments were conducted on a synthetic dataset and on the Enron corpus email dataset. The results presented in the chapter show that social networks extracted from a feature set of frequent closed patterns successfully carry the community structure information. Moreover, for the Enron email dataset, we present an analysis to dynamically indicate the deviations from each user's individual and community profile. These indications of deviations can be very useful to identify unusual events.

  16. Modeling the Maximum Spreading of Liquid Droplets Impacting Wetting and Nonwetting Surfaces.

    PubMed

    Lee, Jae Bong; Derome, Dominique; Guyer, Robert; Carmeliet, Jan

    2016-02-09

    Droplet impact has been imaged on different rigid, smooth, and rough substrates for three liquids with different viscosity and surface tension, with special attention to the lower impact velocity range. Of all studied parameters, only surface tension and viscosity, thus the liquid properties, clearly play a role in terms of the attained maximum spreading ratio of the impacting droplet. Surface roughness and type of surface (steel, aluminum, and parafilm) slightly affect the dynamic wettability and maximum spreading at low impact velocity. The dynamic contact angle at maximum spreading has been identified to properly characterize this dynamic spreading process, especially at low impact velocity where dynamic wetting plays an important role. The dynamic contact angle is found to be generally higher than the equilibrium contact angle, showing that statically wetting surfaces can become less wetting or even nonwetting under dynamic droplet impact. An improved energy balance model for the maximum spreading ratio is proposed based on a correct analytical modeling of the time at maximum spreading, which determines the viscous dissipation. Experiments show that the time at maximum spreading decreases with impact velocity depending on the surface tension of the liquid, and a scaling with maximum spreading diameter and surface tension is proposed. A second improvement is based on the use of the dynamic contact angle at maximum spreading, instead of quasi-static contact angles, to describe the dynamic wetting process at low impact velocity. This improved model showed good agreement with experiments for the maximum spreading ratio versus impact velocity for different liquids, and a better prediction compared to other models in the literature. In particular, scaling according to We^(1/2) is found invalid for low velocities, since the curves bend over to higher maximum spreading ratios due to the dynamic wetting process.
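
    The energy balance underlying such maximum-spreading models equates the droplet's kinetic and surface energy at impact with the lamella's surface energy plus the viscous dissipation accumulated up to the time of maximum spreading; the paper's improvements enter through the dissipation time and the dynamic contact angle θ_dyn. The generic form below is a sketch of this balance, not the paper's exact expression.

```latex
% Energy balance between impact (left) and maximum spreading (right) for a
% droplet of diameter D_0, speed V_0, density rho, and surface tension sigma.
\begin{equation}
  \underbrace{\tfrac{1}{2}\rho V_0^2 \cdot \tfrac{\pi}{6} D_0^3}_{\text{kinetic}}
  + \underbrace{\pi D_0^2 \sigma}_{\text{surface}}
  =
  \underbrace{\tfrac{\pi}{4} D_{max}^2\, \sigma \left(1 - \cos\theta_{dyn}\right)}_{\text{surface at max spreading}}
  + \underbrace{\int_0^{t_{max}} \Phi_{visc}\, \mathrm{d}t}_{\text{viscous dissipation}} .
\end{equation}
```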

  17. A simple model for the dependence on local detonation speed of the product entropy

    NASA Astrophysics Data System (ADS)

    Hetherington, David C.; Whitworth, Nicholas J.

    2012-03-01

    The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has largely been upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent and s is realistically correlated with D.

  18. SELECTION OF CANDIDATE EUTROPHICATION MODELS FOR TOTAL MAXIMUM DAILY LOADS ANALYSES

    EPA Science Inventory

    A tiered approach was developed to evaluate candidate eutrophication models to select a common suite of models that could be used for Total Maximum Daily Loads (TMDL) analyses in estuaries, rivers, and lakes/reservoirs. Consideration for linkage to watershed models and ecologica...

  19. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  20. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.

  1. Atomistic-level non-equilibrium model for chemically reactive systems based on steepest-entropy-ascent quantum thermodynamics

    NASA Astrophysics Data System (ADS)

    Li, Guanchen; Al-Abbasi, Omar; von Spakovsky, Michael R.

    2014-10-01

    This paper outlines an atomistic-level framework for modeling the non-equilibrium behavior of chemically reactive systems. The framework, called steepest-entropy-ascent quantum thermodynamics (SEA-QT), is based on the paradigm of intrinsic quantum thermodynamics (IQT), a theory that unifies quantum mechanics and thermodynamics into a single discipline with wide applications to the study of non-equilibrium phenomena at the atomistic level. SEA-QT is a novel approach for describing the state of chemically reactive systems as well as the kinetic and dynamic features of the reaction process without any assumption of near-equilibrium states or weak interactions with a reservoir or bath. Entropy generation is the basis of the dissipation which takes place internal to the system and is, thus, the driving force of the chemical reaction(s). The SEA-QT non-equilibrium model is able to provide detailed information during the reaction process, giving a picture of the changes occurring in key thermodynamic properties (e.g., the instantaneous species concentrations, entropy and entropy generation, reaction coordinate, chemical affinities, reaction rate, etc.). As an illustration, the SEA-QT framework is applied to an atomistic-level chemically reactive system governed by the reaction mechanism F + H2 ⇌ FH + H.

  2. Generative Models for Similarity-based Classification

    DTIC Science & Technology

    2007-01-01

    …problem of estimating the class-conditional similarity probability models is solved by applying the maximum entropy principle, under the constraint that… model. The SDA class-conditional probability models have exponential form, because they are derived as the maximum entropy distributions subject to… exist because the constraints are based on the data. As prescribed by Jaynes' principle of maximum entropy [34], a unique class-conditional joint…

  3. Entropy-corrected new agegraphic dark energy model in the context of Chern-Simons modified gravity

    NASA Astrophysics Data System (ADS)

    Aly, Ayman A.; Fekry, M.; Mansour, H.

    2015-04-01

    Within the framework of Chern-Simons (CS) modified gravity, we studied dark energy models. The new agegraphic dark energy (NADE) model, the entropy-corrected new agegraphic dark energy (ECNADE) model and the NADE model with a generalized uncertainty principle (GUP) are investigated. For these models, we studied the evolution of the scale factor a, the Hubble parameter H and the deceleration parameter q. In addition, we studied the statefinder parameters s and r. In some regions these models behave similarly to the modified Chaplygin gas model, while in other regions similarity with phantom and quintessence dark energy is noticed.

  4. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  5. A Joint Maximum Likelihood Estimation Procedure for the Hyperbolic Cosine Model for Single-Stimulus Responses.

    ERIC Educational Resources Information Center

    Luo, Guanzhong

    2000-01-01

    Extends joint maximum likelihood estimation for the hyperbolic cosine model to the situation in which the units of items are allowed to vary. Describes the four estimation cycles designed to address four important issues of model development and presents results from two sets of simulation studies that show reasonably accurate parameter recovery…

  6. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    ERIC Educational Resources Information Center

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  7. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  8. Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.

    PubMed

    Yeadon, Maurice R; King, Mark A; Wilson, Cassie

    2006-01-01

    The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 degrees s⁻¹. The theoretical tetanic torque/angular velocity relationship was modelled using a four-parameter function comprising two rectangular hyperbolas, while the activation/angular velocity relationship was modelled using a three-parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven-parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
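
    The model's structure can be summarized as a product of two curves. The tetanic four-parameter double-hyperbola T(ω) is left unspecified here, and the logistic activation shape below is an assumed illustration of the three-parameter rise described in the abstract, not the authors' exact function.

```latex
% Product structure: maximum voluntary torque as tetanic torque T(omega)
% scaled by a velocity-dependent activation A(omega), which rises from a
% submaximal level a_min (eccentric) to full activation (fast concentric).
\begin{equation}
  \tau_{max}(\omega) = A(\omega)\, T(\omega), \qquad
  A(\omega) = a_{min} + \frac{1 - a_{min}}{1 + e^{-m(\omega - \omega_1)}} .
\end{equation}
```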

  9. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    PubMed Central

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606
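
    The contrast the report draws can be stated compactly. For sigmoidal outputs y_i and binary targets t_i, LS-BP minimizes the sum of squared errors while ML-BP maximizes the Bernoulli log-likelihood, i.e., minimizes the cross-entropy; the notation here is supplied for illustration rather than quoted from the report.

```latex
% Least squares objective (LS-BP) versus the Bernoulli log-likelihood
% maximized by ML-BP; maximizing ell_ML is equivalent to minimizing the
% cross-entropy between targets and network outputs.
\begin{align}
  E_{LS}    &= \sum_i (t_i - y_i)^2, \\
  \ell_{ML} &= \sum_i \left[ t_i \ln y_i + (1 - t_i) \ln (1 - y_i) \right] .
\end{align}
```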

  10. Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method

    NASA Astrophysics Data System (ADS)

    Bayrak, Özlem Türker; Akkaya, Aysen D.

    2010-02-01

    We consider a multiple autoregressive model with non-normal error distributions, the latter being more prevalent in practice than the usually assumed normal distribution. Since the maximum likelihood equations have convergence problems (Puthenpura and Sinha, 1986) [11], we work out modified maximum likelihood equations by expressing the maximum likelihood equations in terms of ordered residuals and linearizing intractable nonlinear functions (Tiku and Suresh, 1992) [8]. The solutions, called modified maximum likelihood estimators, are explicit functions of the sample observations and therefore easy to compute. Under some very general regularity conditions they are asymptotically unbiased and efficient (Vaughan and Tiku, 2000) [4]. We show that for small sample sizes they have negligible bias and are considerably more efficient than the traditional least squares estimators. We show that our estimators are robust to plausible deviations from an assumed distribution and are therefore enormously advantageous as compared to the least squares estimators. We give a real life example.

  11. Steepest-entropy-ascent quantum thermodynamic modeling of the relaxation process of isolated chemically reactive systems using density of states and the concept of hypoequilibrium state

    NASA Astrophysics Data System (ADS)

    Li, Guanchen; von Spakovsky, Michael R.

    2016-01-01

    This paper presents a study of the nonequilibrium relaxation process of chemically reactive systems using steepest-entropy-ascent quantum thermodynamics (SEAQT). The trajectory of the chemical reaction, i.e., the accessible intermediate states, is predicted and discussed. The prediction is made using a thermodynamic-ensemble approach, which does not require detailed information about the particle mechanics involved (e.g., the collision of particles). Instead, modeling the kinetics and dynamics of the relaxation process is based on the principle of steepest-entropy ascent (SEA) or maximum-entropy production, which suggests a constrained gradient dynamics in state space. The SEAQT framework is based on general definitions for energy and entropy and at least theoretically enables the prediction of the nonequilibrium relaxation of system state at all temporal and spatial scales. However, to make this not just theoretically but computationally possible, the concept of density of states is introduced to simplify the application of the relaxation model, which in effect extends the application of the SEAQT framework even to infinite energy eigenlevel systems. The energy eigenstructure of the reactive system considered here consists of an extremely large number of such levels (on the order of 10^130) and yields to the quasicontinuous assumption. The principle of SEA results in a unique trajectory of system thermodynamic state evolution in Hilbert space in the nonequilibrium realm, even far from equilibrium. To describe this trajectory, the concepts of subsystem hypoequilibrium state and temperature are introduced and used to characterize each system-level, nonequilibrium state. This definition of temperature is fundamental rather than phenomenological and is a generalization of the temperature defined at stable equilibrium. In addition, to deal with the large number of energy eigenlevels, the equation of motion is formulated on the basis of the density of states and a set of

  12. Steepest-entropy-ascent quantum thermodynamic modeling of the relaxation process of isolated chemically reactive systems using density of states and the concept of hypoequilibrium state.

    PubMed

    Li, Guanchen; von Spakovsky, Michael R

    2016-01-01

    This paper presents a study of the nonequilibrium relaxation process of chemically reactive systems using steepest-entropy-ascent quantum thermodynamics (SEAQT). The trajectory of the chemical reaction, i.e., the accessible intermediate states, is predicted and discussed. The prediction is made using a thermodynamic-ensemble approach, which does not require detailed information about the particle mechanics involved (e.g., the collision of particles). Instead, modeling the kinetics and dynamics of the relaxation process is based on the principle of steepest-entropy ascent (SEA) or maximum-entropy production, which suggests a constrained gradient dynamics in state space. The SEAQT framework is based on general definitions for energy and entropy and at least theoretically enables the prediction of the nonequilibrium relaxation of system state at all temporal and spatial scales. However, to make this not just theoretically but computationally possible, the concept of density of states is introduced to simplify the application of the relaxation model, which in effect extends the application of the SEAQT framework even to infinite energy eigenlevel systems. The energy eigenstructure of the reactive system considered here consists of an extremely large number of such levels (on the order of 10^{130}) and yields to the quasicontinuous assumption. The principle of SEA results in a unique trajectory of system thermodynamic state evolution in Hilbert space in the nonequilibrium realm, even far from equilibrium. To describe this trajectory, the concepts of subsystem hypoequilibrium state and temperature are introduced and used to characterize each system-level, nonequilibrium state. This definition of temperature is fundamental rather than phenomenological and is a generalization of the temperature defined at stable equilibrium. In addition, to deal with the large number of energy eigenlevels, the equation of motion is formulated on the basis of the density of states and a set

  13. Determining the Tsallis parameter via maximum entropy

    NASA Astrophysics Data System (ADS)

    Conroy, J. M.; Miller, H. G.

    2015-05-01

    The nonextensive entropic measure proposed by Tsallis [C. Tsallis, J. Stat. Phys. 52, 479 (1988), 10.1007/BF01016429] introduces a parameter, q, which is not defined but rather must be determined. The value of q is typically determined from a piece of data and then fixed over the range of interest. On the other hand, from a phenomenological viewpoint, there are instances in which q cannot be treated as a constant. We present two distinct approaches for determining q, depending on the form of the equations of constraint for the particular system. In the first case the equations of constraint for the operator Ô can be written as Tr(F^q Ô) = C, where C may be an explicit function of the distribution function F. We show that in this case one can solve an equivalent maxent problem which yields q as a function of the corresponding Lagrange multiplier. As an illustration, the exact solution of the static generalized Fokker-Planck equation (GFPE) is obtained from maxent with the Tsallis entropy. As in the case where C is a constant, if q is treated as a variable within the maxent framework the entropic measure is maximized trivially for all values of q. Therefore q must be determined from existing data. In the second case an additional equation of constraint exists which cannot be brought into the above form. In this case the additional equation of constraint may be used to determine the fixed value of q.
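
    For reference, the entropic measure and the first constraint form discussed above can be written out as follows; the notation is reconstructed from the abstract rather than quoted from the paper.

```latex
% Tsallis' nonextensive entropy for a distribution {p_i}, recovering the
% Shannon form in the q -> 1 limit, together with the q-weighted constraint
% on an operator O-hat quoted in the abstract.
\begin{align}
  S_q &= \frac{1 - \sum_i p_i^{\,q}}{q - 1},
  \qquad \lim_{q \to 1} S_q = -\sum_i p_i \ln p_i, \\
  \mathrm{Tr}\!\left( F^{\,q} \hat{O} \right) &= C .
\end{align}
```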

  14. Use of an entropy-based metric in multiobjective calibration to improve model performance

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Jackson, B.; McMillan, H.; Gupta, H.

    2014-10-01

    Parameter estimation for hydrological models is complicated for many reasons, one of which is the arbitrary emphasis placed, by most traditional measures of fit, on various magnitudes of the model residuals. Recent research has called for the development of robust diagnostic measures that provide insights into which model structural components and/or data may be inadequate. In this regard, the flow duration curve (FDC) represents the historical variability of flow and is considered to be an informative signature of catchment behavior. Here we investigate the potential of using the recently developed conditioned entropy difference metric (CED) in combination with the Kling-Gupta efficiency (KGE). The CED respects the static information contained in the flow frequency distribution (and hence the FDC), but does not explicitly characterize temporal dynamics. The KGE reweights the importance of various hydrograph components (correlation, bias, variability) in a way that has been demonstrated to provide better model calibrations than the commonly used Nash-Sutcliffe efficiency, while being explicitly time sensitive. We employ both measures within a multiobjective calibration framework and achieve better performance over the full range of flows than obtained by single-criteria approaches, or by the common multiobjective approach that uses log-transformed and untransformed data to balance fitting of low and high flow periods. The investigation highlights the potential of CED to complement KGE (and vice versa) during model identification. It is possible that some of the complementarity is due to CED representing more information from moments >2 than KGE or other common metrics. We therefore suggest that an interesting way forward would be to extend KGE to include higher moments, i.e., use different moments as multiple criteria.
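
    For context, the KGE aggregates three hydrograph components into a single score; this is the standard Gupta et al. (2009) form, stated here for the reader's convenience rather than quoted from the paper.

```latex
% Kling-Gupta efficiency: r is the linear correlation between simulated and
% observed flows, alpha the ratio of their standard deviations, and beta the
% ratio of their means; KGE = 1 indicates a perfect fit.
\begin{equation}
  \mathrm{KGE} = 1 - \sqrt{(r - 1)^2 + (\alpha - 1)^2 + (\beta - 1)^2} .
\end{equation}
```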

  15. Safety assessment of dangerous goods transport enterprise based on the relative entropy aggregation in group decision making model.

    PubMed

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    The safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. Aiming at the problem of the high accident rate and large harm in dangerous goods logistics transportation, this paper recast the group decision making problem, based on the idea of integration and coordination, as a multiagent multiobjective group decision making problem; a two-stage decision model was established and applied to the safety assessment of dangerous goods transport enterprises. First, we used a dynamic multivalue background and entropy theory to build the first-level multiobjective decision model. Second, experts were weighted according to the principle of clustering analysis, and relative entropy theory was used to establish a second-stage aggregation optimization model based on relative entropy in group decision making; the solution of the model is discussed. Then, after investigation and analysis, we established a safety evaluation index system for dangerous goods transport enterprises. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of this model for dangerous goods transport enterprise assessment, which provides a vital decision making basis for recognizing dangerous goods transport enterprises.
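
    The abstract does not state the divergence explicitly; presumably it is the usual relative entropy (Kullback-Leibler divergence) between an expert's judgment vector p and the group aggregate q, written below for reference.

```latex
% Relative entropy (Kullback-Leibler divergence) between distributions
% p and q; the aggregation model minimizes such divergences to reconcile
% individual expert judgments with the group result.
\begin{equation}
  D(p \,\|\, q) = \sum_i p_i \ln \frac{p_i}{q_i} .
\end{equation}
```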

  16. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    PubMed

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.

  17. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    PubMed Central

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-01-01

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190

  18. Bearing fault prognostics using Rényi entropy based features and Gaussian process models

    NASA Astrophysics Data System (ADS)

    Boškoski, Pavle; Gašperin, Matej; Petelin, Dejan; Juričić, Đani

    2015-02-01

    Bearings are considered to be the most frequent cause of failures in rotational machinery. Hence, efficient means to anticipate the remaining useful life (RUL) on-line, by processing the available sensory records, is of substantial practical relevance. Many of the data-driven approaches rely on the conjecture that the evolution of condition monitoring (CM) indices is related to the aggravation of the condition and, indirectly, to the remaining useful life of a bearing. Problems with trending may be threefold: (i) most of the operational life shows no significant trend until very close to failure, which is then usually accompanied by rapidly growing values of CM indices that are not easy to forecast; (ii) the evolution of CM indices is not necessarily monotonic; (iii) variable and immeasurable fluctuations in operating conditions may distort the trend. Motivated by these issues, we propose an approach for bearing fault prognostics that employs Rényi entropy based features. It exploits the idea that a progressing fault implies rising dissimilarity in the distribution of energies across the vibrational spectral band sensitive to the bearing faults. The innovative way of predicting RUL relies on a posterior distribution following Bayes' rule, using Gaussian process (GP) models' output as a likelihood distribution. The proposed approach was evaluated on the dataset provided for the IEEE PHM 2012 Prognostic Data Challenge.
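
    A minimal sketch of the feature extraction named above, assuming the input is the (already computed) energy distribution across the fault-sensitive spectral band; the function name and normalization are illustrative, not taken from the paper.

```python
import numpy as np

def renyi_entropy(energies, alpha=2.0):
    """Rényi entropy of order alpha of an energy distribution, e.g. the
    band energies of a vibration spectrum sensitive to bearing faults."""
    p = np.asarray(energies, dtype=float)
    p = p[p > 0]
    p = p / p.sum()                        # normalize to a distribution
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))      # Shannon limit as alpha -> 1
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```

    Mathematically, a distribution concentrated in a few spectral lines yields a lower value than one spread evenly across the band, which is what makes such features sensitive to a developing localized fault.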

  19. Regionalization of Chinese Material Medical Quality Based on Maximum Entropy Model: A case study of Atractylodes lancea

    PubMed Central

    Shoudong, Zhu; Huasheng, Peng; Lanping, Guo; Tongren, Xu; Yan, Zhang; Meilan, Chen; Qingxiu, Hao; Liping, Kang; Luqi, Huang

    2017-01-01

    Atractylodes is an East Asian endemic genus distributed in China, Japan and the Russian Far East. As an important medicinal plant resource, atractylodes has long been used as an herbal medicine. To examine the significant features of its trueborn quality and geographical distribution, we explored the relationships between medicine quality and habitat suitability for two classes: atractylodes with atractylodin content lower than the standard of the Chinese Pharmacopoeia (2010), and atractylodes with higher content. We found that the atractylodin content is negatively related to the habitat suitability for atractylodes with lower atractylodin, while the atractylodin content is positively related to the habitat suitability for those with higher atractylodin. By analyzing the distribution of atractylodes with atractylodin content lower than the standard of the Pharmacopoeia, we discovered that the main ecological factors that could inhibit the accumulation of atractylodin were soil type (39.7%), soil clay content (26.7%), mean temperature in December (22.3%), cation-exchange capacity (6%), etc. The same ecological factors promoted the accumulation of atractylodin for the atractylodes with higher atractylodin. By integrating the two classes, we finally predicted the distribution of atractylodin content in China. Our results make it possible to query atractylodes quality at arbitrary coordinates and satisfy the practical cultivation demand of “planting area based on atractylodin quality”. PMID:28205539

  20. Regionalization of Chinese Material Medical Quality Based on Maximum Entropy Model: A case study of Atractylodes lancea

    NASA Astrophysics Data System (ADS)

    Shoudong, Zhu; Huasheng, Peng; Lanping, Guo; Tongren, Xu; Yan, Zhang; Meilan, Chen; Qingxiu, Hao; Liping, Kang; Luqi, Huang

    2017-02-01

    Atractylodes is an East Asian endemic genus distributed in China, Japan, and the Russian Far East. As an important medicinal plant resource, atractylodes has long been used as an herbal medicine. To examine the significant features of its genuine quality and geographical distribution, we explored the relationships between medicine quality and habitat suitability for two classes: atractylodes with atractylodin content below the standard of the Chinese Pharmacopoeia (2010) and atractylodes with higher content. We found that atractylodin content is negatively related to habitat suitability for atractylodes with lower atractylodin content, while it is positively related to habitat suitability for those with higher content. By analyzing the distribution of atractylodes with atractylodin content below the Pharmacopoeia standard, we found that the main ecological factors inhibiting the accumulation of atractylodin were soil type (39.7%), soil clay content (26.7%), mean temperature in December (22.3%), and cation-exchange capacity (6%); the same ecological factors promoted the accumulation of atractylodin in atractylodes with higher content. By integrating the two classes, we finally predicted the distribution of atractylodin content in China. Our results make it possible to query atractylodes quality at arbitrary coordinates and satisfy the practical cultivation demand for “planting areas based on atractylodin quality”.

  1. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.

    PubMed

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
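
    For orientation, the partial likelihood whose maximizer (the MPLE) is at issue has the standard Cox form below; existence can fail, for example, under monotone likelihood, when a covariate perfectly separates failures from survivors so that the likelihood keeps increasing along a direction in β.

```latex
% Cox partial likelihood: delta_i = 1 marks an observed failure and
% R(t_i) is the risk set (subjects still under observation) at time t_i.
L_p(\beta) \;=\; \prod_{i:\,\delta_i = 1}
  \frac{\exp(x_i^{\top}\beta)}{\sum_{j \in R(t_i)} \exp(x_j^{\top}\beta)}
```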

  2. Ground-state energy and entropy of the two-dimensional Edwards-Anderson spin-glass model with different bond distributions

    NASA Astrophysics Data System (ADS)

    Perez-Morelo, D. J.; Ramirez-Pastor, A. J.; Romá, F.

    2012-02-01

    We study the two-dimensional Edwards-Anderson spin-glass model using a parallel tempering Monte Carlo algorithm. The ground-state energy and entropy are calculated for different bond distributions. In particular, the entropy is obtained by using a thermodynamic integration technique and an appropriate reference state, which is determined with the method of high-temperature expansion. This strategy provides accurate values of this quantity for finite-size lattices. By extrapolating to the thermodynamic limit, the ground-state energy and entropy of the different versions of the spin-glass model are determined.
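
    A minimal sketch of the replica-exchange (parallel tempering) loop for a two-dimensional ±J Edwards-Anderson lattice; the lattice size, temperature grid, and sweep counts are illustrative, and the paper's thermodynamic-integration machinery for the entropy is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_temps = 16, 8
betas = np.linspace(0.2, 2.0, n_temps)          # inverse temperatures (illustrative)
J_h = rng.choice([-1.0, 1.0], size=(L, L))      # +/-J bonds to the right neighbor
J_v = rng.choice([-1.0, 1.0], size=(L, L))      # +/-J bonds to the lower neighbor
spins = rng.choice([-1, 1], size=(n_temps, L, L))

def energy(s):
    return -np.sum(J_h * s * np.roll(s, -1, axis=1)) - np.sum(J_v * s * np.roll(s, -1, axis=0))

def metropolis_sweep(s, beta):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (J_h[i, j] * s[i, (j + 1) % L] + J_h[i, j - 1] * s[i, j - 1]
              + J_v[i, j] * s[(i + 1) % L, j] + J_v[i - 1, j] * s[i - 1, j])
        dE = 2.0 * s[i, j] * nb                 # energy change if s[i, j] flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

for sweep in range(200):
    for k in range(n_temps):
        metropolis_sweep(spins[k], betas[k])
    for k in range(n_temps - 1):                # attempt neighbor replica swaps
        dB = betas[k + 1] - betas[k]
        dE = energy(spins[k + 1]) - energy(spins[k])
        if rng.random() < np.exp(min(0.0, dB * dE)):
            spins[[k, k + 1]] = spins[[k + 1, k]]

print("energy per spin at lowest temperature:", energy(spins[-1]) / L**2)
```

    The swap step lets low-temperature replicas escape local minima by routing configurations through high temperatures, which is what makes ground-state estimates for glassy systems feasible.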

  3. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  4. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  5. Comparison of Maximum Likelihood and Pearson Chi-Square Statistics for Assessing Latent Class Models.

    ERIC Educational Resources Information Center

    Holt, Judith A.; Macready, George B.

    When latent class parameters are estimated, maximum likelihood and Pearson chi-square statistics can be derived for assessing the fit of the model to the data. This study used simulated data to compare these two statistics, and is based on mixtures of latent binomial distributions, using data generated from five dichotomous manifest variables.…

  6. Pseudo Maximum Likelihood Estimation and a Test for Misspecification in Mean and Covariance Structure Models.

    ERIC Educational Resources Information Center

    Arminger, Gerhard; Schoenberg, Ronald J.

    1989-01-01

    Misspecification of mean and covariance structures for metric endogenous variables is considered. Maximum likelihood estimation of model parameters and the asymptotic covariance matrix of the estimates are discussed. A Haussman test for misspecification is developed, which is sensitive to misspecification not detected by the test statistics…

  7. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  8. The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.

    ERIC Educational Resources Information Center

    Enders, Craig K.; Bandalos, Deborah L.

    2001-01-01

    Used Monte Carlo simulation to examine the performance of four missing data methods in structural equation models: (1)full information maximum likelihood (FIML); (2) listwise deletion; (3) pairwise deletion; and (4) similar response pattern imputation. Results show that FIML estimation is superior across all conditions of the design. (SLD)

  9. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  10. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  11. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
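
    The ranking idea that OKECA refines can be stated compactly: keep the kernel eigenpairs contributing most to the quadratic Rényi entropy estimate, λ_i(1ᵀe_i)², rather than those of largest eigenvalue. A minimal sketch under that reading; the Gaussian length-scale sigma is an illustrative choice (its selection is precisely what the brief analyzes), and the OKECA rotation itself is not implemented.

```python
import numpy as np
from scipy.spatial.distance import cdist

def keca(X, n_components=2, sigma=1.0):
    """Kernel entropy component analysis, minimal sketch."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma**2))  # Gaussian kernel
    eigvals, eigvecs = np.linalg.eigh(K)
    contrib = eigvals * eigvecs.sum(axis=0) ** 2   # entropy contribution per eigenpair
    order = np.argsort(contrib)[::-1][:n_components]
    return eigvecs[:, order] * np.sqrt(np.abs(eigvals[order]))  # kernel-PCA-style scores

X = np.random.default_rng(1).normal(size=(100, 3))
print(keca(X, n_components=2, sigma=2.0).shape)    # (100, 2)
```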

  12. Entanglement entropy on fuzzy spaces

    SciTech Connect

    Dou, Djamel; Ydri, Badis

    2006-08-15

    We study the entanglement entropy of a scalar field in 2+1 spacetime where space is modeled by a fuzzy sphere and a fuzzy disc. In both models we evaluate numerically the resulting entropies and find that they are proportional to the number of boundary degrees of freedom. In the Moyal plane limit of the fuzzy disc the entanglement entropy per unit area (length) diverges if the ignored region is of infinite size. The divergence is interpreted as being of IR-UV mixing origin. In general we expect the entanglement entropy per unit area to be finite on a noncommutative space if the ignored region is of finite size.

  13. Evaluation of Maximum Radionuclide Groundwater Concentrations for Basement Fill Model. Zion Station Restoration Project

    SciTech Connect

    Sullivan, Terry

    2016-02-22

    The objectives of this report are: (a) To present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) Provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) Estimate the maximum concentration in a well located outside of the fill material; and (d) Perform a sensitivity analysis of key parameters.

  14. Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Bremner, Paul

    2014-01-01

    This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.

  15. Performance of default risk model with barrier option framework and maximum likelihood estimation: Evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chou, Heng-Chih; Wang, David

    2007-11-01

    We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted models, Merton model, Z-score model and ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.

  16. Entropy, matter, and cosmology

    PubMed Central

    Prigogine, I.; Géhéniau, J.

    1986-01-01

    The role of irreversible processes corresponding to creation of matter in general relativity is investigated. The use of Landau-Lifshitz pseudotensors together with conformal (Minkowski) coordinates suggests that this creation took place in the early universe at the stage of the variation of the conformal factor. The entropy production in this creation process is calculated. It is shown that these dissipative processes lead to the possibility of cosmological models that start from empty conditions and gradually build up matter and entropy. Gravitational entropy takes a simple meaning as associated to the entropy that is necessary to produce matter. This leads to an extension of the third law of thermodynamics, as now the zero point of entropy becomes the space-time structure out of which matter is generated. The theory can be put into a convenient form using a supplementary “C” field in Einstein's field equations. The role of the C field is to express the coupling between gravitation and matter leading to irreversible entropy production. PMID:16593747

  17. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926

  18. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a common goal.
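
    The selection rule at the heart of the inquiry engine is compact: evaluate the entropy of the distribution of predicted outcomes for each candidate experiment and pick the maximum. A sketch under stated assumptions: `theta_samples` stands in for posterior samples from the inference engine, `predict` for the forward model, and a histogram for the outcome distribution; all of these names are hypothetical.

```python
import numpy as np

def outcome_entropy(experiment, theta_samples, predict, bins=20):
    """Entropy (nats) of the predicted-outcome distribution for one experiment."""
    outcomes = np.array([predict(theta, experiment) for theta in theta_samples])
    p, _ = np.histogram(outcomes, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def select_experiment(candidates, theta_samples, predict):
    """Pick the candidate whose predicted outcomes are most uncertain."""
    scores = [outcome_entropy(e, theta_samples, predict) for e in candidates]
    return candidates[int(np.argmax(scores))]
```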

  19. Entropy distance: New quantum phenomena

    SciTech Connect

    Weis, Stephan; Knauf, Andreas

    2012-10-15

    We study a curve of Gibbsian families of complex 3 × 3 matrices and point out new features, absent in commutative finite-dimensional algebras: a discontinuous maximum-entropy inference, a discontinuous entropy distance, and non-exposed faces of the mean value set. We analyze these problems from various aspects including convex geometry, topology, and information geometry. This research is motivated by a theory of infomax principles, where we contribute by computing first order optimality conditions of the entropy distance.

  20. Information entropy in cosmology.

    PubMed

    Hosoya, Akio; Buchert, Thomas; Morita, Masaaki

    2004-04-09

    The effective evolution of an inhomogeneous cosmological model may be described in terms of spatially averaged variables. We point out that in this context, quite naturally, a measure arises which is identical to a fluid model of the Kullback-Leibler relative information entropy, expressing the distinguishability of the local inhomogeneous mass density field from its spatial average on arbitrary compact domains. We discuss the time evolution of "effective information" and explore some implications. We conjecture that the information content of the Universe-measured by relative information entropy of a cosmological model containing dust matter-is increasing.
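
    Up to conventions, the measure in question is the Kullback-Leibler functional of the local density against its spatial average over a compact domain D:

```latex
% Relative information entropy of the dust density field on a domain D;
% it vanishes iff rho is exactly homogeneous on D, and is positive otherwise.
\mathcal{S}\{\rho \,\|\, \langle\rho\rangle_D\}
  \;=\; \int_D \rho \,\ln\!\frac{\rho}{\langle\rho\rangle_D}\;dV
```

    Its growth tracks structure formation: the more the local density field deviates from its average, the larger the "effective information" of the averaged model.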

  1. Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    PubMed Central

    Ito, Shinya; Hansen, Michael E.; Heiland, Randy; Lumsdaine, Andrew; Litke, Alan M.; Beggs, John M.

    2011-01-01

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons. PMID:22102894
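
    A minimal sketch of the single-delay quantity being extended: transfer entropy from spike train Y to X at delay d, with one-bin histories, estimated from joint counts over binary states. Scanning the delay (and lengthening the message) is the essence of the extension; this is a toy estimator, not the authors' software package.

```python
import numpy as np

def transfer_entropy(x, y, delay=1):
    """TE_{Y->X} in bits for binary arrays x, y, with one-bin histories."""
    n = len(x)
    p_abc = np.zeros((2, 2, 2))            # joint of (x_{t+1}, x_t, y_{t+1-delay})
    for t in range(delay, n - 1):
        p_abc[x[t + 1], x[t], y[t + 1 - delay]] += 1
    p_abc /= p_abc.sum()
    p_ab = p_abc.sum(axis=2)               # p(x_{t+1}, x_t)
    p_bc = p_abc.sum(axis=0)               # p(x_t, y_{t+1-delay})
    p_b = p_abc.sum(axis=(0, 2))           # p(x_t)
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                if p_abc[a, b, c] > 0:
                    te += p_abc[a, b, c] * np.log2(
                        p_abc[a, b, c] * p_b[b] / (p_ab[a, b] * p_bc[b, c]))
    return te

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=5000)
x = np.roll(y, 3) ^ (rng.random(5000) < 0.1)   # x follows y with a 3-bin delay
print([round(transfer_entropy(x, y, d), 3) for d in range(1, 6)])  # peaks at d = 3
```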

  2. Using maximum topology matching to explore differences in species distribution models

    USGS Publications Warehouse

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian K.; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  3. Entropy and the Shelf Model: A Quantum Physical Approach to a Physical Property

    ERIC Educational Resources Information Center

    Jungermann, Arnd H.

    2006-01-01

    In contrast to most other thermodynamic data, entropy values are not given in relation to a certain--more or less arbitrarily defined--zero level. They are listed in standard thermodynamic tables as absolute values of specific substances. Therefore these values describe a physical property of the listed substances. One of the main tasks of…

  4. Spatiotemporal Modeling of Ozone Levels in Quebec (Canada): A Comparison of Kriging, Land-Use Regression (LUR), and Combined Bayesian Maximum Entropy–LUR Approaches

    PubMed Central

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael

    2014-01-01

    Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650

  5. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
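
    The computational core is generic: Newton-Raphson on the score equations. A sketch with a toy normal-likelihood example standing in for the NDMMF, whose model-specific score and Hessian are what the report actually derives.

```python
import numpy as np

def newton_raphson_mle(score, hessian, theta0, tol=1e-10, max_iter=50):
    """Solve score(theta) = 0 by Newton-Raphson iterations."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hessian(theta), score(theta))
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Toy stand-in: MLE of (mu, log sigma^2) for normal data via its score equations.
x = np.random.default_rng(3).normal(3.0, 2.0, size=500)

def score(th):
    mu, s2 = th[0], np.exp(th[1])
    return np.array([np.sum(x - mu) / s2,
                     -x.size / 2.0 + np.sum((x - mu) ** 2) / (2.0 * s2)])

def hessian(th):
    mu, s2 = th[0], np.exp(th[1])
    return np.array([[-x.size / s2, -np.sum(x - mu) / s2],
                     [-np.sum(x - mu) / s2, -np.sum((x - mu) ** 2) / (2.0 * s2)]])

print(newton_raphson_mle(score, hessian, [x.mean(), 0.0]))  # approx [3, log 4]
```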

  6. Artificial neural networks modeling for forecasting the maximum daily total precipitation at Athens, Greece

    NASA Astrophysics Data System (ADS)

    Nastos, P. T.; Paliatsos, A. G.; Koukouletsos, K. V.; Larissi, I. K.; Moustris, K. P.

    2014-07-01

    Extreme daily precipitation events are involved in significant environmental damage, and even loss of life, because they cause adverse impacts such as flash floods in urban and sometimes rural areas. Thus, long-term forecasting of such events is of great importance for the preparation of local authorities in order to confront and mitigate their adverse consequences. The objective of this study is to estimate the possibility of forecasting the maximum daily precipitation for the coming year. For this reason, appropriate prognostic models, namely Artificial Neural Networks (ANNs), were developed and applied. The data used for the analysis are the annual maximum daily precipitation totals recorded at the National Observatory of Athens (NOA) during the long-term period 1891-2009. To evaluate the potential of daily extreme precipitation forecasting by the applied ANNs, a different period was used for validation than for ANN training. Thus, the datasets of the period 1891-1980 were used as training datasets, while the datasets of the period 1981-2009 were used as validation datasets. Appropriate statistical indices, such as the coefficient of determination (R2), the index of agreement (IA), the Root Mean Square Error (RMSE) and the Mean Bias Error (MBE), were applied to test the reliability of the models. The findings of the analysis showed that a quite satisfactory relationship (R2 = 0.482, IA = 0.817, RMSE = 16.4 mm and MBE = +5.2 mm) appears between the forecasted and the respective observed maximum daily precipitation totals one year ahead. The developed ANN seems to overestimate the maximum daily precipitation total that appeared in 1988 and to underestimate the maximum in 1999, which could be attributed to the relatively low frequency of occurrence of these extreme events within the greater Athens area (GAA), which has an impact on the optimum training of the ANN.
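
    The four verification statistics are standard and compact in code; a sketch assuming R2 is the squared Pearson correlation, IA is Willmott's index of agreement, and MBE is forecast minus observation, the usual conventions.

```python
import numpy as np

def verification_indices(obs, pred):
    """R2, index of agreement, RMSE and mean bias error of a forecast."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    ia = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mbe = np.mean(pred - obs)                    # positive means overforecasting
    return {"R2": r2, "IA": ia, "RMSE": rmse, "MBE": mbe}

obs = np.array([42.0, 55.1, 38.7, 60.2, 49.9])   # annual maxima, mm (made up)
pred = np.array([45.3, 50.8, 41.2, 57.0, 55.4])
print(verification_indices(obs, pred))
```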

  7. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  8. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  9. Spatial modeling of the highest daily maximum temperature in Korea via max-stable processes

    NASA Astrophysics Data System (ADS)

    Lee, Youngsaeng; Yoon, Sanghoo; Murshed, Md. Sharwar; Kim, Maeng-Ki; Cho, ChunHo; Baek, Hee-Jeong; Park, Jeong-Soo

    2013-11-01

    This paper examines the annual highest daily maximum temperature (DMT) in Korea by using data from 56 weather stations and employing spatial extreme modeling. Our approach is based on max-stable processes (MSP) with Schlather’s characterization. We divide the country into four regions for a better model fit and identify the best model for each region. We show that regional MSP modeling is more suitable than MSP modeling for the entire region and the pointwise generalized extreme value distribution approach. The advantage of spatial extreme modeling is that more precise and robust return levels and some indices of the highest temperatures can be obtained for observation stations and for locations with no observed data, and so help to determine the effects and assessment of vulnerability as well as to downscale extreme events.

  10. Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation

    NASA Technical Reports Server (NTRS)

    Anissipour, Amir A.; Benson, Russell A.

    1989-01-01

    The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy, the maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.

  11. Information entropy analysis of leopard seal vocalization bouts

    NASA Astrophysics Data System (ADS)

    Buck, John R.; Rogers, Tracey L.; Cato, Douglas H.

    2004-05-01

    Leopard seals (Hydrurga leptonyx) are solitary pinnipeds who are vocally active during their brief breeding season. The seals produce vocal bouts consisting of a sequence of distinct sounds, with an average length of roughly ten sounds. The sequential structure of the bouts is thought to be individually distinctive. Bouts recorded from five leopard seals during 1992-1994 were analyzed using information theory. The first-order Markov model entropy estimates were substantially smaller than the independent, identically distributed model entropy estimates for all five seals, indicative of constraints on the sequential structure of each seal's bouts. Each bout in the data set was classified using maximum-likelihood estimates from the first-order Markov model for each seal. This technique correctly classified 85% of the bouts, comparable to results in Rogers and Cato [Behaviour (2002)]. The relative entropies between the Markov models were found to be infinite in 18/20 possible cross-comparisons, indicating there is no probability of misclassifying the bouts in these 18 comparisons in the limit of long data sequences. One seal has sufficient data to compare a nonparametric entropy estimate with the Markov entropy estimate, finding only a small difference. This suggests that the first-order Markov model captures almost all the sequential structure in this seal's bouts.
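
    A sketch of the two entropy estimates being compared: the i.i.d. (zeroth-order) entropy of the call-type frequencies versus the conditional entropy of a first-order Markov model, with integer-coded call types and the empirical symbol frequencies standing in for the stationary distribution.

```python
import numpy as np

def entropy_estimates(seq, n_symbols):
    """Bits per symbol under an i.i.d. model and a first-order Markov model."""
    seq = np.asarray(seq)
    p = np.bincount(seq, minlength=n_symbols) / seq.size
    h_iid = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1.0
    T = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    h_markov = 0.0
    for i in range(n_symbols):
        row = T[i][T[i] > 0]
        h_markov -= p[i] * np.sum(row * np.log2(row))
    return h_iid, h_markov

bout = np.random.default_rng(4).integers(0, 4, size=200)  # hypothetical coded bout
print(entropy_estimates(bout, 4))   # near-equal here; the seals showed a clear gap
```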

  12. Change point models for cognitive tests using semi-parametric maximum likelihood

    PubMed Central

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E.

    2013-01-01

    Random-effects change point models are formulated for longitudinal data obtained from cognitive tests. The conditional distribution of the response variable in a change point model is often assumed to be normal even if the response variable is discrete and shows ceiling effects. For the sum score of a cognitive test, the binomial and the beta-binomial distributions are presented as alternatives to the normal distribution. Smooth shapes for the change point models are imposed. Estimation is by marginal maximum likelihood where a parametric population distribution for the random change point is combined with a non-parametric mixing distribution for other random effects. An extension to latent class modelling is possible in case some individuals do not experience a change in cognitive ability. The approach is illustrated using data from a longitudinal study of Swedish octogenarians and nonagenarians that began in 1991. Change point models are applied to investigate cognitive change in the years before death. PMID:23471297

  13. Recent developments in maximum likelihood estimation of MTMM models for categorical data.

    PubMed

    Jeon, Minjeong; Rijmen, Frank

    2014-01-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: Variational maximization-maximization (e.g., Rijmen and Jeon, 2013), alternating imputation posterior (e.g., Cho and Rabe-Hesketh, 2011), and Monte Carlo local likelihood (e.g., Jeon et al., under revision). Each method is briefly described and its applicability to MTMM models with categorical data is discussed.

  14. Terror birds on the run: a mechanical model to estimate its maximum running speed

    PubMed Central

    Blanco, R. Ernesto; Jones, Washington W

    2005-01-01

    ‘Terror bird’ is a common name for the family Phorusrhacidae. These large terrestrial birds were probably the dominant carnivores on the South American continent from the Middle Palaeocene to the Pliocene–Pleistocene limit. Here we use a mechanical model based on tibiotarsal strength to estimate maximum running speeds of three species of terror birds: Mesembriornis milneedwardsi, Patagornis marshi and a specimen of Phorusrhacinae gen. The model is validated on three living large terrestrial bird species. On the basis of the tibiotarsal strength we propose that Mesembriornis could have used its legs to break long bones and access their marrow. PMID:16096087

  15. Configurational entropy of glueball states

    NASA Astrophysics Data System (ADS)

    Bernardini, Alex E.; Braga, Nelson R. F.; da Rocha, Roldão

    2017-02-01

    The configurational entropy of glueball states is calculated using a holographic description. Glueball states are represented by a supergravity dual picture, consisting of a 5-dimensional graviton-dilaton action of a dynamical holographic AdS/QCD model. The configurational entropy is studied as a function of the glueball spin and of the mass, providing information about the stability of the glueball states.

  16. Entanglement Entropy of Black Holes

    NASA Astrophysics Data System (ADS)

    Solodukhin, Sergey N.

    2011-12-01

    The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as ’t Hooft’s brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields that couple non-minimally to gravity is emphasized. The holographic description of the entanglement entropy of the black-hole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.

  17. MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India

    NASA Astrophysics Data System (ADS)

    Ramesh, K.; Anitha, R.

    2014-06-01

    In this study, a Multivariate Adaptive Regression Spline (MARS) based lead seven-day minimum and maximum surface air temperature prediction system is modelled for the station Chennai, India. To emphasize the effectiveness of the proposed system, a comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for the minimum temperature forecast is promising in the short term (lead days 1 to 3), with a mean absolute error (MAE) of less than 1 °C, while the prediction efficiency and skill degrade in the medium term (lead days 4 to 7), with MAE slightly above 1 °C. The MAE of the maximum temperature forecast is a little higher than that of the minimum temperature forecast, varying from 0.87 °C for lead day one to 1.27 °C for lead day seven with the MARS approach. The statistical error analysis emphasizes that the MARS models perform well, with an average reduction in MAE of 0.2 °C over the SVMr models across all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lead time increases with both approaches.

  18. Possible ecosystem impacts of applying maximum sustainable yield policy in food chain models.

    PubMed

    Ghosh, Bapan; Kar, T K

    2013-07-21

    This paper describes the possible impacts of maximum sustainable yield (MSY) and maximum sustainable total yield (MSTY) policies in ecosystems. In general, it is observed that exploitation at the MSY (single-species) or MSTY (multispecies) level may cause the extinction of several species. In particular, for a traditional prey-predator system, fishing under a combined harvesting effort at the MSTY level (if it exists) may be a sustainable policy, but when MSTY does not exist it is due to the extinction of the predator species only. In a generalist prey-predator system, harvesting any one of the species at the MSY level is always a sustainable policy, but harvesting both species at the MSTY level may or may not be sustainable. In addition, we also investigate the MSY and MSTY policies in traditional tri-trophic and four-trophic food chain models.
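
    For a single logistic stock the MSY benchmark that anchors these results is elementary:

```latex
% Logistic stock N under harvesting effort E (catchability absorbed into E):
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right) - EN,
\qquad
Y(E) = E\,N^{*}(E) = EK\left(1 - \frac{E}{r}\right),
% maximized at E_MSY = r/2, giving MSY = rK/4 at stock level N* = K/2.
```

    The paper's point is that the analogous MSTY effort in coupled prey-predator systems can sit beyond a species' extinction threshold, so this single-species logic does not transfer safely to food chains.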

  19. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  20. Evaluation of Maximum Radionuclide Groundwater Concentrations for Basement Fill Model. Zion Station Restoration Project

    SciTech Connect

    Sullivan, Terry

    2014-12-02

    ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive contamination from the plant must be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radiation dose in excess of 25 mrem/y. The objectives of this report are: (a) To present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) Provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) Estimate the maximum concentration in a well located outside of the fill material; and (d) Perform a sensitivity analysis of key parameters.

  1. Eigen solutions, Shannon entropy and Fisher information under the Eckart-Manning-Rosen potential model

    NASA Astrophysics Data System (ADS)

    Onate, C. A.; Onyeaju, M. C.; Ikot, A. N.; Idiodi, J. O. A.; Ojonubah, J. O.

    2017-02-01

    We solved the Schrödinger equation with a certain approximation to the centrifugal term for an arbitrary angular momentum state with the Eckart-Manning-Rosen potential. The bound-state energy eigenvalues and the corresponding wave functions have been approximately obtained using the parametric Nikiforov-Uvarov method. The solutions of the Schrödinger equation for the Eckart potential, Manning-Rosen potential, and Hulthén potential have been obtained using a certain transformation. The concepts of the Shannon entropy and the Fisher information of a system under the Eckart-Manning-Rosen potential are investigated in detail. The behavior of the screening parameter and the quantum number n for the Fisher information and the Shannon entropy is also investigated.

  2. Upper entropy axioms and lower entropy axioms

    SciTech Connect

    Guo, Jin-Li Suo, Qi

    2015-04-15

    The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of Shannon–Khinchin axioms and Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case of satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.

  3. Hamiltonian formalism and path entropy maximization

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; González, Diego

    2015-10-01

    Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
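
    Schematically, for one generalized coordinate x(t) with diffusion constant D, the two levels of description connected in the paper are

```latex
% Overdamped Langevin dynamics and its Fokker-Planck counterpart
% (schematic one-dimensional forms; Phi is an effective potential,
% eta(t) is Gaussian white noise).
\dot{x} = -\frac{\partial \Phi}{\partial x} + \sqrt{2D}\,\eta(t),
\qquad
\frac{\partial P}{\partial t}
  = \frac{\partial}{\partial x}\left(\frac{\partial \Phi}{\partial x}\,P\right)
  + D\,\frac{\partial^{2} P}{\partial x^{2}}
```

    In the paper the drift descends from the Lagrangian obtained by maximizing the path entropy, and the non-decrease of the missing information about x(t) is the information-theoretic second law referred to above.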

  4. Entropy-enthalpy compensation in chemical reactions and adsorption: an exactly solvable model.

    PubMed

    Freed, Karl F

    2011-02-24

    The free energies of reaction or activation for many systems respond in a common fashion to a perturbing parameter, such as the concentration of an "inert" additive. Arrhenius plots as a function of the perturbing parameter display a "compensation temperature" at which the free energy appears to be independent of the perturber, an entropy-enthalpy compensation process. Thus, as the perturber's concentration varies, Arrhenius plots of the rate constant or equilibrium constant exhibit a rotation about the fixed compensation temperature. While this (isokinetic/isoequilibrium) component of the phenomenon of entropy-enthalpy compensation appears in a huge number of situations of relevance to chemistry, biology, and materials science, statistical mechanical descriptions have been almost completely lacking. We provide the general statistical mechanical basis for solvent-induced isokinetic/isoequilibrium entropy-enthalpy compensation in chemical reactions and adsorption, understanding that can be used to control rate processes and binding constants in diverse applications. The general behavior is illustrated with an analytical solution for the dilute gas limit.

  5. Entropy jump across an inviscid shock wave

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Iollo, Angelo

    1995-01-01

    The shock jump conditions for the Euler equations in their primitive form are derived by using generalized functions. The shock profiles for specific volume, speed, and pressure are shown to be the same; however, density has a different shock profile. Careful study of the equations that govern the entropy shows that the inviscid entropy profile has a local maximum within the shock layer. We demonstrate that because of this phenomenon, the entropy propagation equation cannot be used as a conservation law.
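
    For a perfect gas the end-state jump itself is fixed by the Rankine-Hugoniot conditions:

```latex
% Entropy rise across a normal shock in a perfect gas (positive for M_1 > 1):
s_2 - s_1 = c_v \ln\!\left[\frac{p_2}{p_1}
  \left(\frac{\rho_1}{\rho_2}\right)^{\gamma}\right]
```

    The paper's observation concerns the profile between these end states: the inviscid entropy overshoots the downstream value inside the shock layer before settling to the jump above.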

  6. Maximum Pseudolikelihood Estimation for Model-Based Clustering of Time Series Data.

    PubMed

    Nguyen, Hien D; McLachlan, Geoffrey J; Orban, Pierre; Bellec, Pierre; Janke, Andrew L

    2017-04-01

    Mixture of autoregressions (MoAR) models provide a model-based approach to the clustering of time series data. The maximum likelihood (ML) estimation of MoAR models requires evaluating products of large numbers of densities of normal random variables. In practical scenarios, these products converge to zero as the length of the time series increases, and thus the ML estimation of MoAR models becomes infeasible without the use of numerical tricks. We propose a maximum pseudolikelihood (MPL) estimation approach as an alternative to the use of numerical tricks. The MPL estimator is proved to be consistent and can be computed with an EM (expectation-maximization) algorithm. Simulations are used to assess the performance of the MPL estimator against that of the ML estimator in cases where the latter was able to be calculated. An application to the clustering of time series data arising from a resting state fMRI experiment is presented as a demonstration of the methodology.
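
    The underflow the abstract describes, and the standard log-space evaluation it motivates, look like this; the weights, AR coefficients, and noise scales are illustrative arguments, and this naive log-likelihood is a foil for, not a rendition of, the authors' MPL estimator.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def moar_loglik(y, weights, ar_coefs, sigmas, p):
    """Log-likelihood of a mixture of AR(p) models for one series, in log space.

    Per-component terms are sums of log densities, never products of densities,
    so they stay finite even for long series.
    """
    log_terms = np.log(np.asarray(weights, float))   # one running term per component
    for g in range(len(weights)):
        for t in range(p, len(y)):
            mean = np.dot(ar_coefs[g], y[t - p:t][::-1])   # coef 0 multiplies y_{t-1}
            log_terms[g] += norm.logpdf(y[t], loc=mean, scale=sigmas[g])
    return logsumexp(log_terms)    # log of sum_g w_g * prod_t f_g(y_t | past)

y = np.random.default_rng(5).normal(size=300)
print(moar_loglik(y, weights=[0.5, 0.5],
                  ar_coefs=[np.array([0.5]), np.array([-0.3])],
                  sigmas=[1.0, 1.0], p=1))
```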

  7. On the maximum energy release in flux-rope models of eruptive flares

    NASA Technical Reports Server (NTRS)

    Forbes, T. G.; Priest, E. R.; Isenberg, P. A.

    1994-01-01

    We determine the photospheric boundary conditions which maximize the magnetic energy released by a loss of ideal-magnetohydrodynamic (MHD) equilibrium in two-dimensional flux-rope models. In these models a loss of equilibrium causes a transition of the flux rope to a lower magnetic energy state at a higher altitude. During the transition a vertical current sheet forms below the flux rope, and reconnection in this current sheet releases additional energy. Here we compute how much energy is released by the loss of equilibrium relative to the total energy release. When the flux-rope radius is small compared to its height, it is possible to obtain general solutions of the Grad-Shafranov equation for a wide range of boundary conditions. Variational principles can then be used to find the particular boundary condition which maximizes the magnetic energy released for a given class of conditions. We apply this procedure to a class of models known as cusp-type catastrophes, and we find that the maximum energy released by the loss of equilibrium is 20.8% of the total energy release for any model in this class. If the additional restriction is imposed that the photospheric magnetic field forms a simple arcade in the absence of coronal currents, then the maximum energy release reduces to 8.6%.

  8. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
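
    For concreteness, a minimal O(n²) sketch of sample entropy, the family that the proposed QDE and QASE measures approximate more cheaply; r = 0.2·std(x) is the common convention assumed here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D series; assumes enough data for matches at m + 1."""
    x = np.asarray(x, float)
    r = 0.2 * np.std(x) if r is None else r
    def match_count(length):
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(tpl)) / 2.0    # matched pairs, no self-matches
    return -np.log(match_count(m + 1) / match_count(m))

gait = np.random.default_rng(6).normal(size=300)    # stand-in for a gait signal
print(sample_entropy(gait))
```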

  9. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focused on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  10. Estimation of entropy rate in a fast physical random-bit generator using a chaotic semiconductor laser with intrinsic noise.

    PubMed

    Mikami, Takuya; Kanno, Kazutaka; Aoyama, Kota; Uchida, Atsushi; Ikeguchi, Tohru; Harayama, Takahisa; Sunada, Satoshi; Arai, Ken-ichi; Yoshimura, Kazuyuki; Davis, Peter

    2012-01-01

    We analyze the time for growth of bit entropy when generating nondeterministic bits using a chaotic semiconductor laser model. The mechanism for generating nondeterministic bits is modeled as a 1-bit sampling of the intensity of light output. Microscopic noise results in an ensemble of trajectories whose bit entropy increases with time. The time for the growth of bit entropy, called the memory time, depends on both noise strength and laser dynamics. It is shown that the average memory time decreases logarithmically with increase in noise strength. It is argued that the ratio of change in average memory time with change in logarithm of noise strength can be used to estimate the intrinsic dynamical entropy rate for this method of random bit generation. It is also shown that in this model the entropy rate corresponds to the maximum Lyapunov exponent.

  11. Adapting Predictive Models for Cepheid Variable Star Classification Using Linear Regression and Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas

    2014-05-01

    We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtaining a precise determination of distances to nearby galaxies, which in addition helps reduce the uncertainty in current estimates of the expansion rate of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained using a training dataset may be ineffectual on a testing set. A solution to this difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., the Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used in the testing data (target galaxy).

  12. Maximum penalized likelihood estimation in semiparametric mark-recapture-recovery models.

    PubMed

    Michelot, Théo; Langrock, Roland; Kneib, Thomas; King, Ruth

    2016-01-01

    We discuss the semiparametric modeling of mark-recapture-recovery data where the temporal and/or individual variation of model parameters is explained via covariates. Typically, in such analyses a fixed (or mixed) effects parametric model is specified for the relationship between the model parameters and the covariates of interest. In this paper, we discuss the modeling of the relationship via the use of penalized splines, to allow for considerably more flexible functional forms. Corresponding models can be fitted via numerical maximum penalized likelihood estimation, employing cross-validation to choose the smoothing parameters in a data-driven way. Our contribution builds on and extends the existing literature, providing a unified inferential framework for semiparametric mark-recapture-recovery models for open populations, where the interest typically lies in the estimation of survival probabilities. The approach is applied to two real datasets, corresponding to gray herons (Ardea cinerea), where we model the survival probability as a function of environmental condition (a time-varying global covariate), and Soay sheep (Ovis aries), where we model the survival probability as a function of individual weight (a time-varying individual-specific covariate). The proposed semiparametric approach is compared to a standard parametric (logistic) regression and new interesting underlying dynamics are observed in both cases.
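
    A stripped-down version of the penalized-likelihood machinery, with simulated binary survival data, a cubic B-spline basis, and a second-difference penalty on the coefficients; in practice the smoothing parameter lam would be chosen by cross-validation as the authors describe. The data and basis choices here are hypothetical.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated data: survival (0/1) depends smoothly on a covariate.
x = rng.uniform(0, 1, 500)
p_true = 1 / (1 + np.exp(-2 * np.sin(2 * np.pi * x)))
y = rng.binomial(1, p_true)

# Cubic B-spline basis with clamped knots.
k, n_interior = 3, 10
t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_interior)[1:-1], [1.0] * (k + 1)]
n_basis = len(t) - k - 1
B = BSpline(t, np.eye(n_basis), k)(x)        # (500, n_basis) design matrix

D = np.diff(np.eye(n_basis), n=2, axis=0)    # second-difference penalty matrix
lam = 1.0                                    # smoothing parameter (fix by CV)

def penalized_negloglik(beta):
    eta = B @ beta
    # Bernoulli log-likelihood plus smoothness penalty on spline coefficients.
    ll = np.sum(y * eta - np.log1p(np.exp(eta)))
    return -ll + lam * np.sum((D @ beta) ** 2)

beta_hat = minimize(penalized_negloglik, np.zeros(n_basis), method="BFGS").x
eta_25 = BSpline(t, np.eye(n_basis), k)(0.25) @ beta_hat
print("fitted survival prob. at x=0.25:", 1 / (1 + np.exp(-eta_25)))
```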

  13. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    NASA Astrophysics Data System (ADS)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and a patch of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.
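
    The flavor of the TMLE can be conveyed on a toy Gaussian location problem: repeatedly keep the h observations with the smallest residuals (highest likelihood) and refit. This generic trimmed-likelihood sketch is not the paper's state-space algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
# 95 clean observations plus 5 additive outliers.
data = np.concatenate([rng.normal(5.0, 1.0, 95), rng.normal(30.0, 1.0, 5)])

def trimmed_mle_mean(x, trim=0.1, n_iter=20):
    """Gaussian-location TMLE: refit on the (1 - trim) best-explained fraction."""
    h = int(len(x) * (1 - trim))
    mu = np.median(x)                           # robust starting point
    for _ in range(n_iter):
        keep = np.argsort(np.abs(x - mu))[:h]   # smallest residuals = highest likelihood
        mu = x[keep].mean()
    return mu

print("plain MLE (mean):", data.mean())         # pulled toward the outliers
print("trimmed MLE     :", trimmed_mle_mean(data))
```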

  14. Maximum group velocity in a one-dimensional model with a sinusoidally varying staggered potential

    NASA Astrophysics Data System (ADS)

    Nag, Tanay; Sen, Diptiman; Dutta, Amit

    2015-06-01

    We use Floquet theory to study the maximum value of the stroboscopic group velocity in a one-dimensional tight-binding model subjected to an on-site staggered potential varying sinusoidally in time. The results obtained by numerically diagonalizing the Floquet operator are analyzed using a variety of analytical schemes. In the low-frequency limit we use adiabatic theory, while in the high-frequency limit the Magnus expansion of the Floquet Hamiltonian turns out to be appropriate. When the magnitude of the staggered potential is much greater or much less than the hopping, we use degenerate Floquet perturbation theory; we find that dynamical localization occurs in the former case when the maximum group velocity vanishes. Finally, starting from an "engineered" initial state where the particles (taken to be hard-core bosons) are localized in one part of the chain, we demonstrate that the existence of a maximum stroboscopic group velocity manifests in a light-cone-like spreading of the particles in real space.
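
    A schematic numerical version of the Floquet calculation for a generic two-band chain: build the one-period evolution operator at each quasimomentum by time slicing, take quasienergies from its eigenvalues, and differentiate with respect to k to get the stroboscopic group velocity. The Bloch-Hamiltonian parametrization below is an illustrative assumption, not the paper's exact model.

```python
import numpy as np
from scipy.linalg import expm

# Two-band Bloch Hamiltonian of a driven staggered chain (illustrative form):
# H(k, t) = Delta*sin(w t) sigma_z - 2 J cos(k) sigma_x
J, Delta, w = 1.0, 0.5, 10.0
T = 2 * np.pi / w
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def quasienergies(k, n_slices=200):
    """Quasienergies from the time-ordered one-period evolution operator."""
    dt = T / n_slices
    U = np.eye(2, dtype=complex)
    for n in range(n_slices):
        t = (n + 0.5) * dt
        H = Delta * np.sin(w * t) * sz - 2 * J * np.cos(k) * sx
        U = expm(-1j * H * dt) @ U
    ev = np.linalg.eigvals(U)                 # eigenvalues exp(-i eps T)
    return np.sort(np.angle(ev) / (-T))

ks = np.linspace(-np.pi, np.pi, 201)
eps = np.array([quasienergies(k) for k in ks])   # (201, 2) Floquet bands
v = np.gradient(eps, ks, axis=0)                 # stroboscopic group velocity
print("maximum group velocity:", np.abs(v).max())
```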

  15. Maximum likelihood estimation for semiparametric transformation models with interval-censored data

    PubMed Central

    Zeng, Donglin; Mao, Lu; Lin, D. Y.

    2016-01-01

    Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656

  16. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

  17. Estimation of instantaneous peak flow from simulated maximum daily flow using the HBV model

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Haberlandt, Uwe

    2014-05-01

    Instantaneous peak flow (IPF) data are the foundation of hydraulic structure design and flood frequency analysis. However, the long discharge records published by hydrological agencies usually contain only average daily flows, which are of little value for design in small catchments. In earlier research, statistical analysis using observed peak and daily flow data was carried out to explore the link between the IPF and the maximum daily flow (MDF), where a multiple regression model was shown to have the best performance. The objective of this study is to further investigate the suitability of the multiple regression model for post-processing simulated daily flows from hydrological modeling. Model-based flood frequency analysis makes it possible to account for changes in catchment conditions and in climate for design. Here, the HBV model is calibrated on peak flow distributions and flow duration curves using two approaches. In a two-step approach the simulated MDF are corrected with a priori established regressions. In a one-step procedure the regression coefficients are calibrated together with the parameters of the model. For the analysis, data from 18 mesoscale catchments in the Aller-Leine river basin in Northern Germany are used. The results show that: (1) the multiple regression model is capable of predicting the peak flows from the simulated MDF data; (2) the calibrated hydrological model reproduces well the magnitude and frequency distribution of peak flows; (3) the one-step procedure outperforms the two-step procedure regarding the estimation of peak flows.
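
    The regression step itself is ordinary multiple linear regression of observed IPF on MDF and catchment descriptors. A self-contained toy version with synthetic data and hypothetical predictors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data: IPF grows with MDF and catchment steepness,
# and small catchments have peakier hydrographs (illustrative relationships).
n = 200
mdf = rng.lognormal(3.0, 0.5, n)               # maximum daily flow [m^3/s]
area = rng.lognormal(5.0, 0.8, n)              # catchment area [km^2]
slope = rng.uniform(0.5, 5.0, n)               # mean slope [%]
ipf = mdf * (1.1 + 0.05 * slope + 20.0 / area) + rng.normal(0, 2, n)

# Multiple regression: IPF ~ MDF + MDF*slope + MDF/area (with intercept).
X = np.column_stack([np.ones(n), mdf, mdf * slope, mdf / area])
coef, *_ = np.linalg.lstsq(X, ipf, rcond=None)
print("regression coefficients:", np.round(coef, 3))

# Post-processing one simulated MDF value from the hydrological model:
mdf_sim, slope_c, area_c = 40.0, 2.0, 120.0
ipf_hat = coef @ [1.0, mdf_sim, mdf_sim * slope_c, mdf_sim / area_c]
print(f"estimated instantaneous peak flow: {ipf_hat:.1f} m^3/s")
```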

  18. The role of the deformational entropy in the miscibility of polymer blends investigated using a hybrid statistical mechanics and molecular dynamics model.

    PubMed

    Madkour, Tarek M; Salem, Sarah A; Miller, Stephen A

    2013-04-28

    To fully understand the thermodynamic nature of polymer blends and accurately predict their miscibility on a microscopic level, a hybrid model employing both statistical mechanics and molecular dynamics techniques was developed to effectively predict the total free energy of mixing. The statistical mechanics principles were used to derive an expression for the deformational entropy of the chains in the polymeric blends that could be evaluated from molecular dynamics trajectories. Evaluation of the entropy loss due to the deformation of the polymer chains in the case of coiling as a result of the repulsive interactions between the blend components or in the case of swelling due to the attractive interactions between the polymeric segments predicted a negative value for the deformational entropy resulting in a decrease in the overall entropy change upon mixing. Molecular dynamics methods were then used to evaluate the enthalpy of mixing, entropy of mixing, the loss in entropy due to the deformation of the polymeric chains upon mixing and the total free energy change for a series of polar and non-polar, poly(glycolic acid), PGA, polymer blends.

  19. Numerical Modeling of the Last Glacial Maximum Yellowstone Ice Cap Captures Asymmetry in Moraine Ages

    NASA Astrophysics Data System (ADS)

    Anderson, L. S.; Wickert, A. D.; Colgan, W. T.; Anderson, R. S.

    2014-12-01

    The Last Glacial Maximum (LGM) Yellowstone Ice Cap was the largest continuous ice body in the US Rocky Mountains. Terminal moraine ages derived from cosmogenic radionuclide dating (e.g., Licciardi and Pierce, 2008) constrain the timing of maximum Ice Cap extent. Importantly, the moraine ages vary by several thousand years around the Ice Cap; ages on the eastern outlet glaciers are significantly younger than their western counterparts. In order to interpret these observations within the context of LGM climate in North America, we perform two numerical glacier modeling experiments: 1) We model the initiation and growth of the Ice Cap to steady state; and 2) We estimate the range of LGM climate states which led to the formation of the Ice Cap. We use an efficient semi-implicit 2-D glacier model coupled to a fully implicit solution for flexural isostasy, allowing for transient links between climatic forcing, ice thickness, and earth surface deflection. Independent of parameter selection, the Ice Cap initiates in the Absaroka and Beartooth mountains and then advances across the Yellowstone plateau to the west. The Ice Cap advances to its maximum extent first to the older eastern moraines and last to the younger western and northwestern moraines. This suggests that the moraine ages may reflect the timescale required for the Ice Cap to advance across the high elevation Yellowstone plateau rather than the timing of local LGM climate. With no change in annual precipitation from the present, a mean summer temperature drop of 8-9°C is required to form the Ice Cap. Further parameter searches provide the full range of LGM paleoclimate states that led to the Yellowstone Ice Cap. Using our preferred parameter set, we find that the timescale for the growth of the complete Ice Cap is roughly 10,000 years. Isostatic subsidence helps explain the long timescale of Ice Cap growth. The Yellowstone Ice Cap caused a maximum surface deflection of 300 m (using a constant effective elastic thickness).

  20. Essential equivalence of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) and steepest-entropy-ascent models of dissipation for nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo

    2015-04-01

    By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves.

  2. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under the noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the Patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, better converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the Patient group the shape factor of the measurement noise converges to values between 1 and 2 whereas for the Control group shape factor values are estimated in the super-Gaussian area.
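
    The noise model can be sketched directly: the generalized Gaussian density carries a shape parameter beta, the kurtosis method recovers beta by inverting the analytic kurtosis-shape relation, and the scale then has a closed-form MLE. The sketch below applies these two steps to synthetic noise, not to the authors' respiratory data.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq
from scipy.stats import gennorm, kurtosis

rng = np.random.default_rng(5)
# Synthetic GGD measurement noise with shape 1.5 (between Laplacian and Gaussian).
noise = gennorm.rvs(1.5, scale=1.0, size=5000, random_state=rng)

# Kurtosis method: invert kappa(b) = Gamma(5/b) Gamma(1/b) / Gamma(3/b)^2.
kappa_sample = kurtosis(noise, fisher=False)
kappa = lambda b: gamma(5 / b) * gamma(1 / b) / gamma(3 / b) ** 2
beta_hat = brentq(lambda b: kappa(b) - kappa_sample, 0.3, 10.0)

# MLE of the scale given the shape (closed form for the GGD).
alpha_hat = (beta_hat / len(noise) * np.sum(np.abs(noise) ** beta_hat)) ** (1 / beta_hat)
print(f"shape beta ~ {beta_hat:.2f} (true 1.5), scale alpha ~ {alpha_hat:.2f} (true 1.0)")
# beta < 2 indicates super-Gaussian (heavy-tailed) noise, the regime the
# paper reports for the patient group.
```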

  3. Gamma-ray constraints on maximum cosmogenic neutrino fluxes and UHECR source evolution models

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Kalashev, Oleg; Semikoz, Dmitri V.

    2012-01-01

    The dip model assumes that the ultra-high energy cosmic rays (UHECRs) above 10^18 eV consist exclusively of protons and is consistent with the spectrum and composition measured by HiRes. Here we present the range of cosmogenic neutrino fluxes in the dip model which are compatible with a recent determination of the extragalactic very high energy (VHE) gamma-ray diffuse background derived from 2.5 years of Fermi/LAT data. We show that the largest fluxes predicted in the dip model would be detectable by IceCube in about 10 years of observation and are within the reach of a few years of observation with the ARA project. In the incomplete UHECR model, in which protons are assumed to dominate only above 10^19 eV, the cosmogenic neutrino fluxes could be a factor of 2 or 3 larger. Any fraction of heavier nuclei in the UHECRs at these energies would reduce the maximum cosmogenic neutrino fluxes. We also consider special evolution models in which the UHECR sources are assumed to follow the evolution of either the star formation rate (SFR), the gamma-ray burst (GRB) rate, or the active galactic nuclei (AGN) rate in the Universe, and find that the last two are disfavored (and, in the dip model, rejected) by the new VHE gamma-ray background.

  4. Stable water isotope behavior during the last glacial maximum: A general circulation model analysis

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, Randal D.; Suozzo, Robert J.; Russell, Gary L.

    1994-01-01

    Global water isotope geochemistry during the last glacial maximum (LGM) is simulated with an 8 deg x 10 deg atmospheric general circulation model (GCM). The simulation results suggest that the spatial delta O-18/temperature relationships observed for the present day and LGM climates are very similar. Furthermore, the temporal delta O-18/temperature relationship is similar to the present-day spatial relationship in regions for which the LGM/present-day temperature change is significant. This helps justify the standard practice of applying the latter to the interpretation of paleodata, despite the possible influence of other factors, such as changes in the evaporative sources of precipitation or in the seasonality of precipitation. The model suggests, for example, that temperature shifts inferred from ice core data may differ from the true shifts by only about 30%.

  5. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    PubMed Central

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951

  6. UniEnt: uniform entropy model for the dynamics of a neuronal population

    NASA Astrophysics Data System (ADS)

    Hernandez Lahme, Damian; Nemenman, Ilya

    Sensory information and motor responses are encoded in the brain in a collective spiking activity of a large number of neurons. Understanding the neural code requires inferring statistical properties of such collective dynamics from multicellular neurophysiological recordings. Questions of whether synchronous activity or silence of multiple neurons carries information about the stimuli or the motor responses are especially interesting. Unfortunately, detection of such high order statistical interactions from data is especially challenging due to the exponentially large dimensionality of the state space of neural collectives. Here we present UniEnt, a method for the inference of strengths of multivariate neural interaction patterns. The method is based on the Bayesian prior that makes no assumptions (uniform a priori expectations) about the value of the entropy of the observed multivariate neural activity, in contrast to popular approaches that maximize this entropy. We then study previously published multi-electrode recordings data from salamander retina, exposing the relevance of higher order neural interaction patterns for information encoding in this system. This work was supported in part by Grants JSMF/220020321 and NSF/IOS/1208126.

  7. Entropy and cosmology.

    NASA Astrophysics Data System (ADS)

    Zucker, M. H.

    temperature and thus, by itself, reverse entropy. The vast encompassing gravitational forces that the universe has at its disposal, forces that dominate the phase of contraction, provide the compacting, compressive mechanism that regenerates heat in an expanded, cooled universe and decreases entropy. And this phenomenon takes place without diminishing or depleting the finite amount of mass/energy with which the universe began. The fact that the universe can reverse the entropic process leads to possibilities previously ignored when assessing which of the three models (open, closed, or flat) most probably represents the future of the universe. After analyzing the models, the conclusion reached here is that the open model is only an expanded version of the closed model and therefore is not open, and the closed model will never collapse to a big crunch and, therefore, is not closed. This leaves a modified model, oscillating forever between limited phases of expansion and contraction (a universe in "dynamic equilibrium"), as the only feasible choice.

  8. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    SciTech Connect

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
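
    Dropping the arrival times, the color sequence alone already poses a maximum likelihood problem of the same flavor: a two-state hidden Markov model whose emission probabilities are the FRET efficiencies. The discrete-time caricature below (forward algorithm plus a crude grid search) is an illustrative stand-in for the photon-by-photon likelihood, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate photon colors from a two-state chain (discrete-time caricature:
# one photon per step; E1, E2 are acceptor-photon probabilities per state).
E_true, p_switch = np.array([0.2, 0.8]), 0.05
n_photons = 5000
state = np.zeros(n_photons, dtype=int)
for i in range(1, n_photons):
    state[i] = state[i - 1] ^ (rng.random() < p_switch)
colors = (rng.random(n_photons) < E_true[state]).astype(int)  # 1 = acceptor

def loglik(E1, E2, q):
    """Forward algorithm: log-likelihood of the color sequence for the 2-state HMM."""
    T = np.array([[1 - q, q], [q, 1 - q]])
    E = np.array([E1, E2])
    alpha = np.array([0.5, 0.5])          # equilibrium starting distribution
    ll = 0.0
    for c in colors:
        alpha = (alpha @ T) * np.where(c == 1, E, 1 - E)
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s                        # rescale to avoid underflow
    return ll

# Crude grid-search MLE of the switching probability (E1, E2 fixed at truth).
qs = np.linspace(0.01, 0.2, 20)
q_hat = qs[np.argmax([loglik(0.2, 0.8, q) for q in qs])]
print(f"estimated switching probability {q_hat:.3f} (true {p_switch})")
```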

  9. Analytical template protection performance and maximum key size given a Gaussian-modeled biometric source

    NASA Astrophysics Data System (ADS)

    Kelkboom, Emile J. C.; Breebaart, Jeroen; Buhan, Ileana; Veldhuis, Raymond N. J.

    2010-04-01

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A large portion of template protection techniques is based on extracting a key from, or binding a key to, a biometric sample. The achieved protection depends on the size of the key and its closeness to being random. In the literature it can be observed that there is a large variation in the reported key lengths at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work we determine the analytical relationship between the system performance and the theoretical maximum key size given a biometric source modeled by parallel Gaussian channels. We consider the case where the source capacity is evenly distributed across all channels and the channels are independent. We also determine the effect of parameters such as the source capacity, the number of enrolment and verification samples, and the operating point selection on the maximum key size. We show that a trade-off exists between the privacy protection of the biometric system and its convenience for its users.
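
    The headline quantity can be illustrated with textbook formulas: for N independent Gaussian channels with the capacity spread evenly, the maximum key size is the total channel capacity, and averaging enrolment/verification samples shrinks the effective noise. The averaging-gain expression below is our own simplifying assumption, used only to make the trade-offs concrete; it is not the paper's derivation.

```python
import numpy as np

def max_key_bits(n_channels, snr_total, n_enrol=1, n_verif=1):
    """Capacity-style bound for N i.i.d. Gaussian channels, evenly loaded.

    Assumption: averaging n samples divides the measurement-noise variance
    by n, so the effective SNR of the enrolment/verification channel grows
    by the pooled factor 2 / (1/n_enrol + 1/n_verif).
    """
    snr = snr_total / n_channels                          # even distribution
    eff = snr * (2.0 / (1.0 / n_enrol + 1.0 / n_verif))   # pooled averaging gain
    return n_channels * 0.5 * np.log2(1.0 + eff)

for n_e, n_v in [(1, 1), (3, 1), (6, 6)]:
    bits = max_key_bits(64, snr_total=64.0, n_enrol=n_e, n_verif=n_v)
    print(f"enrol={n_e}, verif={n_v}: max key ~ {bits:.1f} bits")
```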

  10. Last glacial maximum constraints on the Earth System model HadGEM2-ES

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.

    2015-09-01

    We investigate the response of the atmospheric and land surface components of the CMIP5/AR5 Earth System model HadGEM2-ES to pre-industrial (PI: AD 1860) and last glacial maximum (LGM: 21 kyr) boundary conditions. HadGEM2-ES comprises atmosphere, ocean and sea-ice components which are interactively coupled to representations of the carbon cycle, aerosols including mineral dust and tropospheric chemistry. In this study, we focus on the atmosphere-only model HadGEM2-A coupled to terrestrial carbon cycle and aerosol models. This configuration is forced with monthly sea surface temperature and sea-ice fields from equivalent coupled simulations with an older version of the Hadley Centre model, HadCM3. HadGEM2-A simulates extreme cooling over northern continents and nearly complete die back of vegetation in Asia, giving a poor representation of the LGM environment compared with reconstructions of surface temperatures and biome distributions. The model also performs significantly worse for the LGM in comparison with its precursor AR4 model HadCM3M2. Detailed analysis shows that the major factor behind the vegetation die off in HadGEM2-A is a subtle change to the temperature dependence of leaf mortality within the phenology model of HadGEM2. This impacts on both snow-vegetation albedo and vegetation dynamics. A new set of parameters is tested for both the pre-industrial and LGM, showing much improved coverage of vegetation in both time periods, including an improved representation of the needle-leaf forest coverage in Siberia for the pre-industrial. The new parameters and the resulting changes in global vegetation distribution strongly impact the simulated loading of mineral dust, an important aerosol for the LGM. The climate response in an abrupt 4× pre-industrial CO2 simulation is also analysed and shows modest regional impacts on surface temperatures across the Boreal zone.

  11. Local entropy of a nonequilibrium fermion system

    NASA Astrophysics Data System (ADS)

    Stafford, Charles A.; Shastry, Abhay

    2017-03-01

    The local entropy of a nonequilibrium system of independent fermions is investigated and analyzed in the context of the laws of thermodynamics. It is shown that the local temperature and chemical potential can only be expressed in terms of derivatives of the local entropy for linear deviations from local equilibrium. The first law of thermodynamics is shown to lead to an inequality, not equality, for the change in the local entropy as the nonequilibrium state of the system is changed. The maximum entropy principle (second law of thermodynamics) is proven: a nonequilibrium distribution has a local entropy less than or equal to a local equilibrium distribution satisfying the same constraints. It is shown that the local entropy of the system tends to zero when the local temperature tends to zero, consistent with the third law of thermodynamics.

  12. Competition between Homophily and Information Entropy Maximization in Social Networks

    PubMed Central

    Zhao, Jichang; Liang, Xiao; Xu, Ke

    2015-01-01

    In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which could be stated as homophily breeding new connections. Meanwhile, the hypothesis of maximum information entropy has recently been presented as a possible origin of effective navigation in small-world networks. Through both theoretical and experimental analysis, we find that there exists a competition between information entropy maximization and homophily in local structure. This competition suggests that a newly built relationship between two individuals with more common friends would lead to less information entropy gain for them. We demonstrate that both assumptions coexist in the evolution of the social network: the rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally, giving individuals strong, trusted ties. A toy model is also presented to demonstrate the competition and evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994

  13. ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. III. RESULTS FROM SIMULATIONS

    SciTech Connect

    Barnes, Eric I.; Egerer, Colin P.

    2015-05-20

    The equilibria formed by the self-gravitating, collisionless collapse of simple initial conditions have been investigated for decades. We present the results of our attempts to describe the equilibria formed in N-body simulations using thermodynamically motivated models. Previous work has suggested that it is possible to define distribution functions for such systems that describe maximum entropy states. These distribution functions are used to create radial density and velocity distributions for comparison to those from simulations. A wide variety of N-body code conditions are used to reduce the chance that results are biased by numerical issues. We find that a subset of the initial conditions studied leads to equilibria that can be accurately described by these models, and that direct calculation of the entropy shows maximum values being achieved.

  14. Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia

    NASA Astrophysics Data System (ADS)

    Nordin, Muhamad Asyraf bin Che; Hassan, Husna

    2015-10-01

    The Markov chain's first order principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of daily maximum temperature data from Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range (TCR) based on the physiologically equivalent temperature (PET) index in Malaysia is less than 34°C, so the data were classified into two states: a normal state (within the TCR) and a hot state (above the TCR). The long-run results show that the probability of the daily maximum temperature exceeding the TCR is only 2.2%, while the probability of it remaining within the TCR is 97.8%.
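
    The long-run probabilities follow directly from the first-order chain: estimate the 2x2 transition matrix from the daily state sequence and take its stationary distribution (the left eigenvector with eigenvalue 1). A minimal sketch with a synthetic state sequence, where 0 = within the TCR and 1 = above it; the transition matrix used to generate the data is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily states generated from an assumed transition matrix.
P_true = np.array([[0.995, 0.005],
                   [0.250, 0.750]])
states = [0]
for _ in range(14 * 365):
    states.append(rng.choice(2, p=P_true[states[-1]]))
states = np.array(states)

# First-order Markov chain: transition counts, then row-normalized probabilities.
C = np.zeros((2, 2))
np.add.at(C, (states[:-1], states[1:]), 1)
P_hat = C / C.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P_hat with eigenvalue 1.
w, V = np.linalg.eig(P_hat.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print("long-run P(within TCR), P(above TCR):", np.round(pi, 3))
```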

  15. Tropical climate at the last glacial maximum inferred from glacier mass-balance modeling

    USGS Publications Warehouse

    Hostetler, S.W.; Clark, P.U.

    2000-01-01

    Model-derived equilibrium line altitudes (ELAs) of former tropical glaciers support arguments, based on other paleoclimate data, for both the magnitude and spatial pattern of terrestrial cooling in the tropics at the last glacial maximum (LGM). Relative to the present, LGM ELAs were maintained by air temperatures that were 3.5° to 6.6°C lower and precipitation that ranged from 63% wetter in Hawaii to 25% drier on Mt. Kenya, Africa. Our results imply the need for a ~3°C cooling of LGM sea surface temperatures in the western Pacific warm pool. Sensitivity tests suggest that LGM ELAs could have persisted until 16,000 years before the present in the Peruvian Andes and on Papua New Guinea.

  17. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
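
    For Gaussian measurement noise, minimizing the negative log likelihood reduces to nonlinear least squares, so the Levenberg-Marquardt step can be sketched with SciPy on a toy first-order response model, a deliberately tiny stand-in for the 21-state rotorcraft model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)

# Toy "flight test": first-order response y' = a*y + b*u, sampled with noise.
a_true, b_true, dt = -1.5, 2.0, 0.05
t = np.arange(0, 5, dt)
u = np.sin(t)                                  # control input

def simulate(a, b):
    y = np.zeros_like(t)
    for i in range(1, len(t)):                 # forward-Euler integration
        y[i] = y[i - 1] + dt * (a * y[i - 1] + b * u[i - 1])
    return y

y_meas = simulate(a_true, b_true) + rng.normal(0, 0.02, len(t))

# Levenberg-Marquardt minimization of the residuals (equivalent to
# minimizing the negative log likelihood for Gaussian noise).
res = least_squares(lambda p: simulate(*p) - y_meas, x0=[-0.5, 1.0], method="lm")
print("identified [a, b]:", np.round(res.x, 3))
```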

  18. Error Analysis of Multi-Source Data for 3D Geological Modeling Using Entropy-based Weighting

    NASA Astrophysics Data System (ADS)

    Hou, W.; Yang, L.; Clarke, K.

    2013-12-01

    geological attribute probabilities, allowing different kinds of error distributions of spatial data to be summed directly after the transformation. When building a 3D geological model, several kinds of raw data may cross over one point or line. In this circumstance, an entropy-based weight is given to each kind of data when calculating the final probability. For any point of one data source in space, its geological attribute probability yields an entropy value: the larger the entropy, the smaller the entropy weight. The final geological attribute probability of each spatial point is calculated using a linear entropy-weighted summation. A color scale is used to illustrate the distribution of geological attribute probability in MapGIS K9. A concrete example illustrates that geological attribute probability is an effective way of describing multiple error distributions among the raw data used for geological modeling. Acknowledgement: This study is funded by NSFC (41102207) and the Fundamental Research Funds for the Central Universities (121gpy19).
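
    The weighting rule can be written compactly: each source supplies an attribute probability at a point, low-entropy (more certain) sources get larger weights, and the fused value is the weighted sum. The normalization in the sketch below is one plausible choice, not necessarily the authors' exact formula.

```python
import numpy as np

def entropy_weighted_fusion(probs, eps=1e-12):
    """Fuse attribute probabilities from several sources at one point.

    probs: P(attribute) from each data source. Sources with low binary
    entropy (probability near 0 or 1) are treated as more certain and
    receive larger weights; this normalization is an assumed scheme.
    """
    p = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # binary entropy per source
    w = (1.0 - h) + eps                               # larger entropy -> smaller weight
    w /= w.sum()
    return float(w @ p)

# Three hypothetical sources crossing one point: borehole, section, map.
print(entropy_weighted_fusion([0.95, 0.70, 0.55]))   # pulled toward the confident source
```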

  19. Maximum Urine Concentrating Capability in a Mathematical Model of the Inner Medulla of the Rat Kidney

    PubMed Central

    Marcano, Mariano; Layton, Anita T.; Layton, Harold E.

    2009-01-01

    In a mathematical model of the urine concentrating mechanism of the inner medulla of the rat kidney, a nonlinear optimization technique was used to estimate parameter sets that maximize the urine-to-plasma osmolality ratio (U/P) while maintaining the urine flow rate within a plausible physiologic range. The model, which used a central core formulation, represented loops of Henle turning at all levels of the inner medulla and a composite collecting duct (CD). The parameters varied were: water flow and urea concentration in tubular fluid entering the descending thin limbs and the composite CD at the outer-inner medullary boundary; scaling factors for the number of loops of Henle and CDs as a function of medullary depth; location and increase rate of the urea permeability profile along the CD; and a scaling factor for the maximum rate of NaCl transport from the CD. The optimization algorithm sought to maximize a quantity E that equaled U/P minus a penalty function for insufficient urine flow. Maxima of E were sought by changing parameter values in the direction in parameter space in which E increased. The algorithm attained a maximum E that increased urine osmolality and inner medullary concentrating capability by 37.5% and 80.2%, respectively, above base-case values; the corresponding urine flow rate and the concentrations of NaCl and urea were all within or near reported experimental ranges. Our results predict that urine osmolality is particularly sensitive to three parameters: the urea concentration in tubular fluid entering the CD at the outer-inner medullary boundary, the location and increase rate of the urea permeability profile along the CD, and the rate of decrease of the CD population (and thus of surface area) along the cortico-medullary axis. PMID:19915926
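
    The optimization wrapper is generic even though the kidney model is not: maximize E = U/P minus a penalty for leaving the admissible urine-flow range. The toy objective below stands in for the tubular model (which is not reproduced here) purely to make the structure of E concrete; every function and parameter is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def urine_model(params):
    """Hypothetical stand-in for the tubular model: returns (U/P, flow).

    In the real study these come from solving the inner-medulla equations;
    here a smooth toy response keeps the optimization structure visible.
    """
    up = 8.0 - np.sum((params - [0.6, 0.3, 0.8]) ** 2)   # peaked U/P surface
    flow = 2.0 * params[0] + 0.5                          # flow rises with param 0
    return up, flow

def neg_E(params, flow_lo=1.0, flow_hi=3.0, w=50.0):
    up, flow = urine_model(params)
    # Penalty for a urine flow outside the plausible physiologic range.
    penalty = w * (max(0.0, flow_lo - flow) ** 2 + max(0.0, flow - flow_hi) ** 2)
    return -(up - penalty)

res = minimize(neg_E, x0=np.array([0.2, 0.2, 0.2]), method="Nelder-Mead")
up, flow = urine_model(res.x)
print(f"optimized U/P = {up:.2f} at flow = {flow:.2f}")
```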

  20. Entropy Generation Across Earth's Bow Shock

    NASA Technical Reports Server (NTRS)

    Parks, George K.; McCarthy, Michael; Fu, Suiyan; Lee, E. S.; Cao, Jinbin; Goldstein, Melvyn L.; Canu, Patrick; Dandouras, Iannis S.; Reme, Henri; Fazakerley, Andrew; Lin, Naiguo; Wilber, Mark

    2011-01-01

    Earth's bow shock is a transition layer that causes an irreversible change in the state of plasma that is stationary in time. Theories predict entropy increases across the bow shock, but entropy has never been directly measured. Cluster and Double Star plasma experiments measure 3D plasma distributions upstream and downstream of the bow shock that allow calculation of Boltzmann's entropy function H and a test of his famous H-theorem, dH/dt ≤ 0. We present the first direct measurements of entropy density changes across Earth's bow shock. We will show that this entropy generation may be part of the processes that produce the non-thermal plasma distributions, and that it is consistent with a kinetic entropy flux model derived from the collisionless Boltzmann equation, giving strong support to the conclusion that the solar wind's total entropy across the bow shock remains unchanged. As far as we know, our results are not explained by any existing shock models and should be of interest to theorists.