Sample records for entropy estimation methods

  1. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
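
    A minimal sketch of the plug-in estimator discussed above makes the word-length trade-off concrete: form the empirical distribution of length-L words and divide its Shannon entropy by L. Function and parameter names are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from collections import Counter

    def plug_in_entropy_rate(bits, word_len):
        """Plug-in (maximum-likelihood) entropy-rate estimate in bits/symbol:
        Shannon entropy of the empirical distribution of length-`word_len`
        words, divided by the word length."""
        words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
        counts = np.array(list(Counter(words).values()), dtype=float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum() / word_len

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 100_000)    # i.i.d. fair bits: true entropy rate is 1 bit/symbol
    print(plug_in_entropy_rate(x, word_len=10))
    ```

    Conclusion (iv) is visible directly in such a sketch: once `word_len` is large relative to the record, most words occur only once and the estimate is badly biased.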

  2. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which is useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy, which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
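
    The quadratic (order-2) Renyi entropy mentioned above has a convenient property: under a Gaussian-kernel density estimate, the integral of the squared density (the FT index) has a closed form, so no numerical integration is needed. The following sketch assumes a fixed, user-supplied bandwidth; optimal bandwidth selection is the subject of the paper itself.

    ```python
    import numpy as np

    def renyi_quadratic_entropy(x, bandwidth):
        """H2 = -log( integral of p(x)^2 dx ) for a 1-D sample.  With a
        Gaussian-kernel KDE, the integral equals
        (1/N^2) * sum_ij N(x_i - x_j; 0, 2h^2), which is also the
        Friedman-Tukey index."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        d = x[:, None] - x[None, :]
        var = 2.0 * bandwidth ** 2                 # kernels convolve to variance 2h^2
        ft_index = np.exp(-d ** 2 / (2 * var)).sum() / (n ** 2 * np.sqrt(2 * np.pi * var))
        return -np.log(ft_index)

    rng = np.random.default_rng(1)
    print(renyi_quadratic_entropy(rng.normal(size=2000), bandwidth=0.3))
    ```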

  3. In Vivo potassium-39 NMR spectra by the Burg maximum-entropy method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takanori; Minamitani, Haruyuki

    The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
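
    For readers unfamiliar with the Burg method, the sketch below shows its core: fit an autoregressive model by minimizing combined forward and backward prediction error, then evaluate the resulting all-pole (maximum-entropy) spectrum. This is a generic Burg implementation, not the authors' code; the model order is a free parameter.

    ```python
    import numpy as np

    def burg_psd(x, order, n_freq=512):
        """Burg maximum-entropy PSD estimate: reflection coefficients from
        forward/backward prediction errors, Levinson update of the AR
        polynomial, then the all-pole spectrum e / |A(f)|^2."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        f, b = x[1:].copy(), x[:-1].copy()         # forward / backward errors
        a = np.array([1.0])                        # AR polynomial coefficients
        e = np.mean(x ** 2)                        # prediction-error power
        for _ in range(order):
            k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
            a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
            e *= 1.0 - k ** 2
            f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
        freqs = np.linspace(0.0, 0.5, n_freq)      # cycles per sample
        z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
        return freqs, e / np.abs(z @ a) ** 2

    t = np.arange(512)
    sig = np.sin(2 * np.pi * 0.12 * t) + 0.1 * np.random.default_rng(5).normal(size=512)
    freqs, psd = burg_psd(sig, order=20)
    ```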

  4. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  5. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  6. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Owing to limited hydrogeological observation data and the high level of uncertainty within them, parameter estimation for groundwater models has been an important issue. Many parameter estimation methods exist; for example, the Kalman Filter provides real-time calibration of parameters from measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filtering is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data in parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. We use Python and QGIS with the groundwater model MODFLOW, and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation, combining a conventional filtering method with explicit treatment of data uncertainty. The study was conducted as a numerical model experiment, combining the Bayesian Maximum Entropy filter with a hypothetical MODFLOW groundwater model, in which virtual observation wells periodically sample the simulated groundwater state. The results show that, by accounting for the uncertainty of the data, the Bayesian Maximum Entropy filter provides good real-time parameter estimates.

  7. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    PubMed Central

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
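
    The sketch below shows the basic discrete transfer entropy computation with history length 1; for brevity it pools observations over trials and time, whereas the ensemble estimator described above restricts pooling to trials so that time-resolved, non-stationary estimates are possible (and uses nearest-neighbor rather than histogram estimators). All names and the quantization scheme are illustrative.

    ```python
    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y, bins=4):
        """TE(X -> Y), history length 1, on (n_trials, n_samples) arrays:
        TE = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
        edges = np.linspace(0, 1, bins + 1)[1:-1]
        xq = np.digitize(x, np.quantile(x, edges))   # quantize to a small alphabet
        yq = np.digitize(y, np.quantile(y, edges))
        triples = Counter(zip(yq[:, 1:].ravel(), yq[:, :-1].ravel(), xq[:, :-1].ravel()))
        n = sum(triples.values())
        p3 = {k: v / n for k, v in triples.items()}
        def marginal(keyfunc):
            out = {}
            for k, v in p3.items():
                out[keyfunc(k)] = out.get(keyfunc(k), 0.0) + v
            return out
        p_yy = marginal(lambda k: (k[0], k[1]))      # p(y+, y)
        p_yx = marginal(lambda k: (k[1], k[2]))      # p(y, x)
        p_y = marginal(lambda k: k[1])               # p(y)
        return sum(p * np.log2(p * p_y[yp] / (p_yx[(yp, xp)] * p_yy[(yn, yp)]))
                   for (yn, yp, xp), p in p3.items())

    rng = np.random.default_rng(7)
    x = rng.normal(size=(50, 500))
    y = np.roll(x, 1, axis=1) + 0.5 * rng.normal(size=(50, 500))   # y driven by past x
    print(transfer_entropy(x, y), transfer_entropy(y, x))          # first value is larger
    ```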

  8. Low Streamflow Forecasting using Minimum Relative Entropy

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation function in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different priors, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low-streamflow series from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of the low-streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions obtained with Burg's maximum entropy spectral analysis (MESA) and with configurational entropy, and the advantages and disadvantages of each method for forecasting low streamflow are discussed.

  9. Rényi continuous entropy of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2004-12-07

    Entropy measures of DNA sequences estimate their randomness or, inversely, their repeatability. L-block Shannon discrete entropy accounts for the empirical distribution of all length-L words and has convergence problems for finite sequences. A new entropy measure that extends Shannon's formalism is proposed. Renyi's quadratic entropy, calculated with the Parzen window density estimation method applied to CGR/USM continuous maps of DNA sequences, constitutes a novel technique to evaluate sequence global randomness without some of the former method's drawbacks. The asymptotic behaviour of this new measure was analytically deduced and the calculation of entropies for several synthetic and experimental biological sequences was performed. The results obtained were compared with the distributions of the null model of randomness obtained by simulation. The biological sequences showed different p-values according to the kernel resolution of Parzen's method, which might indicate an unknown level of organization of their patterns. This new technique can be very useful in the study of DNA sequence complexity and provides additional tools for DNA entropy estimation. The main MATLAB applications developed and additional material are available at the webpage. Specialized functions can be obtained from the authors.

  10. Population entropies estimates of proteins

    NASA Astrophysics Data System (ADS)

    Low, Wai Yee

    2017-05-01

    The Shannon entropy equation provides a way to estimate the variability of amino acid sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus correction for alignment size bias is needed. In the current work, an R-based package named EntropyCorrect that enables estimation of population entropy is presented, and an empirical study of how well this new algorithm performs on simulated datasets of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
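
    The "linear regression and random subsampling" idea can be sketched as follows: the plug-in entropy has a bias that scales approximately as 1/n, so computing entropies of random subsamples of several sizes and extrapolating against 1/n to zero estimates the population value. This is a reconstruction of the general idea, not the EntropyCorrect code, whose details may differ.

    ```python
    import numpy as np

    def population_entropy(symbols, sizes=(50, 100, 200, 400), reps=200, seed=2):
        """Extrapolate plug-in Shannon entropies (bits) of random subsamples
        against 1/n; the intercept at 1/n -> 0 approximates the population
        entropy (the first-order plug-in bias is ~ -(K-1)/(2n))."""
        rng = np.random.default_rng(seed)
        symbols = np.asarray(symbols)
        inv_n, h = [], []
        for n in sizes:
            for _ in range(reps):
                sub = rng.choice(symbols, size=n, replace=False)
                _, counts = np.unique(sub, return_counts=True)
                p = counts / n
                inv_n.append(1.0 / n)
                h.append(-(p * np.log2(p)).sum())
        return np.polyfit(inv_n, h, 1)[1]          # intercept of the fit

    rng = np.random.default_rng(2)
    aas = rng.choice(list("ACDEFGHIKLMNPQRSTVWY"), size=5000)  # uniform toy column
    print(population_entropy(aas))                 # close to log2(20) = 4.32 bits
    ```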

  11. Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.

    PubMed

    Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H

    2015-11-01

    Entropy of binding constitutes a major, and in many cases detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimate of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool for understanding these methods and realizing the practical issues that may arise in such calculations.
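
    As one concrete example of the fluctuation-based family reviewed here, Schlitter's formula bounds the configurational entropy using only the covariance matrix of atomic positions. The sketch below is generic (SI units, per-molecule entropy), not code from the review.

    ```python
    import numpy as np

    KB = 1.380649e-23        # Boltzmann constant, J/K
    HBAR = 1.054571817e-34   # reduced Planck constant, J*s

    def schlitter_entropy(coords, masses, temperature=300.0):
        """Upper bound on configurational entropy from atomic fluctuations:
        S <= (kB/2) ln det[ 1 + (kB*T*e^2/hbar^2) * M^(1/2) C M^(1/2) ],
        where C is the covariance of the Cartesian coordinates.
        coords: (n_frames, n_atoms, 3) in m;  masses: (n_atoms,) in kg."""
        n_frames, n_atoms, _ = coords.shape
        x = coords.reshape(n_frames, 3 * n_atoms)
        x = x - x.mean(axis=0)
        cov = (x.T @ x) / (n_frames - 1)
        m = np.repeat(masses, 3)                   # one mass per Cartesian DOF
        mat = np.eye(3 * n_atoms) + (KB * temperature * np.e ** 2 / HBAR ** 2) \
              * np.sqrt(np.outer(m, m)) * cov
        return 0.5 * KB * np.linalg.slogdet(mat)[1]   # J/K per molecule
    ```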

  12. Financial time series analysis based on effective phase transfer entropy

    NASA Astrophysics Data System (ADS)

    Yang, Pengbo; Shang, Pengjian; Lin, Aijing

    2017-02-01

    Transfer entropy is a powerful technique which is able to quantify the influence of one dynamic system on another. In this paper, we propose the effective phase transfer entropy method, based on the transfer entropy method. We use simulated data to test the performance of this method, and the experimental results confirm that the proposed approach is capable of detecting the information transfer between systems. We also explore the relationship between effective phase transfer entropy and some variables, such as data size, coupling strength and noise. The effective phase transfer entropy is positively correlated with the data size and the coupling strength. Even in the presence of a large amount of noise, it can detect the information transfer between systems, and it is very robust to noise. Moreover, this measure is able to estimate the information flow between systems more accurately than phase transfer entropy. To demonstrate its practical application, we apply the method to financial time series and gain new insight into the interactions between systems. It is demonstrated that the effective phase transfer entropy can be used to detect some economic fluctuations in the financial market. To summarize, the effective phase transfer entropy method is a very efficient tool to estimate the information flow between systems.

  13. Distance-Based Configurational Entropy of Proteins from Molecular Dynamics Simulations

    PubMed Central

    Fogolari, Federico; Corazza, Alessandra; Fortuna, Sara; Soler, Miguel Angel; VanSchouwen, Bryan; Brancolini, Giorgia; Corni, Stefano; Melacini, Giuseppe; Esposito, Gennaro

    2015-01-01

    Estimation of configurational entropy from molecular dynamics trajectories is a difficult task which is often performed using quasi-harmonic or histogram analysis. An entirely different approach, proposed recently, estimates the local density distribution around each conformational sample by measuring the distance from its nearest neighbors. In this work we show that this theoretically well-grounded method can be easily applied to estimate the entropy from conformational sampling. We consider a set of systems that are representative of important biomolecular processes. In particular: reference entropies for amino acids in unfolded proteins are obtained from a database of residues not participating in secondary structure elements; the conformational entropy of folding of β2-microglobulin is computed from molecular dynamics simulations using reference entropies for the unfolded state; backbone conformational entropy is computed from molecular dynamics simulations of four different states of the EPAC protein and compared with order parameters (often used as a measure of entropy); the conformational and rototranslational entropy of binding is computed from simulations of 20 tripeptides bound to the peptide binding protein OppA and of β2-microglobulin bound to a citrate-coated gold surface. This work shows the potential of the method in the most representative biological processes involving proteins, and provides a valuable alternative, principally in the shown cases, where other approaches are problematic. PMID:26177039
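
    The nearest-neighbor idea described in this abstract is, in its simplest form, the Kozachenko-Leonenko estimator: the local density around each sample is inferred from the distance to its k-th nearest neighbor. The sketch below shows that basic estimator; the paper's treatment of conformational and rototranslational coordinates is more elaborate.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def knn_entropy(samples, k=1):
        """Kozachenko-Leonenko differential entropy estimate (nats):
        H = psi(N) - psi(k) + ln(V_d) + (d/N) * sum_i ln(r_i),
        with r_i the distance from sample i to its k-th neighbour."""
        x = np.asarray(samples, dtype=float)
        n, d = x.shape
        r = cKDTree(x).query(x, k=k + 1)[0][:, -1]   # k-th neighbour, self excluded
        log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
        return d * np.mean(np.log(r)) + log_vd + digamma(n) - digamma(k)

    rng = np.random.default_rng(3)
    print(knn_entropy(rng.normal(size=(5000, 2))))   # ~ ln(2*pi*e) = 2.84 for N(0, I)
    ```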

  14. Distance-Based Configurational Entropy of Proteins from Molecular Dynamics Simulations.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Fortuna, Sara; Soler, Miguel Angel; VanSchouwen, Bryan; Brancolini, Giorgia; Corni, Stefano; Melacini, Giuseppe; Esposito, Gennaro

    2015-01-01

    Estimation of configurational entropy from molecular dynamics trajectories is a difficult task which is often performed using quasi-harmonic or histogram analysis. An entirely different approach, proposed recently, estimates the local density distribution around each conformational sample by measuring the distance from its nearest neighbors. In this work we show that this theoretically well-grounded method can be easily applied to estimate the entropy from conformational sampling. We consider a set of systems that are representative of important biomolecular processes. In particular: reference entropies for amino acids in unfolded proteins are obtained from a database of residues not participating in secondary structure elements; the conformational entropy of folding of β2-microglobulin is computed from molecular dynamics simulations using reference entropies for the unfolded state; backbone conformational entropy is computed from molecular dynamics simulations of four different states of the EPAC protein and compared with order parameters (often used as a measure of entropy); the conformational and rototranslational entropy of binding is computed from simulations of 20 tripeptides bound to the peptide binding protein OppA and of β2-microglobulin bound to a citrate-coated gold surface. This work shows the potential of the method in the most representative biological processes involving proteins, and provides a valuable alternative, principally in the shown cases, where other approaches are problematic.

  15. Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy

    PubMed Central

    Li, Zhaohui; Li, Xiaoli

    2013-01-01

    Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons regardless of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated biophysically realistic synapses, an Izhikevich cortical network based on this neuronal model is employed. It is found that the NPTE method is able to characterize mutual interactions and exactly identify spurious causality in a network of three neurons. We conclude that the proposed method can obtain a more reliable comparison of interactions between different pairs of neurons and is a promising tool to uncover more details of the neural coding. PMID:23940662

  16. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the lithological composition is required. Because of real-world constraints, only a limited amount of data can be acquired. To find the lithological distribution in a study area, many spatial statistical methods have been used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. Limited hard data from cores and soft data generated from geological dating data and virtual wells were used to estimate the three-dimensional lithological classification of the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  17. Quantifying complexity of financial short-term time series by composite multiscale entropy measure

    NASA Astrophysics Data System (ADS)

    Niu, Hongli; Wang, Jun

    2015-05-01

    It is significant to study the complexity of financial time series, since the financial market is a complex, evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series, but its entropy estimates are less reliable for short-term time series at large time scales. A modified method, composite multiscale entropy (CMSE), is therefore applied to the financial market. To verify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are first reproduced in the present paper. The method is then introduced, for the first time, in a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method reduces the deviations of the entropy estimates and yields more stable and reliable results than the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
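
    A compact sketch of the composite multiscale entropy idea follows: at scale τ there are τ different coarse-grained series (one per offset), and CMSE averages the sample entropy over all of them instead of using only the first, which is what stabilizes the estimate on short series. Parameter values are the conventional ones, assumed rather than taken from this paper.

    ```python
    import numpy as np

    def sample_entropy(x, m, r):
        """SampEn(m, r): -ln of the conditional probability that templates
        matching for m points (Chebyshev distance <= r) also match for m+1."""
        def match_pairs(mm):
            emb = np.lib.stride_tricks.sliding_window_view(x, mm)
            d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
            return ((d <= r).sum() - len(emb)) / 2   # pairs, self-matches excluded
        return -np.log(match_pairs(m + 1) / match_pairs(m))

    def composite_multiscale_entropy(x, scale, m=2, r_factor=0.15):
        """CMSE at one scale: average SampEn over the `scale` coarse-grained
        series obtained with offsets 0..scale-1 (conventional MSE uses only
        offset 0); the tolerance r stays fixed from the raw series."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        vals = []
        for offset in range(scale):
            n = (len(x) - offset) // scale
            cg = x[offset:offset + n * scale].reshape(n, scale).mean(axis=1)
            vals.append(sample_entropy(cg, m, r))
        return float(np.mean(vals))

    rng = np.random.default_rng(6)
    print(composite_multiscale_entropy(rng.normal(size=1000), scale=5))
    ```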

  18. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
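
    The maximum entropy method of moments referred to above solves a convex problem: among all densities matching the sample moments, pick the one with maximal entropy, which has exponential-family form. A minimal sketch on a grid, using a generic dual optimization (names and the test constraints are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    def maxent_density(grid, moment_fns, targets):
        """Maximum-entropy density subject to E[f_k] = mu_k.  The solution
        is p(x) proportional to exp(-sum_k lam_k f_k(x)); the Lagrange
        multipliers minimize the convex dual  ln Z(lam) + lam . mu."""
        F = np.column_stack([f(grid) for f in moment_fns])
        dx = grid[1] - grid[0]
        mu = np.asarray(targets, dtype=float)
        dual = lambda lam: logsumexp(-F @ lam) + np.log(dx) + lam @ mu
        lam = minimize(dual, np.zeros(F.shape[1]), method="BFGS").x
        p = np.exp(-F @ lam - logsumexp(-F @ lam))
        return p / dx                              # integrates to 1 on the grid

    # constraining the first two moments (mean 0, variance 1) recovers a Gaussian
    g = np.linspace(-6, 6, 601)
    p = maxent_density(g, [lambda x: x, lambda x: x ** 2], targets=[0.0, 1.0])
    ```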

  19. A practical comparison of algorithms for the measurement of multiscale entropy in neural time series data.

    PubMed

    Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir

    2018-06-01

    There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision using a combination of electroencephalogram (EEG) and synthetic 1/ƒ noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals.

  20. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    NASA Astrophysics Data System (ADS)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
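
    For context, the reference potentiometric method the authors compare against reduces, at each state of charge, to a linear regression of open-circuit voltage on temperature, since the entropy variation equals nF times dOCV/dT. A sketch with hypothetical readings (the paper's own contribution, inverting a thermal model of the cell, is more involved):

    ```python
    import numpy as np

    FARADAY = 96485.33  # C/mol

    def entropy_variation(temps_K, ocv_volts, n_electrons=1):
        """Potentiometric entropy-variation: dS = n * F * dOCV/dT, with the
        slope taken from a linear fit of open-circuit voltage vs. temperature."""
        slope = np.polyfit(np.asarray(temps_K), np.asarray(ocv_volts), 1)[0]
        return n_electrons * FARADAY * slope       # J/(mol*K)

    # hypothetical OCV readings at a single state of charge
    print(entropy_variation([288.15, 298.15, 308.15], [3.3302, 3.3300, 3.3297]))
    ```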

  1. Partial knowledge, entropy, and estimation

    PubMed Central

    MacQueen, James; Marschak, Jacob

    1975-01-01

    In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p ≡ (p₁, ..., pₙ) by maximizing entropy H(p) ≡ −Σᵢ pᵢ log pᵢ, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) It is “conservative,” and therefore good, to maximize “uncertainty,” as (uniquely) represented by the entropy parameter. (β) One should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when “complete ignorance” is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds. PMID:16578733

  2. Entropy and climate. I - ERBE observations of the entropy production of the earth

    NASA Technical Reports Server (NTRS)

    Stephens, G. L.; O'Brien, D. M.

    1993-01-01

    An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.

  3. Estimation of absolute solvent and solvation shell entropies via permutation reduction

    NASA Astrophysics Data System (ADS)

    Reinhard, Friedemann; Grubmüller, Helmut

    2007-01-01

    Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N^3) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.
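
    The combinatorial step described above maps directly onto a standard solver: per frame, build the cost matrix of squared distances between current and reference molecule positions and solve the linear assignment problem. A minimal sketch (array layout and names assumed):

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def permutation_reduce(trajectory):
        """Relabel interchangeable solvent molecules in every frame so each
        matches the reference configuration as closely as possible, via a
        per-frame linear assignment (an O(N^3) Hungarian-type solver).
        trajectory: (n_frames, n_molecules, 3)."""
        ref = trajectory[0]
        out = np.empty_like(trajectory)
        for t, frame in enumerate(trajectory):
            cost = ((frame[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
            rows, cols = linear_sum_assignment(cost)
            out[t, cols] = frame[rows]             # molecule labels now follow ref
        return out
    ```

    The relabeled trajectory occupies a configurational volume smaller by roughly a factor of N!, which is what makes quasiharmonic or PCA-based entropy estimates feasible.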

  4. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    PubMed

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, the distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers to mainly use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.
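
    The Dirichlet-multinomial machinery underlying the model has a convenient closed form for the quantity of interest: under a (symmetric) Dirichlet posterior, the mean Shannon entropy is available analytically. The regression model itself is beyond a short sketch; the building block looks like this (function name and prior strength are illustrative):

    ```python
    import numpy as np
    from scipy.special import digamma

    def dirichlet_mean_entropy(counts, alpha=1.0):
        """Posterior-mean Shannon entropy (nats) of a multinomial under a
        symmetric Dirichlet(alpha) prior: with a_i = n_i + alpha and
        A = sum(a_i),  E[H] = psi(A + 1) - sum_i (a_i / A) psi(a_i + 1)."""
        a = np.asarray(counts, dtype=float) + alpha
        A = a.sum()
        return digamma(A + 1.0) - ((a / A) * digamma(a + 1.0)).sum()

    print(dirichlet_mean_entropy([30, 20, 10, 0]))
    ```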

  5. An improved method for predicting the evolution of the characteristic parameters of an information system

    NASA Astrophysics Data System (ADS)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

    The article proposes a forecasting method that allows one, given values of the entropy and of the error levels of the first and second kind, to determine the allowable horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method under consideration is that changes in the characteristic parameters of the information system's development are expressed as increments of its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the prediction depth in time are estimated. The resulting values of the characteristics will be optimal, since at that moment the system possesses the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the prediction depth, it is expedient to use the principle of maximum entropy.

  6. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy.

    PubMed

    Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.

  7. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy

    PubMed Central

    Cornforth, David J.; Tarvainen, Mika P.; Jelinek, Herbert F.

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311

  8. A two-phase copula entropy-based multiobjective optimization approach to hydrometeorological gauge network design

    NASA Astrophysics Data System (ADS)

    Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin

    2017-12-01

    Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibrating and verifying hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, with the use of three model evaluation measures: the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.

  9. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.

  10. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of by variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
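
    The entropy ranking at the heart of KECA can be sketched in a few lines: each eigenpair (lam_i, e_i) of the kernel matrix contributes lam_i * (e_i' 1)^2 to the Renyi quadratic entropy estimate V = (1/N^2) 1' K 1, and KECA keeps the top contributors rather than the top eigenvalues. A generic sketch with a Gaussian kernel and a fixed length scale; OKECA's optimized decomposition and kernel-parameter selection are not reproduced here.

    ```python
    import numpy as np

    def keca(X, n_components=2, length_scale=1.0):
        """Kernel entropy component analysis: rank kernel eigenpairs by
        their contribution lam_i * (e_i' 1)^2 to the Renyi entropy estimate
        and project onto the leading ones (kernel PCA ranks by lam_i)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-d2 / (2 * length_scale ** 2))
        lam, E = np.linalg.eigh(K)
        contrib = lam * E.sum(axis=0) ** 2         # entropy contribution per pair
        idx = np.argsort(contrib)[::-1][:n_components]
        return E[:, idx] * np.sqrt(np.abs(lam[idx]))   # projected features

    X = np.random.default_rng(8).normal(size=(200, 3))
    feats = keca(X, n_components=2)
    ```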

  11. Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways

    PubMed Central

    Galinsky, Vitaly L.; Frank, Lawrence R.

    2015-01-01

    We have developed a method for the simultaneous estimation of local diffusion and the global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes, guided by the global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in the most scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167

  12. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating probability distributions based on given information. The proposed method gives an alternative way to assess the input function from the existing data; it allows a good fit of the data and therefore a better estimation of the kinetic parameters, which in the end allows for a more reliable use of DCE-MRI.

  13. Development and application of the maximum entropy method and other spectral estimation techniques

    NASA Astrophysics Data System (ADS)

    King, W. R.

    1980-09-01

    This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, and, in its final section, a description of two new, stable, high-resolution spectral estimation techniques. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final section are named the Wiener-King and the Fourier spectral estimation techniques. They have a similar derivation based upon the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.

  14. An adaptable binary entropy coder

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.

  15. Heat capacity, relative enthalpy, and calorimetric entropy of silicate minerals: an empirical method of prediction.

    USGS Publications Warehouse

    Robinson, G.R.; Haas, J.L.

    1983-01-01

    Through the evaluation of experimental calorimetric data and estimates of the molar isobaric heat capacities, relative enthalpies and entropies of constituent oxides, a procedure for predicting the thermodynamic properties of silicates is developed. Estimates of the accuracy and precision of the technique and examples of its application are also presented. -J.A.Z.

  16. Measuring an entropy in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Wosiek, J.

    1999-03-01

    We propose to use the coincidence method of Ma to measure an entropy of the system created in heavy ion collisions. Moreover we estimate, in a simple model, the values of parameters for which the thermodynamical behaviour sets in.
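
    Ma's coincidence method can be stated in a few lines: sample microstates, count how often two independent samples coincide, and take S = -ln(coincidence rate). Strictly, this gives the order-2 Renyi entropy, which equals the Boltzmann entropy for an equiprobable ensemble, the regime Ma's argument assumes. A toy sketch:

    ```python
    import numpy as np
    from itertools import combinations

    def ma_entropy(states):
        """Entropy (nats) from the coincidence rate among pairs of sampled
        hashable microstates; usable when the sample is far too small to
        histogram the state space."""
        states = list(states)
        pairs = list(combinations(range(len(states)), 2))
        rate = sum(states[i] == states[j] for i, j in pairs) / len(pairs)
        return np.log(1.0 / rate)                  # diverges if no coincidences seen

    rng = np.random.default_rng(4)
    sample = [tuple(rng.integers(0, 2, 10)) for _ in range(300)]  # 2^10 equal states
    print(ma_entropy(sample), np.log(2 ** 10))
    ```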

  17. Computation of entropy and Lyapunov exponent by a shift transform.

    PubMed

    Matsuoka, Chihiro; Hiraide, Koichi

    2015-10-01

    We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach by the Markov partition, the method presented here does not require any special techniques in computational and mathematical fields to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.

  18. Computation of entropy and Lyapunov exponent by a shift transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuoka, Chihiro, E-mail: matsuoka.chihiro.mm@ehime-u.ac.jp; Hiraide, Koichi

    2015-10-15

    We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach by the Markov partition, the method presented here does not require any special techniques in computational and mathematical fields to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.

  19. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  20. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  1. Bayesian or Laplacian inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using the Bayesian approach, entropy, information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing, such as entropy in source separation, Fisher information in model order selection, different maximum entropy based methods in time series spectral estimation and, finally, general linear inverse problems.

  2. Quantifying selection and diversity in viruses by entropy methods, with application to the haemagglutinin of H3N2 influenza

    PubMed Central

    Pan, Keyao; Deem, Michael W.

    2011-01-01

    Many viruses evolve rapidly. For example, haemagglutinin (HA) of the H3N2 influenza A virus evolves to escape antibody binding. This evolution of the H3N2 virus means that people who have previously been exposed to an influenza strain may be infected by a newly emerged virus. In this paper, we use Shannon entropy and relative entropy to measure the diversity and selection pressure by an antibody in each amino acid site of H3 HA between the 1992–1993 season and the 2009–2010 season. Shannon entropy and relative entropy are two independent state variables that we use to characterize H3N2 evolution. The entropy method estimates future H3N2 evolution and migration using currently available H3 HA sequences. First, we show that the rate of evolution increases with the virus diversity in the current season. The Shannon entropy of the sequence in the current season predicts relative entropy between sequences in the current season and those in the next season. Second, a global migration pattern of H3N2 is assembled by comparing the relative entropy flows of sequences sampled in China, Japan, the USA and Europe. We verify this entropy method by describing two aspects of historical H3N2 evolution. First, we identify 54 amino acid sites in HA that have evolved in the past to evade the immune system. Second, the entropy method shows that epitopes A and B on the top of HA evolve most vigorously to escape antibody binding. Our work provides a novel entropy-based method to predict and quantify future H3N2 evolution and to describe the evolutionary history of H3N2. PMID:21543352
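
    The two state variables used above are straightforward to compute per alignment column: Shannon entropy of the amino-acid frequencies in one season (diversity) and relative entropy between consecutive seasons (selection). A toy sketch; the regularization of unseen residues is an assumption, not the paper's procedure.

    ```python
    import numpy as np
    from collections import Counter

    def site_freqs(alignment, site):
        counts = Counter(seq[site] for seq in alignment)
        total = sum(counts.values())
        return {aa: c / total for aa, c in counts.items()}

    def shannon_entropy(p):
        """Diversity of one alignment column, in bits."""
        return -sum(v * np.log2(v) for v in p.values())

    def relative_entropy(p_next, p_curr, eps=1e-6):
        """D(p_next || p_curr) in bits: selection pressure at one site
        between seasons; eps regularizes residues unseen earlier."""
        support = set(p_next) | set(p_curr)
        q = {aa: p_curr.get(aa, 0.0) + eps for aa in support}
        z = sum(q.values())
        return sum(p * np.log2(p / (q[aa] / z)) for aa, p in p_next.items())

    season_a = ["KDNST", "KDNST", "KGNST"]   # toy HA alignment, current season
    season_b = ["KGNST", "KGNSA", "KGNST"]   # next season
    p, q = site_freqs(season_b, 1), site_freqs(season_a, 1)
    print(shannon_entropy(q), relative_entropy(p, q))
    ```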

  3. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.

  4. Nonlinear dynamics applied to the study of cardiovascular effects of stress

    NASA Astrophysics Data System (ADS)

    Anishchenko, T. G.; Igosheva, N. B.

    1998-03-01

    We study cardiovascular responses to emotional stresses in humans and rats using traditional physiological parameters and methods of nonlinear dynamics. We found that emotional stress results in significant changes in the degree of chaos of ECG and blood pressure signals, estimated using a normalized entropy. We demonstrate that the normalized entropy is a more sensitive indicator of stress-induced changes in the cardiovascular system than traditional physiological parameters. Using the normalized entropy, we discovered significant individual differences in cardiovascular stress-reactivity that were impossible to detect by traditional physiological methods.

  5. Estimation of conformational entropy in protein-ligand interactions: a computational perspective.

    PubMed

    Polyansky, Anton A; Zubac, Ruben; Zagrovic, Bojan

    2012-01-01

    Conformational entropy is an important component of the change in free energy upon binding of a ligand to its target protein. As a consequence, development of computational techniques for reliable estimation of conformational entropies is currently receiving an increased level of attention in the context of computational drug design. Here, we review the most commonly used techniques for conformational entropy estimation from classical molecular dynamics simulations. Although by and large still not directly used in practical drug design, these techniques provide a gold standard for developing other, computationally less demanding methods for such applications, in addition to furthering our understanding of protein-ligand interactions in general. In particular, we focus on the quasi-harmonic approximation and discuss different approaches that can be used to go beyond it, most notably when it comes to treating anharmonic and/or correlated motions. In addition to reviewing basic theoretical formalisms, we provide a concrete set of steps required to successfully calculate conformational entropy from molecular dynamics simulations, as well as discuss a number of practical issues that may arise in such calculations.
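
    Since the review centers on the quasi-harmonic approximation, a compact sketch of that baseline may help: entropy from the eigenvalues of the mass-weighted covariance matrix of atomic fluctuations, using the standard quantum-harmonic-oscillator formula. Assembling and mass-weighting the covariance from an actual MD trajectory (in SI units) is assumed to be done upstream.

      import numpy as np
      from scipy import constants as c

      def quasiharmonic_entropy(cov_mass_weighted, temperature=300.0):
          # eigenvalues of the mass-weighted covariance (units kg * m^2)
          lam = np.linalg.eigvalsh(cov_mass_weighted)
          lam = lam[lam > 1e-50]                    # drop null modes
          omega = np.sqrt(c.k * temperature / lam)  # effective mode frequencies
          a = c.hbar * omega / (c.k * temperature)
          # quantum-harmonic-oscillator entropy per mode, summed over modes
          s = a / np.expm1(a) - np.log1p(-np.exp(-a))
          return c.R * np.sum(s)                    # J / (mol K)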

  6. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
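
    A small sketch of the first-order special case (k = 1): Dirichlet-smoothed (posterior-mean) transition probabilities and the resulting entropy-rate estimate. The paper's full treatment of arbitrary k, evidence-based model-order selection, and posterior variance via the partition-function relation is not reproduced.

      import numpy as np

      def posterior_mean_entropy_rate(transition_counts, alpha=1.0):
          # Dirichlet(alpha) prior per row -> posterior-mean transition matrix
          counts = np.asarray(transition_counts, dtype=float)
          T = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
          # stationary distribution: left Perron eigenvector of T
          vals, vecs = np.linalg.eig(T.T)
          pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
          pi /= pi.sum()
          # entropy rate (bits/symbol): H = -sum_i pi_i sum_j T_ij log2 T_ij
          return -np.sum(pi[:, None] * T * np.log2(T))

      # toy 2-state example: counts[i, j] = observed i -> j transitions
      print(posterior_mean_entropy_rate([[90, 10], [40, 60]]))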

  7. Prediction of Protein Configurational Entropy (Popcoen).

    PubMed

    Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel

    2018-03-13

    A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast compared to previous approaches because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect prediction. Integrating the predicted entropy can yield a significant accuracy increase, as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.

  8. On the asymptotic behavior of a subcritical convection-diffusion equation with nonlocal diffusion

    NASA Astrophysics Data System (ADS)

    Cazacu, Cristian M.; Ignat, Liviu I.; Pazoto, Ademir F.

    2017-08-01

    In this paper we consider a subcritical model that involves nonlocal diffusion and a classical convective term. In spite of the nonlocal diffusion, we obtain an Oleinik-type estimate similar to the case when the diffusion is local. First we prove that the entropy solution can be obtained by adding a small viscous term μu_xx and letting μ → 0. Then, by using uniform Oleinik estimates for the viscous approximation, we are able to prove the well-posedness of the entropy solutions with L^1 initial data. Using a scaling argument and hyperbolic estimates given by Oleinik's inequality, we obtain the first term in the asymptotic behavior of the nonnegative solutions. Finally, the large-time behavior of sign-changing solutions is proved using the classical flux-entropy method and estimates for the nonlocal operator.

  9. Bubble Entropy: An Entropy Almost Free of Parameters.

    PubMed

    Manis, George; Aktaruzzaman, Md; Sassi, Roberto

    2017-11-01

    Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
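
    A sketch of the swap-counting idea as we read it: count bubble-sort swaps per embedded vector, take a Rényi-2 entropy of the swap-count distribution at orders m and m+1, and normalize their difference; details of the normalization may differ from the authors' implementation.

      import numpy as np

      def swap_counts(x, m):
          # bubble-sort swaps needed to order each m-length window
          hist = np.zeros(m * (m - 1) // 2 + 1, dtype=int)
          for i in range(len(x) - m + 1):
              v, swaps = list(x[i:i + m]), 0
              for j in range(m - 1):
                  for k in range(m - 1 - j):
                      if v[k] > v[k + 1]:
                          v[k], v[k + 1] = v[k + 1], v[k]
                          swaps += 1
              hist[swaps] += 1
          return hist

      def bubble_entropy(x, m=10):
          def renyi2(hist):
              p = hist / hist.sum()
              return -np.log(np.sum(p ** 2))
          h_m, h_m1 = renyi2(swap_counts(x, m)), renyi2(swap_counts(x, m + 1))
          return (h_m1 - h_m) / np.log((m + 1) / (m - 1))

      rng = np.random.default_rng(0)
      print(bubble_entropy(rng.normal(size=2000)))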

  10. Probabilistic modelling of flood events using the entropy copula

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zheng, Qian

    2016-11-01

    The estimation of flood frequency is vital for flood control strategies and hydraulic structure design. Generating synthetic flood events according to statistical properties of observations is one plausible method for analyzing flood frequency. Due to the statistical dependence among the flood event variables (i.e. the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, copula methods have been widely used to construct multivariate dependence structures; however, the copula family must be chosen before application, and this choice is sometimes rather subjective. The entropy copula, a new copula family employed in this research, avoids this relatively subjective step by combining the theories of copula and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling of the flood events of two hydrological gauges, and a comparison of accuracy with the popular copulas was made. The Gibbs sampling technique was applied for trivariate flood event simulation in order to mitigate the calculation difficulties of extending to three dimensions directly. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.

  11. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    PubMed

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

    We compared methods to control bias and confounding in observational studies, including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost, indicating that IPW and sIPW weighting was ineffective. Entropy balance demonstrated the bias-variance tradeoff, achieving higher estimate accuracy yet lower estimate precision compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
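
    A minimal sketch of the proposed uncertainty measure: the entropy of the discrete location posterior, computed from per-cell log-likelihoods of a single measurement under a (here uniform) prior. The eight sensor-model implementations compared in the paper are not reproduced.

      import numpy as np

      def posterior_entropy(log_likelihood, prior=None):
          # log_likelihood: per-grid-cell log p(measurement | location)
          ll = np.asarray(log_likelihood, dtype=float)
          if prior is None:
              prior = np.full(ll.shape, 1.0 / ll.size)   # uniform prior
          log_post = ll + np.log(prior)
          log_post -= log_post.max()                     # numerical stability
          post = np.exp(log_post)
          post /= post.sum()
          nz = post > 0
          return -np.sum(post[nz] * np.log2(post[nz]))   # bits

      # toy 1-D corridor of five candidate cells
      print(posterior_entropy([-2.0, -0.5, -0.4, -3.0, -5.0]))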

  13. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  14. Sample entropy applied to the analysis of synthetic time series and tachograms

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.

    2017-01-01

    Entropy is a method of nonlinear analysis that provides an estimate of the irregularity of a system. However, there are different types of computational entropy, which were considered and tested here in order to obtain one that gives an index of signal complexity, taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given value of β was used to characterize the different entropy algorithms. We obtained a significant variation with series length for most of the algorithms, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence of series length. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. Sample entropy was calculated for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is sleeping.
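
    For reference, a compact (not speed-optimized) sample entropy implementation of the usual form SampEn = -ln(A/B), with Chebyshev distance and tolerance r = 0.2·SD; production code typically restricts both template sets to N - m vectors and vectorizes the pair counting.

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * x.std()
          def match_pairs(mm):
              # count template pairs whose Chebyshev distance is <= r
              t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
              total = 0
              for i in range(len(t) - 1):
                  d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                  total += int(np.sum(d <= r))
              return total
          b, a = match_pairs(m), match_pairs(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      rng = np.random.default_rng(0)
      print(sample_entropy(rng.normal(size=1000)))  # higher = more irregular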

  15. Dissecting Protein Configurational Entropy into Conformational and Vibrational Contributions.

    PubMed

    Chong, Song-Ho; Ham, Sihyun

    2015-10-01

    Quantifying how the rugged nature of the underlying free-energy landscape determines the entropic cost a protein must incur upon folding and ligand binding is a challenging problem. Here, we present a novel computational approach that dissects the protein configurational entropy on the basis of the classification of protein dynamics on the landscape into two separate components: short-term vibrational dynamics related to individual free-energy wells and long-term conformational dynamics associated with transitions between wells. We apply this method to separate the configurational entropy of the protein villin headpiece subdomain into its conformational and vibrational components. We find that the change in configurational entropy upon folding is dominated by the conformational entropy despite the fact that the magnitude of the vibrational entropy is the significantly larger component in each of the folded and unfolded states, which is in accord with the previous empirical estimations. The straightforward applicability of our method to unfolded proteins promises a wide range of applications, including those related to intrinsically disordered proteins.

  16. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in grating's spatial frequency or envelope's standard deviation. We tested 21 different envelope's standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating's spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158

  17. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in grating's spatial frequency or envelope's standard deviation. We tested 21 different envelope's standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating's spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised.

  18. Entropic benefit of a cross-link in protein association.

    PubMed

    Zaman, Muhammad H; Berry, R Stephen; Sosnick, Tobin R

    2002-08-01

    We introduce a method to estimate the loss of configurational entropy upon insertion of a cross-link into a dimeric system. First, a clear distinction is established between the loss of entropy upon tethering and binding, two quantities that are often considered to be equivalent. By comparing the probability distribution of the center-to-center distances for untethered and cross-linked versions, we are able to calculate the loss of translational entropy upon cross-linking. The distribution function for the untethered helices is calculated from the probability that a given helix is closer to its partner than to all other helices, the "Nearest Neighbor" method. This method requires no assumptions about the nature of the solvent, and hence resolves difficulties normally associated with calculations for systems in liquids. Analysis of the restriction of angular freedom upon tethering indicates that the loss of rotational entropy is negligible. The method is applied in the context of the folding of a ten-turn helical coiled coil with the tether modeled as a Gaussian chain or a flexible amino acid chain. After correcting for loop closure entropy in the docked state, we estimate that the introduction of a six-residue tether into the coiled coil results in an effective chain concentration of about 4 or 100 mM, depending upon whether the helices are denatured or pre-folded prior to their association. Thus, tethering results in significant stabilization for systems with millimolar or stronger dissociation constants. Copyright 2002 Wiley-Liss, Inc.

  19. Multifractal diffusion entropy analysis: Optimal bin width of probability histograms

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Korbel, Jan

    2014-11-01

    In the framework of Multifractal Diffusion Entropy Analysis, we propose a method for choosing an optimal bin width in histograms generated from underlying probability distributions of interest. The method presented uses techniques of Rényi's entropy and mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on the scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate over the period 1950-2013. To demonstrate the strength of the proposed method, we compare the multifractal δ-spectrum for various bin widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. The connection between the δ-spectrum and Rényi's q parameter is also discussed and elucidated on a simple example of multiscale time series.

  20. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimates of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating the various depth estimates with least squares. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
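
    The depth-from-spectrum step can be illustrated with a plain FFT periodogram: for a deep two-dimensional source the anomaly power spectrum decays roughly as exp(-2kz), so the slope of ln P(k) against wavenumber gives the depth. The paper's point is that a Burg maximum-entropy spectrum permits this fit on much shorter windows; that spectral estimator is not reproduced here.

      import numpy as np

      def depth_from_spectrum(profile, dx):
          # detrend, then estimate the decaying power spectrum
          profile = profile - np.mean(profile)
          power = np.abs(np.fft.rfft(profile)) ** 2
          k = 2.0 * np.pi * np.fft.rfftfreq(len(profile), d=dx)  # rad/unit
          keep = (k > 0) & (power > 0)      # in practice: a low-k band only
          slope, _ = np.polyfit(k[keep], np.log(power[keep]), 1)
          return -slope / 2.0               # ln P(k) ~ const - 2 k z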

  1. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for ultra-wideband (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) conditions of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach combining entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals, and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP can be identified as the sample that is followed by a sharp entropy decrease. SVM regression is employed to mitigate the ranging error by modeling the relationship between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726

  2. Moments of the phase-space density, coincidence probabilities, and entropies of a multiparticle system

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    2006-04-01

    A method to estimate moments of the phase-space density from event-by-event fluctuations is reviewed and its accuracy analyzed. Relation of these measurements to the determination of the entropy of the system is discussed. This is a summary of the results obtained recently together with W. Czyz and K. Zalewski.

  3. Estimating Bayesian Phylogenetic Information Content

    PubMed Central

    Lewis, Paul O.; Chen, Ming-Hui; Kuo, Lynn; Lewis, Louise A.; Fučíková, Karolina; Neupane, Suman; Wang, Yu-Bo; Shi, Daoyuan

    2016-01-01

    Measuring the phylogenetic information content of data has a long history in systematics. Here we explore a Bayesian approach to information content estimation. The entropy of the posterior distribution compared with the entropy of the prior distribution provides a natural way to measure information content. If the data have no information relevant to ranking tree topologies beyond the information supplied by the prior, the posterior and prior will be identical. Information in data discourages consideration of some hypotheses allowed by the prior, resulting in a posterior distribution that is more concentrated (has lower entropy) than the prior. We focus on measuring information about tree topology using marginal posterior distributions of tree topologies. We show that both the accuracy and the computational efficiency of topological information content estimation improve with use of the conditional clade distribution, which also allows topological information content to be partitioned by clade. We explore two important applications of our method: providing a compelling definition of saturation and detecting conflict among data partitions that can negatively affect analyses of concatenated data. [Bayesian; concatenation; conditional clade distribution; entropy; information; phylogenetics; saturation.] PMID:27155008
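
    The core measure is simple to state in code: information is the entropy of the (uniform) prior over tree topologies minus the entropy of the posterior. The sketch below uses raw posterior topology frequencies; the paper's conditional clade distribution, which improves accuracy and allows per-clade partitioning, is not implemented.

      import numpy as np

      def topology_information(posterior_probs, n_topologies):
          # bits gained over a uniform prior on tree topologies
          p = np.asarray(posterior_probs, dtype=float)
          p = p / p.sum()
          h_prior = np.log2(n_topologies)
          p = p[p > 0]
          h_post = -np.sum(p * np.log2(p))
          return h_prior - h_post

      # toy: 15 possible rooted 4-taxon topologies, posterior on three of them
      print(topology_information([0.7, 0.2, 0.1], n_topologies=15))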

  4. Reduced-Reference Quality Assessment Based on the Entropy of DWT Coefficients of Locally Weighted Gradient Magnitudes.

    PubMed

    Golestaneh, S Alireza; Karam, Lina

    2016-08-24

    Perceptual image quality assessment (IQA) attempts to use computational models to estimate image quality in accordance with subjective evaluations. Reduced-reference (RR) IQA methods make use of partial information or features extracted from the reference image for estimating the quality of distorted images. Finding a balance between the number of RR features and the accuracy of the estimated image quality is essential and important in IQA. In this paper we propose a training-free, low-cost RRIQA method that requires a very small number of RR features (six RR features). The proposed RRIQA algorithm is based on the discrete wavelet transform (DWT) of locally weighted gradient magnitudes. We apply the human visual system's contrast sensitivity and neighborhood gradient information to weight the gradient magnitudes in a locally adaptive manner. The RR features are computed by measuring the entropy of each DWT subband, for each scale, and pooling the subband entropies along all orientations, resulting in L RR features (one average entropy per scale) for an L-level DWT. Extensive experiments performed on seven large-scale benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance as compared to state-of-the-art RRIQA models as well as full-reference ones for both natural and texture images. The MATLAB source code of REDLOG and the evaluation results are publicly available online at http://lab.engineering.asu.edu/ivulab/software/redlog/.
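
    A sketch of the feature-extraction step, assuming PyWavelets: the entropy of each DWT detail subband, pooled over the three orientations per scale, giving one feature per level. The locally weighted gradient-magnitude preprocessing (contrast-sensitivity and neighborhood weighting) that the paper applies before the transform is omitted.

      import numpy as np
      import pywt

      def subband_entropy(coeffs, bins=256):
          # histogram-based Shannon entropy (bits) of one subband
          hist, _ = np.histogram(np.ravel(coeffs), bins=bins)
          p = hist[hist > 0] / hist.sum()
          return -np.sum(p * np.log2(p))

      def rr_features(image, wavelet="db2", levels=6):
          # one RR feature per scale: subband entropy averaged over the
          # three orientation subbands (horizontal, vertical, diagonal)
          dec = pywt.wavedec2(image, wavelet, level=levels)
          return [np.mean([subband_entropy(d) for d in detail])
                  for detail in dec[1:]]       # dec[0] = approximation band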

  5. RNA Thermodynamic Structural Entropy

    PubMed Central

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner’99 and Turner’04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http://bioinformatics.bc.edu/clotelab/RNAentropy, including source code and ancillary programs. PMID:26555444

  6. RNA Thermodynamic Structural Entropy.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter

    2015-01-01

    Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner'99 and Turner'04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http://bioinformatics.bc.edu/clotelab/RNAentropy, including source code and ancillary programs.

  7. Spatial Decomposition of Translational Water–Water Correlation Entropy in Binding Pockets

    PubMed Central

    2015-01-01

    A number of computational tools available today compute the thermodynamic properties of water at surfaces and in binding pockets by using inhomogeneous solvation theory (IST) to analyze explicit-solvent simulations. Such methods enable qualitative spatial mappings of both energy and entropy around a solute of interest and can also be applied quantitatively. However, the entropy estimates of existing methods have, to date, been almost entirely limited to the first-order terms in the IST’s entropy expansion. These first-order terms account for localization and orientation of water molecules in the field of the solute but not for the modification of water–water correlations by the solute. Here, we present an extension of the Grid Inhomogeneous Solvation Theory (GIST) approach which accounts for water–water translational correlations. The method involves rewriting the two-point density of water in terms of a conditional density and utilizes the efficient nearest-neighbor entropy estimation approach. Spatial maps of this second order term, for water in and around the synthetic host cucurbit[7]uril and in the binding pocket of the enzyme Factor Xa, reveal mainly negative contributions, indicating solute-induced water–water correlations relative to bulk water; particularly strong signals are obtained for sites at the entrances of cavities or pockets. This second-order term thus enters with the same, negative, sign as the first order translational and orientational terms. Numerical and convergence properties of the methodology are examined. PMID:26636620

  8. Measuring Ambiguity in HLA Typing Methods

    PubMed Central

    Madbouly, Abeer; Freeman, John; Maiers, Martin

    2012-01-01

    In hematopoietic stem cell transplantation, donor selection is based primarily on matching donor and patient HLA genes. These genes are highly polymorphic and their typing can result in exact allele assignment at each gene (the resolution at which patients and donors are matched), but it can also result in a set of ambiguous assignments, depending on the typing methodology used. To facilitate rapid identification of matched donors, registries employ statistical algorithms to infer HLA alleles from ambiguous genotypes. Linkage disequilibrium information encapsulated in haplotype frequencies is used to facilitate prediction of the most likely haplotype assignment. An HLA typing with less ambiguity produces fewer high-probability haplotypes and a more reliable prediction. We estimated ambiguity for several HLA typing methods across four continental populations using an information theory-based measure, Shannon's entropy. We used allele and haplotype frequencies to calculate entropy for different sets of 1,000 subjects with simulated HLA typing. Using allele frequencies we calculated an average entropy in Caucasians of 1.65 for serology, 1.06 for allele family level, 0.49 for a 2002-era SSO kit, and 0.076 for single-pass SBT. When using haplotype frequencies in entropy calculations, we found average entropies of 0.72 for serology, 0.73 for allele family level, 0.05 for SSO, and 0.002 for single-pass SBT. Application of haplotype frequencies further reduces HLA typing ambiguity. We also estimated expected confirmatory typing mismatch rates for simulated subjects. In a hypothetical registry with all donors typed using the same method, the entropy values based on haplotype frequencies correspond to confirmatory typing mismatch rates of 1.31% for SSO versus only 0.08% for SBT. Intermediate-resolution single-pass SBT contains the least ambiguity of the methods we evaluated and therefore the most certainty in allele prediction. The presented measure objectively evaluates HLA typing methods and can help define acceptable HLA typing for donor recruitment. PMID:22952712

  9. Epoch-based Entropy for Early Screening of Alzheimer's Disease.

    PubMed

    Houmani, N; Dreyfus, G; Vialatte, F B

    2015-12-01

    In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals at both the time level and the spatial level, using local density estimation by a hidden Markov model on inter-channel stationary epochs. The investigation is conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable to hyperparameter variations and less sensitive to noise and sampling frequency disturbances than the other two complexity measures.

  10. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain, and the bias field prior is fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  11. Entropy of hydrological systems under small samples: Uncertainty and variability

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Wang, Yuankun; Wu, Jichun; Singh, Vijay P.; Zeng, Xiankui; Wang, Lachun; Chen, Yuanfang; Chen, Xi; Zhang, Liyuan; Gu, Shenghua

    2016-01-01

    Entropy theory has been increasingly applied in hydrology in both descriptive and inferential ways. However, little attention has been given to the small-sample condition widespread in hydrological practice, where hydrological measurements are limited or even nonexistent. Accordingly, entropy estimated under this condition may incur considerable bias. In this study, the small-sample condition is considered and two innovative entropy estimators, the Chao-Shen (CS) estimator and the James-Stein-type shrinkage (JSS) estimator, are introduced. Simulation tests conducted with distributions common in hydrology identify the JSS estimator as the best performer. Then, multi-scale moving entropy-based hydrological analyses (MM-EHA) are applied to indicate the changing patterns of uncertainty of streamflow data collected from the Yangtze River and the Yellow River, China. For further investigation into the intrinsic properties of entropy applied in hydrological uncertainty analyses, correlations of entropy and other statistics at different time scales are also calculated, which show connections between the concepts of uncertainty and variability.
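
    Both estimators have closed forms (following Chao-Shen and the Hausser-Strimmer shrinkage recipe); a sketch from a vector of bin counts, in nats. The Chao-Shen branch assumes not every observation is a singleton.

      import numpy as np

      def chao_shen_entropy(counts):
          # coverage-adjusted plug-in estimator with Horvitz-Thompson weights
          y = np.asarray(counts, dtype=float)
          y = y[y > 0]
          n = y.sum()
          coverage = 1.0 - np.sum(y == 1) / n       # assumes f1 < n
          p = coverage * y / n
          w = 1.0 - (1.0 - p) ** n                  # inclusion probability
          return -np.sum(p * np.log(p) / w)

      def james_stein_entropy(counts):
          # plug-in entropy of frequencies shrunk toward the uniform target
          y = np.asarray(counts, dtype=float)
          n, K = y.sum(), y.size
          p_ml, t = y / n, 1.0 / K
          den = (n - 1) * np.sum((t - p_ml) ** 2)
          lam = 1.0 if den == 0 else min(1.0, (1.0 - np.sum(p_ml ** 2)) / den)
          p = lam * t + (1.0 - lam) * p_ml
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      counts = [12, 5, 3, 1, 1, 0, 0, 0]            # sparse bins, small sample
      print(chao_shen_entropy(counts), james_stein_entropy(counts))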

  12. Configurational entropy: an improvement of the quasiharmonic approximation using configurational temperature.

    PubMed

    Nguyen, Phuong H; Derreumaux, Philippe

    2012-01-14

    One challenge in computational biophysics and biology is to develop methodologies able to estimate accurately the configurational entropy of macromolecules. Among many methods, the quasiharmonic approximation (QH) is most widely used as it is simple in both theory and implementation. However, it has been shown that this method becomes inaccurate, overestimating entropy, for systems with rugged free energy landscapes. Here, we propose a simple method to improve the QH approximation, i.e., to reduce QH entropy. We approximate the potential energy landscape of the system by an effective harmonic potential, and require that this potential reproduce exactly the configurational temperature of the system. Due to this constraint, the force constants associated with the effective harmonic potential are increased, or equivalently, the entropy of motion governed by this effective harmonic potential is reduced. We also introduce the effective configurational temperature concept, which can be used as an indicator of the anharmonicity of the free energy landscape. To validate the new method, we compare it with the recently developed expansion approximation method by calculating the entropy of one simple model system and of two peptides with 3 and 16 amino acids, either in gas phase or in explicit solvent. We show that the new method appears to be a good choice in practice as it is a compromise between accuracy and computational speed. A modification of the expansion approximation method is also introduced and its advantages are discussed in some detail.

  13. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is high-spatial-resolution (2 meters GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic is subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are applied in sequence to determine the cloud statistic. For post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistic is first determined via pre-processing analysis, and the correctness of the cloud statistic across spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
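
    Since Otsu's method comes out on top, a small self-contained version of it may be useful: pick the histogram threshold that maximizes the between-class variance. The entropy-based competitors (LE, JE, GE, GRE) follow the same scan-all-thresholds pattern with a different objective.

      import numpy as np

      def otsu_threshold(gray, nbins=256):
          hist, edges = np.histogram(np.ravel(gray), bins=nbins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                        # weight of class 0
          mu = np.cumsum(p * centers)              # cumulative first moment
          mu_t = mu[-1]
          valid = (w0 > 0) & (w0 < 1)
          between = np.zeros_like(w0)
          between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (
              w0[valid] * (1.0 - w0[valid]))       # between-class variance
          return centers[np.argmax(between)]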

  14. The coupling analysis between stock market indices based on permutation measures

    NASA Astrophysics Data System (ADS)

    Shi, Wenbin; Shang, Pengjian; Xia, Jianan; Yeh, Chien-Hung

    2016-04-01

    Many information-theoretic methods have been proposed for analyzing the coupling dependence between time series, and it is significant to quantify the correlation between financial sequences since the financial market is a complex, evolving dynamical system. Recently, we developed a new permutation-based entropy, called cross-permutation entropy (CPE), to detect the coupling structures between two synchronous time series. In this paper, we extend the CPE method to weighted cross-permutation entropy (WCPE), to address some of CPE's limitations, mainly its inability to differentiate between distinct patterns of a certain motif and the sensitivity of patterns close to the noise floor. It shows more stable and reliable results than CPE does when applied to spiky data and AR(1) processes. Besides, we adapt the CPE method to infer the complexity of short-length time series by freely changing the time delay, and test it with Gaussian random series and random walks. The modified method shows an advantage in reducing deviations of entropy estimation compared with the conventional one. Finally, the weighted cross-permutation entropy of eight important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.

  15. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution, in order to determine the best method. The prior used in the Bayes method is Jeffreys's non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed in R, and the results are displayed in tables to facilitate the comparison.
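
    A sketch of this kind of comparison for the squared-error case, where both estimators have closed forms for σ² (MLE: Σx²/2n; posterior mean under Jeffreys's prior: Σx²/2(n-1), since the posterior is inverse-gamma): simulate, then compare bias and MSE. The paper works in R and also covers the precautionary, entropy, and L1 loss functions, which are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      sigma_true, n, reps = 2.0, 30, 5000
      mle, bayes = [], []
      for _ in range(reps):
          x = rng.rayleigh(scale=sigma_true, size=n)
          s = np.sum(x ** 2)
          mle.append(s / (2 * n))            # MLE of sigma^2
          bayes.append(s / (2 * (n - 1)))    # posterior mean, Jeffreys prior
      for name, est in [("MLE", np.array(mle)), ("Bayes", np.array(bayes))]:
          bias = est.mean() - sigma_true ** 2
          mse = np.mean((est - sigma_true ** 2) ** 2)
          print(f"{name}: bias={bias:+.4f}  MSE={mse:.4f}")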

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khosla, D.; Singh, M.

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects the image, from the possible set of feasible images, which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, like functional magnetic resonance imaging (fMRI), can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.

  17. Generalized Cross Entropy Method for estimating joint distribution from incomplete information

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.

    2016-07-01

    Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains, including economics, demography, and statistics. In this work, we develop a new methodology, referred to as the "Generalized Cross Entropy Method" (GCEM), aimed at addressing this issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weights on the estimation of the joint distribution is explored.

  18. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities

    NASA Astrophysics Data System (ADS)

    Melchert, O.; Hartmann, A. K.

    2015-02-01

    In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L^2 = 128^2, for different system temperatures T. The latter were chosen from an interval enclosing the critical point Tc of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison, and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
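
    The two "algorithmic" routes are easy to sketch for a binary spin sequence: an LZ78-style incremental parsing gives the estimate c(n)·log2 c(n)/n, with c(n) the number of distinct phrases, and a black-box compressor gives compressed bits per symbol; both roughly upper-bound the entropy rate of about 1 bit/spin for the maximally random high-temperature case used below.

      import zlib
      import numpy as np

      def lz78_entropy_rate(s):
          # entropy-rate estimate (bits/symbol) from LZ78 incremental parsing
          phrases, current = set(), ""
          for ch in s:
              current += ch
              if current not in phrases:
                  phrases.add(current)
                  current = ""
          c = len(phrases) + (1 if current else 0)
          return c * np.log2(c) / len(s)

      def zlib_entropy_rate(s):
          # black-box estimate: compressed size in bits per input symbol
          raw = s.encode("ascii")
          return 8.0 * len(zlib.compress(raw, 9)) / len(raw)

      spins = "".join(np.random.default_rng(1).choice(["0", "1"], size=20000))
      print(lz78_entropy_rate(spins), zlib_entropy_rate(spins))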

  19. Entropy production of doubly stochastic quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de

    2016-02-15

    We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor-powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.

  20. A Mixed QM/MM Scoring Function to Predict Protein-Ligand Binding Affinity

    PubMed Central

    Hayik, Seth A.; Dunbrack, Roland; Merz, Kenneth M.

    2010-01-01

    Computational methods for predicting protein-ligand binding free energy continue to be popular as a potential cost-cutting method in the drug discovery process. However, accurate predictions are often difficult to make, as estimates must be made for certain electronic and entropic terms in conventional force-field-based scoring functions. Mixed quantum mechanics/molecular mechanics (QM/MM) methods allow electronic effects for a small region of the protein to be calculated, treating the remaining atoms as a fixed charge background for the active site. Such a semi-empirical QM/MM scoring function has been implemented in AMBER using DivCon and tested on a set of 23 metalloprotein-ligand complexes, where QM/MM methods provide a particular advantage in the modeling of the metal ion. The binding affinity of this set of proteins can be calculated with an R2 of 0.64 and a standard deviation of 1.88 kcal/mol without fitting, and with an R2 of 0.71 and a standard deviation of 1.69 kcal/mol with fitted weighting of the individual scoring terms. In this study we explore various methods to calculate terms in the binding free energy equation, including entropy estimates and minimization standards. From these studies we found that using the rotatable-bond estimate of ligand entropy results in a reasonable R2 of 0.63 without fitting. We also found that using the ESCF energy of the proteins without minimization resulted in an R2 of 0.57 when using the rotatable-bond entropy estimate. PMID:21221417

  1. Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models

    NASA Astrophysics Data System (ADS)

    Wellmann, J. Florian; Regenauer-Lieb, Klaus

    2012-03-01

    Analyzing, visualizing and communicating uncertainties are important issues, as geological models can never be fully determined. To date, there exists no general approach to quantify uncertainties in geological modeling. We propose here to use information entropy as an objective measure to compare and evaluate model and observational results. Information entropy was introduced in the 1950s and assigns, at every location in the model, a scalar value quantifying predictability. We show that this method not only provides a quantitative insight into model uncertainties but, due to the underlying concept of information entropy, can be related to questions of data integration (i.e., how model quality is interconnected with the input data used) and model evolution (i.e., whether new data, or a changed geological hypothesis, improve the model). In other words, information entropy is a powerful measure to be used for data assimilation and inversion. As a first test of feasibility, we present the application of the new method to the visualization of uncertainties in geological models, here understood as structural representations of the subsurface. Applying the concept of information entropy to a suite of simulated models, we can clearly identify (a) uncertain regions within the model, even for complex geometries; (b) the overall uncertainty of a geological unit, which is, for example, of great relevance in any type of resource estimation; and (c) a mean entropy for the whole model, important to track model changes with one overall measure. These results cannot easily be obtained with existing standard methods. The results suggest that information entropy is a powerful method to visualize uncertainties in geological models, and to classify quantitatively the indefiniteness of single units and the mean entropy of a model. Due to the relationship of this measure to the missing information, we expect the method to have great potential in many types of geoscientific data assimilation problems, beyond pure visualization.
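
    The per-cell and mean information entropies described here reduce to a small computation once a suite of simulated models is available. A minimal sketch, with an invented random model suite standing in for real geological simulations:

```python
import numpy as np

def cell_information_entropy(model_suite, n_units):
    """model_suite: integer array of shape (n_models, nx, ny, nz) holding
    the geological unit simulated at each cell.  Returns the per-cell
    information entropy (bits) and the mean entropy of the whole model."""
    n_models = model_suite.shape[0]
    H = np.zeros(model_suite.shape[1:])
    for u in range(n_units):
        p = (model_suite == u).sum(axis=0) / n_models   # unit probability
        with np.errstate(divide="ignore", invalid="ignore"):
            H -= np.where(p > 0, p * np.log2(p), 0.0)
    return H, H.mean()

suite = np.random.default_rng(2).integers(0, 3, size=(50, 20, 20, 10))
H_cells, H_mean = cell_information_entropy(suite, n_units=3)
print(H_mean)   # near log2(3) ~ 1.58 for a maximally uncertain suite
```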

  2. Wang-Landau method for calculating Rényi entropies in finite-temperature quantum Monte Carlo simulations.

    PubMed

    Inglis, Stephen; Melko, Roger G

    2013-01-01

    We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.

  3. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Maoyuan; Besford, Quinn Alexander; Mulvaney, Thomas

    The entropy of hydrophobic solvation has been explained as the result of ordered solvation structures, of hydrogen bonds, of the small size of the water molecule, of dispersion forces, and of solvent density fluctuations. We report a new approach to the calculation of the entropy of hydrophobic solvation, along with tests of and comparisons to several other methods. The methods are assessed in the light of the available thermodynamic and spectroscopic information on the effects of temperature on hydrophobic solvation. Five model hydrophobes in SPC/E water give benchmark solvation entropies via Widom's test-particle insertion method, and other methods and models are tested against these particle-insertion results. Entropies associated with distributions of tetrahedral order, of electric field, and of solvent dipole orientations are examined. We find these contributions are small compared to the benchmark particle-insertion entropy. Competitive with or better than other theories in accuracy, but with no free parameters, is the new estimate of the entropy contributed by correlations between dipole moments. Dipole correlations account for most of the hydrophobic solvation entropy for all models studied and capture the distinctive temperature dependence seen in thermodynamic and spectroscopic experiments. Entropies based on pair and many-body correlations in number density approach the correct magnitudes but fail to describe temperature and size dependences, respectively. Hydrogen-bond definitions and free energies that best reproduce entropies from simulations are reported, but it is difficult to choose one hydrogen bond model that fits a variety of experiments. The use of information theory, scaled-particle theory, and related methods is discussed briefly. Our results provide a test of the Frank-Evans hypothesis that the negative solvation entropy is due to structured water near the solute, complement the spectroscopic detection of that solvation structure by identifying the structural feature responsible for the entropy change, and point to a possible explanation for the observed dependence on length scale. Our key results are that the hydrophobic effect, i.e. the signature, temperature-dependent, solvation entropy of nonpolar molecules in water, is largely due to a dispersion force arising from correlations between rotating permanent dipole moments, that the strength of this force depends on the Kirkwood g-factor, and that the strength of this force may be obtained exactly without simulation.
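
    For reference, the benchmark quantities mentioned above follow from the standard Widom test-particle relations (textbook forms, not the paper's specific working equations): the excess chemical potential from insertion averages, and the solvation entropy from its temperature derivative,

```latex
\mu^{\mathrm{ex}}(T) = -k_{B}T \,\ln \left\langle e^{-\Delta U / k_{B} T} \right\rangle_{N},
\qquad
s^{\mathrm{ex}} = -\left( \frac{\partial \mu^{\mathrm{ex}}}{\partial T} \right)_{P},
```

    where ΔU is the interaction energy of a test solute inserted at a random position into an N-molecule solvent configuration, and the subscript P denotes constant pressure.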

  5. Entropy stable discontinuous interfaces coupling for the three-dimensional compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.

    2015-06-01

    Non-linear entropy stability and a summation-by-parts (SBP) framework are used to derive entropy stable interior interface coupling for the semi-discretized three-dimensional (3D) compressible Navier-Stokes equations. A complete semi-discrete entropy estimate for the interior domain is achieved combining a discontinuous entropy conservative operator of any order [1,2] with an entropy stable coupling condition for the inviscid terms, and a local discontinuous Galerkin (LDG) approach with an interior penalty (IP) procedure for the viscous terms. The viscous penalty contributions scale with the inverse of the Reynolds number (Re) so that for Re → ∞ their contributions vanish and only the entropy stable inviscid interface penalty term is recovered. This paper extends the interface couplings presented [1,2] and provides a simple and automatic way to compute the magnitude of the viscous IP term. The approach presented herein is compatible with any diagonal norm summation-by-parts (SBP) spatial operator, including finite element, finite volume, finite difference schemes and the class of high-order accurate methods which include the large family of discontinuous Galerkin discretizations and flux reconstruction schemes.

  6. Renyi entropy measures of heart rate Gaussianity.

    PubMed

    Lake, Douglas E

    2006-01-01

    Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggest that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
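
    For reference, the Rényi (q-)entropy family mentioned here is, for a density f (a standard definition; the notation is chosen here):

```latex
H_{q}(f) = \frac{1}{1-q}\,\ln \int f(x)^{q}\, dx ,
```

    with the Shannon differential entropy recovered in the limit q → 1 and the quadratic entropy at q = 2, H_2(f) = -ln ∫ f(x)^2 dx. For a Gaussian process these members stand in fixed relationships, which is what makes the family a useful Gaussianity measure.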

  7. EEG entropy measures in anesthesia

    PubMed Central

    Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli

    2015-01-01

    Highlights: (1) Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression. (2) Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states. (3) Approximate entropy and sample entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthesia drugs' effect is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP), in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State Entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets, from sevoflurane-induced and isoflurane-induced anesthesia respectively, were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. Multifractal detrended fluctuation analysis (MDFA) was compared as a non-entropy measure. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. The three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R2) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, the entropy measures showed an advantage in computational efficiency compared with MDFA. Conclusion: Each entropy index has its advantages and disadvantages in estimating DoA. Overall, the RPE index was a superior measure. Investigating the advantages and disadvantages of these entropy indices could help improve current clinical indices for monitoring DoA. PMID:25741277
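
    Of the indices compared here, permutation entropy is among the simplest to implement. Below is a minimal sketch of the Shannon variant (SPE), normalized to [0, 1]; the parameters m and delay are conventional choices, not the study's settings.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    """Shannon permutation entropy of order m, normalized to [0, 1]."""
    patterns = {}
    for i in range(len(x) - (m - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))  # ordinal pattern
        patterns[pattern] = patterns.get(pattern, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(m))

rng = np.random.default_rng(3)
print(permutation_entropy(rng.normal(size=5000)))           # ~1 for white noise
print(permutation_entropy(np.sin(np.arange(5000) * 0.05)))  # much lower (regular)
```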

  8. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
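
    The model class used here is the pairwise maximum entropy (Ising) model P(s) ∝ exp(Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j). The sketch below fits it by gradient ascent with exhaustive state enumeration, which is only feasible for a handful of units; for the dozens of brain regions in the study one would substitute pseudo-likelihood or MCMC. All data here are invented.

```python
import numpy as np
from itertools import product

def fit_pairwise_maxent(data, lr=0.1, steps=2000):
    """Fit h_i, J_ij to binarized data (samples x units, entries +/-1) by
    gradient ascent on the log-likelihood, enumerating all 2^n states."""
    n = data.shape[1]
    states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
    mean_d = data.mean(axis=0)                 # empirical activity rates
    corr_d = data.T @ data / len(data)         # empirical pairwise correlations
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(steps):
        E = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(E - E.max()); p /= p.sum()  # model distribution
        mean_m = p @ states
        corr_m = states.T @ (states * p[:, None])
        h += lr * (mean_d - mean_m)            # match means
        dJ = lr * (corr_d - corr_m)            # match correlations
        np.fill_diagonal(dJ, 0.0)
        J += dJ
    return h, J

data = np.sign(np.random.default_rng(4).normal(size=(500, 5)))
h, J = fit_pairwise_maxent(data)
```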

  9. LensEnt2: Maximum-entropy weak lens reconstruction

    NASA Astrophysics Data System (ADS)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog, with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

  10. Estimating the melting point, entropy of fusion, and enthalpy of fusion of organic compounds via SPARC

    EPA Science Inventory

    The entropies of fusion, enthalies of fusion, and melting points of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modeled through a combination of interaction ...

  11. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute-force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward a common goal in an automated fashion.

  12. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    PubMed

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  13. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy.

    PubMed

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

    By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (~0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than the light sleep stages (~0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may thus lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (~10^2), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.

  14. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy

    NASA Astrophysics Data System (ADS)

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

    By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (~0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than the light sleep stages (~0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may thus lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (~10^2), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.

  15. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  16. Coupling diffusion and maximum entropy models to estimate thermal inertia

    USDA-ARS?s Scientific Manuscript database

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  17. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty-equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying certainty-equivalent and adaptive control laws.

  18. Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.

  19. Bayesian approach to spectral function reconstruction for Euclidean quantum field theories.

    PubMed

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.

  20. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    PubMed Central

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-01-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatiotemporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007. PMID:21776223

  1. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatiotemporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  2. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
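
    A minimal sketch of the idea, under invented settings: a premature-convergence indicator (here, the normalized spread of particle fitness) drives the inertia weight, so a stagnating swarm is pushed back toward exploration. This illustrates the principle only, not the paper's algorithm or its Legendre-polynomial bias-field model.

```python
import numpy as np

def adaptive_pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    """PSO whose inertia weight w adapts to a premature-convergence
    indicator: a small fitness spread suggests stagnation, so w rises."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        improved = fit < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fit[improved]
        g = pbest[pbest_f.argmin()]                    # global best
        spread = fit.std() / (abs(fit.mean()) + 1e-12) # convergence indicator
        w = 0.4 + 0.5 * np.exp(-spread)                # stagnation -> w near 0.9
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
    return pbest[pbest_f.argmin()]

print(adaptive_pso(lambda z: np.sum(z ** 2), dim=4))   # close to zeros
```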

  3. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    NASA Astrophysics Data System (ADS)

    Dhabal, Debdas; Nguyen, Andrew Huy; Singh, Murari; Khatua, Prabir; Molinero, Valeria; Bandyopadhyay, Sanjoy; Chakravarty, Charusita

    2015-10-01

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids, in contrast, show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure there is a temperature of maximum density below which the thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted S_trip, is also studied. S_trip is a good entropy estimator for liquids where pair and triplet correlations are important, such as Ge and Si, but loses accuracy for purely pair-dominated liquids, like the LJ fluid, or near the crystallization temperature (T_thr). Since local tetrahedral order is compatible with both liquid and crystalline states, the reorganisation of tetrahedral liquids is accompanied by a clear rise in the pair, triplet, and thermodynamic contributions to the heat capacity, resulting in the heat capacity anomaly. In contrast, the pair-dominated liquids show increasing dominance of triplet correlations on approaching crystallization but no sharp rise in either the pair or thermodynamic heat capacities.

  4. Estimating transition probabilities in unmarked populations --entropy revisited

    USGS Publications Warehouse

    Cooch, E.G.; Link, W.A.

    1999-01-01

    The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.

  5. Geometric k-nearest neighbor estimation of entropy and mutual information

    NASA Astrophysics Data System (ADS)

    Lord, Warren M.; Sun, Jie; Bollt, Erik M.

    2018-03-01

    Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
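
    For comparison with the g-knn variant proposed here, the baseline geometrically regular knn entropy estimator (the Kozachenko-Leonenko form) can be written compactly; k = 3 is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats):
    H ~ psi(N) - psi(k) + log(V_d) + (d/N) * sum(log r_i), with r_i the
    Euclidean distance to the k-th neighbour and V_d the unit-ball volume."""
    n, d = x.shape
    tree = cKDTree(x)
    # query returns the point itself first, so ask for k+1 neighbours
    r = tree.query(x, k=k + 1)[0][:, -1]
    log_V_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_V_d + d * np.mean(np.log(r))

x = np.random.default_rng(5).uniform(size=(5000, 2))
print(knn_entropy(x))   # true differential entropy of U[0,1]^2 is 0 nats
```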

  6. Thermodynamic contribution of backbone conformational entropy in the binding between SH3 domain and proline-rich motif.

    PubMed

    Zeng, Danyun; Shen, Qingliang; Cho, Jae-Hyun

    2017-02-26

    Biological functions of intrinsically disordered proteins (IDPs) and proteins containing intrinsically disordered regions (IDRs) are often mediated by short linear motifs, like proline-rich motifs (PRMs). Upon binding to their target proteins, IDPs undergo a disorder-to-order transition which is accompanied by a large conformational entropy penalty. Hence, the molecular mechanisms underlying control of conformational entropy are critical for understanding the binding affinity and selectivity of IDP-mediated protein-protein interactions (PPIs). Here, we investigated the backbone conformational entropy change accompanying the binding of the N-terminal SH3 domain (nSH3) of CrkII and the PRM derived from guanine nucleotide exchange factor 1 (C3G). In particular, we focused on the estimation of the conformational entropy change of the disordered PRM upon binding to the nSH3 domain. Quantitative characterization of the conformational dynamics of disordered peptides like PRMs is limited. Hence, we combined various methods, including NMR model-free analysis, δ2D, DynaMine, and structure-based calculation of entropy loss. This study demonstrates that the contribution of the backbone conformational entropy change is significant in PPIs mediated by IDPs/IDRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model demonstrates superior performance compared to the regular SS method, and it shows comparable or better performance than the ARMED method at much lower computational cost.

  8. Identification of breathing cracks in a beam structure with entropy

    NASA Astrophysics Data System (ADS)

    Wimarshana, Buddhi; Wu, Nan; Wu, Christine

    2016-04-01

    A cantilever beam with a breathing crack is studied to detect and evaluate the crack using entropy measures. Closed cracks in engineering structures add complexity to their vibration responses through the weak bi-linearity imposed by the crack-breathing phenomenon. Entropy is a measure of system complexity and has the potential to quantify this complexity. The weak bi-linearity in vibration signals can be amplified using wavelet transformation to increase the sensitivity of the measurements. A mathematical model of a harmonically excited, unit-length steel cantilever beam with a breathing crack located near the fixed end is established, and an iterative numerical method is applied to generate accurate time-domain dynamic responses. The bi-linearity in the time-domain signals due to crack breathing is first amplified by wavelet transformation, and the complexity due to the bi-linearity is then quantified using sample entropy to detect the possible crack and estimate the crack depth. It is observed that the method is capable of identifying cracks even at very early stages, with depths of 3%, as the entropy values increase by more than 10% compared with the healthy beam. The current study extends entropy-based damage detection from rotary machines to structural analysis and takes a step further in high-sensitivity structural health monitoring by combining wavelet transformation with entropy calculations. The proposed technique can also be applied to other types of structures, such as plates and shells.
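
    Sample entropy, the complexity measure used in this study, is short to implement. Below is a common compact variant (its template counts differ by one from the canonical definition), with the conventional parameter choices m = 2 and r = 0.2·SD rather than the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A of length m+1 (Chebyshev distance <= r)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            # distances to all later templates (self-matches excluded)
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c
    A, B = count(m + 1), count(m)
    return -np.log(A / B)

rng = np.random.default_rng(6)
print(sample_entropy(rng.normal(size=2000)))          # higher (irregular)
print(sample_entropy(np.sin(np.arange(2000) * 0.1)))  # lower (regular)
```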

  9. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and their use requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned with an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
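
    PICE adapts the cross-entropy machinery to path-integral control; the underlying generic cross-entropy method is easy to state. A minimal optimization-flavored sketch with an invented cost function and hyperparameters, not the PICE algorithm itself:

```python
import numpy as np

def cross_entropy_method(f, dim, n=100, elite_frac=0.1, iters=50):
    """Generic cross-entropy method: sample from a Gaussian, keep the
    elite fraction with the lowest cost, refit mean and std, repeat."""
    rng = np.random.default_rng(0)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    n_elite = max(1, int(n * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(n, dim))
        costs = np.apply_along_axis(f, 1, samples)
        elite = samples[np.argsort(costs)[:n_elite]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-6   # keep a little exploration
    return mu

print(cross_entropy_method(lambda z: np.sum((z - 3.0) ** 2), dim=3))  # ~[3, 3, 3]
```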

  10. An entropy and viscosity corrected potential method for rotor performance prediction

    NASA Technical Reports Server (NTRS)

    Bridgeman, John O.; Strawn, Roger C.; Caradonna, Francis X.

    1988-01-01

    An unsteady Full-Potential Rotor code (FPR) has been enhanced with modifications directed at improving its drag prediction capability. The shock-generated entropy has been included to provide solutions comparable to the Euler equations. A weakly interacting integral boundary layer has also been coupled to FPR in order to estimate skin-friction drag. Pressure distributions, shock positions, and drag comparisons are made with various data sets derived from two-dimensional airfoil, hovering, and advancing high-speed rotor tests. In all these comparisons, the effect of the nonisentropic modification improves (i.e., weakens) the shock strength and wave drag. In addition, the boundary layer method yields reasonable estimates of skin-friction drag. Airfoil drag and hover torque data comparisons are excellent, as are predicted shock strengths and positions for a high-speed advancing rotor.

  11. Campbell's Rule for Estimating Entropy Changes

    ERIC Educational Resources Information Center

    Jensen, William B.

    2004-01-01

    Campbell's rule for estimating entropy changes is discussed in relation to an earlier article by Norman Craig, where it was proposed that the approximate value of the entropy of reaction was related to net moles of gas consumed or generated. It was seen that the average for Campbell's data set was lower than that for Craig's data set and…

  12. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.

  13. Information theory analysis of Australian humpback whale song.

    PubMed

    Miksis-Olds, Jennifer L; Buck, John R; Noad, Michael J; Cato, Douglas H; Stokes, M Dale

    2008-10-01

    Songs produced by migrating whales were recorded off the coast of Queensland, Australia, over six consecutive weeks in 2003. Forty-eight independent song sessions were analyzed using information theory techniques. The average length of the songs estimated by correlation analysis was approximately 100 units, with song sessions lasting from 300 to over 3100 units. Song entropy, a measure of structural constraints, was estimated using three different methodologies: (1) the independently identically distributed model, (2) a first-order Markov model, and (3) the nonparametric sliding window match length (SWML) method, as described by Suzuki et al. [(2006). "Information entropy of humpback whale song," J. Acoust. Soc. Am. 119, 1849-1866]. The analysis finds that the song sequences of migrating Australian whales are consistent with the hierarchical structure proposed by Payne and McVay [(1971). "Songs of humpback whales," Science 173, 587-597], and recently supported mathematically by Suzuki et al. (2006) for singers on the Hawaiian breeding grounds. Both the SWML entropy estimates and the song lengths for the Australian singers in 2003 were lower than that reported by Suzuki et al. (2006) for Hawaiian whales in 1976-1978; however, song redundancy did not differ between these two populations separated spatially and temporally. The average total information in the sequence of units in Australian song was approximately 35 bits/song. Aberrant songs (8%) yielded entropies similar to the typical songs.
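
    The second of the three entropy models used here, the first-order Markov estimate, is straightforward to reproduce from transition counts. A minimal sketch with an invented toy "song" string standing in for the recorded unit sequences:

```python
import numpy as np

def markov_entropy_rate(sequence):
    """First-order Markov entropy-rate estimate (bits/symbol):
    h = -sum_i pi_i sum_j P_ij log2 P_ij, with empirical pi and P."""
    symbols = sorted(set(sequence))
    idx = {s: i for i, s in enumerate(symbols)}
    counts = np.zeros((len(symbols), len(symbols)))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[idx[a], idx[b]] += 1
    P = counts / counts.sum(axis=1, keepdims=True)   # transition matrix
    pi = counts.sum(axis=1) / counts.sum()           # empirical occupancy
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(pi[:, None] * P * logs)

song = "ABABABCABABABC" * 20      # toy unit sequence
print(markov_entropy_rate(song))  # low: transitions are highly constrained
```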

  14. Temporal Correlations and Neural Spike Train Entropy

    NASA Astrophysics Data System (ADS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-06-01

    Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms-error information estimates in comparison to a "brute force" approach.

  15. Gender-specific heart rate dynamics in severe intrauterine growth-restricted fetuses.

    PubMed

    Gonçalves, Hernâni; Bernardes, João; Ayres-de-Campos, Diogo

    2013-06-01

    Management of intrauterine growth restriction (IUGR) remains a major issue in perinatology. The objective of this paper was the assessment of gender-specific fetal heart rate (FHR) dynamics as a diagnostic tool in severe IUGR. FHR was analyzed in the antepartum period in 15 severe IUGR fetuses and 18 controls, matched for gestational age, in relation to fetal gender. Linear and entropy measures were computed, including mean FHR (mFHR); low- (LF), high- (HF) and movement-frequency (MF) components; and approximate, sample and multiscale entropy. Sensitivities and specificities were estimated using Fisher linear discriminant analysis and the leave-one-out method. Overall, IUGR fetuses presented significantly lower mFHR and entropy compared with controls. However, gender-specific analysis showed that significantly lower mFHR was only evident in IUGR males, and lower entropy only in IUGR females. In addition, lower LF/(MF+HF) was patent in IUGR females compared with controls, but not in males. Rather high sensitivities and specificities were achieved in the detection of FHR recordings related to IUGR male fetuses when gender-specific analysis was performed at gestational ages below 34 weeks. Severe IUGR fetuses present gender-specific linear and entropy FHR changes compared with controls, characterized by significantly lower entropy and sympathetic-vagal balance in females than in males. These findings need to be considered in order to achieve better diagnostic results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Efficient Computation of Small-Molecule Configurational Binding Entropy and Free Energy Changes by Ensemble Enumeration

    PubMed Central

    2013-01-01

    Here we present a novel, end-point method using the dead-end-elimination and A* algorithms to efficiently and accurately calculate the change in free energy, enthalpy, and configurational entropy of binding for ligand–receptor association reactions. We apply the new approach to the binding of a series of human immunodeficiency virus (HIV-1) protease inhibitors to examine the effect ensemble reranking has on relative accuracy as well as to evaluate the role of the absolute and relative ligand configurational entropy losses upon binding in affinity differences for structurally related inhibitors. Our results suggest that most thermodynamic parameters can be estimated using only a small fraction of the full configurational space, and we see significant improvement in relative accuracy when using an ensemble versus single-conformer approach to ligand ranking. We also find that using approximate metrics based on the single-conformation enthalpy differences between the global minimum energy configuration in the bound as well as unbound states also correlates well with experiment. Using a novel, additive entropy expansion based on conditional mutual information, we also analyze the source of ligand configurational entropy loss upon binding in terms of both uncoupled per degree of freedom losses as well as changes in coupling between inhibitor degrees of freedom. We estimate entropic free energy losses of approximately +24 kcal/mol, 12 kcal/mol of which stems from loss of translational and rotational entropy. Coupling effects contribute only a small fraction to the overall entropy change (1–2 kcal/mol) but suggest differences in how inhibitor dihedral angles couple to each other in the bound versus unbound states. The importance of accounting for flexibility in drug optimization and design is also discussed. PMID:24250277

  17. Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome

    NASA Astrophysics Data System (ADS)

    Teschendorff, Andrew E.; Enver, Tariq

    2017-06-01

    The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes.

  18. Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome

    PubMed Central

    Teschendorff, Andrew E.; Enver, Tariq

    2017-01-01

    The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes. PMID:28569836
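
    The signalling-entropy idea in the two records above can be sketched as the entropy rate of an expression-weighted random walk on an interaction network. The sketch below is a simplified rendering under that assumption; the published measure involves further normalization, and the toy adjacency matrix and expression vector are invented for illustration.

      import numpy as np

      def signalling_entropy(expr, adj):
          """Entropy rate of a random walk whose edge weights are x_i * x_j
          on the given interaction network (adj: binary adjacency matrix)."""
          W = adj * np.outer(expr, expr)            # expression-weighted edges
          P = W / W.sum(axis=1, keepdims=True)      # row-stochastic transitions
          # Stationary distribution = leading left eigenvector of P.
          vals, vecs = np.linalg.eig(P.T)
          pi = np.real(vecs[:, np.argmax(np.real(vals))])
          pi = np.abs(pi) / np.abs(pi).sum()
          logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
          return -np.sum(pi[:, None] * P * logP)

      # Toy 4-gene network and one simulated transcriptome.
      adj = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
      expr = np.array([2.0, 0.5, 1.5, 1.0])
      print(signalling_entropy(expr, adj))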

  19. Refined composite multiscale weighted-permutation entropy of financial time series

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    Multiscale weighted-permutation entropy (MWPE) has recently been proposed for quantifying the complexity of nonlinear systems; it incorporates amplitude information and accounts for the multiple inherent dynamics of time series. However, MWPE may be unreliable: its estimates fluctuate strongly under slight shifts of the data window, and they depend significantly on the length of the time series. We therefore propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing RCMWPE with other methods on both synthetic data and financial time series shows that it not only inherits the advantages of MWPE but is also less sensitive to the data window, more stable, and much less dependent on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on daily price return series from Asian and European stock markets. There are significant differences between Asian and European markets, and the entropy values of the Hang Seng Index (HSI) are close to, but higher than, those of the European markets. The reliability of the proposed RCMWPE method is supported by simulations on generated and real data. It could be applied in a variety of fields to quantify the complexity of systems over multiple scales more accurately.
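
    A hedged sketch of the quantities involved: weighted-permutation entropy weights each ordinal pattern by the variance of its window (the amplitude information), and the refined composite variant averages the pattern distributions of all coarse-grained series at a scale before taking the entropy. Parameter choices and the toy return series below are illustrative assumptions, not the paper's settings.

      import numpy as np
      from itertools import permutations

      def weighted_pattern_dist(x, m=3):
          """Variance-weighted relative frequencies of ordinal patterns."""
          weights = {p: 0.0 for p in permutations(range(m))}
          for i in range(len(x) - m + 1):
              window = x[i:i + m]
              weights[tuple(np.argsort(window))] += np.var(window)
          total = sum(weights.values())
          return np.array([w / total for w in weights.values()])

      def rcmwpe(x, scale, m=3):
          """Refined composite multiscale WPE: average the weighted pattern
          distributions of all `scale` coarse-grained offsets, then take
          the Shannon entropy of the averaged distribution."""
          dists = []
          for k in range(scale):
              y = x[k:]
              y = y[:len(y) // scale * scale].reshape(-1, scale).mean(axis=1)
              dists.append(weighted_pattern_dist(y, m))
          p = np.mean(dists, axis=0)
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      rng = np.random.default_rng(2)
      returns = rng.standard_t(df=4, size=4000)    # heavy-tailed toy returns
      print([round(rcmwpe(returns, s), 3) for s in (1, 2, 5)])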

  20. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    PubMed Central

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and is thus recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  1. Vanishing Viscosity Approach to the Compressible Euler Equations for Transonic Nozzle and Spherically Symmetric Flows

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang G.; Schrecker, Matthew R. I.

    2018-04-01

    We are concerned with globally defined entropy solutions to the Euler equations for compressible fluid flows in transonic nozzles with general cross-sectional areas. Such nozzles include the de Laval nozzles and other more general nozzles whose cross-sectional area functions are allowed at the nozzle ends to be either zero (closed ends) or infinity (unbounded ends). To achieve this, in this paper, we develop a vanishing viscosity method to construct globally defined approximate solutions and then establish essential uniform estimates in weighted L^p norms for the whole range of physical adiabatic exponents γ ∈ (1, ∞), so that the viscosity approximate solutions satisfy the general L^p compensated compactness framework. The viscosity method is designed to incorporate artificial viscosity terms with the natural Dirichlet boundary conditions to ensure the uniform estimates. Then such estimates lead to both the convergence of the approximate solutions and the existence theory of globally defined finite-energy entropy solutions to the Euler equations for transonic flows that may have different end-states, in the class of nozzles with general cross-sectional areas, for all γ ∈ (1, ∞). The approach and techniques developed here apply to other problems with similar difficulties. In particular, we successfully apply them to construct globally defined spherically symmetric entropy solutions to the Euler equations for all γ ∈ (1, ∞).
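
    For orientation, the nozzle system referred to above takes the following standard form with a γ-law pressure (a sketch of the common formulation; the paper's precise viscous regularization may differ):

      \begin{aligned}
      &\rho_t + (\rho u)_x = -\frac{a'(x)}{a(x)}\,\rho u,\\
      &(\rho u)_t + \bigl(\rho u^2 + p(\rho)\bigr)_x = -\frac{a'(x)}{a(x)}\,\rho u^2,
      \qquad p(\rho) = \kappa\,\rho^{\gamma},\quad \gamma \in (1,\infty).
      \end{aligned}

    The vanishing viscosity method replaces this hyperbolic system by a parabolic approximation, schematically adding \varepsilon\rho_{xx} and \varepsilon(\rho u)_{xx} to the right-hand sides together with Dirichlet boundary conditions, and passes to the limit \varepsilon \to 0 within the L^p compensated compactness framework.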

  2. The Dynameomics Entropy Dictionary: A Large-Scale Assessment of Conformational Entropy across Protein Fold Space.

    PubMed

    Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie

    2017-04-27

    Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
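
    One common way to obtain such dihedral entropies is histogramming of the sampled torsion-angle distribution, S = -R Σ p ln p over angle bins. The sketch below illustrates this for a single dihedral with synthetic rotamer data; it is not the Dynameomics pipeline, and the bin width and well populations are assumptions.

      import numpy as np

      def dihedral_entropy(angles_deg, bin_width=10.0):
          """Conformational entropy (J/mol/K) of one dihedral,
          S = -R * sum(p ln p) over fixed-width angle bins."""
          R = 8.314  # gas constant, J/(mol K)
          bins = np.arange(-180.0, 180.0 + bin_width, bin_width)
          counts, _ = np.histogram(angles_deg, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -R * np.sum(p * np.log(p))

      # Toy chi1-like dihedral hopping between three rotamer wells.
      rng = np.random.default_rng(3)
      wells = rng.choice([-60.0, 60.0, 180.0], p=[0.6, 0.3, 0.1], size=20000)
      chi1 = (wells + rng.normal(0.0, 12.0, wells.size) + 180.0) % 360.0 - 180.0
      print(dihedral_entropy(chi1))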

  3. Will molecular dynamics simulations of proteins ever reach equilibrium?

    PubMed

    Genheden, Samuel; Ryde, Ulf

    2012-06-28

    We show that conformational entropies calculated for five proteins and protein-ligand complexes with dihedral-distribution histogramming, the von Mises approach, or quasi-harmonic analysis do not converge to any useful precision even if molecular dynamics (MD) simulations of 380-500 ns length are employed (the uncertainty is 12-89 kJ mol⁻¹). To explain this, we suggest a simple protein model involving dihedrals with effective barriers forming a uniform distribution and show that for such a model, the entropy increases logarithmically with time until all significantly populated dihedral states have been sampled, in agreement with the simulations (during the simulations, 52-70% of the available dihedral phase space has been visited). This is also confirmed by the analysis of the trajectories of a 1 ms simulation of bovine pancreatic trypsin inhibitor (31 kJ mol⁻¹ difference in the entropy between the first and second part of the simulation). Strictly speaking, this means that it is practically impossible to equilibrate MD simulations of proteins. We discuss the implications of such a lack of strict equilibration of protein MD simulations and show that ligand-binding free energies estimated with the MM/GBSA method (molecular mechanics with generalised Born and surface-area solvation) vary by 3-15 kJ mol⁻¹ during a 500 ns simulation (the higher estimate is caused by rare conformational changes), although they involve a questionable but well-converged normal-mode entropy estimate, whereas free energies estimated by free-energy perturbation vary by less than 0.6 kJ mol⁻¹ for the same simulation.

  4. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical one, which approximates the true solution in the set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, where p(t) is a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc-length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate the definition of an elementary distance function that is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal entropy path distance than the distance ‖b−a‖₂. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in; Nguyen, Andrew Huy

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW_16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW_20), silicon (SW_21), and water (SW_23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by S_trip, is also studied. S_trip is a good entropy estimator for liquids where pair and triplet correlations are important such as Ge and Si, but loses accuracy for purely pair-dominated liquids, like LJ fluid, or near the crystallization temperature (T_thr). Since local tetrahedral order is compatible with both liquid and crystalline states, the reorganisation of tetrahedral liquids is accompanied by a clear rise in the pair, triplet, and thermodynamic contributions to the heat capacity, resulting in the heat capacity anomaly. In contrast, the pair-dominated liquids show increasing dominance of triplet correlations on approaching crystallization but no sharp rise in either the pair or thermodynamic heat capacities.
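
    The pair term of the multiparticle correlation expansion mentioned above (to which S_trip adds the triplet integral) is the standard two-body excess entropy obtained from the radial distribution function. A minimal sketch with a toy g(r) follows; the density and the synthetic g(r) are illustrative assumptions.

      import numpy as np

      def pair_entropy(r, g, rho):
          """Two-body excess entropy per particle, in units of k_B:
          S_2 = -2*pi*rho * Int [g ln g - g + 1] r^2 dr."""
          g = np.clip(g, 1e-12, None)            # keep log finite in the core
          integrand = (g * np.log(g) - g + 1.0) * r**2
          return -2.0 * np.pi * rho * np.sum(integrand) * (r[1] - r[0])

      # Toy radial distribution function: excluded core, one damped shell.
      r = np.linspace(0.01, 5.0, 500)
      g = 1.0 + np.exp(-(r - 1.1)) * np.sin(6.0 * (r - 1.0)) * (r > 0.9)
      g[r < 0.9] = 0.0
      print(pair_entropy(r, g, rho=0.8))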

  6. Algorithm based on the short-term Rényi entropy and IF estimation for noisy EEG signals analysis.

    PubMed

    Lerga, Jonatan; Saulig, Nicoletta; Mozetič, Vladimir

    2017-01-01

    Stochastic electroencephalogram (EEG) signals are known to be nonstationary and often multicomponent. Detecting and extracting their components may help clinicians to localize brain neurological dysfunction in patients with motor control disorders, since movement-related cortical activities are reflected in spectral EEG changes. This paper proposes a new algorithm for detecting EEG signal components from the time-frequency distribution (TFD). The algorithm utilizes a modification of the Rényi entropy-based technique for estimating the number of components, called the short-term Rényi entropy (STRE), upgraded by an iterative algorithm that has been shown to enhance existing approaches. Combined with instantaneous frequency (IF) estimation, the proposed method was applied to the analysis of limb-movement EEG signals in both noise-free and noisy environments, and was shown to be an efficient technique providing a spectral description of brain activity at each electrode location up to moderate additive noise levels. Furthermore, the obtained information on the number of EEG signal components and their IFs shows potential to enhance the diagnostics and treatment of neurological disorders in patients with motor control disorders. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Entropy vs. energy waveform processing: A comparison based on the heat equation

    DOE PAGES

    Hughes, Michael S.; McCarthy, John E.; Bruillard, Paul J.; ...

    2015-05-25

    Virtually all modern imaging devices collect electromagnetic or acoustic waves and use the energy carried by these waves to determine pixel values to create what is basically an “energy” picture. However, waves also carry “information”, as quantified by some form of entropy, and this may also be used to produce an “information” image. Numerous published studies have demonstrated the advantages of entropy, or “information imaging”, over conventional methods. The most sensitive information measure appears to be the joint entropy of the collected wave and a reference signal. The sensitivity of repeated experimental observations of a slowly-changing quantity may be defined as the mean variation (i.e., observed change) divided by mean variance (i.e., noise). Wiener integration permits computation of the required mean values and variances as solutions to the heat equation, permitting estimation of their relative magnitudes. There always exists a reference, such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching observations of several studies. Moreover, a general prescription for finding an “optimal” reference for the joint entropy emerges, which also has been validated in several studies.

  8. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  9. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  10. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
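
    In the Gaussian case highlighted by the two records above, the transfer entropy can be computed directly as half the log ratio of residual variances of two nested autoregressions, which is exactly the Granger/likelihood-ratio form. The sketch below assumes jointly Gaussian processes and an order-p autoregression; the coupling coefficients in the toy system are invented.

      import numpy as np

      def gaussian_te(x, y, p=1):
          """Transfer entropy Y -> X (nats) for jointly Gaussian processes:
          0.5 * ln( var(resid of X | X past) / var(resid of X | X,Y past) )."""
          n = len(x)
          target = x[p:]
          past_x = np.array([x[t - p:t] for t in range(p, n)])
          past_xy = np.array([np.r_[x[t - p:t], y[t - p:t]] for t in range(p, n)])

          def rss(past):
              design = np.column_stack([np.ones(len(past)), past])
              beta, *_ = np.linalg.lstsq(design, target, rcond=None)
              return np.mean((target - design @ beta) ** 2)

          return 0.5 * np.log(rss(past_x) / rss(past_xy))

      # Toy coupled system: y drives x at lag 1, not vice versa.
      rng = np.random.default_rng(4)
      y = rng.normal(size=5001)
      x = np.zeros(5001)
      for t in range(1, 5001):
          x[t] = 0.4 * x[t - 1] + 0.5 * y[t - 1] + rng.normal()
      print(gaussian_te(x, y), gaussian_te(y, x))   # first value clearly larger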

  11. Numerical estimation of the relative entropy of entanglement

    NASA Astrophysics Data System (ADS)

    Zinchenko, Yuriy; Friedland, Shmuel; Gour, Gilad

    2010-11-01

    We propose a practical algorithm for the calculation of the relative entropy of entanglement (REE), defined as the minimum relative entropy between a state and the set of states with positive partial transpose. Our algorithm is based on a practical semidefinite cutting plane approach. In low dimensions the implementation of the algorithm in MATLAB provides an estimation for the REE with an absolute error smaller than 10⁻³.

  12. Recommendations and illustrations for the evaluation of photonic random number generators

    NASA Astrophysics Data System (ADS)

    Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi

    2017-09-01

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.

  13. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    PubMed

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm based on a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method rests on the complete breakdown of the molecules into their constituting atoms and their immediate neighbourhood; the respective contributions of the atomic groups are calculated by means of the Gauss-Seidel fitting method from experimental data collected from the literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure, proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors were as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion, and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results for the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing comparison with the predictions and yielding a correlation coefficient R² of 0.6066.

  14. Estimation of the magnetic entropy change by means of Landau theory and phenomenological model in La0.6Ca0.2Sr0.2MnO3/Sb2O3 ceramic composites

    NASA Astrophysics Data System (ADS)

    Nasri, M.; Dhahri, E.; Hlil, E. K.

    2018-06-01

    In this paper, the magnetocaloric properties of La0.6Ca0.2Sr0.2MnO3/Sb2O3 oxides have been investigated. The composite samples were prepared using the conventional solid-state reaction method. The second-order nature of the phase transition is confirmed by the positive slopes of the Arrott plots. An excellent agreement is found between the -ΔS_M values estimated by Landau theory and those obtained using the classical Maxwell relation. Analysis of the field dependence of the magnetic entropy change shows a power-law dependence, |ΔS_M| ∝ H^n, with n(T_C) = 0.65. Moreover, the scaling analysis of the magnetic entropy change shows that the ΔS_M(T) curves collapse onto a single universal curve, indicating that the observed paramagnetic-to-ferromagnetic phase transition is a genuine second-order transition. The maximum value of the magnetic entropy change of the composites is found to decrease slightly with increasing Sb2O3 concentration. A phenomenological model was used to predict the magnetocaloric properties of the La0.6Ca0.2Sr0.2MnO3/Sb2O3 composites. The theoretical calculations are compared with the available experimental data.

  15. Binding stability of peptides on major histocompatibility complex class I proteins: role of entropy and dynamics.

    PubMed

    Gul, Ahmet; Erman, Burak

    2018-01-16

    Prediction of peptide binding on specific human leukocyte antigens (HLA) has long been studied with successful results. We herein describe the effects of entropy and dynamics by investigating the binding stabilities of 10 nonapeptides on various HLA Class I alleles using a theoretical model based on molecular dynamics simulations. The fluctuational entropies of the peptides are estimated over a temperature range of 310-460 K. The estimated entropies correlate well with the experimental binding affinities of the peptides: peptides with higher binding affinities have lower entropies than non-binders, which have significantly larger entropies. The computation of the entropies is based on a simple model that requires only short molecular dynamics trajectories and allows for approximate but rapid determination. The paper draws attention to the long-neglected dynamic aspects of peptide binding and provides a fast computation scheme that allows rapid scanning of large numbers of peptides on selected HLA antigens, which may be useful in defining the right peptides for personal immunotherapy.

  16. Binding stability of peptides on major histocompatibility complex class I proteins: role of entropy and dynamics

    NASA Astrophysics Data System (ADS)

    Gul, Ahmet; Erman, Burak

    2018-03-01

    Prediction of peptide binding on specific human leukocyte antigens (HLA) has long been studied with successful results. We herein describe the effects of entropy and dynamics by investigating the binding stabilities of 10 nonapeptides on various HLA Class I alleles using a theoretical model based on molecular dynamics simulations. The fluctuational entropies of the peptides are estimated over a temperature range of 310-460 K. The estimated entropies correlate well with the experimental binding affinities of the peptides: peptides with higher binding affinities have lower entropies than non-binders, which have significantly larger entropies. The computation of the entropies is based on a simple model that requires only short molecular dynamics trajectories and allows for approximate but rapid determination. The paper draws attention to the long-neglected dynamic aspects of peptide binding and provides a fast computation scheme that allows rapid scanning of large numbers of peptides on selected HLA antigens, which may be useful in defining the right peptides for personal immunotherapy.
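
    The record above does not spell out its "simple model"; a standard stand-in for fluctuational entropy from short MD trajectories is Schlitter's quasi-harmonic formula, sketched below on synthetic coordinates. Treat it as an illustration of the general idea only; the masses, fluctuation amplitudes and temperature sweep are assumptions.

      import numpy as np

      kB = 1.380649e-23        # J/K
      hbar = 1.054571817e-34   # J s

      def schlitter_entropy(coords, masses, T):
          """Schlitter's quasi-harmonic entropy bound,
          S = (kB/2) ln det[ 1 + (kB*T*e^2/hbar^2) * M * sigma ],
          from the covariance sigma of Cartesian fluctuations.
          coords: (frames, 3N) in metres; masses: (3N,) in kg."""
          x = coords - coords.mean(axis=0)
          sigma = x.T @ x / len(x)
          A = np.eye(sigma.shape[0]) + (kB * T * np.e**2 / hbar**2) * np.diag(masses) @ sigma
          _, logdet = np.linalg.slogdet(A)
          return 0.5 * kB * logdet               # J/K per molecule

      # Toy "peptide": 5 carbon-like atoms fluctuating ~0.05 nm.
      rng = np.random.default_rng(7)
      traj = rng.normal(0.0, 0.05e-9, size=(2000, 15))
      mass = np.full(15, 12 * 1.66054e-27)
      for T in (310.0, 380.0, 460.0):
          print(T, schlitter_entropy(traj, mass, T))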

  17. Nighttime image dehazing using local atmospheric selection rule and weighted entropy for visible-light systems

    NASA Astrophysics Data System (ADS)

    Park, Dubok; Han, David K.; Ko, Hanseok

    2017-05-01

    Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.

  18. Compression based entropy estimation of heart rate variability on multiple time scales.

    PubMed

    Baumert, Mathias; Voss, Andreas; Javorka, Michal

    2013-01-01

    Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than the randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
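
    A minimal sketch of the compression-entropy idea on multiple time scales: coarse-grain the RR series, quantize it, and take the ratio of compressed to original size. The study used a Lempel-Ziv compressor; zlib's DEFLATE (LZ77-based) serves here as a readily available stand-in, and the quantization depth and toy series are assumptions.

      import zlib
      import numpy as np

      def compression_entropy(rr_ms, scale=1, levels=64):
          """Compressibility of a coarse-grained, quantized RR series;
          values near 1 indicate incompressible (random-like) data."""
          rr = np.asarray(rr_ms, dtype=float)
          rr = rr[:len(rr) // scale * scale].reshape(-1, scale).mean(axis=1)
          q = np.interp(rr, (rr.min(), rr.max()), (0, levels - 1)).astype(np.uint8)
          raw = q.tobytes()
          return len(zlib.compress(raw, 9)) / len(raw)

      rng = np.random.default_rng(5)
      rr = 800 + np.cumsum(rng.normal(0.0, 5.0, 4000))   # correlated toy series
      print([round(compression_entropy(rr, s), 3) for s in (1, 2, 4)])
      print(round(compression_entropy(rng.permutation(rr)), 3))  # surrogate: larger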

  19. Estimating the melting point, entropy of fusion, and enthalpy of ...

    EPA Pesticide Factsheets

    The entropies of fusion, enthalpies of fusion, and melting points of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modeled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modeled as a function of the entropy of fusion, boiling point, and flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapor pressure and solubility models. They have been tested on 904 unique compounds. The entropy model has an RMS of 12.5 J mol⁻¹ K⁻¹. The enthalpy model has an RMS of 4.87 kJ mol⁻¹. The melting point model has an RMS of 54.4°C. Published in the journal SAR and QSAR in Environmental Research.

  20. Estimating the melting point, entropy of fusion, and enthalpy of fusion of organic compounds via SPARC.

    PubMed

    Whiteside, T S; Hilal, S H; Brenner, A; Carreira, L A

    2016-08-01

    The entropy of fusion, enthalpy of fusion, and melting point of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modelled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modelled as a function of the entropy of fusion, boiling point, and flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapour pressure and solubility models. These models have been tested on 904 unique compounds. The entropy model has an RMS of 12.5 J mol⁻¹ K⁻¹. The enthalpy model has an RMS of 4.87 kJ mol⁻¹. The melting point model has an RMS of 54.4°C.
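
    The melting-point relation used by both versions of this record is the thermodynamic identity T_m = ΔH_fus / ΔS_fus. A one-line numerical illustration with invented values (not SPARC output):

      # Hypothetical compound: dH_fus = 19.9 kJ/mol, dS_fus = 56.5 J/(mol K).
      dH_fus = 19.9e3   # J/mol
      dS_fus = 56.5     # J/(mol K)
      T_m = dH_fus / dS_fus
      print(f"T_m = {T_m:.1f} K = {T_m - 273.15:.1f} degC")   # ~352 K, ~79 degC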

  1. Entropy and long-range memory in random symbolic additive Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2016-06-01

    The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.

  2. Entropy and long-range memory in random symbolic additive Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2016-06-01

    The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.

  3. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Pei-Jui; Yu, Hwa-Lung

    2016-04-01

    Heavy rainfall from typhoons is the main driver of natural disasters in Taiwan, causing significant losses of human lives and property. On average, 3.5 typhoons strike Taiwan every year; typhoon Morakot in 2009 was one of the most severe in recorded history. Because the duration, path and intensity of a typhoon affect the temporal and spatial rainfall pattern in a given region, characterizing typhoon rainfall types is advantageous when estimating rainfall amounts. This study develops a rainfall prediction model in three parts. First, we use extended empirical orthogonal functions (EEOF) to classify typhoon events, decomposing the standardized rainfall patterns of all stations for each event into EOFs and principal components (PCs); events that vary similarly in time and space are grouped into the same typhoon type. Next, based on this classification, we construct probability density functions (PDFs) in space and time using multivariate maximum entropy constrained by the first four statistical moments, yielding a probability for each station at each time. Finally, we use the Bayesian Maximum Entropy (BME) method to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and could assist the government in typhoon disaster prevention.
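
    The moment-constrained maximum-entropy step described above can be sketched in one dimension: fit p(x) ∝ exp(Σ_k λ_k x^k) so that its first four moments match the sample moments, by minimizing the convex dual. This is a simplified univariate stand-in for the paper's multivariate construction; the gamma-distributed toy "rainfall" and all parameter values are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      def maxent_pdf(samples, order=4):
          """Maximum-entropy density p(x) ~ exp(sum_k lam_k x^k) matching the
          first `order` sample moments; lam minimizes log Z(lam) - lam . mu."""
          grid = np.linspace(samples.min() - 1.0, samples.max() + 1.0, 2001)
          dx = grid[1] - grid[0]
          mu = np.array([np.mean(samples**k) for k in range(1, order + 1)])
          X = np.column_stack([grid**k for k in range(1, order + 1)])

          def dual(lam):
              logits = X @ lam
              m = logits.max()
              return m + np.log(np.sum(np.exp(logits - m)) * dx) - lam @ mu

          lam = minimize(dual, np.zeros(order)).x
          p = np.exp(X @ lam - (X @ lam).max())
          return grid, p / (p.sum() * dx)

      rng = np.random.default_rng(8)
      rain = rng.gamma(shape=2.0, scale=15.0, size=3000)   # mm, illustrative
      z = (rain - rain.mean()) / rain.std()                # standardize first
      grid, p = maxent_pdf(z)
      thr = (60.0 - rain.mean()) / rain.std()
      print("maxent P(>60mm):", p[grid > thr].sum() * (grid[1] - grid[0]),
            "empirical:", np.mean(rain > 60.0))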

  4. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.

  5. A fault diagnosis scheme for rolling bearing based on local mean decomposition and improved multiscale fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu

    2016-01-01

    This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), the Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, is utilized to quantify the complexity and self-similarity of the time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically fulfill the fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize different categories and severities of rolling bearing faults.

  6. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    NASA Astrophysics Data System (ADS)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy, but there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity when it is used to characterize time series. The literature describes some resampling methods for quantities used in nonlinear dynamics, such as the largest Lyapunov exponent, but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes, the 1/f^α noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been used extensively in epilepsy research to detect dynamical changes in the electroencephalogram (EEG) signal, with no consideration of the variability of this complexity measure. As an application, the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.

  7. Multifractals embedded in short time series: An unbiased estimation of probability moment

    NASA Astrophysics Data System (ADS)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it provides an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series a few hundred points long, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The partition-function estimate given by the wavelet-transform modulus maxima shows unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order has various other uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.

  8. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-08

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.

  9. Generalized entropies and the similarity of texts

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; Dias, Laércio; Gerlach, Martin

    2017-01-01

    We show how generalized Gibbs-Shannon entropies can provide new insights on the statistical properties of texts. The universal distribution of word frequencies (Zipf’s law) implies that the generalized entropies, computed at the word level, are dominated by words in a specific range of frequencies. Here we show that this is the case not only for the generalized entropies but also for the generalized (Jensen-Shannon) divergences, used to compute the similarity between different texts. This finding allows us to identify the contribution of specific words (and word frequencies) for the different generalized entropies and also to estimate the size of the databases needed to obtain a reliable estimation of the divergences. We test our results in large databases of books (from the Google n-gram database) and scientific papers (indexed by Web of Science).
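
    A hedged sketch of the divergence used above: the generalized Jensen-Shannon divergence is the generalized entropy of the mixed word distribution minus the mean generalized entropy of the two texts. The Tsallis-type entropy below is one common choice of generalization (the paper's normalization may differ), and the two toy texts are invented.

      import numpy as np
      from collections import Counter

      def word_dist(text, vocab):
          counts = Counter(text.split())
          p = np.array([counts[w] for w in vocab], dtype=float)
          return p / p.sum()

      def entropy_alpha(p, alpha):
          """Generalized (Tsallis-type) entropy; alpha = 1 recovers Shannon."""
          p = p[p > 0]
          if alpha == 1.0:
              return -np.sum(p * np.log(p))
          return (1.0 - np.sum(p**alpha)) / (alpha - 1.0)

      def jsd_alpha(p, q, alpha=1.0):
          """Generalized Jensen-Shannon divergence between distributions."""
          m = 0.5 * (p + q)
          return entropy_alpha(m, alpha) - 0.5 * (entropy_alpha(p, alpha)
                                                  + entropy_alpha(q, alpha))

      t1 = "the cat sat on the mat the cat"
      t2 = "the dog ran in the park the dog"
      vocab = sorted(set(t1.split()) | set(t2.split()))
      p, q = word_dist(t1, vocab), word_dist(t2, vocab)
      print(jsd_alpha(p, q, alpha=1.0), jsd_alpha(p, q, alpha=2.0))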

  10. Local Renyi entropic profiles of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2007-10-16

    In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density function (pdf) estimation using Parzen's window method, applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand, it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.

  11. Local Renyi entropic profiles of DNA sequences

    PubMed Central

    Vinga, Susana; Almeida, Jonas S

    2007-01-01

    Background In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of probability density function (pdf) estimation using Parzen's window method, applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. Results The new methodology enables two results. On the one hand, it shows that the entropic profiles are directly related to the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. Conclusion The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. PMID:17939871

  12. Estimates of the information content and dimensionality of natural scenes from proximity distributions

    NASA Astrophysics Data System (ADS)

    Chandler, Damon M.; Field, David J.

    2007-04-01

    Natural scenes, like almost all natural data sets, show considerable redundancy. Although many forms of redundancy have been investigated (e.g., pixel distributions, power spectra, contour relationships, etc.), estimates of the true entropy of natural scenes have largely been considered intractable. We describe a technique for estimating the entropy and relative dimensionality of image patches based on a function we call the proximity distribution (a nearest-neighbor technique). The advantage of this function over simple statistics such as the power spectrum is that the proximity distribution is dependent on all forms of redundancy. We demonstrate that this function can be used to estimate the entropy (redundancy) of 3×3 patches of known entropy as well as 8×8 patches of Gaussian white noise, natural scenes, and noise with the same power spectrum as natural scenes. The techniques are based on assumptions regarding the intrinsic dimensionality of the data, and although the estimates depend on an extrapolation model for images larger than 3×3, we argue that this approach provides the best current estimates of the entropy and compressibility of natural-scene patches and that it provides insights into the efficiency of any coding strategy that aims to reduce redundancy. We show that the sample of 8×8 patches of natural scenes used in this study has less than half the entropy of 8×8 white noise and less than 60% of the entropy of noise with the same power spectrum. In addition, given a finite number of samples (<2²⁰) drawn randomly from the space of 8×8 patches, the subspace of 8×8 natural-scene patches shows a dimensionality that depends on the sampling density and that, for low densities, is significantly lower dimensional than the space of 8×8 patches of white noise and noise with the same power spectrum.
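
    The proximity distribution is a nearest-neighbor construction, so a natural point of comparison is the classical Kozachenko-Leonenko nearest-neighbor entropy estimator sketched below. This is not the authors' estimator; it only illustrates how entropies can be read off from nearest-sample distances, and the 8-dimensional Gaussian stand-in for image patches is an assumption (the record's 8×8 patches are 64-dimensional).

      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.special import gamma, digamma

      def nn_entropy(samples):
          """Kozachenko-Leonenko entropy estimate (nats) from distances
          to each point's nearest other sample."""
          n, d = samples.shape
          eps = cKDTree(samples).query(samples, k=2)[0][:, 1]   # skip self-match
          log_unit_ball = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))
          return digamma(n) - digamma(1) + log_unit_ball + d * np.mean(np.log(eps))

      rng = np.random.default_rng(6)
      patches = rng.normal(size=(5000, 8))       # toy 8-D "patch" data
      print(nn_entropy(patches))                 # analytic value ~ 11.36 nats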

  13. The Shannon entropy as a measure of diffusion in multidimensional dynamical systems

    NASA Astrophysics Data System (ADS)

    Giordano, C. M.; Cincotta, P. M.

    2018-05-01

    In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. A first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian yields very successful and encouraging results.

  14. Parameters Selection for Bivariate Multiscale Entropy Analysis of Postural Fluctuations in Fallers and Non-Fallers Older Adults.

    PubMed

    Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert

    2016-08-01

    Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods depend on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physics Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.

  15. Complexity analysis of brain activity in attention-deficit/hyperactivity disorder: A multiscale entropy analysis.

    PubMed

    Chenxi, Li; Chen, Yanni; Li, Youjun; Wang, Jue; Liu, Tian

    2016-06-01

    The multiscale entropy (MSE) is a novel method for quantifying the intrinsic dynamical complexity of physiological systems over several scales. To evaluate this method as a promising way to explore the neural mechanisms of ADHD, we calculated the MSE of EEG activity during a designed task. EEG data were collected from 13 outpatient boys with a confirmed diagnosis of ADHD and 13 age- and gender-matched normal control children while they performed the multi-source interference task (MSIT). We estimated the MSE by calculating the sample entropy values of the delta, theta, alpha and beta frequency bands over twenty time scales using a coarse-graining procedure. The results showed increased complexity of the EEG data in the delta and theta frequency bands and decreased complexity in the alpha frequency band in ADHD children. The findings of this study reveal aberrant neural connectivity in children with ADHD during an interference task and suggest that the MSE method may provide a new index to identify and understand the neural mechanisms of ADHD. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Temperature-Dependent Estimation of Gibbs Energies Using an Updated Group-Contribution Method.

    PubMed

    Du, Bin; Zhang, Zhen; Grubner, Sharon; Yurkovich, James T; Palsson, Bernhard O; Zielinski, Daniel C

    2018-06-05

    Reaction-equilibrium constants determine the metabolite concentrations necessary to drive flux through metabolic pathways. Group-contribution methods offer a way to estimate reaction-equilibrium constants at wide coverage across the metabolic network. Here, we present an updated group-contribution method with 1) additional curated thermodynamic data used in fitting and 2) capabilities to calculate equilibrium constants as a function of temperature. We first collected and curated aqueous thermodynamic data, including reaction-equilibrium constants, enthalpies of reaction, Gibbs free energies of formation, enthalpies of formation, entropy changes of formation of compounds, and proton- and metal-ion-binding constants. Next, we formulated the calculation of equilibrium constants as a function of temperature and calculated the standard entropy change of formation (ΔfS°) using a model based on molecular properties. The median absolute error in estimating ΔfS° was 0.013 kJ/K/mol. We also estimated magnesium binding constants for 618 compounds using a linear regression model validated against measured data. We demonstrate the improved performance of the current method (8.17 kJ/mol in median absolute residual) over the current state-of-the-art method (11.47 kJ/mol) in estimating the 185 new reactions added in this work. The efforts here fill in gaps for thermodynamic calculations under various conditions, specifically different temperatures and metal-ion concentrations. These, to our knowledge, new capabilities empower the study of thermodynamic driving forces underlying the metabolic function of organisms living under diverse conditions. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
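
    The temperature dependence rests on the standard identity ΔG°(T) = ΔH° − TΔS°; a minimal sketch of that identity (illustrative ΔH° and ΔS° values, not the paper's fitted estimates, and both assumed temperature-independent over the range of interest):

        import numpy as np

        R = 8.314462618e-3   # gas constant, kJ/(K*mol)

        def equilibrium_constant(dH, dS, T):
            # K(T) from dG(T) = dH - T*dS; dH in kJ/mol, dS in kJ/(K*mol)
            return np.exp(-(dH - T * dS) / (R * T))

        dH, dS = -20.0, -0.05           # hypothetical reaction enthalpy and entropy
        for T in (298.15, 310.15):
            print(f"T = {T:.2f} K  ->  K = {equilibrium_constant(dH, dS, T):.3g}")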

  17. An estimator for the relative entropy rate of path measures for stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

    We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.
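
    For two diffusions sharing the noise intensity sigma, Girsanov's theorem gives RER = E[(a(X) − b(X))²]/(2σ²) under the law of the observed process. A Monte Carlo sketch of that identity with both drifts known (the paper's variational bound addresses the harder case where only observations of the second process are available):

        import numpy as np

        def rer_estimate(x, drift_a, drift_b, sigma):
            # relative entropy rate E[(a(X) - b(X))^2] / (2 sigma^2), averaged
            # over samples drawn from the observed (stationary) process
            diff = drift_a(x) - drift_b(x)
            return np.mean(diff ** 2) / (2.0 * sigma ** 2)

        rng = np.random.default_rng(0)
        sigma = 1.0
        # stationary law of the OU process dX = -X dt + sigma dW is N(0, sigma^2 / 2)
        x = rng.normal(0.0, sigma / np.sqrt(2.0), 100_000)
        print(rer_estimate(x, lambda u: -u, lambda u: -1.5 * u, sigma))  # approx 1/16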

  18. An entropy-based method for determining the flow depth distribution in natural channels

    NASA Astrophysics Data System (ADS)

    Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.

    2013-08-01

    A methodology is developed for determining the bathymetry of river cross-sections during floods from sampled surface flow velocities and existing low-flow hydraulic data. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to velocity data sets from five river gage sites, the method reproduced the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, i.e., surface flow velocity and flow depth, might be sensed by new sensors operating aboard an aircraft or satellite.
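
    The entropic relation of Chiu (1988) cited here, u_mean/u_max = e^M/(e^M − 1) − 1/M, can be inverted numerically to recover the entropy parameter M and a discharge estimate; a sketch with illustrative numbers (the site ratio, velocity and area are assumptions):

        import numpy as np
        from scipy.optimize import brentq

        def phi(M):
            # Chiu's entropic relation: u_mean / u_max as a function of M
            return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

        ratio = 0.67                      # observed u_mean / u_max at a gauge site
        M = brentq(lambda m: phi(m) - ratio, 1e-6, 50.0)
        u_max, area = 2.4, 85.0           # surface (maximum) velocity m/s, flow area m^2
        Q = phi(M) * u_max * area         # discharge estimate, m^3/s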

  19. The prediction of engineering cost for green buildings based on information entropy

    NASA Astrophysics Data System (ADS)

    Liang, Guoqiang; Huang, Jinglian

    2018-03-01

    Green building is a developing trend in the world building industry, and construction costs are an essential consideration in building construction. It is therefore necessary to investigate cost prediction for green building. On the basis of analyzing green building costs, this paper proposes a method for forecasting the actual cost of green building based on information entropy and provides the forecasting working procedure. Using probability densities obtained from statistical data on a unit project quotation, such as labor costs, material costs, machinery costs, administration costs, profit, and risk costs, situations that lead to variations between budgeted and actual construction costs can be predicted by estimating the information entropy of the budgeted and actual costs. The results of this research have practical significance for cost control in green building, and the proposed method can be generalized and applied to a variety of other aspects of building management.

  20. Multifractal characteristics of multiparticle production in heavy-ion collisions at SPS energies

    NASA Astrophysics Data System (ADS)

    Khan, Shaista; Ahmad, Shakeel

    Entropy, dimensions and other multifractal characteristics of the multiplicity distributions of relativistic charged hadrons produced in ion-ion collisions at SPS energies are investigated. The experimental data are analyzed in terms of the phase-space bin-size dependence of the multiplicity distributions, following Takagi's approach. A second method, which does not depend on the bin width or the detector resolution, is also applied; it involves the multiplicity distribution of charged particles in full phase space in terms of the information entropy and its generalization, Rényi's order-q information entropy. The findings reveal the presence of multifractal structure, a remarkable property of the fluctuations. The nearly constant values of the multifractal specific heat “c” estimated by the two different methods of analysis indicate that the parameter “c” may be used as a universal characteristic of particle production in high energy collisions. The results obtained from the analysis of the experimental data agree well with the predictions of the Monte Carlo model AMPT.

  1. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method for hybrid FH signal sorting and blind parameter estimation is proposed. The method uses spectral transformation, spectral entropy calculation and basic PRI-transform theory to sort the components of the hybrid frequency hopping signal and estimate their parameters. The simulation results show that the method can correctly classify the frequency hopping component signals; at an SNR of 10 dB, the estimation error of the frequency hopping period is about 5% and the estimation error of the hopping frequency is less than 1%. However, the performance of this method deteriorates seriously at low SNR.
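
    A minimal sketch of the spectral entropy computation used in such sorting (the window length and normalization are assumptions): narrowband hop dwells give low spectral entropy, while noise-only segments give values near 1.

        import numpy as np

        def spectral_entropy(segment):
            # normalized Shannon entropy of the windowed power spectrum
            w = segment * np.hanning(len(segment))
            psd = np.abs(np.fft.rfft(w)) ** 2
            p = psd / psd.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum() / np.log(len(psd))

        fs, n = 8000, 256
        t = np.arange(n) / fs
        print(spectral_entropy(np.sin(2 * np.pi * 1000 * t)))                 # tone: low
        print(spectral_entropy(np.random.default_rng(0).standard_normal(n)))  # noise: ~1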

  2. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with gestational ages ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system's performance was compared with that of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, compared with 94.50% obtained with the time-frequency-based system.

  3. Time-series analysis of sleep wake stage of rat EEG using time-dependent pattern entropy

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryuji; Shinba, Toshikazu; Mugishima, Go; Haraguchi, Hikaru; Inoue, Masayoshi

    2008-05-01

    We performed electroencephalography (EEG) for six male Wistar rats to clarify temporal behaviors at different levels of consciousness. Levels were identified both by conventional sleep analysis methods and by our novel entropy method. In our method, time-dependent pattern entropy is introduced, by which EEG is reduced to binary symbolic dynamics and the pattern of symbols in a sliding temporal window is considered. A high correlation was obtained between level of consciousness as measured by the conventional method and mean entropy in our entropy method. Mean entropy was maximal while awake (stage W) and decreased as sleep deepened. These results suggest that time-dependent pattern entropy may offer a promising method for future sleep research.
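
    A sketch of the symbolization and windowed pattern entropy described above (median binarization, 4-bit words and the window/step sizes are assumptions; the authors' exact reduction may differ):

        import numpy as np

        def pattern_entropy(x, word=4, window=256, step=64):
            # binarize around the median, then take the Shannon entropy of
            # word-length symbol patterns within each sliding temporal window
            s = (x > np.median(x)).astype(int)
            w = np.lib.stride_tricks.sliding_window_view(s, word)
            codes = w @ (2 ** np.arange(word))
            ent = []
            for start in range(0, len(codes) - window + 1, step):
                c = np.bincount(codes[start:start + window], minlength=2 ** word)
                p = c[c > 0] / c.sum()
                ent.append(-(p * np.log2(p)).sum())
            return np.array(ent)

        eeg = np.random.default_rng(0).standard_normal(5000)   # stand-in for a rat EEG trace
        print(pattern_entropy(eeg)[:5])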

  4. Comment on: The cancer Warburg effect may be a testable example of the minimum entropy production rate principle.

    PubMed

    Sadeghi Ghuchani, Mostafa

    2018-02-08

    This comment argues against the view that cancer cells produce less entropy than normal cells as stated in a recent paper by Marín and Sabater. The basic principle of estimation of entropy production rate in a living cell is discussed, emphasizing the fact that entropy production depends on both the amount of heat exchange during the metabolism and the entropy difference between products and substrates.

  5. Comment on: The cancer Warburg effect may be a testable example of the minimum entropy production rate principle

    NASA Astrophysics Data System (ADS)

    Sadeghi Ghuchani, Mostafa

    2018-03-01

    This comment argues against the view that cancer cells produce less entropy than normal cells as stated in a recent paper by Marín and Sabater. The basic principle of estimation of entropy production rate in a living cell is discussed, emphasizing the fact that entropy production depends on both the amount of heat exchange during the metabolism and the entropy difference between products and substrates.

  6. AdS and dS Entropy from String Junctions or The Function of Junction Conjunctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstein, Eva M

    Flux compactifications of string theory exhibiting the possibility of discretely tuning the cosmological constant to small values have been constructed. The highly tuned vacua in this discretuum have curvature radii which scale as large powers of the flux quantum numbers, exponential in the number of cycles in the compactification. By the arguments of Susskind/Witten (in the AdS case) and Gibbons/Hawking (in the dS case), we expect correspondingly large entropies associated with these vacua. If they are to provide a dual description of these vacua on their Coulomb branch, branes traded for the flux need to account for this entropy at the appropriate energy scale. In this note, we argue that simple string junctions and webs ending on the branes can account for this large entropy, obtaining a rough estimate for junction entropy that agrees with the existing rough estimates for the spacing of the discretuum. In particular, the brane entropy can account for the (A)dS entropy far away from string scale correspondence limits.

  7. Noise and Complexity in Human Postural Control: Interpreting the Different Estimations of Entropy

    PubMed Central

    Rhea, Christopher K.; Silver, Tobin A.; Hong, S. Lee; Ryu, Joong Hyun; Studenka, Breanna E.; Hughes, Charmayne M. L.; Haddad, Jeffrey M.

    2011-01-01

    Background Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. Methods and Findings The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. Conclusions The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses. PMID:21437281

  8. Investment strategy due to the minimization of portfolio noise level by observations of coarse-grained entropy

    NASA Astrophysics Data System (ADS)

    Urbanowicz, Krzysztof; Hołyst, Janusz A.

    2004-12-01

    Using a recently developed method of noise level estimation that makes use of properties of the coarse-grained entropy, we have analyzed the noise level for the Dow Jones index and a few stocks from the New York Stock Exchange. We have found that the noise level ranges from 40% to 80% of the signal variance. The condition of a minimal noise level has been applied to construct optimal portfolios from selected shares. We show that the implementation of a corresponding threshold investment strategy leads to positive returns for historical data.

  9. Random walk-percolation-based modeling of two-phase flow in porous media: Breakthrough time and net to gross ratio estimation

    NASA Astrophysics Data System (ADS)

    Ganjeh-Ghazvini, Mostafa; Masihi, Mohsen; Ghaedi, Mojtaba

    2014-07-01

    Fluid flow modeling in porous media has many applications in waste treatment, hydrology and petroleum engineering. In any geological model, flow behavior is controlled by multiple properties. These properties must be known in advance of common flow simulations. When uncertainties are present, deterministic modeling often produces poor results. Percolation and Random Walk (RW) methods have recently been used in flow modeling. Their stochastic basis is useful in dealing with uncertainty problems. They are also useful in finding the relationship between porous media descriptions and flow behavior. This paper employs a simple methodology based on random walk and percolation techniques. The method is applied to a well-defined model reservoir in which the breakthrough time distributions are estimated. The results of this method and the conventional simulation are then compared. The effect of the net to gross ratio on the breakthrough time distribution is studied in terms of Shannon entropy. Use of the entropy plot allows one to assign the appropriate net to gross ratio to any porous medium.

  10. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or on acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. GABAergic excitation of spider mechanoreceptors increases information capacity by increasing entropy rather than decreasing jitter.

    PubMed

    Pfeiffer, Keram; French, Andrew S

    2009-09-02

    Neurotransmitter chemicals excite or inhibit a range of sensory afferents and sensory pathways. These changes in firing rate or static sensitivity can also be associated with changes in dynamic sensitivity or membrane noise and thus action potential timing. We measured action potential firing produced by random mechanical stimulation of spider mechanoreceptor neurons during long-duration excitation by the GABAA agonist muscimol. Information capacity was estimated from signal-to-noise ratio by averaging responses to repeated identical stimulation sequences. Information capacity was also estimated from the coherence function between input and output signals. Entropy rate was estimated by a data compression algorithm and maximum entropy rate from the firing rate. Action potential timing variability, or jitter, was measured as normalized interspike interval distance. Muscimol increased firing rate, information capacity, and entropy rate, but jitter was unchanged. We compared these data with the effects of increasing firing rate by current injection. Our results indicate that the major increase in information capacity by neurotransmitter action arose from the increased entropy rate produced by increased firing rate, not from reduction in membrane noise and action potential jitter.
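
    The compression-based entropy-rate estimate mentioned above can be sketched with a generic coder (zlib here is an assumption; the study's algorithm may differ, and compressed size only upper-bounds the true rate, including coder overhead):

        import numpy as np
        import zlib

        def compression_entropy_rate(intervals, n_bins=16):
            # discretize interspike intervals into n_bins quantile bins, then
            # bound the entropy rate (bits/symbol) by the compressed size
            edges = np.quantile(intervals, np.linspace(0, 1, n_bins + 1)[1:-1])
            symbols = np.digitize(intervals, edges).astype(np.uint8)
            return 8.0 * len(zlib.compress(symbols.tobytes(), 9)) / len(symbols)

        isi = np.random.default_rng(0).exponential(0.02, 10_000)   # hypothetical ISIs
        print(compression_entropy_rate(isi))   # near log2(16) = 4 for near-uniform bins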

  12. Investigating Friction as a Main Source of Entropy Generation in the Expansion of Confined Gas in a Piston-and-Cylinder Device

    ERIC Educational Resources Information Center

    Kang, Dun-Yen; Liou, Kai-Hsin; Chang, Wei-Lun

    2015-01-01

    The expansion or compression of gas confined in a piston-and-cylinder device is a classic working example used for illustrating the First and Second Laws of Thermodynamics. The balance of energy and entropy enables the estimation of a number of thermodynamic properties. The entropy generation (also called entropy production) resulting from this…

  13. Complexity-entropy causality plane: A useful approach for distinguishing songs

    NASA Astrophysics Data System (ADS)

    Ribeiro, Haroldo V.; Zunino, Luciano; Mendes, Renio S.; Lenzi, Ervin K.

    2012-04-01

    Nowadays we are often faced with huge databases resulting from the rapid growth of data storage technologies. This is particularly true when dealing with music databases. In this context, it is essential to have techniques and tools able to discriminate properties from these massive sets. In this work, we report on a statistical analysis of more than ten thousand songs aiming to obtain a complexity hierarchy. Our approach is based on the estimation of the permutation entropy combined with an intensive complexity measure, building up the complexity-entropy causality plane. The results obtained indicate that this representation space is very promising to discriminate songs as well as to allow a relative quantitative comparison among songs. Additionally, we believe that the here-reported method may be applied in practical situations since it is simple, robust and has a fast numerical implementation.
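
    A sketch of the permutation entropy that forms the entropy axis of the causality plane (the order and delay are assumed settings; the complexity axis, a Jensen-Shannon statistical complexity, is omitted here):

        import numpy as np
        from math import factorial

        def permutation_entropy(x, order=4, delay=1):
            # Bandt-Pompe: normalized Shannon entropy of ordinal patterns
            n = len(x) - (order - 1) * delay
            windows = np.array([x[i:i + order * delay:delay] for i in range(n)])
            patterns = np.argsort(windows, axis=1)
            codes = patterns @ (order ** np.arange(order))   # unique code per pattern
            _, counts = np.unique(codes, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log(p)).sum() / np.log(factorial(order))

        rng = np.random.default_rng(0)
        tone = np.sin(np.linspace(0, 60, 4000)) + 0.3 * rng.standard_normal(4000)
        print(permutation_entropy(tone))                         # structured: well below 1
        print(permutation_entropy(rng.standard_normal(4000)))    # white noise: near 1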

  14. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  15. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis

    PubMed Central

    Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. The multiscale entropy (MSE) algorithm was recently introduced for a more precise description of the complexity of biological signals and has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; it works well for long signals, however. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features. PMID:29771977

  16. 15N backbone dynamics of the S-peptide from ribonuclease A in its free and S-protein bound forms: toward a site-specific analysis of entropy changes upon folding.

    PubMed Central

    Alexandrescu, A. T.; Rathgeb-Szabo, K.; Rumpel, K.; Jahnke, W.; Schulthess, T.; Kammerer, R. A.

    1998-01-01

    Backbone 15N relaxation parameters (R1, R2, 1H-15N NOE) have been measured for a 22-residue recombinant variant of the S-peptide in its free and S-protein bound forms. NMR relaxation data were analyzed using the "model-free" approach (Lipari & Szabo, 1982). Order parameters obtained from "model-free" simulations were used to calculate 1H-15N bond vector entropies using a recently described method (Yang & Kay, 1996), in which the form of the probability density function for bond vector fluctuations is derived from a diffusion-in-a-cone motional model. The average change in 1H-15N bond vector entropies for residues T3-S15, which become ordered upon binding of the S-peptide to the S-protein, is -12.6 +/- 1.4 J/(mol residue K). 15N relaxation data suggest a gradient of decreasing entropy values moving from the termini toward the center of the free peptide. The difference between the entropies of the terminal and central residues is about -12 J/(mol residue K), a value comparable to the average entropy change per residue upon complex formation. Similar entropy gradients are evident in NMR relaxation studies of other denatured proteins. Taken together, these observations suggest denatured proteins may contain entropic contributions from non-local interactions. Consequently, calculations that model the entropy of a residue in a denatured protein as that of a residue in a di- or tri-peptide might overestimate the magnitude of entropy changes upon folding. PMID:9521116

  17. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis.

    PubMed

    Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. The multiscale entropy (MSE) algorithm was recently introduced for a more precise description of the complexity of biological signals and has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; it works well for long signals, however. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features.

  18. Estimating the decomposition of predictive information in multivariate systems

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele

    2015-03-01

    In the study of complex systems from observed multivariate time series, the evolution of one system under investigation can be explained in terms of the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer, computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.

  19. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  20. Magnetic and lattice contributions to the magnetocaloric effect in Sm1-xSrxMnO3 manganites

    NASA Astrophysics Data System (ADS)

    Aliev, A. M.; Batdalov, A. B.; Khanov, L. N.

    2018-04-01

    A method is proposed to estimate the lattice and magnetic contributions to the total magnetocaloric effect (MCE) in materials with magnetostructural phase transitions. The method is based on two assumptions: (a) the lattice contribution is proportional to magnetostriction and (b) the magnetic contribution obeys a field dependence ΔTm ~ H^n. Temperature and magnetic field dependences of the MCE and magnetostriction are used to estimate the contributions. Estimates of the contributions in Sm0.6Sr0.4MnO3 manganite are made for the cases n = 0.66 and n = 0.75. The analysis shows that in the region of the maximum of the effect, for n = 0.66 the magnetic subsystem contributes about 58% of the total MCE, and the remaining 42% is due to the change in the entropy of the lattice. For n = 0.75 the magnetic contribution remains predominant, but the ratio changes: the magnetic contribution accounts for 53% and the lattice contribution for 47%. The ratio of the contributions varies with temperature and magnetic field. The results of the estimation agree with direct measurements of the MCE in Sm0.5Sr0.5MnO3 manganite, where almost the entire effect is due to the change in the magnetic entropy.

  1. Thyrotropin secretion in mild and severe primary hypothyroidism is distinguished by amplified burst mass and Basal secretion with increased spikiness and approximate entropy.

    PubMed

    Roelfsema, Ferdinand; Pereira, Alberto M; Adriaanse, Ria; Endert, Erik; Fliers, Eric; Romijn, Johannes A; Veldhuis, Johannes D

    2010-02-01

    Twenty-four-hour TSH secretion profiles in primary hypothyroidism have been analyzed with methods no longer in use. The insights afforded by earlier methods are limited. We studied TSH secretion in patients with primary hypothyroidism (eight patients with severe and eight patients with mild hypothyroidism) with up-to-date analytical tools and compared the results with outcomes in 38 healthy controls. Patients and controls underwent a 24-h study with 10-min blood sampling. TSH data were analyzed with a newly developed automated deconvolution program, approximate entropy, spikiness assessment, and cosinor regression. Both basal and pulsatile TSH secretion rates were increased in hypothyroid patients, the latter by increased burst mass with unchanged frequency. Secretory regularity (approximate entropy) was diminished, and spikiness was increased only in patients with severe hypothyroidism. A diurnal TSH rhythm was present in all but two patients, although with an earlier acrophase in severe hypothyroidism. The estimated slow component of the TSH half-life was shortened in all patients. Increased TSH concentrations in hypothyroidism are mediated by amplification of basal secretion and burst size. Secretory abnormalities quantitated by approximate entropy and spikiness were only present in patients with severe disease and thus are possibly related to the increased thyrotrope cell mass.

  2. The Rényi entropy H2 as a rigorous, measurable lower bound for the entropy of the interaction region in multi-particle production processes

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-10-01

    A model-independent lower bound on the entropy S of the multi-particle system produced in high energy collisions, provided by the measurable Rényi entropy H2, is shown to be very effective. Estimates show that the ratio H2/S remains close to one half for all realistic values of the parameters.
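
    In standard notation, the bound restated here is the monotonicity of the Rényi entropies in their order (a standard identity, not the paper's derivation):

        H_2 = -\ln \sum_i p_i^2 \;\le\; -\sum_i p_i \ln p_i = S ,

    so the measurable H_2 always bounds S from below, with the abstract's estimates indicating H_2/S near one half for realistic parameter values.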

  3. Refined generalized multiscale entropy analysis for physiological signals

    NASA Astrophysics Data System (ADS)

    Liu, Yunxiao; Lin, Youfang; Wang, Jing; Shang, Pengjian

    2018-01-01

    Multiscale entropy analysis has become a prevalent complexity measurement and has been successfully applied in various fields. However, it only takes into account the information of mean values (first moment) in the coarse-graining procedure. Generalized multiscale entropy (MSEn), which considers higher moments to coarse-grain a time series, was therefore proposed, and the variance-based MSEσ2 has been implemented. However, MSEσ2 may sometimes yield an imprecise or undefined estimation of entropy, and the statistical reliability of its sample entropy estimation decreases as the scale factor increases. For this purpose, we developed the refined model RMSEσ2 to improve MSEσ2. Simulations on both white noise and 1/f noise show that RMSEσ2 provides higher entropy reliability and reduces the occurrence of undefined entropy, making it especially suitable for short time series. Besides, we discuss the effect on RMSEσ2 analysis of outliers, data loss and other issues in signal processing. We apply the proposed model to evaluate the complexity of heartbeat interval time series derived from healthy young and elderly subjects, patients with congestive heart failure and patients with atrial fibrillation, respectively, and compare it with several popular complexity metrics. The results demonstrate that RMSEσ2-measured complexity (a) decreases with aging and disease, and (b) gives significant discrimination between different physiological/pathological states, which may facilitate clinical application.

  4. Entropy flow and entropy production in the human body in basal conditions.

    PubMed

    Aoki, I

    1989-11-08

    Entropy inflow and outflow for the naked human body in basal conditions in the respiration calorimeter due to infrared radiation, convection, evaporation of water and mass-flow are calculated by use of the energetic data obtained by Hardy & Du Bois. Also, the change of entropy content in the body is estimated. The entropy production in the human body is obtained as the change of entropy content minus the net entropy flow into the body. The entropy production thus calculated becomes positive. The magnitude of entropy production per effective radiating surface area does not show any significant variation with subjects. The entropy production is nearly constant at the calorimeter temperatures of 26-32 degrees C; the average in this temperature range is 0.172 J m-2 sec-1 K-1. The forced air currents around the human body and also clothing have almost no effect in changing the entropy production. Thus, the entropy production of the naked human body in basal conditions does not depend on its environmental factors.

  5. A Theory of Information Quality and a Framework for its Implementation in the Requirements Engineering Process

    NASA Astrophysics Data System (ADS)

    Grenn, Michael W.

    This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q, which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy HR is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. The information I increases as HR and uncertainty decrease, and the change in information ΔI needed to reach the desired state of quality is estimated from the perspective of the receiver. HR may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels. Current requirements trend metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF evaluates the quantity of requirements changes over time, distinguishes between their positive and negative effects by calculating their impact on HR, Q, and ΔI, and forecasts when the desired state will be reached, enabling more accurate assessment of the status and progress of the requirements engineering effort. Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The increase in I, or decrease in HR and uncertainty, is proportional to the engineering effort E input into the requirements engineering process. The REF estimates the ΔE needed to transition R from the current state of quality to the desired end state or some other interim state of interest. Simulation results are compared with measured engineering effort data for Department of Defense programs published in the SE literature, and the results suggest the REF is a promising new method for the estimation of ΔE.
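
    One plausible reading of the counting step is sketched below; the Bose-Einstein occupancy formula is my assumption for "quantum statistics", and the dissertation's exact counting may differ:

        from math import comb, log

        def requirements_entropy(R, N):
            # hypothetical reading: P = number of ways to distribute R
            # indistinguishable requirements over N quality levels, H_R = ln(P)
            P = comb(R + N - 1, R)
            return log(P)

        print(requirements_entropy(R=120, N=5))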

  6. Moisture sorption isotherms and thermodynamic properties of mexican mennonite-style cheese.

    PubMed

    Martinez-Monteagudo, Sergio I; Salais-Fierro, Fabiola

    2014-10-01

    Moisture adsorption isotherms of fresh and ripened Mexican Mennonite-style cheese were investigated using the static gravimetric method at 4, 8, and 12 °C over a water activity (aw) range of 0.08-0.96. The isotherms were modeled using the GAB, BET, Oswin and Halsey equations through weighted non-linear regression. All isotherms were sigmoid in shape, of BET type II, and the data were best described by the GAB model. The GAB model coefficients revealed that water adsorption by the cheese matrix is a multilayer process characterized by molecules that are strongly bound in the monolayer and molecules that are slightly structured in the multilayer. Using the GAB model, it was possible to estimate thermodynamic functions (net isosteric heat, differential entropy, integral enthalpy and entropy, and enthalpy-entropy compensation) as functions of moisture content. For both samples, the isosteric heat and differential entropy decreased exponentially with moisture content. The integral enthalpy gradually decreased with increasing moisture content after reaching a maximum value, while the integral entropy decreased with increasing moisture content after reaching a minimum value. A linear compensation was found between integral enthalpy and entropy, suggesting enthalpy-controlled adsorption. Determining the relationship between moisture content and aw yields important information for controlling ripening, drying and storage operations, as well as for understanding the state of water within the cheese matrix.
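
    A sketch of the GAB fit described above (synthetic data and unweighted least squares for brevity; the study used weighted non-linear regression, and the parameter values below are illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def gab(aw, m0, c, k):
            # GAB isotherm: m0 = monolayer moisture content, c and k = energy constants
            kaw = k * aw
            return m0 * c * kaw / ((1.0 - kaw) * (1.0 - kaw + c * kaw))

        aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.69, 0.75, 0.84, 0.90])
        m_obs = gab(aw, 0.06, 8.0, 0.95)              # synthetic "measured" moisture
        m_obs += np.random.default_rng(0).normal(0.0, 1e-3, aw.size)
        (m0, c, k), _ = curve_fit(gab, aw, m_obs, p0=(0.05, 5.0, 0.9))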

  7. Optimization and experimental validation of a thermal cycle that maximizes entropy coefficient fisher identifiability for lithium iron phosphate cells

    NASA Astrophysics Data System (ADS)

    Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam

    2016-03-01

    This article presents a framework for optimizing the thermal cycle used to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We then optimize a thermal cycle to maximize parameter identifiability for these cells, optimizing with respect to the coefficients of a Fourier discretization of the thermal cycle. Finally, we compare the parameters estimated using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared with 15-24 h. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.

  8. Application of online measures to monitor and evaluate multiplatform fusion performance

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kowalski, Charlene; Klamer, Dale M.

    1999-07-01

    A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate the development of an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents. Entropy measures the quality of processed information such as localization, classification, and ambiguity in measurement-to-track association. Lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. The measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to the platform's own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is computed again and the agent assesses its estimation process and refines it accordingly.

  9. Multibody local approximation: Application to conformational entropy calculations on biomolecules

    NASA Astrophysics Data System (ADS)

    Suárez, Ernesto; Suárez, Dimas

    2012-08-01

    Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.

  10. Multibody local approximation: application to conformational entropy calculations on biomolecules.

    PubMed

    Suárez, Ernesto; Suárez, Dimas

    2012-08-28

    Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.

  11. MuTE: a MATLAB toolbox to compare established and novel estimators of the multivariate transfer entropy.

    PubMed

    Montalto, Alessandro; Faes, Luca; Marinazzo, Daniele

    2014-01-01

    A challenge for physiologists and neuroscientists is to map information transfer between components of the systems that they study at different scales, in order to derive important knowledge on structure and function from the analysis of the recorded dynamics. The components of physiological networks often interact in a nonlinear way and through mechanisms which are in general not completely known. It is then safer that the method of choice for analyzing these interactions does not rely on any model or assumption on the nature of the data and their interactions. Transfer entropy has emerged as a powerful tool to quantify directed dynamical interactions. In this paper we compare different approaches to evaluate transfer entropy, some of them already proposed, some novel, and present their implementation in a freeware MATLAB toolbox. Applications to simulated and real data are presented.

  12. MuTE: A MATLAB Toolbox to Compare Established and Novel Estimators of the Multivariate Transfer Entropy

    PubMed Central

    Montalto, Alessandro; Faes, Luca; Marinazzo, Daniele

    2014-01-01

    A challenge for physiologists and neuroscientists is to map information transfer between components of the systems that they study at different scales, in order to derive important knowledge on structure and function from the analysis of the recorded dynamics. The components of physiological networks often interact in a nonlinear way and through mechanisms which are in general not completely known. It is then safer that the method of choice for analyzing these interactions does not rely on any model or assumption on the nature of the data and their interactions. Transfer entropy has emerged as a powerful tool to quantify directed dynamical interactions. In this paper we compare different approaches to evaluate transfer entropy, some of them already proposed, some novel, and present their implementation in a freeware MATLAB toolbox. Applications to simulated and real data are presented. PMID:25314003

  13. Cascade control of superheated steam temperature with neuro-PID controller.

    PubMed

    Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang

    2012-11-01

    In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller besides the sequences of tracking error in outer loop control system, hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergent condition of the neural networks is analyzed. The implementation procedures of the proposed cascade control approach are summarized. Compared with the neuro-PID controller using minimizing squared error criterion, the proposed neuro-PID controller using minimizing error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  14. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (the resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the original flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently and much stronger correlation with the three reliability measures than simple flow entropy. The new flow entropy method can therefore be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
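
    A simplified sketch of the idea (the share-based entropy below is a simplification of the usual nodal flow-entropy definition, and the diameter weighting shown is a hypothetical stand-in for the paper's diameter-sensitive form, not its actual formula):

        import numpy as np

        def flow_entropy(flows):
            # Shannon entropy of pipe-flow shares: higher = more uniform layout
            q = np.abs(np.asarray(flows, dtype=float))
            p = q / q.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        def diameter_weighted_flow_entropy(flows, diameters, d_ref=0.3):
            # hypothetical diameter-sensitive variant: scale each share by a
            # diameter factor before normalizing
            q = np.abs(np.asarray(flows, dtype=float))
            q *= np.asarray(diameters, dtype=float) / d_ref
            p = q / q.sum()
            return -(p[p > 0] * np.log(p[p > 0])).sum()

        print(flow_entropy([10.0, 10.0, 10.0]))   # ln 3: perfectly uniform flows
        print(flow_entropy([28.0, 1.0, 1.0]))     # one dominant path: low entropy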

  15. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures exist for discrete distributions. In this paper, we present two procedures, based on the principles of maximum entropy and minimum cross-entropy, to estimate multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness focusing not only on the levels of non-normality but also on the variations in distribution shape. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of the estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust test statistic and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
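
    A univariate sketch of the maximum entropy machinery (raw-moment constraints solved through the convex dual; the paper constrains skewness and kurtosis and builds the multivariate structure through latent responses, which this omits, and the target moments below are hypothetical):

        import numpy as np
        from scipy.optimize import minimize

        def maxent_pmf(values, target_moments):
            # maximize entropy subject to E[x^k] = m_k, k = 1..4, by minimizing
            # the convex dual: log Z(lambda) - lambda . m
            powers = np.vstack([values ** k for k in range(1, 5)])    # (4, K)

            def dual(lam):
                logits = lam @ powers
                mx = logits.max()
                return mx + np.log(np.exp(logits - mx).sum()) - lam @ target_moments

            lam = minimize(dual, np.zeros(4), method="BFGS").x
            logits = lam @ powers
            p = np.exp(logits - logits.max())
            return p / p.sum()

        cats = np.arange(1, 6, dtype=float)                  # 5-point ordinal scale
        moments = np.array([2.4, 7.1, 24.3, 91.7])           # hypothetical raw moments
        p = maxent_pmf(cats, moments)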

  16. Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly with the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.

  17. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method has low accuracy when evaluating the performance of mining projects, a performance evaluation model founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on compatible matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: once the compatible matrix analysis meets the consistency requirements, any differences between the subjective and objective weights are reconciled by moderately adjusting their proportions, and a fuzzy evaluation matrix for performance evaluation is then constructed on this basis. Simulation experiments show that, compared with the traditional entropy value and compatible matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method assesses mining project performance more accurately.
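
    A sketch of the objective-weighting half of the entropy value method, assuming a positive score matrix. The paper's contribution, blending these weights with AHP-derived subjective weights via compatible matrix analysis, is not reproduced here.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights by the entropy value method.

    Rows of X are alternatives (e.g. mining projects), columns are
    evaluation criteria; low-entropy (more discriminating) criteria
    receive larger weights.
    """
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                 # column-normalize scores
    k = 1.0 / np.log(len(X))
    e = -k * np.sum(P * np.log(P), axis=0)  # entropy per criterion
    d = 1.0 - e                            # degree of divergence
    return d / d.sum()

scores = [[0.8, 60, 3], [0.6, 90, 2], [0.9, 75, 4]]  # made-up projects
print(entropy_weights(scores).round(3))
```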

  18. Practical Aspects of Stabilized FEM Discretizations of Nonlinear Conservation Law Systems with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Saini, Subhash (Technical Monitor)

    1999-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations, assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.

  19. Spectral and correlation analysis with applications to middle-atmosphere radars

    NASA Technical Reports Server (NTRS)

    Rastogi, Prabhat K.

    1989-01-01

    The correlation and spectral analysis methods for uniformly sampled stationary random signals, the estimation of their spectral moments, and problems arising from nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods based on the maximum entropy principle and time-series models have been used in analyzing data, but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.

  20. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of its mechanical effects on the differential settlement of large continuous structure foundations. Such analyses should use an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To this end, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of different precisions and their sources of uncertainty. Individual CPT soundings were modeled as probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF at the CPT positions were built from the borehole experiments and the potential value at the prediction point; after numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within the Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method, and the differences between treating single CPT soundings as normal distributions and as maximum-entropy probability density curves were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations of inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.
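
    The Bayesian core of the assimilation step reduces, on a grid, to multiplying a prior by a likelihood and renormalizing. The densities below are hypothetical Gaussians standing in for the borehole-derived prior and the CPT-derived likelihood, not the paper's curves.

```python
import numpy as np

def grid_posterior(grid, prior_pdf, likelihood_pdf):
    """Posterior density of a parameter (e.g. Es) on a 1-D grid:
    pointwise product of prior and likelihood, renormalized."""
    post = prior_pdf(grid) * likelihood_pdf(grid)
    return post / np.trapz(post, grid)

es = np.linspace(2.0, 12.0, 501)                      # MPa, toy range
prior = lambda x: np.exp(-0.5 * ((x - 7.0) / 2.0) ** 2)
like = lambda x: np.exp(-0.5 * ((x - 5.5) / 1.0) ** 2)
post = grid_posterior(es, prior, like)
print("posterior mean:", round(np.trapz(es * post, es), 2))
```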

  1. Controlling the Shannon Entropy of Quantum Systems

    PubMed Central

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819
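
    A loose classical analogue of entropy targeting (not the paper's quantum controller): reshape a probability distribution by exponentiation and bisect on the exponent until the Shannon entropy hits the target, exploiting the monotone dependence of entropy on the exponent.

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def drive_entropy(p, target, iters=60):
    """Reshape a pmf so its Shannon entropy reaches `target` (nats).

    Illustrative only: q ~ p ** beta (renormalized) interpolates between
    uniform (beta -> 0) and a point mass (beta -> inf), and its entropy
    decreases monotonically in beta, so bisection works.  `target` must
    lie strictly between 0 and log(len(p)).
    """
    lo, hi = 0.0, 50.0
    for _ in range(iters):
        beta = 0.5 * (lo + hi)
        q = p ** beta
        q /= q.sum()
        if shannon(q) > target:
            lo = beta        # too flat -> sharpen
        else:
            hi = beta
    return q

p = np.array([0.7, 0.2, 0.05, 0.05])
q = drive_entropy(p, target=1.2)
print(q.round(4), round(shannon(q), 3))   # entropy ~1.2 nats
```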

  2. Controlling the Shannon entropy of quantum systems.

    PubMed

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.

  3. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. First, single-threshold selection based on Arimoto entropy is extended to dual-threshold selection in order to separate the target from the background more accurately. The intermediate variables in the dual-threshold Arimoto entropy formulae are then computed by recursion, which eliminates redundant computation and reduces the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm is improved with a chaotic sequence based on the tent map, and the fast search for the two optimal thresholds is carried out with the improved algorithm, substantially accelerating the search. Extensive experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation, and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
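
    For orientation, a deliberately simplified stand-in: single-threshold maximum-entropy (Kapur-style, Shannon) selection by exhaustive scan. The paper's method differs in all three respects it lists: Arimoto entropy instead of Shannon, two thresholds instead of one, and a chaotic bee colony search with recursive partial sums instead of a full scan.

```python
import numpy as np

def kapur_threshold(hist):
    """Pick the threshold maximizing the summed Shannon entropies of the
    background and object histograms (Kapur's criterion)."""
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        h = 0.0
        for part, w in ((p[:t], w0), (p[t:], w1)):
            q = part[part > 0] / w
            h -= np.sum(q * np.log(q))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Toy bimodal "image": dark background plus a brighter defect region.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(170, 15, 1000)])
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print("threshold ~", kapur_threshold(hist))
```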

  4. Shannon information entropy in the canonical genetic code.

    PubMed

    Nemzer, Louis R

    2017-02-21

    The Shannon entropy measures the expected information value of messages. As with thermodynamic entropy, the Shannon entropy is only defined within a system that identifies at the outset the collections of possible messages, analogous to microstates, that will be considered indistinguishable macrostates. This fundamental insight is applied here for the first time to amino acid alphabets, which group the twenty common amino acids into families based on chemical and physical similarities. To evaluate these schemas objectively, a novel quantitative method is introduced based on the inherent redundancy in the canonical genetic code. Each alphabet is taken as a separate system that partitions the 64 possible RNA codons, the microstates, into families, the macrostates. By calculating the normalized mutual information, which measures the reduction in Shannon entropy conveyed by single-nucleotide messages, groupings that best leverage this aspect of fault tolerance in the code are identified. The relative importance of properties related to protein folding, like hydropathy and size, and to function, including side-chain acidity, can also be estimated. This approach allows the quantification of the average information value of nucleotide positions, which can shed light on the coevolution of the canonical genetic code with the tRNA-protein translation mechanism. Copyright © 2016 Elsevier Ltd. All rights reserved.
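
    The normalized mutual information at the heart of the analysis can be sketched generically; the base/family assignments below are toy stand-ins, not the canonical code table.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())

def normalized_mi(xs, ys):
    """Reduction in Shannon entropy of `ys` conveyed by `xs`, as a
    fraction of H(ys) -- the kind of quantity used to score how much a
    single-nucleotide "message" says about an amino-acid family."""
    mi = entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
    return mi / entropy(ys)

# Hypothetical mapping: first codon base -> amino-acid family.
first_base = ["U", "U", "C", "C", "A", "A", "G", "G"]
family = ["hydrophobic", "hydrophobic", "polar", "polar",
          "polar", "charged", "charged", "hydrophobic"]
print(round(normalized_mi(first_base, family), 3))
```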

  5. Quantifying uncertainty due to fission-fusion dynamics as a component of social complexity.

    PubMed

    Ramos-Fernandez, Gabriel; King, Andrew J; Beehner, Jacinta C; Bergman, Thore J; Crofoot, Margaret C; Di Fiore, Anthony; Lehmann, Julia; Schaffner, Colleen M; Snyder-Mackler, Noah; Zuberbühler, Klaus; Aureli, Filippo; Boyer, Denis

    2018-05-30

    Groups of animals (including humans) may show flexible grouping patterns, in which temporary aggregations or subgroups come together and split, changing composition over short temporal scales (i.e., fission and fusion). A high degree of fission-fusion dynamics may constrain the regulation of social relationships, introducing uncertainty in interactions between group members. Here we use Shannon's entropy to quantify the predictability of subgroup composition for three species known to differ in the way their subgroups come together and split over time: spider monkeys (Ateles geoffroyi), chimpanzees (Pan troglodytes) and geladas (Theropithecus gelada). We formulate a random expectation of entropy that considers subgroup size variation and sample size, against which the observed entropy in subgroup composition can be compared. Using the theory of set partitioning, we also develop a method to estimate the number of subgroups that the group is likely to be divided into, based on the composition and size of single focal subgroups. Our results indicate that Shannon's entropy and the estimated number of subgroups present at a given time provide quantitative metrics of uncertainty in the social environment (within which social relationships must be regulated) for groups with different degrees of fission-fusion dynamics. These metrics also represent an indirect quantification of the cognitive challenges posed by socially dynamic environments. Overall, our novel methodological approach provides new insight for understanding the evolution of social complexity and the mechanisms to cope with the uncertainty that results from fission-fusion dynamics. © 2017 The Author(s).

  6. A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts

    PubMed Central

    Onken, Arno; Dragoi, Valentin; Obermayer, Klaus

    2012-01-01

    Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required in order to estimate higher-order correlations and resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows one to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to the rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392

  7. Entropy Generation in Regenerative Systems

    NASA Technical Reports Server (NTRS)

    Kittel, Peter

    1995-01-01

    Heat exchange to the oscillating flows in regenerative coolers generates entropy. These flows are characterized by oscillating mass flows and oscillating temperatures. Heat is transferred between the flow and heat exchangers and regenerators. In the former case, there is a steady temperature difference between the flow and the heat exchangers. In the latter case, there is no mean temperature difference. In this paper a mathematical model of the entropy generated is developed for both cases. Estimates of the entropy generated by this process are given for oscillating flows in heat exchangers and in regenerators. The practical significance of this entropy is also discussed.

  8. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
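
    A sketch of plain diffusion entropy analysis under standard assumptions; the paper's correlation-dependent balanced estimator modifies this recipe to tame short-series bias, and the window lengths below are arbitrary.

```python
import numpy as np

def diffusion_entropy(xi, t):
    """Shannon entropy of the diffusion PDF at window length t."""
    c = np.concatenate([[0.0], np.cumsum(xi)])
    disp = c[t:] - c[:-t]                       # all windowed sums
    counts, edges = np.histogram(disp, bins="fd")
    p = counts[counts > 0] / counts.sum()
    dx = edges[1] - edges[0]
    return -np.sum(p * np.log(p)) + np.log(dx)  # differential-entropy form

def dea_exponent(xi, ts=(4, 8, 16, 32, 64)):
    """Slope of S(t) vs ln t estimates the scaling exponent delta
    (0.5 for uncorrelated noise)."""
    S = [diffusion_entropy(xi, t) for t in ts]
    return np.polyfit(np.log(ts), S, 1)[0]

rng = np.random.default_rng(2)
print(round(dea_exponent(rng.standard_normal(20000)), 3))  # ~0.5
```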

  9. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records. PMID:25549356

  10. New Fault Recognition Method for Rotary Machinery Based on Information Entropy and a Probabilistic Neural Network.

    PubMed

    Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu

    2018-01-24

    Feature recognition and fault diagnosis play an important role in equipment safety and the stable operation of rotating machinery. To cope with the complexity of the vibration signals of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from the vibration signals: singular spectrum entropy, power spectrum entropy, and approximate entropy. A feature fusion model is then constructed to classify and diagnose the fault signals. The proposed approach combines comprehensive information from different aspects and is more sensitive to the fault features. Experimental results on simulated fault signals verified the good performance of the proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher than that of methods using the three kinds of information entropy separately. The new approach proves to be an effective fault recognition method for rotating machinery.
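
    Two of the three entropy features are straightforward to sketch; the signals and parameter choices below are illustrative, and approximate entropy (the third feature) is omitted for brevity.

```python
import numpy as np

def power_spectrum_entropy(x):
    """Shannon entropy of the normalized periodogram."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def singular_spectrum_entropy(x, dim=20):
    """Entropy of the normalized singular values of the trajectory
    (Hankel) matrix built from the signal."""
    n = len(x) - dim + 1
    H = np.stack([x[i:i + dim] for i in range(n)])
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 2048)
healthy = np.sin(2 * np.pi * 50 * t)                       # toy rotor tone
faulty = healthy + 0.5 * np.random.default_rng(3).standard_normal(t.size)
for sig in (healthy, faulty):
    print(round(power_spectrum_entropy(sig), 2),
          round(singular_spectrum_entropy(sig), 2))
```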

  11. Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2001-01-01

    A method of minimizing numerical errors and improving the nonlinear stability and accuracy of low Mach number computational aeroacoustics (CAA) is proposed. The method operates on two levels. At the governing-equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity between acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical-scheme level, a stable sixth-order central interior scheme with third-order boundary schemes is employed, satisfying the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (the summation-by-parts property).

  12. Decoherence estimation in quantum theory and beyond

    NASA Astrophysics Data System (ADS)

    Pfister, Corsin

    The quantum physics literature provides many different characterizations of decoherence. Most of them have in common that they describe decoherence as a kind of influence on a quantum system upon interacting with another system. In the spirit of quantum information theory, we adopt a particular viewpoint on decoherence which describes it as the loss of information into a system that is possibly controlled by an adversary. We use a quantitative framework for decoherence that builds on operational characterizations of the min-entropy that have been developed in the quantum information literature. It characterizes decoherence as an influence on quantum channels that reduces their suitability for a variety of quantifiable tasks such as the distribution of secret cryptographic keys of a certain length or the distribution of a certain number of maximally entangled qubit pairs. This allows for a quantitative and operational characterization of decoherence via operational characterizations of the min-entropy. In this thesis, we present a series of results about the estimation of the min-entropy, subdivided into three parts. The first part concerns the estimation of a quantum adversary's uncertainty about classical information, expressed by the smooth min-entropy, as it is done in protocols for quantum key distribution (QKD). We analyze this form of min-entropy estimation in detail and find that some of the more recently suggested QKD protocols have previously unnoticed security loopholes. We show that the specifics of the sifting subroutine of a QKD protocol are crucial for security by pointing out mistakes in the security analysis in the literature and by presenting eavesdropping attacks on those problematic protocols. We provide solutions to the identified problems and present a formalized analysis of the min-entropy estimate that incorporates the sifting stage of QKD protocols. In the second part, we extend ideas from QKD to a protocol that allows one to estimate an adversary's uncertainty about quantum information, expressed by the fully quantum smooth min-entropy. Roughly speaking, we show that a protocol that resembles the parallel execution of two QKD protocols can be used to lower-bound the min-entropy of some unmeasured qubits. We explain how this result may influence the ongoing search for protocols for entanglement distribution. The third part is dedicated to the development of a framework that allows the estimation of decoherence even in experiments that cannot be correctly described by quantum theory. Inspired by an equivalent formulation of the min-entropy that relates it to the fidelity with a maximally entangled state, we define a decoherence quantity for a very general class of probabilistic theories that reduces to the min-entropy in the special case of quantum theory. This entails a definition of maximal entanglement for generalized probabilistic theories. Using techniques from semidefinite and linear programming, we show how bounds on this quantity can be estimated through Bell-type experiments. This allows one to test models for decoherence that cannot be described by quantum theory. As an example application, we devise an experimental test of a model for gravitational decoherence that has been suggested in the literature.
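
    The classical min-entropy that anchors these operational characterizations is a one-liner; the sketch below shows its guessing-probability reading (plain classical H_min, not the smooth or fully quantum versions the thesis works with).

```python
import numpy as np

def min_entropy(p):
    """Classical min-entropy in bits: H_min = -log2(max_i p_i).

    Operationally, 2 ** (-H_min) is an adversary's best guessing
    probability, which is what makes H_min the right currency for
    extractable secret-key length.
    """
    return -np.log2(np.max(p))

p = np.array([0.5, 0.25, 0.125, 0.125])
print(min_entropy(p), "bits; guessing prob:", 2 ** -min_entropy(p))
```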

  13. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  14. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  15. Information entropy of humpback whale songs.

    PubMed

    Suzuki, Ryuji; Buck, John R; Tyack, Peter L

    2006-03-01

    The structure of humpback whale (Megaptera novaeangliae) songs was examined using information theory techniques. The song is an ordered sequence of individual sound elements separated by gaps of silence. Song samples were converted into sequences of discrete symbols by both human and automated classifiers. This paper analyzes the song structure in these symbol sequences using information entropy estimators and autocorrelation estimators. Both parametric and nonparametric entropy estimators are applied to the symbol sequences representing the songs. The results provide quantitative evidence consistent with the hierarchical structure proposed for these songs by Payne and McVay [Science 173, 587-597 (1971)]. Specifically, this analysis demonstrates that: (1) There is a strong structural constraint, or syntax, in the generation of the songs, and (2) the structural constraints exhibit periodicities with periods of 6-8 and 180-400 units. This implies that no empirical Markov model is capable of representing the songs' structure. The results are robust to the choice of either human or automated song-to-symbol classifiers. In addition, the entropy estimates indicate that the maximum amount of information that could be communicated by the sequence of sounds made is less than 1 bit per second.
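
    A crude plug-in version of the block-entropy analysis: conditional entropies h_k = H_k - H_{k-1} from overlapping k-blocks. Real song analyses need bias-corrected parametric and nonparametric estimators; the symbol stream below is a toy i.i.d. sequence, so its h_k stay roughly flat.

```python
import numpy as np
from collections import Counter

def block_entropy(sym, k):
    """Plug-in Shannon entropy (bits) of overlapping k-blocks."""
    blocks = [tuple(sym[i:i + k]) for i in range(len(sym) - k + 1)]
    n = len(blocks)
    return -sum(c / n * np.log2(c / n) for c in Counter(blocks).values())

def entropy_rate_estimates(sym, kmax=4):
    """h_k = H_k - H_{k-1}; a decrease with k signals syntax."""
    H = [0.0] + [block_entropy(sym, k) for k in range(1, kmax + 1)]
    return [H[k] - H[k - 1] for k in range(1, kmax + 1)]

rng = np.random.default_rng(4)
song = list(rng.choice(["A", "B", "C"], size=5000, p=[0.4, 0.4, 0.2]))
print([round(h, 2) for h in entropy_rate_estimates(song)])
```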

  16. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor models is introduced, which offers highly accurate C0 estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is the application of Taylor models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincaré maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor model enclosures of local invariant manifolds with C0 errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering, we are able to construct a subshift of finite type as a topological factor of the original planar system and thus obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors are the largest such estimates published so far.

  17. Fusion information entropy method of rolling bearing fault diagnosis based on n-dimensional characteristic parameter distance

    NASA Astrophysics Data System (ADS)

    Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling

    2017-05-01

    To monitor the operating status of rolling bearings with casings in real time, efficiently and accurately, a fusion method based on the n-dimensional characteristic parameter distance (n-DCPD) is proposed for rolling bearing fault diagnosis using two types of signals: vibration and acoustic emission. The n-DCPD was developed from four information entropies (singular spectrum entropy in the time domain, power spectrum entropy in the frequency domain, and wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in the time-frequency domain), and the basic idea of the fusion information entropy fault diagnosis method with n-DCPD is presented. On a rotor simulation test rig, vibration and acoustic emission signals were collected for six rolling bearing conditions (ball fault, inner race fault, outer race fault, inner-ball faults, inner-outer faults, and normal) under different operating conditions, with rotation speeds from 800 rpm to 2000 rpm. Using the proposed fusion information entropy method with n-DCPD, the diagnosis of rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method achieves high precision in recognizing rolling bearing faults. This study provides a novel and useful methodology for the fault diagnosis of aeroengine rolling bearings.
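
    The decision rule reduces to a nearest-prototype comparison in the space of entropy features; the numbers below are invented for illustration, with each axis standing for one of the four fused information entropies.

```python
import numpy as np

def diagnose(features, prototypes):
    """Assign a fault class by the smallest n-dimensional Euclidean
    distance between a measured entropy vector and per-class prototype
    vectors -- the spirit of the n-DCPD rule."""
    d = {k: np.linalg.norm(np.asarray(features) - np.asarray(v))
         for k, v in prototypes.items()}
    return min(d, key=d.get), d

prototypes = {                       # hypothetical class centers
    "normal":     [0.92, 0.85, 0.88, 0.90],
    "inner race": [0.55, 0.62, 0.50, 0.58],
    "outer race": [0.40, 0.48, 0.45, 0.42],
}
label, dists = diagnose([0.57, 0.60, 0.52, 0.55], prototypes)
print(label)   # -> "inner race"
```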

  18. Use of information entropy measures of sitting postural sway to quantify developmental delay in infants

    PubMed Central

    Deffeyes, Joan E; Harbourne, Regina T; DeJong, Stacey L; Kyvelidou, Anastasia; Stuberg, Wayne A; Stergiou, Nicholas

    2009-01-01

    Background: By quantifying the information entropy of postural sway data, the complexity of the postural movement of different populations can be assessed, giving insight into pathologic motor control functioning. Methods: In this study, developmental delay of motor control function in infants was assessed by analysis of sitting postural sway data acquired from force plate center of pressure measurements. Two types of entropy measures were used: symbolic entropy, including a new asymmetric symbolic entropy measure, and approximate entropy, a more widely used entropy measure. For each method of analysis, parameters were adjusted to optimize the separation of the results from the infants with delayed development from infants with typical development. Results: The method that gave the widest separation between the populations was the asymmetric symbolic entropy method, which we developed by modification of the symbolic entropy algorithm. The approximate entropy algorithm also performed well, using parameters optimized for the infant sitting data. The infants with delayed development were found to have less complex patterns of postural sway in the medial-lateral direction, and were found to have different left-right symmetry in their postural sway, as compared to typically developing infants. Conclusion: The results of this study indicate that optimization of the entropy algorithm for infant sitting postural sway data can greatly improve the ability to separate the infants with developmental delay from typically developing infants. PMID:19671183
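
    A minimal symbolic-entropy sketch for a sway series (median-threshold symbolization, overlapping words); the paper's asymmetric variant and its parameter optimization are not reproduced, and the word length is an arbitrary choice.

```python
import numpy as np
from collections import Counter

def symbolic_entropy(x, word_len=3):
    """Entropy (bits) of threshold-symbolized words in a sway series.

    Samples above the median map to 1, below to 0; overlapping words of
    `word_len` symbols are counted and their Shannon entropy returned.
    Lower values mean more repetitive (less complex) sway.
    """
    s = (np.asarray(x) > np.median(x)).astype(int)
    words = [tuple(s[i:i + word_len]) for i in range(len(s) - word_len + 1)]
    n = len(words)
    return -sum(c / n * np.log2(c / n) for c in Counter(words).values())

rng = np.random.default_rng(5)
noisy_sway = rng.standard_normal(3000)
regular_sway = np.sin(np.linspace(0, 60 * np.pi, 3000))
print(round(symbolic_entropy(noisy_sway), 2),    # near word_len bits
      round(symbolic_entropy(regular_sway), 2))  # much lower
```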

  19. High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.

    2013-01-01

    Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.

  20. Grid mapping: a novel method of signal quality evaluation on a single lead electrocardiogram.

    PubMed

    Li, Yanjun; Tang, Xiaoying

    2017-12-01

    Diagnosis of long-term electrocardiogram (ECG) calls for automatic and accurate methods of ECG signal quality estimation, not only to lighten the burden of the doctors but also to avoid misdiagnoses. In this paper, a novel waveform-based method of phase-space reconstruction for signal quality estimation on a single lead ECG was proposed, projecting the amplitude of the ECG and its first-order difference into grid cells. The waveform of a single lead ECG was divided into non-overlapping episodes (Ts = 10, 20, 30 s), and the number of grid cells in both the width and the height of each map lies in the range [20, 100] (NX = NY = 20, 30, 40, ..., 90, 100). The blank pane ratio (BPR) and the entropy were calculated from the distribution of ECG sampling points projected into the grid cells. The signal quality indices (SQI) bSQI and eSQI were calculated from the BPR and the entropy, respectively. The MIT-BIH Noise Stress Test Database was used to test the performance of bSQI and eSQI on ECG signal quality estimation. The signal-to-noise ratio (SNR) during the noisy segments of the ECG records in the database is 24, 18, 12, 6, 0 and -6 dB, respectively. For the SQI quantitative analysis, the records were divided into three groups: good quality (24, 18 dB), moderate quality (12, 6 dB) and bad quality (0, -6 dB). The classification among the good, moderate and bad quality groups was made by a linear support-vector machine with the combination of the BPR, the entropy, the bSQI and the eSQI. The classification accuracy was 82.4% and Cohen's kappa coefficient was 0.74 with NX = 40 and Ts = 20 s. In conclusion, the novel grid mapping offers an intuitive and simple approach to achieving signal quality estimation on a single lead ECG.
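
    A sketch of the grid mapping, assuming equal cell counts on both axes; `grid_sqi`, the toy "ECG", and the min-max cell normalization are illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def grid_sqi(ecg, n_cells=40):
    """Blank-pane ratio (BPR) and cell-occupancy entropy of an episode.

    Project (amplitude, first difference) pairs onto an n_cells x n_cells
    grid: clean, quasi-periodic ECG concentrates in few cells (high BPR,
    low entropy), while noise fills the map.
    """
    x = np.asarray(ecg, dtype=float)
    y = np.diff(x)
    x = x[:-1]
    ix = np.clip(((x - x.min()) / np.ptp(x) * n_cells).astype(int), 0, n_cells - 1)
    iy = np.clip(((y - y.min()) / np.ptp(y) * n_cells).astype(int), 0, n_cells - 1)
    counts = np.zeros((n_cells, n_cells))
    np.add.at(counts, (ix, iy), 1)
    bpr = np.mean(counts == 0)
    p = counts[counts > 0] / counts.sum()
    return bpr, -np.sum(p * np.log2(p))

t = np.linspace(0, 10, 5000)
clean = np.sin(2 * np.pi * 1.2 * t) ** 63         # crude spiky "ECG"
noisy = clean + 0.5 * np.random.default_rng(6).standard_normal(t.size)
print([round(v, 3) for v in grid_sqi(clean)])
print([round(v, 3) for v in grid_sqi(noisy)])
```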

  1. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.

  2. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined by integration over the other parameters. The approach is an alternative route to the posterior probability distribution that avoids an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distributions for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America

  3. Thermodynamics of the Si-O-H System

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Opila, Elizabeth J.; Myers, Dwight; Copland, Evan

    2004-01-01

    Thermodynamic functions for Si(OH)4(g) and SiO(OH)2(g) have been measured using the transpiration method. A second-law enthalpy of formation and entropy and a third-law enthalpy of formation have been calculated for Si(OH)4. The results are in very good agreement with previous experimental measurements, ab-initio calculations, and estimates.

  4. ENTROPY VS. ENERGY WAVEFORM PROCESSING: A COMPARISON ON THE HEAT EQUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Michael S.; McCarthy, John; Bruillard, Paul J.

    2015-05-25

    Virtually all modern imaging devices function by collecting either electromagnetic or acoustic backscattered waves and using the energy carried by these waves to determine pixel values that build up what is basically an "energy" picture. However, waves also carry "information" that may likewise be used to compute the pixel values in an image. We have employed several measures of information, all of which are based on different forms of entropy. Numerous published studies have demonstrated the advantages of entropy, or "information imaging", over conventional methods for materials characterization and medical imaging. Similar results also have been obtained with microwaves. The most sensitive information measure appears to be the joint entropy of the backscattered wave and a reference signal. A typical study is comprised of repeated acquisition of backscattered waves from a specimen that is changing slowly with acquisition time or location. The sensitivity of repeated experimental observations of such a slowly changing quantity may be defined as the mean variation (i.e., observed change) divided by the mean variance (i.e., observed noise). We compute the sensitivity for joint entropy and signal energy measurements assuming that the noise is Gaussian and using Wiener integration to compute the required mean values and variances. These can be written as solutions to the heat equation, which permits estimation of their magnitudes. There always exists a reference such that joint entropy has larger variation and smaller variance than the corresponding quantities for signal energy, matching the observations of several studies. Moreover, a general prescription for finding an "optimal" reference for the joint entropy emerges, which also has been validated in several studies.
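
    The joint-entropy measure can be sketched with a 2-D histogram estimate; the reference below (a delayed, noisy copy of the wave) is an arbitrary choice, whereas the paper derives an optimal reference.

```python
import numpy as np

def joint_entropy(wave, ref, bins=32):
    """Joint Shannon entropy (bits) of a backscattered waveform and a
    reference signal, estimated from their 2-D amplitude histogram."""
    counts, _, _ = np.histogram2d(wave, ref, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 4096)
wave = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)
ref = np.roll(wave, 5) + 0.05 * rng.standard_normal(t.size)
print(round(joint_entropy(wave, ref), 2))
```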

  5. On entropy change measurements around first order phase transitions in caloric materials.

    PubMed

    Caron, Luana; Ba Doan, Nguyen; Ranno, Laurent

    2017-02-22

    In this work we discuss the measurement protocols for indirect determination of the isothermal entropy change associated with first order phase transitions in caloric materials. The magneto-structural phase transitions giving rise to giant magnetocaloric effects in Cu-doped MnAs and FeRh are used as case studies to exemplify how badly designed protocols may affect isothermal measurements and lead to incorrect entropy change estimations. Isothermal measurement protocols which allow correct assessment of the entropy change around first order phase transitions in both direct and inverse cases are presented.

  6. Information analysis of hyperspectral images from the hyperion satellite

    NASA Astrophysics Data System (ADS)

    Puzachenko, Yu. G.; Sandlersky, R. B.; Krenke, A. N.; Puzachenko, M. Yu.

    2017-07-01

    A new method of estimating the outgoing radiation spectra obtained from the Hyperion EO-1 satellite is considered. In theoretical terms, this method is based on the nonequilibrium thermodynamics concept, with corresponding estimates of the entropy and the Kullback information. The obtained information estimates make it possible to assess the effective work of the landscape cover, both in general and for its various types, and to identify the spectrum ranges primarily responsible for the information increment and, accordingly, for the effective work. The information is measured in the frequency band intervals corresponding to the peaks of solar radiation absorption by different pigments, mesophyll, and water, in order to evaluate the system's operation in terms of synthesis and moisture accumulation. This method is assumed to be effective in the investigation of ecosystem functioning by hyperspectral remote sensing.

  7. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  8. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

    NASA Astrophysics Data System (ADS)

    Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

    2017-01-01

    Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.

  9. Magnetocaloric effect in potassium doped lanthanum manganite perovskites prepared by a pyrophoric method

    NASA Astrophysics Data System (ADS)

    Das, Soma; Dey, T. K.

    2006-08-01

    The magnetocaloric effect (MCE) in fine grained perovskite manganites of the type La1-xKxMnO3 (0

  10. On determining absolute entropy without quantum theory or the third law of thermodynamics

    NASA Astrophysics Data System (ADS)

    Steane, Andrew M.

    2016-04-01

    We employ classical thermodynamics to gain information about absolute entropy, without recourse to statistical methods, quantum mechanics or the third law of thermodynamics. The Gibbs-Duhem equation yields various simple methods to determine the absolute entropy of a fluid. We also study the entropy of an ideal gas and the ionization of a plasma in thermal equilibrium. A single measurement of the degree of ionization can be used to determine an unknown constant in the entropy equation, and thus determine the absolute entropy of a gas. It follows from all these examples that the value of entropy at absolute zero temperature does not need to be assigned by postulate, but can be deduced empirically.

  11. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulsive component of the signal. The filter coefficients, which strongly influence the performance of minimum entropy deconvolution, are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When the filter is optimized to enhance the impulses that indicate faulty rolling element bearings, the proposed method outperforms the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and the experimental studies, it delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially at low signal-to-noise ratios.
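
    A sketch of the kurtosis-maximizing idea behind MED, using a general-purpose optimizer rather than either the classical iterative coefficient update or the paper's particle swarm with a spherical-coordinate transform; tap count, signal model, and optimizer settings are all arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

def med_filter(x, taps=20):
    """FIR filter that (approximately) maximizes output kurtosis,
    which is what makes deconvolved bearing impacts stand out."""
    def neg_kurtosis(f):
        y = np.convolve(x, f, mode="valid")
        return -kurtosis(y)

    f0 = np.zeros(taps)
    f0[taps // 2] = 1.0                   # start from a pass-through
    res = minimize(neg_kurtosis, f0, method="Powell",
                   options={"maxiter": 2000})
    return res.x

rng = np.random.default_rng(8)
impulses = np.zeros(2000)
impulses[::125] = 1.0                     # periodic bearing-like impacts
x = np.convolve(impulses, np.exp(-np.arange(60) / 20.0), mode="same")
x += 0.2 * rng.standard_normal(x.size)    # smeared, noisy measurement
y = np.convolve(x, med_filter(x), mode="valid")
print("kurtosis before/after:", round(kurtosis(x), 1), round(kurtosis(y), 1))
```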

  12. Nonlinear aerodynamic effects on bodies in supersonic flow

    NASA Technical Reports Server (NTRS)

    Pittman, J. L.; Siclari, M. J.

    1984-01-01

    The supersonic flow about generic bodies was analyzed to identify the elements of the nonlinear flow and to determine the influence of geometry and flow conditions on the magnitude of these nonlinearities. The nonlinear effects were attributed to separated-flow nonlinearities and attached-flow nonlinearities. The nonlinear attached-flow contribution was further broken down into large-disturbance effects and entropy effects. Conical, attached-flow boundaries were developed to illustrate the flow regimes where the nonlinear effects are significant, and the use of these boundaries for angle of attack and three-dimensional geometries was indicated. Normal-force and pressure comparisons showed that the large-disturbance and separated-flow effects were the dominant nonlinear effects at low supersonic Mach numbers and that the entropy effects were dominant at high supersonic Mach numbers. The magnitude of all the nonlinear effects increased with increasing angle of attack. A full-potential method, NCOREL, which includes an approximate entropy correction, was shown to provide accurate attached-flow pressure estimates from Mach 1.6 through 4.6.

  13. A maximum entropy thermodynamics of small systems.

    PubMed

    Dixit, Purushottam D

    2013-05-14

    We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.

  14. Characterizing Protease Specificity: How Many Substrates Do We Need?

    PubMed Central

    Schauperl, Michael; Fuchs, Julian E.; Waldner, Birgit J.; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.

    2015-01-01

    Calculation of cleavage entropies allows one to quantify, map and compare protease substrate specificity by an information-entropy-based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). Thus a statistical analysis of its numerical stability is crucial to estimate the systematic error made by estimating specificity based on a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, a linear dependence of the metric with respect to 1/n was found. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors. Analyzing the errors, a minimum number of 30 substrates was found to be necessary to characterize substrate specificity, in terms of amino acid variability, for a protease (S4-S4') with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to be helpful in rationalizing biological functions of proteases and assisting rational drug design. PMID:26559682
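
    A sketch of the 1/n extrapolation on toy substrates. The cleavage-entropy definition used here is a plain mean positional Shannon entropy, and the subset sizes, sequences, and the promiscuity of the "protease" are invented.

```python
import numpy as np
from collections import Counter

def cleavage_entropy(substrates):
    """Mean positional Shannon entropy (bits) over aligned substrate
    positions -- higher means a less specific protease."""
    L = len(substrates[0])
    H = 0.0
    for pos in range(L):
        col = [s[pos] for s in substrates]
        n = len(col)
        H += -sum(c / n * np.log2(c / n) for c in Counter(col).values())
    return H / L

def extrapolate(substrates, sizes=(10, 15, 20, 30, 50), reps=200, seed=9):
    """Fit H(n) ~ a/n + b and return b, the infinite-sample estimate,
    exploiting the observed linear dependence on 1/n."""
    rng = np.random.default_rng(seed)
    subs = np.array(substrates)
    means = [np.mean([cleavage_entropy(rng.choice(subs, n, replace=False))
                      for _ in range(reps)]) for n in sizes]
    a, b = np.polyfit([1.0 / n for n in sizes], means, 1)
    return b

rng = np.random.default_rng(10)
subs = ["".join(rng.choice(list("ACDEFGHIKL"), 4)) for _ in range(200)]
print(round(extrapolate(subs), 2))
```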

  15. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.

    PubMed

    Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G

    2010-04-16

    We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.

  16. Nonadditive entropy maximization is inconsistent with Bayesian updating.

    PubMed

    Pressé, Steve

    2014-11-01

    The maximum entropy method-used to infer probabilistic models from data-is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  17. Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions.

    PubMed

    Muy, S; Kundu, A; Lacoste, D

    2013-09-28

    We show how to extract an estimate of the entropy production from a sufficiently long time series of stationary fluctuations of chemical reactions. This method, which is based on recent work on fluctuation theorems, is direct, non-invasive, does not require any knowledge about the underlying dynamics and is applicable even when only partial information is available. We apply it to simple stochastic models of chemical reactions involving a finite number of states, and for this case, we study how the estimate of dissipation is affected by the degree of coarse-graining present in the input data.
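
    For a discrete-state time series, the simplest such estimate is the Markov plug-in below. It shares the spirit of the paper (dissipation read off from the irreversibility of observed fluctuations) but is not the authors' estimator, and it only sees the states you can observe, so coarse-graining lowers it.

```python
import numpy as np
from collections import Counter

def entropy_production_rate(states):
    """Plug-in estimate (nats per step) from empirical transition fluxes:
    sigma = sum_{i<j} (J_ij - J_ji) * ln(J_ij / J_ji).

    Zero for detailed-balanced (equilibrium) dynamics, positive when a
    probability current flows around the state space.
    """
    n = Counter(zip(states[:-1], states[1:]))
    N = len(states) - 1
    sigma = 0.0
    for (i, j), c in n.items():
        rev = n.get((j, i), 0)
        if i < j and rev > 0:
            Jf, Jr = c / N, rev / N
            sigma += (Jf - Jr) * np.log(Jf / Jr)
    return sigma

# Driven 3-state cycle 0 -> 1 -> 2 -> 0 with occasional back-steps.
rng = np.random.default_rng(11)
s, traj = 0, []
for _ in range(200_000):
    traj.append(s)
    s = (s + (1 if rng.random() < 0.8 else -1)) % 3
print(round(entropy_production_rate(traj), 3))   # > 0: out of equilibrium
```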

  18. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this generality is that it includes the inverse DWT. 20 refs.
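
    A generic POCS (projections onto convex sets) sketch of the same reconstruction problem, known DWT coefficients plus amplitude bounds, using a self-contained orthonormal Haar transform; Cole's representation theory and minimum-norm machinery are not reproduced here.

```python
import numpy as np

def haar(x):
    """Orthonormal multilevel Haar DWT of a length-2^k signal."""
    x = np.asarray(x, dtype=float).copy()
    out, n = [], len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
        out.append(d)
        x[:n // 2] = a
        n //= 2
    return np.concatenate([x[:1]] + out[::-1])

def ihaar(c):
    """Inverse of `haar` (the transform is orthogonal)."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2)
        c[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

def pocs_reconstruct(known, mask, lo, hi, iters=200):
    """Alternate projections: keep the measured DWT coefficients where
    `mask` is True, then clip the signal to the a-priori bounds [lo, hi].
    Starting from zero keeps the estimate small-norm."""
    x = np.zeros(len(mask))
    for _ in range(iters):
        c = haar(x)
        c[mask] = known[mask]              # consistency with samples
        x = np.clip(ihaar(c), lo, hi)      # amplitude bounds
    return x

sig = np.clip(np.sin(np.linspace(0, 4 * np.pi, 64)), -0.8, 0.8)
coeffs = haar(sig)
mask = np.abs(coeffs) > 0.1                # pretend only these were kept
rec = pocs_reconstruct(coeffs, mask, -0.8, 0.8)
print("rel. error:", round(np.linalg.norm(rec - sig) / np.linalg.norm(sig), 3))
```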

  19. Comparing Postural Stability Entropy Analyses to Differentiate Fallers and Non-Fallers

    PubMed Central

    Fino, Peter C.; Mojdehi, Ahmad R.; Adjerid, Khaled; Habibi, Mohammad; Lockhart, Thurmon E.; Ross, Shane D.

    2015-01-01

    The health and financial cost of falls has spurred research to differentiate the characteristics of fallers and non-fallers. Postural stability has received much of the attention, with recent studies exploring various measures of entropy. This study compared the discriminatory ability of several entropy methods at differentiating two paradigms in the center-of-pressure (COP) of elderly individuals: (1) eyes open (EO) versus eyes closed (EC) and (2) fallers (F) versus non-fallers (NF). Methods were compared using the area under the curve (AUC) of the receiver-operating characteristic (ROC) curves developed from logistic regression models. Overall, multiscale entropy (MSE) and composite multiscale entropy (CompMSE) performed the best with AUCs of 0.71 for EO/EC and 0.77 for F/NF. When methods were combined together to maximize the AUC, the entropy classifier had an AUC of 0.91 for the F/NF comparison. These results suggest researchers and clinicians attempting to create clinical tests to identify fallers should consider a combination of every entropy method when creating a classifying test. Additionally, MSE and CompMSE classifiers using polar coordinate data outperformed rectangular coordinate data, encouraging more research into the most appropriate time series for postural stability entropy analysis. PMID:26464267

  20. Comparing Postural Stability Entropy Analyses to Differentiate Fallers and Non-fallers.

    PubMed

    Fino, Peter C; Mojdehi, Ahmad R; Adjerid, Khaled; Habibi, Mohammad; Lockhart, Thurmon E; Ross, Shane D

    2016-05-01

    The health and financial cost of falls has spurred research to differentiate the characteristics of fallers and non-fallers. Postural stability has received much of the attention, with recent studies exploring various measures of entropy. This study compared the discriminatory ability of several entropy methods at differentiating two paradigms in the center-of-pressure of elderly individuals: (1) eyes open (EO) vs. eyes closed (EC) and (2) fallers (F) vs. non-fallers (NF). Methods were compared using the area under the curve (AUC) of the receiver-operating characteristic curves developed from logistic regression models. Overall, multiscale entropy (MSE) and composite multiscale entropy (CompMSE) performed the best with AUCs of 0.71 for EO/EC and 0.77 for F/NF. When methods were combined together to maximize the AUC, the entropy classifier had an AUC of 0.91 for the F/NF comparison. These results suggest researchers and clinicians attempting to create clinical tests to identify fallers should consider a combination of all the entropy methods when creating a classifying test. Additionally, MSE and CompMSE classifiers using polar coordinate data outperformed rectangular coordinate data, encouraging more research into the most appropriate time series for postural stability entropy analysis.
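
    To make the best-performing method concrete, here is a minimal Python sketch of multiscale entropy: the series is coarse-grained at each scale and sample entropy is computed on the result. This is not the authors' implementation; the parameters (m = 2, r = 0.15 of the original SD) are common defaults, and the random-walk input is a synthetic stand-in for a COP trace.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """SampEn(m, r): -ln of the ratio of (m+1)-point to m-point template
        matches (Chebyshev distance <= r), excluding self-matches."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        def count_matches(mm):
            # N - m templates for both template lengths, per Richman & Moorman
            t = np.array([x[i:i + mm] for i in range(n - m)])
            count = 0
            for i in range(len(t) - 1):
                d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                count += np.sum(d <= r)
            return count
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, scales, m=2, r_factor=0.15):
        """MSE: sample entropy of the coarse-grained series at each scale."""
        r = r_factor * np.std(x)
        out = []
        for tau in scales:
            n = len(x) // tau
            coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
            out.append(sample_entropy(coarse, m, r))
        return out

    rng = np.random.default_rng(1)
    cop = np.cumsum(rng.normal(size=2000))   # synthetic stand-in for a COP trace
    print(multiscale_entropy(cop, scales=range(1, 6)))
    ```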

  1. Signatures of Solvation Thermodynamics in Spectra of Intermolecular Vibrations

    PubMed Central

    2017-01-01

    This study explores the thermodynamic and vibrational properties of water in the three-dimensional environment of solvated ions and small molecules using molecular simulations. The spectrum of intermolecular vibrations in liquid solvents provides detailed information on the shape of the local potential energy surface, which in turn determines local thermodynamic properties such as the entropy. Here, we extract this information using a spatially resolved extension of the two-phase thermodynamics method to estimate hydration water entropies based on the local vibrational density of states (3D-2PT). Combined with an analysis of solute–water and water–water interaction energies, this allows us to resolve local contributions to the solvation enthalpy, entropy, and free energy. We use this approach to study effects of ions on their surrounding water hydrogen bond network, its spectrum of intermolecular vibrations, and resulting thermodynamic properties. In the three-dimensional environment of polar and nonpolar functional groups of molecular solutes, we identify distinct hydration water species and classify them by their characteristic vibrational density of states and molecular entropies. In each case, we are able to assign variations in local hydration water entropies to specific changes in the spectrum of intermolecular vibrations. This provides an important link for the thermodynamic interpretation of vibrational spectra that are accessible to far-infrared absorption and Raman spectroscopy experiments. Our analysis provides unique microscopic details regarding the hydration of hydrophobic and hydrophilic functional groups, which enable us to identify interactions and molecular degrees of freedom that determine relevant contributions to the solvation entropy and consequently the free energy. PMID:28783431

  2. Theoretical Considerations of the Prigogine-Defay Ratio with Regard to the Glass-Forming Ability of Drugs from Undercooled Melts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wyttenbach, Nicole; Kirchmeyer, Wiebke; Alsenz, Jochem

    Drug behavior in undercooled melts is highly important for pharmaceutics with regard to amorphous solid dispersions, and therefore, categories were recently introduced that differentiate glass formers (GFs) from other drugs that are nonglass formers (nGFs). The present study is based on the assumption that molecular properties relevant for the so-called Prigogine–Defay (PD) ratio would be indicative of a drug’s glass-forming ability. The PD ratio depends in theory on the entropy of fusion and molar volume. Experimental data were gathered from a broad set of pharmaceutical compounds (n = 54) using differential scanning calorimetry. The obtained entropy of fusion and molar volume were indeed found to significantly discriminate GFs from nGFs. In a next step, the entropy of fusion was predicted by different in silico methods. A first group contribution method provided rather unreliable estimates for the entropy of fusion, while an alternative in silico approach seemed more promising for drug categorization. Thus, a significant discrimination model employed molar volume, a so-called effective hydrogen bond number, and effective number of torsional bonds (or torsional units) to categorize GFs and nGFs (p ≤ 0.0000). The results led to new insights into drug vitrification and to practical rules of thumb. The latter may serve as guidance in pharmaceutical profiling and early formulation development with respect to amorphous drug formulations.

  3. Semi-empirical estimation of organic compound fugacity ratios at environmentally relevant system temperatures.

    PubMed

    van Noort, Paul C M

    2009-06-01

    Fugacity ratios of organic compounds are used to calculate (subcooled) liquid properties, such as solubility or vapour pressure, from solid properties and vice versa. They can be calculated from the entropy of fusion, the melting temperature, and heat capacity data for the solid and the liquid. For many organic compounds, values for the fusion entropy are lacking. Heat capacity data are even scarcer. In the present study, semi-empirical compound class specific equations were derived to estimate fugacity ratios from molecular weight and melting temperature for polycyclic aromatic hydrocarbons and polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans. These equations estimate fugacity ratios with an average standard error of about 0.05 log units. In addition, for compounds with known fusion entropy values, a general semi-empirical correction equation based on molecular weight and melting temperature was derived for estimation of the contribution of heat capacity differences to the fugacity ratio. This equation estimates the heat capacity contribution correction factor with an average standard error of 0.02 log units for polycyclic aromatic hydrocarbons, polychlorinated benzenes, biphenyls, dibenzo[p]dioxins and dibenzofurans.
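
    For reference, the standard approximation behind such estimates, when the heat-capacity correction that the paper goes on to model is neglected, is ln F = -(ΔS_fus/R)(T_m/T - 1). A small sketch with an illustrative, hypothetical entropy of fusion and melting point:

    ```python
    import numpy as np

    R = 8.314  # gas constant, J / (mol K)

    def fugacity_ratio(ds_fus, t_m, t=298.15):
        """Solid/liquid fugacity ratio, neglecting heat-capacity terms:
        ln F = -(dS_fus / R) * (T_m / T - 1)."""
        return np.exp(-(ds_fus / R) * (t_m / t - 1.0))

    # Hypothetical compound: dS_fus = 56 J/(mol K) (roughly Walden's rule),
    # melting temperature T_m = 420 K, system temperature 298.15 K.
    F = fugacity_ratio(56.0, 420.0)
    print(F, np.log10(F))  # converts solid solubility to the subcooled liquid
    ```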

  4. [Evaluation of a simplified index (spectral entropy) of sleep state based on electrocardiograms recorded by a simplified polygraph, MemCalc-Makin2].

    PubMed

    Ohisa, Noriko; Ogawa, Hiromasa; Murayama, Nobuki; Yoshida, Katsumi

    2010-02-01

    Polysomnography (PSG) is the gold standard for the diagnosis of sleep apnea hypopnea syndrome (SAHS), but PSG analysis is time-consuming and PSG cannot be performed repeatedly because of the effort and cost involved. Therefore, simplified sleep-respiratory-disorder indices that reflect the PSG results are needed. The MemCalc method, which is a combination of the maximum entropy method for spectral analysis and the non-linear least squares method for fitting analysis (Makin2, Suwa Trust, Tokyo, Japan), has recently been developed. Spectral entropy derived by the MemCalc method may be useful for expressing the trend of time-series behavior. Spectral entropy of the ECG calculated with the MemCalc method was evaluated by comparison with PSG results. ECGs of obstructive SAHS patients (n = 79) and control volunteers (n = 7) were recorded using MemCalc-Makin2 (GMS), with simultaneous PSG recording using Alice IV (Respironics), from 20:00 to 6:00. Spectral entropy of the ECG, calculated every 2 seconds using the MemCalc method, was compared with sleep stages analyzed manually from the PSG recordings. Spectral entropy values were significantly higher in the OSAHS group than in the controls (-0.473 vs. -0.418, p < 0.05). For an entropy cutoff level of -0.423, sensitivity and specificity for OSAHS were 86.1% and 71.4%, respectively, resulting in a receiver operating characteristic curve with an area under the curve of 0.837. The absolute value of entropy was inversely correlated with stage 3 sleep. Spectral entropy calculated with the MemCalc method may be a possible index for evaluating sleep quality.

  5. Joint Estimation of Effective Brain Wave Activation Modes Using EEG/MEG Sensor Arrays and Multimodal MRI Volumes.

    PubMed

    Galinsky, Vitaly L; Martinez, Antigona; Paulus, Martin P; Frank, Lawrence R

    2018-04-13

    In this letter, we present a new method for integration of sensor-based multifrequency bands of electroencephalography and magnetoencephalography data sets into a voxel-based structural-temporal magnetic resonance imaging analysis by utilizing the general joint estimation using entropy regularization (JESTER) framework. This allows enhancement of the spatial-temporal localization of brain function and the ability to relate it to morphological features and structural connectivity. This method has broad implications for both basic neuroscience research and clinical neuroscience focused on identifying disease-relevant biomarkers by enhancing the spatial-temporal resolution of the estimates derived from current neuroimaging modalities, thereby providing a better picture of the normal human brain in basic neuroimaging experiments and variations associated with disease states.

  6. Entropy method of measuring and evaluating periodicity of quasi-periodic trajectories

    NASA Astrophysics Data System (ADS)

    Ni, Yanshuo; Turitsyn, Konstantin; Baoyin, Hexi; Junfeng, Li

    2018-06-01

    This paper presents a method for measuring the periodicity of quasi-periodic trajectories by applying the discrete Fourier transform (DFT) to the trajectories and analyzing the frequency domain within the concept of entropy. Having introduced the concept of entropy, analytical derivation and numerical results indicate that entropies increase as a logarithmic function of time. Periodic trajectories typically have higher entropies, and a higher entropy indicates stronger periodicity of the motion. Theoretical differences between two trajectories expressed as summations of trigonometric functions are also derived analytically. Trajectories in the Henon-Heiles system and the circular restricted three-body problem (CRTBP) are analyzed with the entropy indicator and compared with the orthogonal fast Lyapunov indicator (OFLI). The results show that entropy is a better tool for discriminating periodicity in quasi-periodic trajectories than OFLI and can detect periodicity while excluding the spirals that are judged as periodic cases by OFLI. Finally, trajectories in the vicinity of 243 Ida and 6489 Golevka are considered as examples, and the numerical results verify these conclusions. Some trajectories near asteroids look irregular, but their higher entropy values, as computed by this method, serve as evidence of frequency regularity in three directions. Moreover, these results indicate that applying the DFT to trajectories in the vicinity of irregular small bodies and calculating their entropy in the frequency domain provides a useful quantitative method for evaluating the orderliness of the periodicity of quasi-periodic trajectories within a given time interval.
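
    For orientation, the common frequency-domain construction applies Shannon entropy to the normalized DFT power spectrum. Note that under this standard convention a strongly periodic signal has a concentrated spectrum and hence a low entropy, whereas the paper's indicator is evidently normalized so that stronger periodicity yields higher values. A minimal sketch of the standard construction, applied per coordinate, with synthetic signals:

    ```python
    import numpy as np

    def spectral_entropy(x):
        """Shannon entropy of the normalized DFT power spectrum of a signal.
        Concentrated (strongly periodic) spectra give low values; broadband
        signals give high values."""
        p = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
        p = p / p.sum()
        p = p[p > 0]                      # drop empty bins before taking logs
        return -np.sum(p * np.log(p))

    t = np.linspace(0.0, 100.0, 4096)
    quasi_periodic = np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * np.sqrt(2) * t)
    noise = np.random.default_rng(2).normal(size=t.size)
    print(spectral_entropy(quasi_periodic), spectral_entropy(noise))  # low vs high
    ```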

  7. Entropy Stable Spectral Collocation Schemes for the Navier-Stokes Equations: Discontinuous Interfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.

    2013-01-01

    Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.

  8. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
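
    DeConv_Tool itself is written in IDL, so the following Python translation is only illustrative of one of the bundled algorithms: the Richardson-Lucy iteration, which is the maximum-likelihood deconvolution under Poisson noise. The 1-D test blur is synthetic.

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        """Richardson-Lucy iteration:
        estimate <- estimate * (flipped_psf (x) (observed / (psf (x) estimate)))
        where (x) denotes convolution."""
        psf = psf / psf.sum()
        psf_flip = psf[::-1]
        estimate = np.full_like(observed, observed.mean())
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)  # guard divide-by-zero
            estimate *= np.convolve(ratio, psf_flip, mode="same")
        return estimate

    # Synthetic test: two spikes blurred by a Gaussian point-spread function.
    x = np.zeros(200)
    x[60], x[140] = 1.0, 0.6
    g = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
    obs = np.convolve(x, g / g.sum(), mode="same")
    print(np.argmax(richardson_lucy(obs, g)))  # peak recovered near index 60
    ```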

  9. Parameters of oscillation generation regions in open star cluster models

    NASA Astrophysics Data System (ADS)

    Danilov, V. M.; Putkov, S. I.

    2017-07-01

    We determine the masses and radii of central regions of open star cluster (OCL) models with small or zero entropy production and estimate the masses of oscillation generation regions in cluster models based on the data of the phase-space coordinates of stars. The radii of such regions are close to the core radii of the OCL models. We develop a new method for estimating the total OCL masses based on the cluster core mass, the cluster and cluster core radii, and the radial distribution of stars. This method yields estimates of the dynamical masses of the Pleiades, Praesepe, and M67, which agree well with estimates of the total masses of the corresponding clusters based on proper motions and spectroscopic data for cluster stars. We construct the spectra and dispersion curves of the oscillations of the field of azimuthal velocities v_φ in OCL models. Weak, low-amplitude unstable oscillations of v_φ develop in cluster models near the cluster core boundary, and weak damped oscillations of v_φ often develop at frequencies close to those of more powerful oscillations, which may reduce the degree of non-stationarity in OCL models. We determine the number and parameters of such oscillations near the core boundaries of cluster models. Such oscillations point to the possible role that gradient instability near the core of cluster models plays in the decrease of the mass of the oscillation generation regions and the production of entropy in the cores of OCL models with massive extended cores.

  10. LIBOR troubles: Anomalous movements detection based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR's integrity, leading surveillance authorities to investigate banks' behavior. These procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  11. Fault detection of the connection of lithium-ion power batteries based on entropy for electric vehicles

    NASA Astrophysics Data System (ADS)

    Yao, Lei; Wang, Zhenpo; Ma, Jun

    2015-10-01

    This paper proposes an entropy-based method for detecting connection faults in lithium-ion power batteries for electric vehicles. During electric vehicle operation, factors such as road conditions, driving habits and vehicle performance subject the batteries to vibration, which can easily cause loose or intermittent connections between cells. Voltage-fluctuation data were obtained from battery charging and discharging experiments in a simulated vibration environment. A discrete-cosine filtering method is adopted to characterize the system noise in the voltage records acquired at different vibration frequencies. The filtered data are then analyzed using local Shannon entropy, ensemble Shannon entropy and sample entropy, and the most suitable entropy-based indicator for detecting connection faults is identified. The experiments show that ensemble Shannon entropy can pinpoint the time and location of a battery connection failure in real time. Beyond the electric-vehicle industry, the method is also applicable to other areas with complex vibration environments.

  12. Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system.

    PubMed

    Min, Jianliang; Wang, Ping; Hu, Jianfeng

    2017-01-01

    Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, i.e., spectral entropy, approximate entropy, sample entropy and fuzzy entropy, as features compared with autoregressive (AR) modeling by four classifiers. Second, we captured four significant channel regions according to weight-based electrodes via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from four channel regions. Twelve healthy subjects performed continuous simulated driving for 1-2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3% and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver.
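
    Of the four fused features, approximate entropy is perhaps the simplest to sketch. A minimal Python version follows; the parameters (m = 2, r = 0.2 of the SD) are conventional defaults rather than the study's settings, and the test signals are synthetic.

    ```python
    import numpy as np

    def approximate_entropy(x, m=2, r_factor=0.2):
        """ApEn(m, r) per Pincus: phi(m) - phi(m+1), where phi(mm) is the mean
        log-fraction of templates within Chebyshev distance r (self-matches
        included)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        def phi(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            fracs = [np.mean(np.max(np.abs(t - t[i]), axis=1) <= r)
                     for i in range(len(t))]
            return np.mean(np.log(fracs))
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(3)
    regular = np.sin(np.linspace(0, 40 * np.pi, 1000))   # synthetic test signals
    irregular = rng.normal(size=1000)
    print(approximate_entropy(regular), approximate_entropy(irregular))  # low vs high
    ```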

  13. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural-network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261

  14. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural-network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.
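
    The classical Gaussianization baseline that MaxEnt generalizes is easy to sketch: map each observation through its empirical CDF and then through the inverse standard-normal CDF, which undoes any monotonic distortion up to an affine map whenever the latent variable is close to Gaussian. A minimal version; the cubic distortion used for testing is hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    def gaussianize(y):
        """Invert an unknown monotonic distortion up to an affine map by
        forcing the empirical distribution to standard normal (rank -> probit)."""
        ranks = np.argsort(np.argsort(y)) + 1.0
        u = ranks / (len(y) + 1.0)          # empirical CDF values in (0, 1)
        return norm.ppf(u)

    # Hypothetical Wiener-system output: a Gaussian latent signal pushed
    # through a monotonic cubic nonlinearity f(s) = s + 0.5 s^3.
    rng = np.random.default_rng(4)
    s = rng.normal(size=5000)               # latent sum of sources (Gaussian here)
    y = s + 0.5 * s**3                      # observed, distorted
    s_hat = gaussianize(y)
    print(np.corrcoef(s, s_hat)[0, 1])      # close to 1: distortion compensated
    ```

    MaxEnt's advantage over this baseline, per the abstract, is precisely that the latent sum need not be Gaussian: maximizing the output entropy replaces the fixed normal target.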

  15. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is the key component for many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that convenient bivariate distributions are often unable to model rainfall-runoff variables, because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, in two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The univariate analysis shows that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach for capturing dependence structure that cannot be captured by convenient bivariate joint distributions. [Figure: joint rainfall-runoff entropy-based PDF, with the corresponding marginal PDFs and histograms, for the W12 watershed. Table: K-S test results and RMSE for the univariate distributions derived from the maximum-entropy joint distribution.]

  16. A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter

    DTIC Science & Technology

    2015-06-27

    The first objective function is the information-theoretic part of the problem and aims for entropy maximization, while the second one arises from the constraint in the ... objective functions in conflict. ... For the sake of completeness and clarity, we also summarize how each concept is utilized later. Entropy: A random variable is statistically ...

  17. Plasma Transport and Magnetic Flux Circulation in Saturn's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Neupane, B. R.; Delamere, P. A.; Ma, X.; Wilson, R. J.

    2017-12-01

    Radial transport of plasma in rapidly rotating magnetospheres is an important dynamical process. Radial transport is due to the centrifugally driven interchange instability and magnetodisc reconnection, allowing net mass to be transported outward while conserving magnetic flux. Using Cassini Plasma Spectrometer (CAPS) data products (e.g., Thomsen et al., [2010]; Wilson et al., [2017]), we estimate plasma mass and magnetic flux transport rates as functions of radial distance and local time. The physical requirement of zero net magnetic flux transport provides a key benchmark for assessing the validity of our mass transport estimate. We also evaluate magnetodisc stability using a two-dimensional axisymmetric equilibrium model [Caudal, 1986]. Observed local properties (e.g., specific entropy and estimates of flux tube mass and entropy content) are compared with modeled equilibrium conditions such that departures from equilibrium can be correlated with radial flows and local magnetic field structure. Finally, observations of specific entropy indicate that plasma is heated non-adiabatically during transport. However, the values of specific entropy are well organized in the inner magnetosphere (i.e., L < 10) and become widely scattered in the middle magnetosphere, suggesting that the transport dynamics of the inner and middle magnetosphere are different.

  18. Nonequilibrium Thermodynamics in Biological Systems

    NASA Astrophysics Data System (ADS)

    Aoki, I.

    2005-12-01

    1. Respiration. Oxygen uptake by respiration in organisms decomposes macromolecules such as carbohydrates, proteins and lipids and liberates high-quality chemical energy, which is then used to drive chemical reactions and the motion of matter in organisms, supporting the lively order of their structure and function. Finally, this chemical energy becomes low-quality heat energy and is discarded to the outside (dissipation function). Along with this heat, the entropy production that inevitably accompanies irreversibility is also discarded to the outside. Dissipation function and entropy production are estimated from respiration data. 2. Human body. From observed respiration data (oxygen absorption), the entropy production in the human body can be estimated. Entropy production has been obtained for humans from 0 to 75 years old, and extrapolated to the fertilized egg (the beginning of human life) and to 120 years old (the maximum human life span). Entropy production shows characteristic behavior over the human life span: an early rapid increase in the short growing phase and a later slow decrease in the long aging phase. It is proposed that this tendency is ubiquitous and constitutes a Principle of Organization in complex biotic systems. 3. Ecological communities. From respiration data of eighteen aquatic communities, specific (i.e., per-biomass) entropy productions are obtained. They show a two-phase character with respect to trophic diversity: an early increase and a later decrease as trophic diversity increases. The trophic diversity in these aquatic ecosystems is shown to be positively correlated with the degree of eutrophication, and the degree of eutrophication is an "arrow of time" in the hierarchy of aquatic ecosystems. Hence, specific entropy production shows the same two phases: an early increase and a later decrease with time. 4. Entropy principle for living systems. The Second Law of Thermodynamics has been expressed as follows. (1) In isolated systems, entropy increases with time and approaches a maximum value; this is the well-known classical Clausius principle. (2) In open systems near equilibrium, entropy production always decreases with time, approaching a minimum stationary level; this is the minimum entropy production principle of Prigogine. These two principles are well established. However, living systems are neither isolated nor near equilibrium, so neither principle applies to them. What, then, is the entropy principle for living systems? Entropy production in living systems consists of multiple stages in time: early increasing, later decreasing, and/or intermediate stages. This tendency is supported by various living systems.

  19. Sample entropy analysis for estimating the depth of anaesthesia through human EEG signals at different levels of unconsciousness during surgeries.

    PubMed

    Liu, Quan; Ma, Li; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2018-01-01

    Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data are divided into four unconsciousness-level groups on the basis of BIS levels. Subsequently, analysis of variance (ANOVA) was applied to the corresponding index (i.e., regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial values of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated a significant difference from its neighboring levels. Furthermore, the Random Forest output was largely linear with respect to BIS, yielding better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
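
    The regression stage is straightforward to sketch with scikit-learn. A minimal version, assuming precomputed per-epoch SampEn features `X` and BIS targets `y`; the data below are synthetic stand-ins, not EEG-derived values.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    # Synthetic stand-ins: one SampEn feature per EEG epoch and a noisy,
    # monotonically related pseudo-BIS target (real features would come
    # from the EEG pipeline).
    X = rng.uniform(0.2, 2.2, size=(500, 1))
    y = 20 + 30 * X[:, 0] + rng.normal(scale=5, size=500)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(scores.mean())   # cross-validated R^2 of the SampEn -> BIS regression
    ```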

  20. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. PMID:27455279

  1. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments.
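
    For reference, the traditional Otsu baseline that the paper improves upon chooses the threshold maximizing the between-class variance of the gray-level histogram. A minimal NumPy sketch (the paper's improved variant is not reproduced here; the bimodal test image is synthetic):

    ```python
    import numpy as np

    def otsu_threshold(img, n_bins=256):
        """Classical Otsu: choose the gray level maximizing the between-class
        variance  w0 * w1 * (mu0 - mu1)^2  of the histogram split."""
        hist, edges = np.histogram(img.ravel(), bins=n_bins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(p)                  # class-0 weight at each split
        mu = np.cumsum(p * centers)        # cumulative first moment
        mu_t = mu[-1]
        w1 = 1.0 - w0
        valid = (w0 > 0) & (w1 > 0)
        between = np.zeros(n_bins)
        between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
        return centers[np.argmax(between)]

    # Synthetic bimodal "sonar" frame: dark background plus a bright blob.
    rng = np.random.default_rng(6)
    img = rng.normal(60, 12, size=(128, 128))
    img[40:80, 40:80] = rng.normal(170, 12, size=(40, 40))
    t = otsu_threshold(img)
    print(t, (img > t).mean())   # threshold between the modes; ~10% foreground
    ```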

  2. Gradient Dynamics and Entropy Production Maximization

    NASA Astrophysics Data System (ADS)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs-how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of dissipation potential and entropy, it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in the entropy production maximization is pinpointed and the condition of incompressibility is incorporated into gradient dynamics.

  3. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  4. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  5. The Arrow of Time in the Collapse of Collisionless Self-gravitating Systems: Non-validity of the Vlasov-Poisson Equation during Violent Relaxation

    NASA Astrophysics Data System (ADS)

    Beraldo e Silva, Leandro; de Siqueira Pedra, Walter; Sodré, Laerte; Perico, Eder L. D.; Lima, Marcos

    2017-09-01

    The collapse of a collisionless self-gravitating system, with the fast achievement of a quasi-stationary state, is driven by violent relaxation, with a typical particle interacting with the time-changing collective potential. It is traditionally assumed that this evolution is governed by the Vlasov-Poisson equation, in which case entropy must be conserved. We run N-body simulations of isolated self-gravitating systems, using three simulation codes, NBODY-6 (direct summation without softening), NBODY-2 (direct summation with softening), and GADGET-2 (tree code with softening), for different numbers of particles and initial conditions. At each snapshot, we estimate the Shannon entropy of the distribution function with three different techniques: Kernel, Nearest Neighbor, and EnBiD. For all simulation codes and estimators, the entropy evolution converges to the same limit as N increases. During violent relaxation, the entropy has a fast increase followed by damping oscillations, indicating that violent relaxation must be described by a kinetic equation other than the Vlasov-Poisson equation, even for N as large as that of astronomical structures. This indicates that violent relaxation cannot be described by a time-reversible equation, shedding some light on the so-called “fundamental paradox of stellar dynamics.” The long-term evolution is well described by the orbit-averaged Fokker-Planck model, with Coulomb logarithm values in the expected range 10-12. By means of NBODY-2, we also study the dependence of the two-body relaxation timescale on the softening length. The approach presented in the current work can potentially provide a general method for testing any kinetic equation intended to describe the macroscopic evolution of N-body systems.
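
    Of the three estimators compared, the nearest-neighbor one is the easiest to sketch: the Kozachenko-Leonenko estimator infers the local density from the distance to each sample's k-th neighbor. A minimal version for d-dimensional samples, tested on synthetic Gaussian data rather than a simulation snapshot:

    ```python
    import numpy as np
    from scipy.special import digamma, gammaln
    from scipy.spatial import cKDTree

    def knn_entropy(x, k=4):
        """Kozachenko-Leonenko differential entropy estimate (nats) for
        samples x of shape (N, d), from k-th nearest-neighbor distances."""
        n, d = x.shape
        tree = cKDTree(x)
        # k+1 because the closest "neighbor" of each point is the point itself.
        eps = tree.query(x, k=k + 1)[0][:, -1]
        log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
        return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))

    rng = np.random.default_rng(7)
    x = rng.normal(size=(20000, 3))             # synthetic phase-space samples
    print(knn_entropy(x), 1.5 * np.log(2 * np.pi * np.e))  # estimate vs exact
    ```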

  6. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    PubMed

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different than Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.

  7. Shallow water equations: viscous solutions and inviscid limit

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Perepelitsa, Mikhail

    2012-12-01

    We establish the inviscid limit of the viscous shallow water equations to the Saint-Venant system. For the viscous equations, the viscosity terms are more degenerate when the shallow water is close to the bottom than in the classical Navier-Stokes equations for barotropic gases; thus, the analysis in our earlier work for the classical Navier-Stokes equations does not apply directly, and new estimates are required to deal with the additional degeneracy. We first introduce a notion of entropy solutions to the viscous shallow water equations and develop an approach to establish the global existence of such solutions and their uniform energy-type estimates with respect to the viscosity coefficient. These uniform estimates yield the existence of measure-valued solutions to the Saint-Venant system generated by the viscous solutions. Based on the uniform energy-type estimates and the features of the Saint-Venant system, we further establish that the entropy dissipation measures of the viscous solutions for weak entropy-entropy flux pairs, generated by compactly supported C^2 test functions, are confined to a compact set in H^{-1}, which yields that the measure-valued solutions are confined by the Tartar-Murat commutator relation. Then, the reduction theorem established in Chen and Perepelitsa [5] for measure-valued solutions with unbounded support leads to the convergence of the viscous solutions to a finite-energy entropy solution of the Saint-Venant system with finite-energy initial data, taken relative to the different end-states of the bottom topography of the shallow water at infinity. The analysis also applies to the inviscid limit problem for the Saint-Venant system in the presence of friction.

  8. Influence of temperature variations on the entropy and correlation of the Grey-Level Co-occurrence Matrix from B-Mode images.

    PubMed

    Alvarenga, André V; Teixeira, César A; Ruano, Maria Graça; Pereira, Wagner C A

    2010-02-01

    In this work, the feasibility of texture parameters extracted from B-mode images for quantifying medium temperature variation was explored. The goal is to understand how parameters obtained from the grey-level content can be used to improve the current state-of-the-art methods for non-invasive temperature estimation (NITE). B-mode images were collected from a tissue-mimicking phantom heated in a water bath. The phantom is a mixture of water, glycerin, agar-agar and graphite powder, designed to have acoustical properties similar to in vivo muscle. Images of the phantom were collected using an ultrasound system with a mechanical sector transducer working at 3.5 MHz. Three temperature curves were collected, allowing variations between 27 and 44 degrees C over 60 min. Two parameters (correlation and entropy) were determined from the Grey-Level Co-occurrence Matrix (GLCM) extracted from the images and then assessed for non-invasive temperature estimation. Entropy values were capable of identifying variations of 2.0 degrees C; moreover, it was possible to quantify variations from normal human body temperature (37 degrees C) to critical values, such as 41 degrees C. In contrast, although the GLCM correlation parameter showed a correlation coefficient of 0.84 with temperature variation, its high dispersion limited the temperature assessment.
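
    A minimal NumPy sketch of the two GLCM features used, computed for a single horizontal displacement; the quantization level and offset are illustrative choices, not the study's settings, and the input frame is synthetic.

    ```python
    import numpy as np

    def glcm_features(img, levels=32, dx=1):
        """Entropy and correlation of the grey-level co-occurrence matrix for
        a horizontal pixel displacement dx (other offsets work analogously)."""
        edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
        q = np.digitize(img, edges)                   # quantize to `levels` grey levels
        a, b = q[:, :-dx].ravel(), q[:, dx:].ravel()  # co-occurring pixel pairs
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (a, b), 1)
        p = glcm / glcm.sum()
        entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
        i, j = np.indices(p.shape)
        mu_i, mu_j = (i * p).sum(), (j * p).sum()
        sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
        sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
        corr = (((i - mu_i) * (j - mu_j)) * p).sum() / (sd_i * sd_j)
        return entropy, corr

    rng = np.random.default_rng(8)
    frame = rng.normal(size=(100, 100))          # stand-in for a B-mode frame
    print(glcm_features(frame))
    ```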

  9. Generalized sample entropy analysis for traffic signals based on similarity measure

    NASA Astrophysics Data System (ADS)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, presents a different way of matching signal patterns and reveals distinct complexity behaviors. Simulations are conducted on synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.

  10. Estimation of time-series properties of ground-observed solar irradiance data using cloud properties derived from satellite observations

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Nohara, D.

    2017-12-01

    Short-time-scale variation in the downward solar irradiance at ground level (DSI) is not well understood, because research on it has relied on ground observations and ground observation stations are sparsely distributed. Using datasets derived from satellite observations can overcome this limitation. DSI data and the MODIS cloud properties product are analyzed simultaneously. Three metrics (mean, standard deviation and sample entropy) are used to evaluate the time-series properties of the DSI. The metrics are computed from two-hour time series centered at the MODIS observation time over the ground observation stations. We apply regression methods to build prediction models for each of the three metrics from the cloud properties. Validation of the model accuracy shows that the mean and standard deviation are predicted with a higher degree of accuracy, while the prediction accuracy for sample entropy, which represents the complexity of the time series, is not high. One cause of the lower prediction skill for sample entropy is the resolution of the MODIS cloud properties: higher sample entropy corresponds to rapid fluctuations caused by small, disordered clouds, which appear not to be retrieved well.

  11. Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2018-01-01

    Free energy is the key quantity for describing the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots, the difficulties related to free energy calculation are ultimately related to the calculation of the entropy of the system, and in particular of the solvent entropy. In the last two decades implicit solvent models have been used to circumvent the problem and to take solvent entropy into account implicitly in the solvation terms. More recently, outstanding advances in both implicit solvent models and entropy calculations are making the goal of free energy estimation from end-point simulations more feasible than ever before. We briefly review the basic theory and discuss these advances in light of practical applications.

  12. Entropy Stable Wall Boundary Conditions for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.

    2014-01-01

    Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite volume, finite difference, discontinuous Galerkin, and flux reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.

  13. Entropy Stable Wall Boundary Conditions for the Three-Dimensional Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.

    2015-01-01

    Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the three-dimensional compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators on unstructured grids are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite difference, finite volume, discontinuous Galerkin, and flux reconstruction/correction procedure via reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.

  14. Derivation of Hunt equation for suspension distribution using Shannon entropy theory

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2017-12-01

    In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature; this general form also allows the Rouse equation to be derived. The entropy-based approach allows model parameters to be estimated from measured sediment-concentration data, which shows the advantage of using entropy theory. Finally, the parameters of the entropy-based model are also expressed as functions of the Rouse number, establishing a link between the parameters of the deterministic and probabilistic approaches.
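
    For readers unfamiliar with the principle-of-maximum-entropy (POME) machinery invoked here, the generic result is worth recording: maximizing the Shannon functional subject to moment constraints yields an exponential-family density. The constraint functions g_i below stand for whatever the suspension model imposes (e.g., normalization and a mean); this is a generic sketch, not the paper's specific derivation.

    ```latex
    % POME: maximize Shannon entropy subject to the model's constraints
    \max_{f}\; H[f] = -\int f(c)\,\ln f(c)\,\mathrm{d}c
    \quad\text{s.t.}\quad
    \int g_i(c)\,f(c)\,\mathrm{d}c = \bar g_i,\qquad i = 0,\dots,n,
    % whose Lagrangian stationary point is the exponential family
    f(c) = \exp\!\Big(-\lambda_0 - \sum_{i=1}^{n}\lambda_i\, g_i(c)\Big).
    ```

    The multipliers λ_i are then fixed by substituting this form back into the constraints, which is how such models' parameters become estimable from measured concentration data.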

  15. Entropy Econometrics for combining regional economic forecasts: A Data-Weighted Prior Estimator

    NASA Astrophysics Data System (ADS)

    Fernández-Vázquez, Esteban; Moreno, Blanca

    2017-10-01

    Forecast combination has been studied in econometrics for a long time, and the literature has shown the superior performance of forecast combination over individual predictions. However, there is still controversy about the best procedure for specifying the forecast weights. This paper explores a procedure based on Entropy Econometrics, which allows the weights for the individual forecasts to be set as a mixture of different alternatives. In particular, we examine the ability of the Data-Weighted Prior Estimator proposed by Golan (J Econom 101(1):165-193, 2001) to combine forecasting models in a context of small sample sizes, a relatively common scenario when dealing with time series for regional economies. We test the validity of the proposed approach using a simulation exercise and a real-world example that aims at predicting gross regional product growth rates for a regional economy. The forecasting performance of the proposed Data-Weighted Prior Estimator is compared with that of other combining methods. The simulation results indicate that in scenarios with heavily ill-conditioned datasets the suggested approach dominates other forecast combination strategies. The empirical results are consistent with the conclusions of the numerical experiment.
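
    For context, a common baseline that entropy-based combiners are compared against is inverse-MSE weighting. The sketch below is that baseline, not Golan's Data-Weighted Prior estimator; the series and error scales are synthetic.

        import numpy as np

        # Combine K model forecasts with weights inversely proportional to each
        # model's in-sample mean squared error. `forecasts` is (T periods x K models).
        rng = np.random.default_rng(0)
        y = rng.normal(size=40)                          # hypothetical target series
        forecasts = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(40, 3))

        mse = ((forecasts - y[:, None]) ** 2).mean(axis=0)
        w = (1.0 / mse) / (1.0 / mse).sum()              # inverse-MSE weights, sum to 1
        combined = forecasts @ w
        print("weights:", np.round(w, 3))
        print("combined-forecast MSE:", ((combined - y) ** 2).mean())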

  16. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    PubMed Central

    Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund

    2012-01-01

    Background The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the Shannon entropy of pixel intensity. To test our approach, we specifically use the green band of Landsat images for a water conservation area in the Florida Everglades. We validate our predictions against data of species occurrences for a twenty-eight-year period for both wet and dry seasons. Our method correctly predicts 73% of species richness. For species turnover, the newly proposed KL divergence prediction performance is near 100% accurate. This represents a significant improvement over the more conventional Shannon entropy difference, which provides 85% accuracy. Furthermore, we find that changes in soil and water patterns, as measured by fluctuations of the Shannon entropy for the red and blue bands respectively, are positively correlated with changes in vegetation. The fluctuations are smaller in the wet season when compared to the dry season. Conclusions/Significance Texture-based statistical multiresolution image analysis is a promising method for quantifying interseasonal differences and, consequently, the degree to which vegetation, soil, and water patterns vary. The proposed automated method for quantifying species richness and turnover can also provide analysis at higher spatial and temporal resolution than is currently obtainable from expensive monitoring campaigns, thus enabling more prompt, more cost-effective inference and decision-making support regarding anomalous variations in biodiversity. Additionally, a matrix-based visualization of the statistical multiresolution analysis is presented to facilitate both insight and quick recognition of anomalous data. PMID:23115629
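
    The abstract's non-parametric variant (histograms per subband instead of fitted generalized Gaussians) is straightforward to sketch. Below, two image textures are compared by summing per-subband KL divergences of wavelet-coefficient histograms under the independence assumption; the wavelet, level, bin count, range, and synthetic images are illustrative.

        import numpy as np
        import pywt

        def subband_histograms(img, wavelet="db2", level=3, bins=64, lim=20.0):
            """Histogram of detail coefficients for every subband of a 2D DWT."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            hists = []
            for detail in coeffs[1:]:               # skip the approximation subband
                for band in detail:                 # horizontal, vertical, diagonal
                    h, _ = np.histogram(band, bins=bins, range=(-lim, lim))
                    h = h / max(h.sum(), 1)         # normalize to probability mass
                    hists.append(h + 1e-12)         # avoid log(0) in the KL term
            return hists

        def texture_kl(img_a, img_b):
            """Sum of per-subband KL divergences, assuming independent subbands."""
            return sum((p * np.log(p / q)).sum()
                       for p, q in zip(subband_histograms(img_a),
                                       subband_histograms(img_b)))

        rng = np.random.default_rng(1)
        img_a = rng.normal(size=(256, 256))         # stand-ins for two image bands
        img_b = rng.normal(scale=2.0, size=(256, 256))
        print("KL texture divergence:", texture_kl(img_a, img_b))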

  17. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This mismatch makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  18. Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system

    PubMed Central

    Min, Jianliang; Wang, Ping

    2017-01-01

    Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, i.e., spectral entropy, approximate entropy, sample entropy and fuzzy entropy, as features and compared them against autoregressive (AR) modeling using four classifiers. Second, we identified four significant channel regions from electrode weights via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from the four channel regions. Twelve healthy subjects performed continuous simulated driving for 1–2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3% and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver. PMID:29220351
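
    One of the fused features, spectral entropy, is simply the Shannon entropy of the normalized power spectrum of an epoch. The sketch below shows this single feature; the sampling rate and test signals are illustrative stand-ins for EEG epochs.

        import numpy as np

        def spectral_entropy(x):
            """Shannon entropy of the normalized PSD, scaled to [0, 1]."""
            psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
            p = psd / psd.sum()                     # PSD as a probability distribution
            p = p[p > 0]
            return -(p * np.log2(p)).sum() / np.log2(len(psd))

        fs = 250.0                                  # Hz, hypothetical sampling rate
        t = np.arange(0, 2.0, 1.0 / fs)
        alpha_like = np.sin(2 * np.pi * 10 * t)     # narrow-band: low spectral entropy
        noise = np.random.default_rng(2).normal(size=t.size)  # broadband: high entropy
        print(spectral_entropy(alpha_like), spectral_entropy(noise))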

  19. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  20. Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.

    PubMed

    Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori

    2016-12-13

    The configurational entropy of solute molecules is a crucially important quantity for studying various biophysical processes. It is therefore necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, by comparison with the Clausius method, which can accurately compute the change of configurational entropy upon a temperature change. In particular, we focus on the choice of coordinate system (internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and the anharmonicity of proper torsions in proteins is identified as the origin of its superior performance. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergence of entropy calculations. In folding/unfolding transitions of a small protein, Chignolin, the BQH method was also reasonably accurate. However, the independent term without the correlation term in the BQH method was the most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method is no longer valid for the divergent unfolded structures.
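
    To illustrate the covariance-based family of methods being benchmarked, here is a sketch of Schlitter's quasi-harmonic upper bound, S ≤ (kB/2) ln det[1 + (kB T e²/ħ²) M^(1/2) C M^(1/2)], computed from the mass-weighted covariance C of Cartesian coordinates. This is not the paper's BQH method, and the trajectory data below are synthetic.

        import numpy as np

        kB, hbar = 1.380649e-23, 1.054571817e-34    # J/K, J*s
        T = 300.0                                   # K

        rng = np.random.default_rng(3)
        n_frames, n_atoms = 5000, 10
        masses = np.full(n_atoms, 12.0) * 1.66054e-27   # kg, carbon-like atoms (toy)
        xyz = rng.normal(scale=5e-11, size=(n_frames, 3 * n_atoms))  # meters

        C = np.cov(xyz, rowvar=False)               # 3N x 3N coordinate covariance, m^2
        m = np.repeat(masses, 3)                    # per-coordinate masses
        const = kB * T * np.e ** 2 / hbar ** 2
        A = np.eye(3 * n_atoms) + const * np.sqrt(np.outer(m, m)) * C
        sign, logdet = np.linalg.slogdet(A)         # stable log-determinant
        S = 0.5 * kB * logdet                       # J/K per molecule
        print("Schlitter entropy bound:", S, "J/K")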

  1. Application of a Real-Time, Calculable Limiting Form of the Renyi Entropy for Molecular Imaging of Tumors

    PubMed Central

    Marsh, J. N.; Wallace, K. D.; McCarthy, J. E.; Wickerhauser, M. V.; Maurizi, B. N.; Lanza, G. M.; Wickline, S. A.; Hughes, M. S.

    2011-01-01

    Previously, we reported new methods for ultrasound signal characterization using entropy, Hf; a generalized entropy, the Renyi entropy, If(r); and a limiting form of Renyi entropy suitable for real-time calculation, If,∞. All of these quantities demonstrated significantly more sensitivity to subtle changes in scattering architecture than energy-based methods in certain settings. In this study, the real-time calculable limit of the Renyi entropy, If,∞, is applied for the imaging of angiogenic murine neovasculature in a breast cancer xenograft using a targeted contrast agent. It is shown that this approach may be used to detect reliably the accumulation of targeted nanoparticles at five minutes post-injection in this in vivo model. PMID:20679020

  2. Music viewed by its entropy content: A novel window for comparative analysis.

    PubMed

    Febres, Gerardo; Jaffe, Klaus

    2017-01-01

    Polyphonic music files were analyzed using the set of symbols that produced the Minimal Entropy Description, which we call the Fundamental Scale. This allowed us to create a novel space to represent music pieces by developing: (a) a method to adjust a textual description from its original scale of observation to an arbitrarily selected scale, (b) a method to model the structure of any textual description based on the shape of the symbol frequency profiles, and (c) the concept of higher order entropy as the entropy associated with the deviations of a frequency-ranked symbol profile from a perfect Zipfian profile. We call this diversity index the '2nd Order Entropy'. Applying these methods to a variety of musical pieces showed how the space of 'symbolic specific diversity-entropy' and that of '2nd order entropy' captures characteristics that are unique to each music type, style, composer and genre. Some clustering of these properties around each musical category is shown. These methods allow us to visualize a historic trajectory of academic music across this space, from medieval to contemporary academic music. We show that the description of musical structures using entropy, symbol frequency profiles and specific symbolic diversity allows us to characterize traditional and popular expressions of music. These classification techniques promise to be useful in other disciplines for pattern recognition and machine learning.

  3. Risk Assessment on Constructors during Over-water Riprap Based on Entropy Weight and FAHP

    NASA Astrophysics Data System (ADS)

    Wu, Tongqing; Li, Liang; Liang, Zelong; Mao, Tian; Shao, Weifeng

    2017-07-01

    In waterway regulation engineering, constructors face risks during over-water riprap placement, and these risks are uncertain and complex. To evaluate their probability and consequences, this paper uses the fuzzy analytic hierarchy process (FAHP) to weight the related risk indicators, constructs an entropy-weighted FAHP model, and establishes a corresponding evaluation factor set and evaluation language for constructors during the over-water riprap construction process. After estimating risk probabilities and evaluating the magnitude of risk consequences for the constructor factor, the paper applies this model to a risk analysis of constructors during over-water riprap in the Ching River waterway regulation project. The results show that the evaluation produced by this method is credible enough to be used in practical engineering.
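
    The entropy-weight step that such models couple with FAHP is a standard calculation: criteria whose values vary more across alternatives carry more information (lower entropy) and receive larger weights. The sketch below uses a purely illustrative 4x3 decision matrix (alternatives x risk indicators).

        import numpy as np

        X = np.array([[0.7, 0.2, 0.5],
                      [0.6, 0.3, 0.5],
                      [0.9, 0.8, 0.5],
                      [0.4, 0.7, 0.5]])

        m, n = X.shape
        P = X / X.sum(axis=0)                       # column-normalized proportions
        with np.errstate(divide="ignore", invalid="ignore"):
            plogp = np.where(P > 0, P * np.log(P), 0.0)
        e = -plogp.sum(axis=0) / np.log(m)          # per-criterion entropy in [0, 1]
        w = (1 - e) / (1 - e).sum()                 # entropy weights
        print("entropies:", np.round(e, 3))         # a constant column has entropy 1
        print("weights:  ", np.round(w, 3))         # and therefore weight 0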

  4. Bounds on the entanglement entropy of droplet states in the XXZ spin chain

    NASA Astrophysics Data System (ADS)

    Beaud, V.; Warzel, S.

    2018-01-01

    We consider a class of one-dimensional quantum spin systems on the finite lattice Λ ⊂ ℤ, related to the XXZ spin chain in its Ising phase. It includes in particular the so-called droplet Hamiltonian. The entanglement entropy of energetically low-lying states over a bipartition Λ = B ∪ B^c is investigated and proven to satisfy a logarithmic bound in terms of min{n, |B|, |B^c|}, where n denotes the maximal number of down spins in the considered state. Upon addition of any (positive) random potential, the bound becomes uniformly constant on average, thereby establishing an area law. The proof is based on spectral methods: a deterministic bound on the local (many-body integrated) density of states is derived from an energetically motivated Combes-Thomas estimate.

  5. Role of binding entropy in the refinement of protein-ligand docking predictions: analysis based on the use of 11 scoring functions.

    PubMed

    Ruvinsky, Anatoly M

    2007-06-01

    We present results of testing the ability of eleven popular scoring functions to predict native docked positions using a recently developed method (Ruvinsky and Kozintsev, J Comput Chem 2005, 26, 1089) for estimating the entropy contributions of relative motions to protein-ligand binding affinity. The method is based on the integration of the configurational integral over clusters obtained from multiple docked positions. We use a test set of 100 PDB protein-ligand complexes and ensembles of 101 docked positions generated by Wang et al. (J Med Chem 2003, 46, 2287) for each ligand in the test set. To test the suggested method we compared the averaged root-mean-square deviations (RMSD) of the top-scored ligand docked positions, with and without entropy contributions, relative to the experimentally determined positions. We demonstrate that the method increases docking accuracy by 10-21% when used in conjunction with the AutoDock scoring function, by 2-25% with G-Score, by 7-41% with D-Score, by 0-8% with LigScore, by 1-6% with PLP, by 0-12% with LUDI, by 2-8% with F-Score, by 7-29% with ChemScore, by 0-9% with X-Score, by 2-19% with PMF, and by 1-7% with DrugScore. We also compared the performance of the suggested method with a method based on ranking by cluster occupancy only. We analyze how the choice of the clustering RMSD and of the lower bound defining dense clusters affects the docking accuracy of the scoring methods, and we derive optimal intervals of the clustering RMSD for the 11 scoring functions.
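
    The occupancy idea the paper compares against can be sketched compactly: greedily cluster docked poses by a pairwise-RMSD cutoff and score each cluster by the entropy-like term -ln(n_k/N), which favors densely populated clusters. This is the simplified cluster-occupancy ranking, not Ruvinsky's full configurational-integral treatment; the pose coordinates below are synthetic.

        import numpy as np

        def greedy_clusters(rmsd, cutoff):
            """Greedy clustering: repeatedly seed on the pose with most neighbors."""
            unassigned = set(range(len(rmsd)))
            clusters = []
            while unassigned:
                seed = max(unassigned, key=lambda i: sum(rmsd[i, j] <= cutoff
                                                         for j in unassigned))
                members = [j for j in unassigned if rmsd[seed, j] <= cutoff]
                clusters.append(members)
                unassigned -= set(members)
            return clusters

        rng = np.random.default_rng(4)
        pts = np.vstack([rng.normal(0, 0.5, (60, 3)),   # 101 toy poses, two basins
                         rng.normal(5, 1.5, (41, 3))])
        rmsd = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

        for k, members in enumerate(greedy_clusters(rmsd, cutoff=2.0)):
            occ = len(members) / len(pts)
            print(f"cluster {k}: n={len(members)}, -ln(occupancy)={-np.log(occ):.2f}")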

  6. Comparing Evaporative Sources of Terrestrial Precipitation and Their Extremes in MERRA Using Relative Entropy

    NASA Technical Reports Server (NTRS)

    Dirmeyer, Paul A.; Wei, Jiangfeng; Bosilovich, Michael G.; Mocko, David M.

    2014-01-01

    A quasi-isentropic back trajectory scheme is applied to output from the Modern Era Retrospective-analysis for Research and Applications and a land-only replay with corrected precipitation to estimate surface evaporative sources of moisture supplying precipitation over every ice-free land location for the period 1979-2005. The evaporative source patterns for any location and time period are effectively two-dimensional probability distributions. As such, the evaporative sources for extreme situations like droughts or wet intervals can be compared to the corresponding climatological distributions using the method of relative entropy. Significant differences are found to be common and widespread for droughts, but not wet periods, when monthly data are examined. At pentad temporal resolution, which is better able to isolate floods and situations of atmospheric rivers, values of relative entropy over North America are typically 50-400 larger than at monthly time scales. Significant differences suggest that moisture transport may be the key to precipitation extremes. Where evaporative sources do not change significantly, it implies that other local causes may underlie the extreme events.
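
    The comparison step reduces to the relative entropy (KL divergence) D(p||q) = Σ p_ij ln(p_ij/q_ij) between two 2D probability maps. A minimal sketch, with two synthetic Gaussian source fields standing in for the back-trajectory patterns:

        import numpy as np

        def relative_entropy(p, q, eps=1e-12):
            """KL divergence between two nonnegative 2D fields (normalized here)."""
            p = p / p.sum()
            q = q / q.sum()
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        ny, nx = 40, 60
        yy, xx = np.mgrid[0:ny, 0:nx]
        climatology = np.exp(-((xx - 30) ** 2 + (yy - 20) ** 2) / 200.0)
        drought = np.exp(-((xx - 45) ** 2 + (yy - 10) ** 2) / 120.0)  # shifted source
        print("relative entropy (drought vs climatology):",
              relative_entropy(drought, climatology))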

  7. Quantum chemical approach for condensed-phase thermochemistry (V): Development of rigid-body type harmonic solvation model

    NASA Astrophysics Data System (ADS)

    Tarumi, Moto; Nakai, Hiromi

    2018-05-01

    This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.

  8. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that attempts to partition the data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize an objective function, which generally attempts to minimize the variation within each cluster and maximize the variation between clusters. However, a main disadvantage of this method is that the number k is often not known in advance. Furthermore, randomly chosen starting points may place two initial centroids very close to each other. Therefore, the entropy method is used here to determine the starting points for K Means; it is a method for assigning weights and making a decision over a set of alternatives, able to quantify the discriminating power among a multitude of data sets, with the attribute showing the highest value variation receiving the highest weight. The entropy method can thus assist the K Means process in determining the starting points, which are usually chosen at random, so that the clustering converges faster than the standard K Means process. On the postoperative-patient dataset from the UCI Machine Learning Repository, using only 12 records as a worked example, the entropy-initialized K Means reached the desired end result in only two iterations.

  9. Exploring stability of entropy analysis for signal with different trends

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Li, Jin; Wang, Jun

    2017-03-01

    Because of environmental disturbances and instrument systems, actually measured signals always carry trends, which makes it difficult to accurately capture their complexity; choosing stable and effective analysis methods is therefore very important. In this paper, we applied two entropy measures, the base-scale entropy and the approximate entropy, to analyze signal complexity, and studied the effect of trends on ideal signals and on heart rate variability (HRV) signals; specifically, linear, periodic, and power-law trends, which are likely to occur in actual signals. The results show that the approximate entropy is unstable when different trends are embedded into the signals, so it is not suitable for analyzing signals with trends. The base-scale entropy, in contrast, shows preferable stability and accuracy for signals with different trends and is thus an effective method for analyzing actual signals.

  10. Estimation of Melting Points of Organics.

    PubMed

    Yalkowsky, Samuel H; Alantary, Doaa

    2018-05-01

    Unified physicochemical property estimation relationships is a system of empirical and theoretical relationships that relate 20 physicochemical properties of organic molecules to each other and to chemical structure. Melting point is a key parameter in the unified physicochemical property estimation relationships scheme because it is a determinant of several other properties including vapor pressure and solubility. This review describes the first-principles calculation of the melting points of organic compounds from structure. The calculation is based on the fact that the melting point, Tm, is equal to the ratio of the heat of melting, ΔHm, to the entropy of melting, ΔSm. The heat of melting is shown to be an additive constitutive property. However, the entropy of melting is not entirely group additive. It is primarily dependent on molecular geometry, including parameters which reflect the degree of restriction of molecular motion in the crystal relative to that of the liquid. Symmetry, eccentricity, chirality, flexibility, and hydrogen bonding each affect molecular freedom in different ways and thus make different contributions to the total entropy of fusion. The relationships of these entropy-determining parameters to chemical structure are used to develop a reasonably accurate means of predicting the melting points of over 2000 compounds. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
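
    A worked illustration of the central relation Tm = ΔHm/ΔSm, with hypothetical values not taken from the paper:

        # A heat of melting of 21.0 kJ/mol and a total entropy of melting of
        # 60.0 J/(mol K) imply a melting point of 350 K.
        dHm = 21_000.0        # J/mol, hypothetical additive-constitutive estimate
        dSm = 60.0            # J/(mol K), from symmetry/flexibility contributions
        Tm = dHm / dSm
        print(f"estimated melting point: {Tm:.0f} K ({Tm - 273.15:.0f} C)")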

  11. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy-LUR approaches.

    PubMed

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey

    2014-09-01

    Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747) and the BME kriging model (R2 = 0.414, RMSE = 9.164). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.

  12. Entropy of Ultrasound-Contrast-Agent Velocity Fields for Angiogenesis Imaging in Prostate Cancer.

    PubMed

    van Sloun, Ruud J G; Demi, Libertario; Postema, Arnoud W; Jmch De La Rosette, Jean; Wijkstra, Hessel; Mischi, Massimo

    2017-03-01

    Prostate cancer care can benefit from accurate and cost-efficient imaging modalities that are able to reveal prognostic indicators for cancer. Angiogenesis is known to play a central role in the growth of tumors towards a metastatic or a lethal phenotype. With the aim of localizing angiogenic activity in a non-invasive manner, Dynamic Contrast Enhanced Ultrasound (DCE-US) has been widely used. Usually, the passage of ultrasound contrast agents through the organ of interest is analyzed for the assessment of tissue perfusion. However, the heterogeneous nature of blood flow in angiogenic vasculature hampers the diagnostic effectiveness of perfusion parameters. In this regard, quantification of the heterogeneity of flow may provide a relevant additional feature for localizing angiogenesis. Statistics based on flow magnitude as well as its orientation can be exploited for this purpose. In this paper, we estimate the microbubble velocity fields from a standard bolus injection and provide a first statistical characterization by performing a spatial entropy analysis. By testing the method on 24 patients with biopsy-proven prostate cancer, we show that the proposed method can be applied effectively to clinically acquired DCE-US data. The method permits estimation of the in-plane flow vector fields and their local intricacy, and yields promising results (receiver-operating-characteristic curve area of 0.85) for the detection of prostate cancer.

  13. Reply to "Comment on 'Quantum Kaniadakis entropy under projective measurement' ".

    PubMed

    Ourabah, Kamel; Tribeche, Mouloud

    2016-08-01

    We rely on our proof of the nondecreasing character of quantum Kaniadakis entropy under projective measurement [Phys. Rev. E 92, 032114 (2015)PLEEE81539-375510.1103/PhysRevE.92.032114], and we put it into perspective with the results of Bosyk et al. [Quantum Inf Process 15, 3393 (2016)10.1007/s11128-016-1329-5]. Our method, adopted for the proof that Kaniadakis entropy does not decrease under a projective measurement, is based on Jensen's inequalities, while the method proposed by the authors of the Comment represents another alternative and clearly correct method to prove the same thing. Furthermore, we clarify that our interest in Kaniadakis entropy is due to the fact that this entropy has a transparent physical significance, emerging within the special relativity.

  14. Fast estimate of Hartley entropy in image sharpening

    NASA Astrophysics Data System (ADS)

    Krbcová, Zuzana; Kukal, Jaromír; Svihlik, Jan; Fliegel, Karel

    2016-09-01

    Two classes of linear IIR filters, the Laplacian of Gaussian (LoG) and the Difference of Gaussians (DoG), are frequently used as high-pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on a radial approximation of the unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as the measure to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT included in the filter, but the estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used to design the optimum sharpening filter by tuning its parameters. Our approach is compared with the Wiener filter on MRI images.
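
    Both ingredients are easy to sketch: DoG sharpening linearly combines the image with a band-pass residual, and the Hartley (max) entropy can be crudely estimated as the log of the number of occupied intensity bins. The weights, radii, and bin count below are illustrative; the paper tunes a weighted sum of many Gaussian radii, not a single DoG.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_sharpen(img, s1=1.0, s2=2.0, alpha=1.5):
            """Sharpen by adding a Difference-of-Gaussians high-pass residual."""
            dog = gaussian_filter(img, s1) - gaussian_filter(img, s2)
            return img + alpha * dog

        def hartley_entropy(img, bins=256):
            """log2 of the support size of the intensity histogram."""
            hist, _ = np.histogram(img, bins=bins)
            return np.log2(np.count_nonzero(hist))

        rng = np.random.default_rng(5)
        img = gaussian_filter(rng.normal(size=(128, 128)), 3.0)   # smooth toy "MRI"
        sharp = dog_sharpen(img)
        print("Hartley entropy before/after:",
              hartley_entropy(img), hartley_entropy(sharp))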

  15. Inference of gene regulatory networks from time series by Tsallis entropy

    PubMed Central

    2011-01-01

    Background The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRNs inference methods based on entropy (mutual information), a new criterion function is here proposed. Results In this paper we introduce the use of generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach and the conditional entropy is applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks and its gene transference function is obtained from random drawing on the set of possible Boolean functions, thus creating its dynamics. On the other hand, DREAM time series data present variation of network size and their topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions A remarkable improvement in accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology. The best free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRNs inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/. PMID:21545720
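
    For reference, the generalized entropy in question is, for a distribution {p_i} and entropic index q,

        S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
        \qquad \lim_{q \to 1} S_q = -\sum_i p_i \ln p_i,

    which recovers the Shannon entropy as q approaches 1; the reported best range 2.5 ≤ q ≤ 3.5 lies in the q > 1 (subextensive) regime noted in the abstract.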

  16. Salient target detection based on pseudo-Wigner-Ville distribution and Rényi entropy.

    PubMed

    Xu, Yuannan; Zhao, Yuan; Jin, Chenfei; Qu, Zengfeng; Liu, Liping; Sun, Xiudong

    2010-02-15

    We present what we believe to be a novel method based on the pseudo-Wigner-Ville distribution (PWVD) and Rényi entropy for salient target detection. Building on a study of the statistical properties of the Rényi entropy computed via the PWVD, a residual-entropy-based saliency map of an input image can be obtained. From the saliency map, target detection is completed by simple and convenient threshold segmentation. Experimental results demonstrate that the proposed method can detect targets effectively in complex ground scenes.
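
    The Rényi entropy of order r used by such methods is H_r = log2(Σ_i p_i^r)/(1 - r). A minimal sketch of this quantity estimated from data (omitting the pseudo-Wigner-Ville time-frequency step; the order, bin count, and patch are illustrative):

        import numpy as np

        def renyi_entropy(values, r=3, bins=256):
            """Order-r Renyi entropy of a histogram-estimated distribution, in bits."""
            hist, _ = np.histogram(values, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return np.log2((p ** r).sum()) / (1.0 - r)

        rng = np.random.default_rng(6)
        patch = rng.normal(size=(32, 32))           # stand-in for a local image patch
        print("Renyi entropy (r=3):", renyi_entropy(patch.ravel()))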

  17. Consistent maximum entropy representations of pipe flow networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2017-06-01

    The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.

  18. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  19. Application of a real-time, calculable limiting form of the Renyi entropy for molecular imaging of tumors.

    PubMed

    Marsh, Jon N; Wallace, Kirk D; McCarthy, John E; Wickerhauser, Mladen V; Maurizi, Brian N; Lanza, Gregory M; Wickline, Samuel A; Hughes, Michael S

    2010-08-01

    Previously, we reported new methods for ultrasound signal characterization using entropy, Hf; a generalized entropy, the Renyi entropy, If(r); and a limiting form of Renyi entropy suitable for real-time calculation, If,∞. All of these quantities demonstrated significantly more sensitivity to subtle changes in scattering architecture than energy-based methods in certain settings. In this study, the real-time calculable limit of the Renyi entropy, If,∞, is applied for the imaging of angiogenic murine neovasculature in a breast cancer xenograft using a targeted contrast agent. It is shown that this approach may be used to reliably detect the accumulation of targeted nanoparticles at five minutes post-injection in this in vivo model.

  20. Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy

    PubMed Central

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, a demand that cannot be met in some situations. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734
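
    The wavelet-energy-entropy idea reduces to: decompose the signal, normalize the per-scale energies, and take the Shannon entropy of that distribution. The sliding-window "instantaneous" variants (IWEE/IWSE) in the paper refine this; the wavelet, depth, and test signals below are illustrative.

        import numpy as np
        import pywt

        def wavelet_energy_entropy(x, wavelet="db4", level=5):
            """Shannon entropy of the normalized per-subband wavelet energies."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            p = energies / energies.sum()
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        t = np.linspace(0, 1, 1024)
        healthy = np.sin(2 * np.pi * 50 * t)                     # clean sensor reading
        faulty = healthy + 0.5 * np.random.default_rng(7).normal(size=t.size)
        print(wavelet_energy_entropy(healthy), wavelet_energy_entropy(faulty))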

  1. Fault diagnosis for micro-gas turbine engine sensors via wavelet entropy.

    PubMed

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, a demand that cannot be met in some situations. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient.

  2. An estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS)

    NASA Technical Reports Server (NTRS)

    Chatterjee, Sharmista; Seagrave, Richard C.

    1993-01-01

    The objective of this paper is to present an estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of work. In a previous study, Brandhorst estimated the power consumption of partially closed and completely closed regenerable life support systems at 3.5 kW per person and 10-12 kW per person, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general, the first law efficiency of a system is usually greater than 50 percent, whereas from the literature the second law efficiency is usually about 10 percent. The second law efficiency of a system indicates the percentage of energy degraded as irreversibilities within the process, and this estimate offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible while still ensuring a positive entropy gradient between the system and the surroundings. The reason is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the system gradually approaches equilibrium with the surroundings until the gradient is zero. At that point no work can be extracted from the system; this is called the 'dead state' of the system.
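
    The lost-work bookkeeping referred to here is the standard Gouy-Stodola relation: for a unit rejecting entropy to an environment at dead-state temperature T_0,

        \dot{W}_{\mathrm{lost}} = T_0\,\dot{S}_{\mathrm{gen}},
        \qquad
        \eta_{II} = \frac{\dot{W}_{\mathrm{min}}}{\dot{W}_{\mathrm{actual}}}
                  = 1 - \frac{T_0\,\dot{S}_{\mathrm{gen}}}{\dot{W}_{\mathrm{actual}}},

    so a subsystem's entropy generation rate translates directly into degraded work, and the units with the largest T_0 Ṡ_gen are the natural candidates for redesign.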

  3. DNA entropy reveals a significant difference in complexity between housekeeping and tissue specific gene promoters.

    PubMed

    Thomas, David; Finan, Chris; Newport, Melanie J; Jones, Susan

    2015-10-01

    The complexity of DNA can be quantified using estimates of entropy. Variation in DNA complexity is expected between the promoters of genes with different transcriptional mechanisms; namely housekeeping (HK) and tissue specific (TS). The former are transcribed constitutively to maintain general cellular functions, and the latter are transcribed in restricted tissue and cell types for specific molecular events. It is known that promoter features in the human genome are related to tissue specificity, but this has been difficult to quantify on a genomic scale. If entropy effectively quantifies DNA complexity, calculating the entropies of HK and TS gene promoters as profiles may reveal significant differences. Entropy profiles were calculated for a total dataset of 12,003 human gene promoters and for 501 housekeeping (HK) and 587 tissue specific (TS) human gene promoters. The mean profiles show that the TS promoters have a significantly lower entropy (p<2.2e-16) than HK gene promoters. The entropy distributions for the 3 datasets show that promoter entropies could be used to identify novel HK genes. Functional features comprise DNA sequence patterns that are non-random and hence have lower entropies. The lower entropy of TS gene promoters can be explained by a higher density of positive and negative regulatory elements, required for genes with complex spatial and temporal expression. Copyright © 2015 Elsevier Ltd. All rights reserved.
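
    An entropy profile of this kind can be sketched as the Shannon entropy of k-mer frequencies in a sliding window along the sequence; low-entropy windows flag non-random, potentially functional patterns. The window, step, k, and toy sequence below are illustrative, not the paper's exact settings.

        import numpy as np
        from collections import Counter

        def entropy_profile(seq, window=100, step=10, k=3):
            """Shannon entropy (bits) of k-mer frequencies per sliding window."""
            out = []
            for start in range(0, len(seq) - window + 1, step):
                chunk = seq[start:start + window]
                counts = Counter(chunk[i:i + k] for i in range(len(chunk) - k + 1))
                p = np.array(list(counts.values()), dtype=float)
                p /= p.sum()
                out.append(-(p * np.log2(p)).sum())
            return np.array(out)

        rng = np.random.default_rng(8)
        random_part = "".join(rng.choice(list("ACGT"), size=400))
        repeat_part = "TATA" * 100                  # repetitive, low-complexity block
        profile = entropy_profile(random_part + repeat_part)
        print("entropy range:", profile.min(), "-", profile.max())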

  4. Vibrational entropy of a protein: large differences between distinct conformations.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2015-01-13

    In this article, it is investigated whether vibrational entropy (VE) is an important contribution to the free energy of globular proteins at ambient conditions. VE represents the major configurational-entropy contribution of these proteins. By definition, it is an average of the configurational entropies of the protein within single minima of the energy landscape, weighted by their occupation probabilities. A large part of it originates from the thermal motion of flexible torsion angles, giving rise to the finite peak widths observed in torsion angle distributions. While VE may affect the equilibrium properties of proteins, it is usually neglected in numerical calculations because its consideration is difficult. Moreover, it is sometimes believed that all well-packed conformations of a globular protein have similar VE anyway. Here, we measure explicitly the VE for six different conformations from simulation data of a test protein. Estimates are obtained using the quasi-harmonic approximation for three coordinate sets: Cartesian, bond-angle-torsion (BAT), and a new set that we term rotamer-degeneracy-lifted BAT coordinates. The new set gives improved estimates, as it overcomes a known shortcoming of the quasi-harmonic approximation caused by multiply populated rotamer states, and it may serve for VE estimation of macromolecules in a very general context. The obtained VE values depend considerably on the type of coordinates used. However, for all coordinate sets we find large entropy differences between the conformations, of the order of the overall stability of the protein. This result may have important implications for the choice of free energy expressions used in software for protein structure prediction, protein design, and NMR refinement.

  5. Rényi information flow in the Ising model with single-spin dynamics.

    PubMed

    Deng, Zehui; Wu, Jinshan; Guo, Wenan

    2014-12-01

    The n-index Rényi mutual information and transfer entropies for the two-dimensional kinetic Ising model with arbitrary single-spin dynamics in the thermodynamic limit are derived as functions of ensemble averages of observables and spin-flip probabilities. Cluster Monte Carlo algorithms with different dynamics from the single-spin dynamics are thus applicable to estimate the transfer entropies. By means of Monte Carlo simulations with the Wolff algorithm, we calculate the information flows in the Ising model with the Metropolis dynamics and the Glauber dynamics, respectively. We find that not only the global Rényi transfer entropy, but also the pairwise Rényi transfer entropy, peaks in the disorder phase.

  6. Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals

    PubMed Central

    Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo

    2012-01-01

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local. PMID:22927808
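
    At its core, transfer entropy on discretized activity traces is a plug-in estimate of TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log2[ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ]. The sketch below shows that estimator on binarized trains; real pipelines, as in the paper, add fluorescence modeling and conditioning on the network's dynamic state, and the thresholds here are illustrative.

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y):
            """Plug-in transfer entropy (bits) between two discrete sequences."""
            triples = Counter(zip(y[1:], y[:-1], x[:-1]))    # (y_t+1, y_t, x_t)
            pairs_yx = Counter(zip(y[:-1], x[:-1]))          # (y_t, x_t)
            pairs_yy = Counter(zip(y[1:], y[:-1]))           # (y_t+1, y_t)
            singles = Counter(y[:-1])
            n = len(x) - 1
            te = 0.0
            for (y1, y0, x0), c in triples.items():
                p_joint = c / n
                p_y1_given_yx = c / pairs_yx[(y0, x0)]
                p_y1_given_y = pairs_yy[(y1, y0)] / singles[y0]
                te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
            return te

        rng = np.random.default_rng(9)
        x = (rng.random(5000) < 0.3).astype(int)    # presynaptic-like binary train
        y = np.roll(x, 1) | (rng.random(5000) < 0.05).astype(int)  # y driven by past x
        print("TE(x -> y):", transfer_entropy(x, y), "bits")
        print("TE(y -> x):", transfer_entropy(y, x), "bits")

    The asymmetry of the two printed values is the directed signature that distinguishes causal influence from mere correlation.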

  7. Motions and entropies in proteins as seen in NMR relaxation experiments and molecular dynamics simulations.

    PubMed

    Allnér, Olof; Foloppe, Nicolas; Nilsson, Lennart

    2015-01-22

    Molecular dynamics simulations of E. coli glutaredoxin1 in water have been performed to relate the dynamical parameters and entropy obtained in NMR relaxation experiments with results extracted from simulated trajectory data. NMR relaxation is the most widely used experimental method to obtain data on dynamics of proteins, but it is limited to relatively short timescales and to motions of backbone amides or in some cases ¹³C-H vectors. By relating the experimental data to the all-atom picture obtained in molecular dynamics simulations, valuable insights on the interpretation of the experiment can be gained. We have estimated the internal dynamics and their timescales by calculating the generalized order parameters (O) for different time windows. We then calculate the quasiharmonic entropy (S) and compare it to the entropy calculated from the NMR-derived generalized order parameter of the amide vectors. Special emphasis is put on characterizing dynamics that are not expressed through the motions of the amide group. The NMR and MD methods suffer from complementary limitations, with NMR being restricted to local vectors and dynamics on a timescale determined by the rotational diffusion of the solute, while in simulations, it may be difficult to obtain sufficient sampling to ensure convergence of the results. We also evaluate the amount of sampling obtained with molecular dynamics simulations and how it is affected by the length of individual simulations, by clustering of the sampled conformations. We find that two structural turns act as hinges, allowing the α helix between them to undergo large, long timescale motions that cannot be detected in the time window of the NMR dipolar relaxation experiments. We also show that the entropy obtained from the amide vector does not account for correlated motions of adjacent residues. Finally, we show that the sampling in a total of 100 ns molecular dynamics simulation can be increased by around 50%, by dividing the trajectory into 10 replicas with different starting velocities.

  8. Spatiotemporal analysis and mapping of oral cancer risk in changhua county (taiwan): an application of generalized bayesian maximum entropy method.

    PubMed

    Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo

    2010-02-01

    The incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan during 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis of spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistematics principles framework and generates spatiotemporal estimates of oral cancer incidence rates. In a way, it accounts for the multi-sourced uncertainty of the rates, including small-population effects, and the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that GBME analysis alleviates the noise in the oral cancer data arising from the population size effect. Compared to the raw incidence data, the maps of GBME-estimated results can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the areas. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty. 2010 Elsevier Inc. All rights reserved.

  9. Age-related variation in EEG complexity to photic stimulation: A multiscale entropy analysis

    PubMed Central

    Takahashi, Tetsuya; Cho, Raymond Y.; Murata, Tetsuhito; Mizuno, Tomoyuki; Kikuchi, Mitsuru; Mizukami, Kimiko; Kosaka, Hirotaka; Takahashi, Koichi; Wada, Yuji

    2010-01-01

    Objective This study was intended to examine variations in electroencephalographic (EEG) complexity in response to photic stimulation (PS) during aging to test the hypothesis that the aging process reduces physiologic complexity and functional responsiveness. Methods Multiscale entropy (MSE), an estimate of time-series signal complexity associated with long-range temporal correlation, is used as a recently proposed method for quantifying EEG complexity with multiple coarse-grained sequences. We recorded EEG in 13 healthy elderly subjects and 12 healthy young subjects during pre-PS and post-PS conditions and estimated their respective MSE values. Results For the pre-PS condition, no significant complexity difference was found between the groups. However, a significant MSE change (complexity increase) was found post-PS only in young subjects, thereby revealing a power-law scaling property, which means long-range temporal correlation. Conclusions Enhancement of long-range temporal correlation in young subjects after PS might reflect a cortical response to stimuli, which was absent in elderly subjects. These results are consistent with the general “loss of complexity/diminished functional response to stimuli” theory of aging. Significance Our findings demonstrate that application of MSE analysis to EEG is a powerful approach for studying age-related changes in brain function. PMID:19231279
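
    Multiscale entropy itself is a two-step recipe: coarse-grain the series at each scale by averaging non-overlapping windows, then compute the sample entropy of each coarse-grained series. A self-contained sketch follows; m = 2 and r = 0.15 x the standard deviation are common parameter choices (here, for simplicity, r is recomputed per scale), and the test signal is synthetic.

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            """SampEn = -ln(A/B), template matches of length m+1 vs m (Chebyshev)."""
            r = 0.15 * np.std(x) if r is None else r
            def count_matches(mm):
                templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=-1)
                return (np.sum(d <= r) - len(templ)) / 2    # exclude self-matches
            b, a = count_matches(m), count_matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        def multiscale_entropy(x, max_scale=5):
            out = []
            for tau in range(1, max_scale + 1):
                n = len(x) // tau
                coarse = x[:n * tau].reshape(n, tau).mean(axis=1)  # coarse-graining
                out.append(sample_entropy(coarse))
            return out

        rng = np.random.default_rng(10)
        white = rng.normal(size=1000)
        print("MSE(white noise):", np.round(multiscale_entropy(white), 2))

    For white noise the curve decreases with scale, whereas long-range-correlated signals stay roughly flat, which is what makes the scale dependence informative about physiologic complexity.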

  10. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  11. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    PubMed

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    The electroencephalogram, shortly termed EEG, is considered fundamental for the assessment of neural activities in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The proposed study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal and epileptic EEG recordings, were considered for the proposed study. An adaptive Wiener filter was initially applied to remove the 50 Hz power line noise from the raw EEG recordings. The raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then a wavelet packet decomposition using the Haar wavelet with five levels was introduced, and two entropies, log and norm, were estimated and applied to the REN classifier to perform binary classification. The non-linear Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70 % for normal-pre-ictal, 99.70 % for normal-epileptic and 99.85 % for pre-ictal-epileptic.

  12. Entanglement Entropy of Black Holes.

    PubMed

    Solodukhin, Sergey N

    2011-01-01

    The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as 't Hooft's brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields, which non-minimally couple to gravity, is emphasized. The holographic description of the entanglement entropy of the blackhole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.

  13. Entanglement Entropy of Black Holes

    NASA Astrophysics Data System (ADS)

    Solodukhin, Sergey N.

    2011-10-01

    The entanglement entropy is a fundamental quantity, which characterizes the correlations between sub-systems in a larger quantum-mechanical system. For two sub-systems separated by a surface the entanglement entropy is proportional to the area of the surface and depends on the UV cutoff, which regulates the short-distance correlations. The geometrical nature of entanglement-entropy calculation is particularly intriguing when applied to black holes when the entangling surface is the black-hole horizon. I review a variety of aspects of this calculation: the useful mathematical tools such as the geometry of spaces with conical singularities and the heat kernel method, the UV divergences in the entropy and their renormalization, the logarithmic terms in the entanglement entropy in four and six dimensions and their relation to the conformal anomalies. The focus in the review is on the systematic use of the conical singularity method. The relations to other known approaches such as 't Hooft's brick-wall model and the Euclidean path integral in the optical metric are discussed in detail. The puzzling behavior of the entanglement entropy due to fields, which non-minimally couple to gravity, is emphasized. The holographic description of the entanglement entropy of the black-hole horizon is illustrated on the two- and four-dimensional examples. Finally, I examine the possibility to interpret the Bekenstein-Hawking entropy entirely as the entanglement entropy.

  14. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    PubMed

    de Beer, Alex G F; Samson, Jean-Sébastien; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase-sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics.

  15. Entropy in self-similar shock profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.

    In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.

  16. Entropy in self-similar shock profiles

    DOE PAGES

    Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.

    2017-07-16

    In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.

  17. Estimating cognitive workload using wavelet entropy-based features during an arithmetic task.

    PubMed

    Zarjam, Pega; Epps, Julien; Chen, Fang; Lovell, Nigel H

    2013-12-01

    Electroencephalography (EEG) has shown promise as an indicator of cognitive workload; however, precise workload estimation is an ongoing research challenge. In this investigation, seven levels of workload were induced using an arithmetic task, and the entropy of wavelet coefficients extracted from EEG signals is shown to distinguish all seven levels. For a subject-independent multi-channel classification scheme, the entropy features achieved high accuracy, up to 98% for channels from the frontal lobes, in the delta frequency band. This suggests that a smaller number of EEG channels in only one frequency band can be deployed for an effective EEG-based workload classification system. Together with analysis based on phase locking between channels, these results consistently suggest increased synchronization of neural responses for higher load levels. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Efficient optimization of the quantum relative entropy

    NASA Astrophysics Data System (ADS)

    Fawzi, Hamza; Fawzi, Omar

    2018-04-01

    Many quantum information measures can be written as an optimization of the quantum relative entropy between sets of states. For example, the relative entropy of entanglement of a state is the minimum relative entropy to the set of separable states. The various capacities of quantum channels can also be written in this way. We propose a unified framework to numerically compute these quantities using off-the-shelf semidefinite programming solvers, exploiting the approximation method proposed in Fawzi, Saunderson and Parrilo (2017, arXiv:1705.00812). As a notable application, this method allows us to provide numerical counterexamples for a proposed lower bound on the quantum conditional mutual information in terms of the relative entropy of recovery.
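
    For orientation, the quantity being optimized is the quantum relative entropy D(ρ||σ) = Tr[ρ(log ρ − log σ)]. The sketch below merely evaluates it numerically for fixed states; it does not implement the paper's semidefinite approximation, and the matrices are illustrative.

      import numpy as np
      from scipy.linalg import logm

      def quantum_relative_entropy(rho, sigma):
          """D(rho||sigma) = Tr[rho (log rho - log sigma)], natural log."""
          return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

      # Illustrative 2x2 density matrices (full rank, so logm is well defined).
      rho = np.array([[0.8, 0.1], [0.1, 0.2]])
      sigma = np.eye(2) / 2.0                  # maximally mixed state
      print(quantum_relative_entropy(rho, sigma))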

  19. Characterization of Early Partial Seizure Onset: Frequency, Complexity and Entropy

    PubMed Central

    Jouny, Christophe C.; Bergey, Gregory K.

    2011-01-01

    Objective: A clear classification of partial seizure onset features is not yet established. Complexity and entropy have been widely used to describe dynamical systems, but a systematic evaluation of these measures for characterizing partial seizures has never been performed. Methods: Eighteen different measures, including power in frequency bands up to 300 Hz, Gabor atom density (GAD), Higuchi fractal dimension (HFD), Lempel-Ziv complexity, Shannon entropy, sample entropy, and permutation entropy, were selected to test sensitivity to partial seizure onset. Intracranial recordings from forty-five patients with mesial temporal, neocortical temporal and neocortical extratemporal seizure foci were included (331 partial seizures). Results: GAD, Lempel-Ziv complexity, HFD, high frequency activity, and sample entropy were the most reliable measures for assessing early seizure onset. Conclusions: Increases in complexity and occurrence of high-frequency components appear to be commonly associated with early stages of partial seizure evolution in all regions. The type of measure (frequency-based, complexity or entropy) does not predict the efficiency of the method in detecting seizure onset. Significance: Differences between measures such as GAD and HFD highlight the multimodal nature of partial seizure onsets. Improved methods for early seizure detection may be achieved from a better understanding of these underlying dynamics. PMID:21872526
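
    Among the measures listed, permutation entropy is particularly simple to state: it is the Shannon entropy of the ordinal patterns formed by short windows of the signal. A minimal NumPy sketch follows; the window length and test signals are illustrative, not the study's intracranial data.

      import itertools
      import math
      import numpy as np

      def permutation_entropy(x, order=3, normalize=True):
          """Shannon entropy of ordinal patterns of length `order` (Bandt-Pompe)."""
          x = np.asarray(x)
          patterns = [tuple(np.argsort(x[i:i + order]))
                      for i in range(len(x) - order + 1)]
          counts = np.array([patterns.count(p) for p in set(patterns)], dtype=float)
          p = counts / counts.sum()
          h = -np.sum(p * np.log(p))
          return h / math.log(math.factorial(order)) if normalize else h

      rng = np.random.default_rng(1)
      print(permutation_entropy(rng.standard_normal(1000)))          # near 1 for noise
      print(permutation_entropy(np.sin(np.linspace(0, 20, 1000))))   # much lower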

  20. Comparison of transfer entropy methods for financial time series

    NASA Astrophysics Data System (ADS)

    He, Jiayi; Shang, Pengjian

    2017-09-01

    There is a certain relationship between the global financial markets, which creates an interactive network of global finance. Transfer entropy, a measure of information transfer, offers a good way to analyse such relationships. In this paper, we analysed the relationships between 9 stock indices from the U.S., Europe and China (from 1995 to 2015) by using transfer entropy (TE), effective transfer entropy (ETE), Rényi transfer entropy (RTE) and effective Rényi transfer entropy (ERTE). We compared the four methods in terms of their effectiveness in identifying relationships between stock markets. Two kinds of information flow are considered. The lagged-current analysis reveals that the U.S. takes the leading position, whereas for same-date relationships China is the most influential. ERTE provided superior results overall.
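
    A plug-in estimate of transfer entropy for symbolized series with history length 1, TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)], can be sketched as below. The binary toy series stands in for discretized index returns; the "effective" variants in the paper additionally subtract a shuffled-surrogate baseline.

      from collections import Counter
      import math
      import numpy as np

      def transfer_entropy(x, y):
          """Plug-in TE (x -> y) for symbolized series, history length 1 (bits)."""
          triples = Counter(zip(y[1:], y[:-1], x[:-1]))
          n = sum(triples.values())
          pairs_yx = Counter((yp, xp) for _, yp, xp in triples.elements())
          pairs_yy = Counter((yn, yp) for yn, yp, _ in triples.elements())
          singles = Counter(yp for _, yp, _ in triples.elements())
          te = 0.0
          for (yn, yp, xp), c in triples.items():
              te += (c / n) * math.log2((c * singles[yp]) /
                                        (pairs_yx[(yp, xp)] * pairs_yy[(yn, yp)]))
          return te

      rng = np.random.default_rng(2)
      x = rng.integers(0, 2, 5000)
      y = np.roll(x, 1)                 # y copies x with a one-step lag
      print(transfer_entropy(x, y))     # close to 1 bit
      print(transfer_entropy(y, x))     # close to 0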

  1. Distinguishing anticipation from causality: anticipatory bias in the estimation of information flow.

    PubMed

    Hahs, Daniel W; Pethel, Shawn D

    2011-09-16

    We report that transfer entropy estimates obtained from low-resolution and/or small data sets show net information flow away from a purely anticipatory element, whereas transfer entropy calculated using exact distributions shows the flow towards it. This means that for real-world data sets anticipatory elements can appear to be strongly driving the network dynamics even when there is no possibility of such an influence. Furthermore, we show that in the low-resolution limit there is no statistic that can distinguish anticipatory elements from causal ones.

  2. Quasiparticle entropy in superconductor/normal metal/superconductor proximity junctions in the diffusive limit

    NASA Astrophysics Data System (ADS)

    Virtanen, P.; Vischi, F.; Strambini, E.; Carrega, M.; Giazotto, F.

    2017-12-01

    We discuss the quasiparticle entropy and heat capacity of a dirty superconductor/normal metal/superconductor junction. In the case of short junctions, the inverse proximity effect extending in the superconducting banks plays a crucial role in determining the thermodynamic quantities. In this case, commonly used approximations can violate thermodynamic relations between supercurrent and quasiparticle entropy. We provide analytical and numerical results as a function of different geometrical parameters. Quantitative estimates for the heat capacity can be relevant for the design of caloritronic devices or radiation sensor applications.

  3. A head motion estimation algorithm for motion artifact correction in dental CT imaging

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Elsayed Eldib, Mohamed; Hegazy, Mohamed A. A.; Hye Cho, Myung; Cho, Min Hyoung; Lee, Soo Yeol

    2018-03-01

    A small head motion of the patient can compromise image quality in dental CT, in which a slow cone-beam scan is adopted. We introduce a retrospective head motion estimation method by which we can estimate the motion waveform from the projection images without employing any external motion monitoring devices. We compute the cross-correlation between every two successive projection images, which results in a sinusoid-like displacement curve over the projection view when there is no patient motion. The displacement curve deviates from the sinusoid-like form, however, when patient motion occurs. We develop a method to estimate the motion waveform with a single parameter derived from the displacement curve with the aid of image entropy minimization. To verify the motion estimation method, we use a lab-built micro-CT that can emulate major head motions during dental CT scans, such as tilting and nodding, in a controlled way. We find that the estimated motion waveform conforms well to the actual motion waveform. To further verify the motion estimation method, we correct the motion artifacts with the estimated motion waveform. After motion artifact correction, the corrected images look almost identical to the reference images, with structural similarity index values greater than 0.81 in the phantom and rat imaging studies.
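
    The entropy-minimization criterion used to tune the motion parameter can be illustrated with the Shannon entropy of an image histogram, which tends to rise when motion artifacts smear structure. The sketch below is illustrative only; the actual method estimates the waveform from cross-correlations of projection images.

      import numpy as np

      def image_entropy(img, bins=64):
          """Shannon entropy of the grey-level histogram; sharper, artifact-free
          reconstructions tend to have lower entropy, which is why entropy
          minimization can steer a motion-parameter search."""
          hist, _ = np.histogram(img, bins=bins)
          p = hist[hist > 0] / hist.sum()
          return float(-np.sum(p * np.log2(p)))

      rng = np.random.default_rng(3)
      sharp = np.zeros((64, 64)); sharp[16:48, 16:48] = 1.0       # crisp structure
      blurred = sharp + 0.3 * rng.standard_normal(sharp.shape)    # smearing proxy
      print(image_entropy(sharp), image_entropy(blurred))         # entropy rises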

  4. Magnetic field dependence of Griffith phase and magnetocaloric effect in Ca0.85Dy0.15MnO3

    NASA Astrophysics Data System (ADS)

    Nag, Ripan; Sarkar, Bidyut; Pal, Sudipta

    2018-03-01

    The temperature- and magnetic-field-dependent magnetization of the electron-doped polycrystalline sample Ca0.85Dy0.15MnO3 (CDMO), prepared by the solid state reaction method, has been studied. The sample undergoes a ferromagnetic-to-paramagnetic phase transition at about 111 K. From the study of the magnetic properties in terms of Arrott plots, the phase transition is observed to be of second order. The Griffiths phase behavior of the sample is suppressed as the applied magnetic field strength H increases. We have estimated the magnetic entropy change from experimental magnetization and temperature data. For a magnetic field change of 8000 Oe, the maximum magnetic entropy change reaches 1.126 J kg⁻¹ K⁻¹ in this magnetocaloric material.
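
    Magnetic entropy changes of this kind are conventionally derived from magnetization isotherms, presumably here too, via the standard Maxwell relation, which in discretized form reads:

      \Delta S_M(T, H_{\max})
        = \int_0^{H_{\max}} \left(\frac{\partial M}{\partial T}\right)_{H}\,\mathrm{d}H
        \;\approx\; \sum_i \frac{M(T_{j+1}, H_i) - M(T_j, H_i)}{T_{j+1} - T_j}\,\Delta H_i .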

  5. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
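
    The method as described can be sketched directly: the entropic term is obtained from fluctuations of the interaction energy along the trajectory, -TΔS = kT ln⟨exp(βΔE_int)⟩ with ΔE_int = E_int − ⟨E_int⟩. The trace below is synthetic, and the units (kcal/mol) and temperature are illustrative assumptions.

      import numpy as np

      KB = 0.0019872041  # Boltzmann constant in kcal/(mol K)

      def interaction_entropy(e_int, temperature=300.0):
          """-T*dS from interaction-energy fluctuations along an MD trajectory:
          -T*dS = kT * ln < exp(beta * dE) >, with dE = E_int - <E_int>."""
          beta = 1.0 / (KB * temperature)
          de = np.asarray(e_int) - np.mean(e_int)
          return KB * temperature * np.log(np.mean(np.exp(beta * de)))

      rng = np.random.default_rng(4)
      trace = -45.0 + 2.0 * rng.standard_normal(20000)  # synthetic E_int (kcal/mol)
      # For a Gaussian trace this tends to sigma^2/(2kT), here about 3.4 kcal/mol.
      print(interaction_entropy(trace))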

  6. Multi-Scale Low-Entropy Method for Optimizing the Processing Parameters during Automated Fiber Placement.

    PubMed

    Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong

    2017-09-03

    The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method for optimizing the processing parameters in an AFP process, in which multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system can be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, the mechanical properties at the macro-meso scale are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to pass the macroscopic results to the microscopic system as its boundary condition, allowing the different scales to communicate. Furthermore, microscopic characteristics, mainly the micro-scale adsorption energy, diffusion coefficient, and entropy-enthalpy values, are calculated under different processing parameters using the molecular dynamics method. A low-entropy region is then obtained in terms of the interrelation among the entropy-enthalpy values, the microscopic mechanical properties (interface adsorbability and matrix fluidity) and the processing parameters, so as to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively and further improve the mechanical properties of the laminates.

  7. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. This paper proposes a new feature extraction method based on symbol entropy to identify the near infrared spectra of special quality eggs. Normal eggs, free range eggs, selenium-enriched eggs and zinc-enriched eggs were selected as research objects, and their near-infrared diffuse reflectance spectra were measured in the range of 12 000-4 000 cm⁻¹. Raw spectra were symbolically represented with an aggregation approximation algorithm, and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that identification of special quality eggs using near-infrared spectroscopy is feasible and that symbol entropy can serve as a new feature extraction method for near-infrared spectra.
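
    A generic SAX-style sketch of the symbolization-plus-entropy step follows; the paper's aggregation approximation details may differ, and the alphabet size, word length and random stand-in spectrum are illustrative assumptions.

      import numpy as np

      def symbol_entropy(spectrum, n_symbols=4, word=8):
          """SAX-like symbolization: z-score, piecewise-aggregate into `word`
          contiguous segments, quantize into an `n_symbols` alphabet, then
          compute the Shannon entropy of the symbol distribution."""
          z = (spectrum - spectrum.mean()) / spectrum.std()
          paa = z[:len(z) // word * word].reshape(word, -1).mean(axis=1)
          edges = np.quantile(z, np.linspace(0, 1, n_symbols + 1)[1:-1])
          symbols = np.digitize(paa, edges)
          _, counts = np.unique(symbols, return_counts=True)
          p = counts / counts.sum()
          return float(-np.sum(p * np.log2(p)))

      rng = np.random.default_rng(5)
      print(symbol_entropy(rng.standard_normal(2048)))  # stands in for one spectrum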

  8. Entropy and generalized least square methods in assessment of the regional value of streamgages

    USGS Publications Warehouse

    Markus, M.; Vernon, Knapp H.; Tasker, Gary D.

    2003-01-01

    The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least squares regression (GLS) method developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.

  9. Configurational entropy measurements in extremely supercooled liquids that break the glass ceiling.

    PubMed

    Berthier, Ludovic; Charbonneau, Patrick; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki; Yaida, Sho

    2017-10-24

    Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, because of the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally relevant timescales. In this work, we not only close the colossal gap between experiments and simulations but manage to create in silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed in experiments is also found in simulations, even beyond the experimental glass transition. Our numerical results thus extend the observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation. Published under the PNAS license.

  10. Breaking the glass ceiling: Configurational entropy measurements in extremely supercooled liquids

    NASA Astrophysics Data System (ADS)

    Berthier, Ludovic

    Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, due to the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally relevant timescales. In this work we not only close the colossal gap between experiments and simulations but manage to create in silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four independent estimates of their configurational entropy. These measurements consistently indicate that the steep entropy decrease observed in experiments is found in simulations even beyond the experimental glass transition. Our numerical results thus open a new observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation.

  11. MoNbTaV Medium-Entropy Alloy

    DOE PAGES

    Yao, Hongwei; Qiao, Jun-Wei; Gao, Michael; ...

    2016-05-19

    Guided by CALPHAD (Calculation of Phase Diagrams) modeling, the refractory medium-entropy alloy MoNbTaV was synthesized by vacuum arc melting under a high-purity argon atmosphere. A body-centered cubic solid solution phase was experimentally confirmed in the as-cast ingot using X-ray diffraction and scanning electron microscopy. The measured lattice parameter of the alloy (3.208 Å) obeys the rule of mixtures (ROM), but the Vickers microhardness (4.95 GPa) and the yield strength (1.5 GPa) are about 4.5 and 4.6 times those estimated from the ROM, respectively. A simple solid-solution-strengthening model predicts a yield strength of approximately 1.5 GPa. In conclusion, thermodynamic analysis shows that the total entropy of the alloy is more than three times the configurational entropy at room temperature, and the entropy of mixing exhibits a small negative departure from ideal mixing.
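
    For reference, the ideal configurational entropy that such analyses compare against follows from the standard mixing formula (a textbook result, not taken from the paper); for an equiatomic quaternary alloy like MoNbTaV:

      \Delta S_{\text{conf}} = -R \sum_i c_i \ln c_i
        \;\xrightarrow{\;c_i = 1/4\;}\; R \ln 4 \approx 1.39\,R
        \approx 11.5\ \mathrm{J\,mol^{-1}\,K^{-1}} .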

  12. Configurational entropy measurements in extremely supercooled liquids that break the glass ceiling

    PubMed Central

    Berthier, Ludovic; Charbonneau, Patrick; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki

    2017-01-01

    Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, because of the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally relevant timescales. In this work, we not only close the colossal gap between experiments and simulations but manage to create in silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed in experiments is also found in simulations, even beyond the experimental glass transition. Our numerical results thus extend the observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation. PMID:29073056

  13. Configurational entropy measurements in extremely supercooled liquids that break the glass ceiling

    NASA Astrophysics Data System (ADS)

    Berthier, Ludovic; Charbonneau, Patrick; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki; Yaida, Sho

    2017-10-01

    Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, because of the rarefaction of available states, makes it increasingly arduous to reach equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind experimentally relevant timescales. In this work, we not only close the colossal gap between experiments and simulations but manage to create in silico configurations that have no experimental analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed in experiments is also found in simulations, even beyond the experimental glass transition. Our numerical results thus extend the observational window into the physics of glasses and reinforce the relevance of an entropy crisis for understanding their formation.

  14. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyze the model system on which the spatial weight modulus depends. The negative-weight problem of DEM interpolation is investigated by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm implemented in a MATLAB program. The method is compared with Yang Chizhong interpolation and quadratic programming. The comparison shows that the magnitude and scaling of the maximum entropy weights fit the spatial relations, and that the accuracy is superior to the latter two methods.

  15. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
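
    The adjacent-index idea can be illustrated with a toy parity scheme: a bit is carried by the parity of each quantized index, and an index is nudged by at most one unit to set it. This is an illustration in the spirit of the patent's one-unit index uncertainty, not the patented method itself.

      import numpy as np

      def embed_bits(indices, bits):
          """Hide one bit per index in its parity, exploiting the +/-1
          uncertainty of quantized transform indices."""
          out = np.array(indices, dtype=int)
          for k, b in enumerate(bits):
              if out[k] % 2 != b:                      # force parity to carry the bit
                  out[k] += 1 if out[k] <= 0 else -1   # move by exactly one unit
          return out

      def extract_bits(indices, n):
          return [int(v % 2) for v in indices[:n]]

      idx = np.array([12, -7, 3, 0, 25, -2])   # illustrative quantizer outputs
      msg = [1, 0, 1, 1, 0, 0]
      print(extract_bits(embed_bits(idx, msg), len(msg)))  # recovers msg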

  16. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  17. Entropy generation in Gaussian quantum transformations: applying the replica method to continuous-variable quantum information theory

    NASA Astrophysics Data System (ADS)

    Gagatsos, Christos N.; Karanikas, Alexandros I.; Kordas, Georgios; Cerf, Nicolas J.

    2016-02-01

    In spite of their simple description in terms of rotations or symplectic transformations in phase space, quadratic Hamiltonians such as those modelling the most common Gaussian operations on bosonic modes remain poorly understood in terms of entropy production. For instance, determining the quantum entropy generated by a Bogoliubov transformation is notably a hard problem, with generally no known analytical solution, while it is vital to the characterisation of quantum communication via bosonic channels. Here we overcome this difficulty by adapting the replica method, a tool borrowed from statistical physics and quantum field theory. We exhibit a first application of this method to continuous-variable quantum information theory, where it enables accessing entropies in an optical parametric amplifier. As an illustration, we determine the entropy generated by amplifying a binary superposition of the vacuum and a Fock state, which yields a surprisingly simple, yet unknown analytical expression.

  18. SD-MSAEs: Promoter recognition in human genome based on deep feature extraction.

    PubMed

    Xu, Wenxuan; Zhang, Li; Lu, Yaping

    2016-06-01

    The prediction and recognition of promoters in the human genome play an important role in DNA sequence analysis. Entropy, in the Shannon sense, is a versatile information-theoretic tool for detailed bioinformatic analysis. Relative entropy estimator methods based on statistical divergence (SD) are used to extract meaningful features that distinguish different regions of DNA sequences. In this paper, we choose context features and use a set of SD methods to select the most effective n-mers for distinguishing promoter regions from other DNA regions in the human genome. Extracted from the total possible combinations of n-mers, four sparse distributions are obtained from promoter and non-promoter training samples. The informative n-mers are selected by optimizing the differentiating extents of these distributions. In particular, we combine the advantages of statistical divergence and multiple sparse auto-encoders (MSAEs) in deep learning to extract deep features for promoter recognition. We then apply multiple SVMs and a decision model to construct a human promoter recognition method called SD-MSAEs. The framework is flexible in that it can integrate new feature extraction or new classification models freely. Experimental results show that our method has high sensitivity and specificity. Copyright © 2016 Elsevier Inc. All rights reserved.
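
    The statistical-divergence step can be sketched generically: score each k-mer by its contribution to the relative entropy between promoter and background k-mer distributions, and keep the most divergent ones. The toy sequences below are invented; the actual pipeline couples such scores with sparse auto-encoders and SVMs.

      from itertools import product
      import numpy as np

      def kmer_freqs(seqs, k=3, alphabet="ACGT"):
          """Smoothed frequency vector of all k-mers over a set of sequences."""
          kmers = ["".join(p) for p in product(alphabet, repeat=k)]
          counts = dict.fromkeys(kmers, 1)          # add-one smoothing
          for s in seqs:
              for i in range(len(s) - k + 1):
                  if s[i:i + k] in counts:
                      counts[s[i:i + k]] += 1
          v = np.array([counts[m] for m in kmers], dtype=float)
          return kmers, v / v.sum()

      def kl_scores(p, q):
          """Per-k-mer contribution to KL(p||q); large terms mark k-mers
          over-represented in p relative to q."""
          return p * np.log2(p / q)

      promoters = ["TATAAAGGC", "GGTATAAAT"]        # toy stand-ins, not real data
      background = ["GCGCGCAGT", "ATTGCCGCA"]
      kmers, p = kmer_freqs(promoters)
      _, q = kmer_freqs(background)
      scores = kl_scores(p, q)
      print([kmers[i] for i in np.argsort(scores)[-3:]])   # most divergent 3-mers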

  19. Finite-time braiding exponents

    NASA Astrophysics Data System (ADS)

    Budišić, Marko; Thiffeault, Jean-Luc

    2015-08-01

    Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories is increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.

  20. Finite-time braiding exponents.

    PubMed

    Budišić, Marko; Thiffeault, Jean-Luc

    2015-08-01

    Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories is increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.

  1. Shearlet-based measures of entropy and complexity for two-dimensional patterns

    NASA Astrophysics Data System (ADS)

    Brazhe, Alexey

    2018-06-01

    New spatial entropy and complexity measures for two-dimensional patterns are proposed. The approach is based on the notion of disequilibrium and is built on statistics of directional multiscale coefficients of the fast finite shearlet transform. Shannon entropy and Jensen-Shannon divergence measures are employed. Both local and global spatial complexity and entropy estimates can be obtained, thus allowing for spatial mapping of complexity in inhomogeneous patterns. The algorithm is validated in numerical experiments with a gradually decaying periodic pattern and Ising surfaces near critical state. It is concluded that the proposed algorithm can be instrumental in describing a wide range of two-dimensional imaging data, textures, or surfaces, where an understanding of the level of order or randomness is desired.

  2. Calculation of heat transfer on shuttle type configurations including the effects of variable entropy at boundary layer edge

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1972-01-01

    A relatively simple method is presented for including the effect of variable entropy at the boundary-layer edge in a heat transfer method developed previously. For each inviscid surface streamline an approximate shockwave shape is calculated using a modified form of Maslen's method for inviscid axisymmetric flows. The entropy for the streamline at the edge of the boundary layer is determined by equating the mass flux through the shock wave to that inside the boundary layer. Approximations used in this technique allow the heating rates along each inviscid surface streamline to be calculated independent of the other streamlines. The shock standoff distances computed by the present method are found to compare well with those computed by Maslen's asymmetric method. Heating rates are presented for blunted circular and elliptical cones and a typical space shuttle orbiter at angles of attack. Variable entropy effects are found to increase heating rates downstream of the nose significantly higher than those computed using normal-shock entropy, and turbulent heating rates increased more than laminar rates. Effects of Reynolds number and angles of attack are also shown.

  3. Formulating the shear stress distribution in circular open channels based on the Renyi entropy

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein

    2018-01-01

    The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance enhanced with increasing flow depth (mean relative error (RE) of 0.0414) and only deteriorated slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depth. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise and predicted the shear stress distribution in a circular channel with RE of 0.0480 and in a circular channel with a flat bed with RE of 0.0488.

  4. Multiscale permutation entropy analysis of laser beam wandering in isotropic turbulence.

    PubMed

    Olivares, Felipe; Zunino, Luciano; Gulich, Damián; Pérez, Darío G; Rosso, Osvaldo A

    2017-10-01

    We have experimentally quantified the temporal structural diversity from the coordinate fluctuations of a laser beam propagating through isotropic optical turbulence. The main focus here is on the characterization of the long-range correlations in the wandering of a thin Gaussian laser beam over a screen after propagating through a turbulent medium. To fulfill this goal, a laboratory-controlled experiment was conducted in which coordinate fluctuations of the laser beam were recorded at a sufficiently high sampling rate for a wide range of turbulent conditions. Horizontal and vertical displacements of the laser beam centroid were subsequently analyzed by implementing the symbolic technique based on ordinal patterns to estimate the well-known permutation entropy. We show that the permutation entropy estimations at multiple time scales evidence an interplay between different dynamical behaviors. More specifically, a crossover between two different scaling regimes is observed. We confirm a transition from an integrated stochastic process contaminated with electronic noise to a fractional Brownian motion with a Hurst exponent H=5/6 as the sampling time increases. Besides, we are able to quantify, from the estimated entropy, the amount of electronic noise as a function of the turbulence strength. We have also demonstrated that these experimental observations are in very good agreement with numerical simulations of noisy fractional Brownian motions with a well-defined crossover between two different scaling regimes.

  5. Scaling estimates of vegetation structure in Amazonian tropical forests using multi-angle MODIS observations

    PubMed Central

    de Moura, Yhasmin Mendes; Hilker, Thomas; Goncalves, Fabio Guimarães; Galvão, Lênio Soares; dos Santos, João Roberto; Lyapustin, Alexei; Maeda, Eduardo Eiji; de Jesus Silva, Camila Valéria

    2018-01-01

    Detailed knowledge of vegetation structure is required for accurate modelling of terrestrial ecosystems, but direct measurements of the three-dimensional distribution of canopy elements, for instance from LiDAR, are not widely available. We investigate the potential for modelling vegetation roughness, a key parameter for climatological models, from directional scattering of visible and near-infrared (NIR) reflectance acquired from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). We compare our estimates across different tropical forest types to independent measures obtained from: (1) airborne laser scanning (ALS), (2) the spaceborne Geoscience Laser Altimeter System (GLAS)/ICESat, and (3) the spaceborne SeaWinds/QSCAT. Our results showed a linear correlation between MODIS-derived anisotropy and ALS-derived entropy (r² = 0.54, RMSE = 0.11), even in high biomass regions. Significant relationships were also obtained between MODIS-derived anisotropy and GLAS-derived entropy (0.52 ≤ r² ≤ 0.61; p < 0.05), with similar slopes and offsets found throughout the season, and RMSE between 0.26 and 0.30 (units of entropy). The relationships between the MODIS-derived anisotropy and backscattering measurements (σ⁰) from SeaWinds/QuikSCAT presented an r² of 0.59 and an RMSE of 0.11. We conclude that multi-angular MODIS observations are suitable for extrapolating measures of canopy entropy across different forest types, providing additional estimates of vegetation structure in the Amazon. PMID:29618964

  6. Assessing the strength of cardiac and sympathetic baroreflex controls via transfer entropy during orthostatic challenge

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Marchi, Andrea; Bari, Vlasta; De Maria, Beatrice; Esler, Murray; Lambert, Elisabeth; Baumert, Mathias

    2017-05-01

    The study assesses the strength of the causal relation along baroreflex (BR) in humans during an incremental postural challenge soliciting the BR. Both cardiac BR (cBR) and sympathetic BR (sBR) were characterized via BR sequence approaches from spontaneous fluctuations of heart period (HP), systolic arterial pressure (SAP), diastolic arterial pressure (DAP) and muscle sympathetic nerve activity (MSNA). A model-based transfer entropy method was applied to quantify the strength of the coupling from SAP to HP and from DAP to MSNA. The confounding influences of respiration were accounted for. Twelve young healthy subjects (20-36 years, nine females) were sequentially tilted at 0°, 20°, 30° and 40°. We found that (i) the strength of the causal relation along the cBR increases with tilt table inclination, while that along the sBR is unrelated to it; (ii) the strength of the causal coupling is unrelated to the gain of the relation; (iii) transfer entropy indexes are significantly and positively associated with simplified causality indexes derived from BR sequence analysis. The study proves that causality indexes are complementary to traditional characterization of the BR and suggests that simple markers derived from BR sequence analysis might be fruitfully exploited to estimate causality along the BR. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  7. Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing

    NASA Astrophysics Data System (ADS)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2017-04-01

    The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Such volumes, containing a number of different hydrometeor types that contribute significantly to the polarimetric variables, are likely to occur in the vicinity of the melting layer and, mainly, at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The classification approach [1] labels radar sampling volumes using as a criterion the Euclidean distance with respect to five-dimensional centroids depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a k-medoids clustering technique, applied to a selected representative set of radar observations and coupled with statistical testing that introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1] in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to hydrometeor type i (i = 1, ..., 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing hydrometeor type i. The parametrization of the d -> p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced entropy values: low for pure volumes, and high for different possible combinations of mixed hydrometeors. The parametrized entropy is then applied to real polarimetric C- and X-band radar datasets, where we demonstrate the potential of linear de-mixing using a simplex formed by a set of pre-defined centroids in the five-dimensional space. As the main outcome, the proposed approach provides plausible proportions of the different hydrometeors contained in a given radar sampling volume. [1] Besic, N., Figueras i Ventura, J., Grazioli, J., Gabella, M., Germann, U., and Berne, A.: Hydrometeor classification through statistical clustering of polarimetric radar measurements: a semi-supervised approach, Atmos. Meas. Tech., 9, 4425-4445, doi:10.5194/amt-9-4425-2016, 2016.
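
    One plausible form of the d -> p transform and the resulting entropy is sketched below; the paper parametrizes its own transform, so the softmax choice, scale and distances here are assumptions for illustration.

      import numpy as np

      def class_probs_from_distances(d, scale=1.0):
          """Softmax over negative distances: closer centroid -> higher probability.
          One plausible d -> p transform; the paper fits its own parametrization."""
          w = np.exp(-np.asarray(d) / scale)
          return w / w.sum()

      def shannon_entropy(p):
          nz = p[p > 0]
          return float(-np.sum(nz * np.log(nz)) / np.log(len(p)))  # normalized to [0,1]

      pure = class_probs_from_distances([0.2, 3.0, 3.5, 4.0, 5.0])   # one clear class
      mixed = class_probs_from_distances([1.0, 1.1, 1.2, 3.0, 3.0])  # ambiguous volume
      print(shannon_entropy(pure), shannon_entropy(mixed))  # low vs high uncertainty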

  8. Double symbolic joint entropy in nonlinear dynamic complexity analysis

    NASA Astrophysics Data System (ADS)

    Yao, Wenpo; Wang, Jun

    2017-07-01

    Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches, which we combine via joint entropy for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N. symbolic entropy and base-scale entropy, and two local dynamic ones, namely the symbolizations of permutation and differential entropy, constitute four double symbolic joint entropies that achieve accurate complexity detection in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of the healthy young have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results prove that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.

  9. The cancer Warburg effect may be a testable example of the minimum entropy production rate principle

    NASA Astrophysics Data System (ADS)

    Marín, Dolores; Sabater, Bartolomé

    2017-04-01

    Cancer cells consume more glucose by glycolytic fermentation to lactate than by respiration, a characteristic known as the Warburg effect. In contrast with the 36 moles of ATP produced by respiration, fermentation produces two moles of ATP per mole of glucose consumed, which poses a puzzle with regard to the function of the Warburg effect. The production of free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) per mole varies linearly with the fraction (x) of glucose consumed by fermentation, which is frequently estimated at around 0.9. Hence, calculation shows that, with respect to pure respiration, the predominant fermentative metabolism decreases by around 10% the production of entropy per mole of glucose consumed in cancer cells. We hypothesize that increased fermentation could allow cancer cells to accomplish the Prigogine theorem of the trend to minimize the rate of production of entropy. According to the theorem, open cellular systems near the steady state could evolve to minimize the rates of entropy production that may be reached by modified replicating cells producing entropy at a low rate. Remarkably, at CO2 concentrations above 930 ppm, glucose respiration produces less entropy than fermentation, which suggests experimental tests to validate the hypothesis of minimization of the rate of entropy production through the Warburg effect.

  10. The cancer Warburg effect may be a testable example of the minimum entropy production rate principle.

    PubMed

    Marín, Dolores; Sabater, Bartolomé

    2017-04-28

    Cancer cells consume more glucose by glycolytic fermentation to lactate than by respiration, a characteristic known as the Warburg effect. In contrast with the 36 moles of ATP produced by respiration, fermentation produces two moles of ATP per mole of glucose consumed, which poses a puzzle with regard to the function of the Warburg effect. The production of free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) per mole varies linearly with the fraction (x) of glucose consumed by fermentation, which is frequently estimated at around 0.9. Hence, calculation shows that, with respect to pure respiration, the predominant fermentative metabolism decreases by around 10% the production of entropy per mole of glucose consumed in cancer cells. We hypothesize that increased fermentation could allow cancer cells to accomplish the Prigogine theorem of the trend to minimize the rate of production of entropy. According to the theorem, open cellular systems near the steady state could evolve to minimize the rates of entropy production that may be reached by modified replicating cells producing entropy at a low rate. Remarkably, at CO2 concentrations above 930 ppm, glucose respiration produces less entropy than fermentation, which suggests experimental tests to validate the hypothesis of minimization of the rate of entropy production through the Warburg effect.

  11. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
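
    The entropy-constrained selection rule at the heart of such coders picks the codeword minimizing a Lagrangian of distortion plus rate. A minimal sketch follows; in the conditional variant described above, the probability model would be conditioned on neighboring indices. The codebook, prior and λ below are illustrative.

      import numpy as np

      def ecvq_encode(x, codebook, probs, lam=0.1):
          """Entropy-constrained VQ: pick the index minimizing
          distortion + lambda * rate, with rate = -log2(codeword probability)."""
          dist = np.sum((codebook - x) ** 2, axis=1)
          rate = -np.log2(probs)
          return int(np.argmin(dist + lam * rate))

      codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
      probs = np.array([0.7, 0.2, 0.1])   # illustrative prior; conditional entropy
      # coding would swap in probabilities conditioned on neighboring indices.
      print(ecvq_encode(np.array([0.9, 0.8]), codebook, probs, lam=0.5))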

  12. The MEM of spectral analysis applied to L.O.D.

    NASA Astrophysics Data System (ADS)

    Fernandez, L. I.; Arias, E. F.

    The maximum entropy method (MEM) has been widely applied in polar motion studies, taking advantage of its performance in the management of complex time series. The authors used the MEM algorithm to estimate the cross-spectral function in order to compare interannual length-of-day (LOD) time series with Southern Oscillation Index (SOI) and Sea Surface Temperature (SST) series, which are closely related to El Niño-Southern Oscillation (ENSO) events.

  13. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  14. Third law of thermodynamics as a key test of generalized entropies.

    PubMed

    Bento, E P; Viswanathan, G M; da Luz, M G E; Silva, R

    2015-02-01

    The laws of thermodynamics constrain the formulation of statistical mechanics at the microscopic level. The third law of thermodynamics states that the entropy must vanish at absolute zero temperature for systems with nondegenerate ground states in equilibrium. Conversely, the entropy can vanish only at absolute zero temperature. Here we ask whether or not generalized entropies satisfy this fundamental property. We propose a direct analytical procedure to test if a generalized entropy satisfies the third law, assuming only very general assumptions for the entropy S and energy U of an arbitrary N-level classical system. Mathematically, the method relies on exact calculation of β = dS/dU in terms of the microstate probabilities p_i. To illustrate this approach, we present exact results for the two best known generalizations of statistical mechanics. Specifically, we study the Kaniadakis entropy S_κ, which is additive, and the Tsallis entropy S_q, which is nonadditive. We show that the Kaniadakis entropy correctly satisfies the third law only for -1 < κ < +1, thereby shedding light on why κ is conventionally restricted to this interval. Surprisingly, however, the Tsallis entropy violates the third law for q < 1. Finally, we give a concrete example of the power of our proposed method by applying it to a paradigmatic system: the one-dimensional ferromagnetic Ising model with nearest-neighbor interactions.

  15. Noise and complexity in human postural control: interpreting the different estimations of entropy.

    PubMed

    Rhea, Christopher K; Silver, Tobin A; Hong, S Lee; Ryu, Joong Hyun; Studenka, Breanna E; Hughes, Charmayne M L; Haddad, Jeffrey M

    2011-03-17

    Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as to synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. The complexity of synthetic signals with known properties and of standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise differ. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses.
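
    As a concrete reference for one of the compared measures, a minimal NumPy implementation of Sample Entropy follows. The default tolerance r = 0.2 times the series standard deviation is a common convention rather than a setting taken from this study, and published implementations differ slightly in how templates are counted.

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            """SampEn(m, r): -log of the conditional probability that templates
            matching within r at length m also match at length m + 1."""
            x = np.asarray(x, dtype=float)
            if r is None:
                r = 0.2 * np.std(x)
            n = len(x)

            def match_count(length):
                # n - m overlapping templates, same count for both lengths
                t = np.array([x[i:i + length] for i in range(n - m)])
                c = 0
                for i in range(len(t) - 1):
                    # Chebyshev distance from template i to all later templates
                    d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                    c += np.sum(d <= r)
                return c

            b, a = match_count(m), match_count(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf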

  16. Estimating Thermal Inertia with a Maximum Entropy Boundary Condition

    NASA Astrophysics Data System (ADS)

    Nearing, G.; Moran, M. S.; Scott, R.; Ponce-Campos, G.

    2012-04-01

    Thermal inertia, P [Jm-2s-1/2K-1], is a physical property of the land surface that determines its resistance to temperature change under seasonal or diurnal heating. It is a function of the volumetric heat capacity, c [Jm-3K-1], and thermal conductivity, k [Wm-1K-1], of the soil near the surface: P = √(ck). Thermal inertia of soil varies with moisture content due to the difference between the thermal properties of water and air, and a number of studies have demonstrated that it is feasible to estimate soil moisture given thermal inertia (e.g. Lu et al., 2009; Murray and Verhoef, 2007). We take the common approach to estimating thermal inertia from measurements of surface temperature by modeling the Earth's surface as a 1-dimensional homogeneous diffusive half-space. In this case, surface temperature is a function of the ground heat flux (G) boundary condition and thermal inertia, and a daily value of P was estimated by matching measured and modeled diurnal surface temperature fluctuations. The difficulty lies in measuring G; we demonstrate that the new maximum entropy production (MEP) method for partitioning net radiation into surface energy fluxes (Wang and Bras, 2011) provides a suitable boundary condition for estimating P. Adding the diffusion representation of heat transfer in the soil reduces the number of free parameters in the MEP model from two to one, and we provide a sensitivity analysis which suggests that, for the purpose of estimating P, it is preferable to parameterize the coupled MEP-diffusion model by the ratio of the thermal inertia of the soil to the effective thermal inertia of convective heat transfer to the atmosphere. We used this technique to estimate thermal inertia at two semiarid, non-vegetated locations in the Walnut Gulch Experimental Watershed in southeast AZ, USA, compared these estimates to estimates of P made using the Xue and Cracknell (1995) solution for a linearized ground heat flux boundary condition, and found that the MEP-diffusion model produced superior thermal inertia estimates. The MEP-diffusion estimates also agreed well with P estimates made using a boundary condition measured with buried flux plates. We further demonstrated the new method using diurnal surface temperature fluctuations estimated from day/night MODIS image pairs and, excluding instances where the soil was extremely dry, found a strong relationship between estimated thermal inertia and measured 5 cm soil moisture. Lu, S., Ju, Z.Q., Ren, T.S. & Horton, R. (2009). A general approach to estimate soil water content from thermal inertia. Agricultural and Forest Meteorology, 149, 1693-1698. Murray, T. & Verhoef, A. (2007). Moving towards a more mechanistic approach in the determination of soil heat flux from remote measurements - I. A universal approach to calculate thermal inertia. Agricultural and Forest Meteorology, 147, 80-87. Wang, J.F. & Bras, R.L. (2011). A model of evapotranspiration based on the theory of maximum entropy production. Water Resources Research, 47. Xue, Y. & Cracknell, A.P. (1995). Advanced thermal inertia modeling. International Journal of Remote Sensing, 16, 431-446.
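
    The estimation idea can be illustrated with the harmonic solution of the half-space problem: for a sinusoidal ground heat flux of angular frequency ω, the surface temperature harmonic has amplitude |ΔG|/(P√ω), so P follows from the diurnal harmonics of G and T. The sketch below assumes uniformly sampled records spanning a whole number of days; it is a simplification for illustration, not the authors' MEP-diffusion fitting procedure.

        import numpy as np

        def diurnal_amplitude(series, samples_per_day):
            """Amplitude of the one-cycle-per-day Fourier component.
            Assumes the record covers an integer number of days."""
            series = np.asarray(series, dtype=float)
            n = len(series)
            coeff = np.fft.rfft(series)
            k = n // samples_per_day      # bin index of the diurnal frequency
            return 2.0 * np.abs(coeff[k]) / n

        def thermal_inertia(G, T_surf, samples_per_day=48):
            """Crude P [J m^-2 s^-1/2 K^-1] from diurnal harmonics of G and T."""
            omega = 2.0 * np.pi / 86400.0     # diurnal angular frequency [rad/s]
            dG = diurnal_amplitude(G, samples_per_day)
            dT = diurnal_amplitude(T_surf, samples_per_day)
            return dG / (dT * np.sqrt(omega))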

  17. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    PubMed

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote of the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained the confidence intervals as a particularization of the (h,phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example to demonstrate the possibilities of this method, and only for illustrative purposes, data from a study of the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].

  18. Sample entropy analysis for estimating the depth of anaesthesia through the human EEG signal at different levels of unconsciousness during surgeries

    PubMed Central

    Fan, Shou-Zen; Abbod, Maysam F.

    2018-01-01

    Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised in order to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was used for developing learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels. Subsequently, analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial value of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated a significant difference from the nearest levels. Furthermore, the Random Forest output was largely linear in relation to BIS, giving better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries. PMID:29844970

  19. Sample entropy analysis of cervical neoplasia gene-expression signatures

    PubMed Central

    Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R

    2009-01-01

    Background We introduce Approximate Entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the Approximate Sample Entropy approach in discerning alterations in complexity in a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110

  20. Logarithmic black hole entropy corrections and holographic Rényi entropy

    NASA Astrophysics Data System (ADS)

    Mahapatra, Subhash

    2018-01-01

    The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.

  1. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. (Figures: contours of the expected information gain, with the optimal observing location at the maximum; posterior marginal probability densities of the unknown parameters, comparing the designed location against seven randomly chosen locations, with the true values marked by vertical lines, showing that the unknown parameters are estimated better with the designed location.)
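
    The design criterion admits a compact Monte Carlo sketch: with additive Gaussian measurement noise, the expected information gain at a candidate location equals the entropy of the simulated measurement distribution minus the noise entropy. Everything below (the toy forward model, prior and noise level) is a hypothetical stand-in for the actual transport surrogate, and the histogram entropy estimate is only adequate for scalar measurements.

        import numpy as np

        def entropy_hist(samples, bins=64):
            """Histogram estimate of differential entropy (nats)."""
            counts, edges = np.histogram(samples, bins=bins)
            p = counts / counts.sum()
            p = p[p > 0]
            return -np.sum(p * np.log(p)) + np.log(edges[1] - edges[0])

        def expected_information_gain(forward, prior_samples, x, sigma, rng):
            """EIG at design x for y = forward(theta, x) + N(0, sigma^2):
            I(theta; y) = H(y) - H(noise), since H(y | theta) = H(noise)."""
            y = np.array([forward(th, x) for th in prior_samples])
            y += rng.normal(0.0, sigma, size=len(y))
            h_noise = 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)
            return entropy_hist(y) - h_noise

        rng = np.random.default_rng(1)
        prior = rng.uniform(0.0, 1.0, size=(5000, 3))   # release time/strength/location
        forward = lambda th, x: th[1] * np.exp(-(x - th[2]) ** 2)  # toy surrogate
        candidates = np.linspace(0.0, 1.0, 21)
        best = max(candidates,
                   key=lambda x: expected_information_gain(forward, prior, x, 0.05, rng))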

  2. Device-Independent Tests of Entropy

    NASA Astrophysics Data System (ADS)

    Chaves, Rafael; Brask, Jonatan Bohr; Brunner, Nicolas

    2015-09-01

    We show that the entropy of a message can be tested in a device-independent way. Specifically, we consider a prepare-and-measure scenario with classical or quantum communication, and develop two different methods for placing lower bounds on the communication entropy, given observable data. The first method is based on the framework of causal inference networks. The second technique, based on convex optimization, shows that quantum communication provides an advantage over classical communication, in the sense of requiring a lower entropy to reproduce given data. These ideas may serve as a basis for novel applications in device-independent quantum information processing.

  3. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. A random source with high entropy tends to produce a uniform distribution of random values. Random number generators are among the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of random number generation in constrained devices, this work proposes a solution based on the least-significant-bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for the temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
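
    A minimal sketch of the least-significant-bits concatenation idea follows. The raw readings are assumed to be integer ADC values already collected from the on-board sensors, and harvesting two LSBs per reading is an illustrative choice; the statistical fine tuning mentioned above is not reproduced here.

        import math
        from collections import Counter

        def harvest_bits(readings, lsb_count=2):
            """Keep only the lsb_count least-significant bits of each reading."""
            return [(r >> i) & 1 for r in readings for i in range(lsb_count)]

        def bits_to_bytes(bits):
            """Pack the harvested bit stream into bytes (little-endian bits)."""
            return bytes(sum(b << i for i, b in enumerate(bits[k:k + 8]))
                         for k in range(0, len(bits) - 7, 8))

        def entropy_bits_per_byte(pool):
            """Shannon entropy of the byte pool, in bits per byte."""
            counts = Counter(pool)
            n = len(pool)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        # readings = [...]  # raw integer samples from temperature/humidity/light
        # pool = bits_to_bytes(harvest_bits(readings))
        # print(entropy_bits_per_byte(pool))   # ideally close to 8 bits/byte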

  4. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. A random source with high entropy tends to produce a uniform distribution of random values. Random number generators are among the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of random number generation in constrained devices, this work proposes a solution based on the least-significant-bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for the temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.

  5. On cell entropy inequality for discontinuous Galerkin methods

    NASA Technical Reports Server (NTRS)

    Jiang, Guangshan; Shu, Chi-Wang

    1993-01-01

    We prove a cell entropy inequality for a class of high order discontinuous Galerkin finite element methods approximating conservation laws, which implies convergence for the one dimensional scalar convex case.

  6. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  7. Nonadditive entropy maximization is inconsistent with Bayesian updating

    NASA Astrophysics Data System (ADS)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  8. Identification of Watershed-scale Critical Source Areas Using Bayesian Maximum Entropy Spatiotemporal Analysis

    NASA Astrophysics Data System (ADS)

    Roostaee, M.; Deng, Z.

    2017-12-01

    The states' environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairments. Spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, due to limitations in monetary resources and the large number of waterbodies, available monitoring stations are typically sparse, with intermittent periods of data collection. Hence, the scarcity of water quality data is a major obstacle in addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators, such as Dissolved Oxygen (DO) concentration, for the Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been included in the 303(d) list for DO impairment since 2014 in Louisiana Water Quality Inventory Reports due to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis methods by incorporating the space/time distribution and the uncertainty in the available measured soft and hard data. This model is used to estimate DO concentrations at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers of the watershed were then used to determine the practices/changes that led to low DO concentrations in the identified CSAs. Preliminary results revealed that the cultivation of corn and soybean, as well as urban runoff, are the main sources contributing to low dissolved oxygen in the Turkey Creek Watershed.

  9. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and the variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and the objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby transforming the optimization problem into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  10. Robust and fast pedestrian detection method for far-infrared automotive driving assistance systems

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Zhuang, Jiajun; Ma, Jun

    2013-09-01

    Although considerable effort has been devoted to night-time pedestrian detection for automotive driving assistance systems in recent years, robust, real-time pedestrian detection is by no means a trivial task and is still an open problem, owing to moving cameras, uncontrolled outdoor environments, the wide range of possible pedestrian appearances and the stringent performance criteria for automotive applications. This paper presents an alternative night-time pedestrian detection method using a monocular far-infrared (FIR) camera, which comprises two modules (regions of interest (ROIs) generation and pedestrian recognition) in a cascade fashion. Pixel-gradient oriented vertical projection is first proposed to estimate the vertical image stripes that might contain pedestrians, and then local thresholding image segmentation is adopted to generate ROIs more accurately within the estimated vertical stripes. A novel descriptor called PEWHOG (pyramid entropy weighted histograms of oriented gradients) is proposed to represent FIR pedestrians in the recognition module. Specifically, PEWHOG captures both the local object shape, described by the entropy weighted distribution of oriented gradient histograms, and its pyramid spatial layout. PEWHOG is then fed to a three-branch structured classifier using support vector machines (SVM) with a histogram intersection kernel (HIK). An off-line training procedure combining both bootstrapping and an early-stopping strategy is introduced to generate a more robust classifier by exploiting hard negative samples iteratively. Finally, multi-frame validation is utilized to suppress transient false positives. Experimental results on FIR video sequences from various scenarios demonstrate that the presented method is effective and promising.

  11. Hyperbolicity of the Nonlinear Models of Maxwell's Equations

    NASA Astrophysics Data System (ADS)

    Serre, Denis

    We consider the class of nonlinear models of electromagnetism described by Coleman & Dill [7]. A model is completely determined by its energy density W(B,D). Viewing the electromagnetic field (B,D) as a 3×2 matrix, we show that polyconvexity of W implies the local well-posedness of the Cauchy problem within smooth functions of class H^s with s > 1 + d/2. The method follows that designed by Dafermos in his book [9] in the context of nonlinear elasticity. We use the fact that B×D is a (vectorial, non-convex) entropy, and we enlarge the system from 6 to 9 equations. The resulting system admits an entropy (actually the energy) that is convex. Since the energy conservation law does not derive from the system of conservation laws itself (Faraday's and Ampère's laws), but also needs the compatibility relations div B = div D = 0 (the latter may be relaxed in order to take into account electric charges), the energy density is not an entropy in the classical sense. Thus the system cannot be symmetrized, strictly speaking. However, we show that the structure is close enough to symmetrizability that the standard estimates still hold true.

  12. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  13. Information-Based Analysis of Data Assimilation (Invited)

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.

    2013-12-01

    Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., the bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to the total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
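
    The central bound is easy to estimate in an OSSE, where true states and synthetic observations are both available. The sketch below computes a histogram estimate of the mutual information between paired state and observation samples, in the spirit of the Shannon-type entropies of discretized distributions described above; the bin count is a tuning choice, and histogram MI estimators are biased for small samples.

        import numpy as np

        def mutual_information(x, y, bins=32):
            """Histogram estimate of I(X;Y) in nats from paired samples."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)   # marginal of X
            py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

        # I(state; obs) bounds the entropy reduction any filter can achieve;
        # the shortfall of an actual filter below this bound measures the
        # inefficiency introduced by its distributional assumptions.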

  14. Degradation trend estimation of slewing bearing based on LSSVM model

    NASA Astrophysics Data System (ADS)

    Lu, Chao; Chen, Jie; Hong, Rongjing; Feng, Yang; Li, Yuanyuan

    2016-08-01

    A novel prediction method is proposed based on least squares support vector machine (LSSVM) to estimate the slewing bearing's degradation trend with small sample data. This method chooses the vibration signal which contains rich state information as the object of the study. Principal component analysis (PCA) was applied to fuse multi-feature vectors which could reflect the health state of slewing bearing, such as root mean square, kurtosis, wavelet energy entropy, and intrinsic mode function (IMF) energy. The degradation indicator fused by PCA can reflect the degradation more comprehensively and effectively. Then the degradation trend of slewing bearing was predicted by using the LSSVM model optimized by particle swarm optimization (PSO). The proposed method was demonstrated to be more accurate and effective by the whole life experiment of slewing bearing. Therefore, it can be applied in engineering practice.

  15. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
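
    The Parzen-window step can be made concrete with the quadratic (Renyi) form of the error entropy that is common in information-theoretic learning; the paper's exact entropy criterion and recursive update may differ, and the kernel width below is a free parameter, so this is an illustrative sketch only.

        import numpy as np

        def quadratic_error_entropy(errors, sigma=0.1):
            """Parzen estimate of the quadratic Renyi entropy of the errors.
            V is the 'information potential', the mean pairwise Gaussian
            kernel value; minimizing H2 = -log V concentrates the errors."""
            e = np.asarray(errors, dtype=float)
            diff = e[:, None] - e[None, :]          # all pairwise differences
            s2 = 2.0 * sigma ** 2                   # variance of the pair kernel
            v = np.mean(np.exp(-diff ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))
            return -np.log(v)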

  16. Identification of the release history of a groundwater contaminant in non-uniform flow field through the minimum relative entropy method

    NASA Astrophysics Data System (ADS)

    Cupola, F.; Tanda, M. G.; Zanini, A.

    2014-12-01

    Interest in approaches that allow the estimation of pollutant source release in groundwater has increased markedly over the last decades. This is due to the large number of groundwater reclamation procedures that have been carried out: remediation is expensive, and the costs can more easily be shared among the different actors if the release history is known. Moreover, a reliable release history can be a useful tool for predicting the plume evolution and for minimizing the harmful effects of the contamination. In this framework, Woodbury and Ulrych (1993, 1996) adopted and improved the minimum relative entropy (MRE) method to solve linear inverse problems for the recovery of the pollutant release history in an aquifer. In this work, the MRE method has been improved to detect the source release history in a 2-D aquifer characterized by a non-uniform flow field. The approach has been tested on two cases: a 2-D homogeneous conductivity field and a strongly heterogeneous one (the hydraulic conductivity varies over three orders of magnitude). In the latter case the transfer function could not be described with an analytical formulation; thus, the transfer functions were estimated by means of the method developed by Butera et al. (2006). In order to demonstrate its scope, the method was applied with two different datasets: observations collected at the same time at 20 different monitoring points, and observations collected at 2 monitoring points at different times (15-25 monitoring points). The observations were considered to be affected by random error. These study cases were carried out considering a boxcar and a Gaussian function as the expected value of the prior distribution of the release history. The agreement between the true and the estimated release history has been evaluated through the calculation of the normalized root mean square error (nRMSE): this has shown the ability of the method to recover the release history even in the most severe cases. Finally, a forward simulation has been carried out using the estimated release history in order to compare the true data with the estimated data: the best agreement has been obtained in the homogeneous case, although the nRMSE is acceptable in the heterogeneous case as well.

  17. Complexity Measures in Magnetoencephalography: Measuring "Disorder" in Schizophrenia

    PubMed Central

    Brookes, Matthew J.; Hall, Emma L.; Robson, Siân E.; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B.; Liddle, Peter F.; Robinson, Stephen E.; Morris, Peter G.

    2015-01-01

    This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of ‘disorder’ in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices). PMID:25886553

  18. Dynamical noise filter and conditional entropy analysis in chaos synchronization.

    PubMed

    Wang, Jiao; Lai, C-H

    2006-06-01

    It is shown that, in a chaotic synchronization system whose driving signal is exposed to channel noise, the estimation of the drive system states can be greatly improved by applying the dynamical noise filtering to the response system states. If the noise is bounded in a certain range, the estimation errors, i.e., the difference between the filtered responding states and the driving states, can be made arbitrarily small. This property can be used in designing an alternative digital communication scheme. An analysis based on the conditional entropy justifies the application of dynamical noise filtering in generating quality synchronization.

  19. Energy and maximum norm estimates for nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Olsson, Pelle; Oliger, Joseph

    1994-01-01

    We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosales-Zarate, Laura E. C.; Drummond, P. D.

    We calculate the quantum Renyi entropy in a phase-space representation for either fermions or bosons. This can also be used to calculate purity and fidelity, or the entanglement between two systems. We show that it is possible to calculate the entropy from sampled phase-space distributions in normally ordered representations, although this is not possible for all quantum states. We give an example of the use of this method in an exactly soluble thermal case. The quantum entropy cannot be calculated at all using sampling methods in classical symmetric (Wigner) or antinormally ordered (Husimi) phase spaces, due to inner-product divergences. The preferred method is to use generalized Gaussian phase-space methods, which utilize a distribution over stochastic Green's functions. We illustrate this approach by calculating the reduced entropy and entanglement of bosonic or fermionic modes coupled to a time-evolving, non-Markovian reservoir.

  1. A new assessment method for urbanization environmental impact: urban environment entropy model and its application.

    PubMed

    Ouyang, Tingping; Fu, Shuqing; Zhu, Zhaoyu; Kuang, Yaoqiu; Huang, Ningsheng; Wu, Zhifeng

    2008-11-01

    The laws of thermodynamics are among the most widely used scientific principles. The comparability between the environmental impact of urbanization and thermodynamic entropy was systematically analyzed. Consequently, the concept of "Urban Environment Entropy" was put forward and an "Urban Environment Entropy" model was established for urbanization environmental impact assessment in this study. The model was then utilized in a case study assessing river water quality in the Pearl River Delta Economic Zone. The results indicated that the assessment results of the model are consistent with those of the equalized synthetic pollution index method. Therefore, it can be concluded that the Urban Environment Entropy model has high reliability and can be applied widely in urbanization environmental assessment research using many different environmental parameters.

  2. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure in more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  3. An entropy correction method for unsteady full potential flows with strong shocks

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.

    1986-01-01

    An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.

  4. Multiwavelet packet entropy and its application in transmission line fault recognition and classification.

    PubMed

    Liu, Zhigang; Han, Zhiwei; Zhang, Yang; Zhang, Qiaoge

    2014-11-01

    Multiwavelets possess better properties than traditional wavelets, and the multiwavelet packet transformation provides more high-frequency information. Spectral entropy can be applied as an index of the complexity or uncertainty of a signal. This paper defines four multiwavelet packet entropies to extract the features of different transmission line faults, and uses a radial basis function (RBF) neural network to recognize and classify 10 fault types on power transmission lines. First, the preprocessing and postprocessing problems of multiwavelets are presented. Shannon entropy and Tsallis entropy are introduced, and their differences are discussed. Second, multiwavelet packet energy entropy, time entropy, Shannon singular entropy, and Tsallis singular entropy are defined as feature extraction methods for transmission line fault signals. Third, a scheme for transmission line fault recognition using multiwavelet packet entropies and an RBF neural network is proposed. Finally, the experimental results show that the scheme with the four multiwavelet packet entropies defined in this paper achieves better performance in fault recognition. The performance with SA4 (symmetric antisymmetric) multiwavelet packet Tsallis singular entropy is the best among the combinations of the different multiwavelet packets and the four multiwavelet packet entropies.
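
    The two entropy families underlying the feature definitions are easy to state. The sketch below evaluates both on a normalized sub-band energy vector; the entropic index q = 2 is an illustrative choice, not necessarily the one used in the paper.

        import numpy as np

        def shannon_entropy(p):
            """Shannon entropy (nats) of a normalized energy distribution."""
            p = np.asarray(p, dtype=float)
            p = p / p.sum()
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def tsallis_entropy(p, q=2.0):
            """Tsallis entropy of order q; approaches the Shannon value as q -> 1."""
            p = np.asarray(p, dtype=float)
            p = p / p.sum()
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        # p would hold the normalized multiwavelet packet sub-band energies of
        # a fault transient; the resulting entropies serve as feature values.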

  5. Information Entropy Analysis of the H1N1 Genetic Code

    NASA Astrophysics Data System (ADS)

    Martwick, Andy

    2010-03-01

    During the current H1N1 pandemic, viral samples are being obtained from large numbers of infected people world-wide and are being sequenced on the NCBI Influenza Virus Resource Database. The information entropy of the sequences was computed from the probability of occurrence of each nucleotide base at every position of each set of sequences using Shannon's definition of information entropy, H = Σ_b p_b log2(1/p_b), where H is the observed information entropy at each nucleotide position and p_b is the probability of occurrence of each of the nucleotide bases A, C, G, U. The information entropy of the current H1N1 pandemic is compared to reference human and swine H1N1 entropies. As expected, the current H1N1 sequences are in a low entropy state and have a very large mutation potential. Using the entropy method on mature genes we can identify low entropy regions of nucleotides that generally correlate with critical protein function.
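
    The per-position calculation is direct; the toy sequences below stand in for aligned H1N1 segments.

        import numpy as np

        def positional_entropy(aligned_seqs):
            """Shannon entropy (bits) at each position of equal-length sequences."""
            arr = np.array([list(s) for s in aligned_seqs])
            h = np.zeros(arr.shape[1])
            for j in range(arr.shape[1]):
                _, counts = np.unique(arr[:, j], return_counts=True)
                p = counts / counts.sum()
                h[j] = -np.sum(p * np.log2(p))
            return h

        print(positional_entropy(["ACGU", "ACGA", "ACUA"]))  # [0, 0, 0.918, 0.918]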

  6. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of the different positions of pixels sharing the same gray level, with the weights obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The method not only gives better segmentation results, but also has a faster computation time than traditional 2D histogram-based segmentation methods.
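
    The second stage is a classical computation: the sketch below implements 1D maximum entropy (Kapur) thresholding over a 256-bin histogram. In the proposed method the input would be the spatial coherence histogram rather than the plain gray-level histogram.

        import numpy as np

        def kapur_threshold(hist):
            """Threshold maximizing the sum of foreground/background entropies."""
            p = np.asarray(hist, dtype=float)
            p = p / p.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, len(p)):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 <= 0.0 or w1 <= 0.0:
                    continue
                q0 = p[:t][p[:t] > 0] / w0    # background class distribution
                q1 = p[t:][p[t:] > 0] / w1    # foreground class distribution
                h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        # hist = np.bincount(image.ravel(), minlength=256)  # image: uint8 array
        # mask = image >= kapur_threshold(hist)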

  7. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANNs) have been widely used in hydrological forecasting. In this paper an attempt has been made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation (LCC) analysis. Wavelet analysis can locate precisely any changes in the dynamical patterns of a sequence, and coupling it with the strong nonlinear fitting ability of an ANN allows the WNN model to provide a good fit to hydrological data. Finally, the hybrid model (CE+WNN) has been applied to daily water levels in the Taihu Lake Basin and compared with CE+ANN, LCC+WNN and LCC+ANN. Results showed that the hybrid model produced better estimates of the hydrograph properties than the other models.
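
    For the input-selection step, a k-nearest-neighbour mutual information estimator can stand in for the copula entropy calculation (both estimate the same MI quantity, though not with identical values). The sketch below uses scikit-learn, an assumption of convenience rather than the paper's implementation, to rank candidate predictors of the water level.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def select_inputs(X, y, top_k=4):
            """Rank candidate predictor columns of X by estimated MI with y
            and keep the top_k; a stand-in for copula entropy based selection."""
            mi = mutual_info_regression(X, y, random_state=0)
            order = np.argsort(mi)[::-1][:top_k]
            return order, mi[order]

        # X: lagged rainfall/level predictors (n_samples x n_candidates)
        # y: next-day water level; the selected columns feed the WNN model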

  8. Image reconstruction of IRAS survey scans

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. Romke

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector in each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.

  9. Nonequilibrium thermodynamics and a fluctuation theorem for individual reaction steps in a chemical reaction network

    NASA Astrophysics Data System (ADS)

    Pal, Krishnendu; Das, Biswajit; Banerjee, Kinshuk; Gangopadhyay, Gautam

    2015-09-01

    We have introduced an approach to the nonequilibrium thermodynamics of an open chemical reaction network in terms of the propensities of the individual elementary reactions and the corresponding reverse reactions. The method is a microscopic formulation of the dissipation function in terms of the relative entropy, or Kullback-Leibler distance, based on the analogy between a phase space trajectory and the path of elementary reactions in a network of chemical processes. We introduce a fluctuation theorem valid for each opposite pair of elementary reactions, which is useful in determining the contribution of each sub-reaction to the nonequilibrium thermodynamics of the overall reaction. The methodology is applied to oligomeric enzyme kinetics under a chemiostatic condition that leads the reaction to a nonequilibrium steady state, for which we have estimated how each step of the reaction is energy driven or entropy driven in its contribution to the overall reaction.

  10. Research on interpolation methods in medical image processing.

    PubMed

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are examined first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide a certain superiority, but considering processing efficiency, the asymmetrical interpolations are better.

  11. Accurate Image Analysis of the Retina Using Hessian Matrix and Binarisation of Thresholded Entropy with Application of Texture Mapping

    PubMed Central

    Yin, Xiaoxia; Ng, Brian W-H; He, Jing; Zhang, Yanchun; Abbott, Derek

    2014-01-01

    In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest in the area of diagnostics for eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using the eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially, with a different threshold to incorporate the central reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without the central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy rates for the DRIVE images and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature. PMID:24781033

  12. An improvement of the measurement of time series irreversibility with visibility graph approach

    NASA Astrophysics Data System (ADS)

    Wu, Zhenyu; Shang, Pengjian; Xiong, Hui

    2018-07-01

    We propose a method to improve the measurement of real-valued time series irreversibility, which combines two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with the different embedded dimensions used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect it across multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps as well as of global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedded dimension d = 3 for both the chaotic-map experiments and the financial market data.
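
    The baseline measure (before the embedding refinement described above) admits a short implementation: build the directed horizontal visibility graph and take the Kullback-Leibler divergence between the out- and in-degree distributions. Skipping zero-count degree bins, as done here, is one common regularization; the paper's exact handling may differ.

        import numpy as np

        def dhvg_degrees(x):
            """In/out degrees of the directed horizontal visibility graph."""
            n = len(x)
            k_out = np.zeros(n, dtype=int)
            k_in = np.zeros(n, dtype=int)
            for i in range(n - 1):
                top = -np.inf                  # max of values strictly between i and j
                for j in range(i + 1, n):
                    if top < min(x[i], x[j]):  # horizontal visibility criterion
                        k_out[i] += 1
                        k_in[j] += 1
                    top = max(top, x[j])
                    if top >= x[i]:
                        break                  # i is blocked from all later points
            return k_out, k_in

        def degree_kl(k_out, k_in):
            """KL divergence between out- and in-degree distributions."""
            kmax = max(k_out.max(), k_in.max())
            p = np.bincount(k_out, minlength=kmax + 1).astype(float) / len(k_out)
            q = np.bincount(k_in, minlength=kmax + 1).astype(float) / len(k_in)
            m = (p > 0) & (q > 0)
            return float(np.sum(p[m] * np.log(p[m] / q[m])))

        # x = np.random.default_rng(0).random(2000)   # reversible white noise
        # print(degree_kl(*dhvg_degrees(x)))          # near zero for reversible series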

  13. State fusion entropy for continuous and site-specific analysis of landslide stability changing regularities

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Qin, Zhimeng; Hu, Baodan; Feng, Shuai

    2018-04-01

    Stability analysis is of great significance to landslide hazard prevention, especially analysis of dynamic stability. However, with many existing stability analysis methods it is difficult to analyse continuous landslide stability and its changing regularities under a uniform criterion, owing to the unique geological conditions of each landslide. Based on the relationship between displacement monitoring data, deformation states and landslide stability, a state fusion entropy method is herein proposed to derive landslide instability through a comprehensive multi-attribute entropy analysis of deformation states, which are defined by a proposed joint clustering method combining K-means and a cloud model. In a detailed case study of the Xintan landslide, cumulative state fusion entropy presents an obvious increasing trend after the landslide entered its accelerative deformation stage, and its historical maxima match landslide macroscopic deformation behaviours closely at key time nodes. Reasonable results are also obtained in its application to several other landslides in the Three Gorges Reservoir in China. Combined with field survey, state fusion entropy may serve for assessing landslide stability and judging landslide evolutionary stages.

  14. Entropy-based artificial viscosity stabilization for non-equilibrium Grey Radiation-Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc O., E-mail: delchinm@email.tamu.edu; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim, E-mail: jim.morel@tamu.edu

    2015-09-01

    The entropy viscosity method is extended to the non-equilibrium Grey Radiation-Hydrodynamic equations. The method employs a viscous regularization to stabilize the numerical solution. The artificial viscosity coefficient is modulated by the entropy production and peaks at shock locations. The added dissipative terms are consistent with the entropy minimum principle. A new functional form of the entropy residual, suitable for the Radiation-Hydrodynamic equations, is derived. We demonstrate that the viscous regularization preserves the equilibrium diffusion limit. The equations are discretized with a standard Continuous Galerkin Finite Element Method and a fully implicit temporal integrator within the MOOSE multiphysics framework. The method of manufactured solutions is employed to demonstrate second-order accuracy in both the equilibrium diffusion and streaming limits. Several typical 1-D radiation-hydrodynamic test cases with shocks (from Mach 1.05 to Mach 50) are presented to establish the ability of the technique to capture and resolve shocks.

  15. Renyi entanglement entropy of interacting fermions calculated using the continuous-time quantum Monte Carlo method.

    PubMed

    Wang, Lei; Troyer, Matthias

    2014-09-12

    We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.

  16. Understanding materials behavior from atomistic simulations: Case study of al-containing high entropy alloys and thermally grown aluminum oxide

    NASA Astrophysics Data System (ADS)

    Yinkai Lei

    Atomistic simulation refers to a set of simulation methods that model materials at the atomistic scale. These methods offer faster and cheaper alternatives to experiments for investigating the thermodynamics and kinetics of materials. In this dissertation, atomistic simulation methods have been used to study the thermodynamic and kinetic properties of two material systems, i.e. the entropy of Al-containing high entropy alloys (HEAs) and the vacancy migration energy of thermally grown aluminum oxide. (Abstract shortened by ProQuest.)

  17. Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si

    NASA Technical Reports Server (NTRS)

    Wang, Z. Q.; Stroud, D.

    1990-01-01

    Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.

  18. Scaling Estimates of Vegetation Structure in Amazonian Tropical Forests Using Multi-Angle MODIS Observations

    NASA Technical Reports Server (NTRS)

    Mendes De Moura, Yhasmin; Hilker, Thomas; Goncalves, Fabio Guimaraes; Galvao, Lenio Soares; Roberto dos Santos, Joao; Lyapustin, Alexei; Maeda, Eduardo Eiji; de Jesus Silva, Camila Valeria

    2016-01-01

    Detailed knowledge of vegetation structure is required for accurate modelling of terrestrial ecosystems, but direct measurements of the three-dimensional distribution of canopy elements, for instance from LiDAR, are not widely available. We investigate the potential for modelling vegetation roughness, a key parameter for climatological models, from directional scattering of visible and near-infrared (NIR) reflectance acquired from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS). We compare our estimates across different tropical forest types to independent measures obtained from: (1) airborne laser scanning (ALS), (2) the spaceborne Geoscience Laser Altimeter System (GLAS)/ICESat, and (3) the spaceborne SeaWinds/QuikSCAT. Our results showed a linear correlation between MODIS-derived anisotropy and ALS-derived entropy (r² = 0.54, RMSE = 0.11), even in high biomass regions. Significant relationships were also obtained between MODIS-derived anisotropy and GLAS-derived entropy (0.52 ≤ r² ≤ 0.61; p < 0.05), with similar slopes and offsets found throughout the season, and RMSE between 0.26 and 0.30 (units of entropy). The relationships between the MODIS-derived anisotropy and backscattering measurements (σ⁰) from SeaWinds/QuikSCAT presented an r² of 0.59 and an RMSE of 0.11. We conclude that multi-angular MODIS observations are suitable to extrapolate measures of canopy entropy across different forest types, providing additional estimates of vegetation structure in the Amazon.

  19. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.

  20. Histogram analysis parameters of dynamic contrast-enhanced magnetic resonance imaging can predict histopathological findings including proliferation potential, cellularity, and nucleic areas in head and neck squamous cell carcinoma.

    PubMed

    Surov, Alexey; Meyer, Hans Jonas; Leifels, Leonard; Höhn, Anne-Kathrin; Richter, Cindy; Winter, Karsten

    2018-04-20

    Our purpose was to analyze possible associations between histogram analysis parameters of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and histopathological findings such as proliferation index, cell count, and nucleic areas in head and neck squamous cell carcinoma (HNSCC). 30 patients (mean age 57.0 years) with primary HNSCC were included in the study. In every case, histogram analysis parameters of Ktrans, Ve, and Kep were estimated using MATLAB-based software. Tumor proliferation index, cell count, and nucleic areas were estimated on Ki-67 antigen stained specimens. Spearman's non-parametric rank sum correlation coefficients (ρ) were calculated between DCE and different histopathological parameters. Ki-67 correlated with Ktrans min (ρ = -0.386, P = 0.043), Ktrans skewness (ρ = 0.382, P = 0.045), Ve min (ρ = -0.473, P = 0.011), Ve entropy (ρ = 0.424, P = 0.025), and Kep entropy (ρ = 0.464, P = 0.013). Cell count correlated with Ktrans kurtosis (ρ = 0.40, P = 0.034) and Ve entropy (ρ = 0.475, P = 0.011). Total nucleic area correlated with Ve max (ρ = 0.386, P = 0.042) and Ve entropy (ρ = 0.411, P = 0.030). In G1/2 tumors, only Ktrans entropy correlated well with total (ρ = 0.78, P = 0.013) and average nucleic areas (ρ = 0.655, P = 0.006). In G3 tumors, Ki-67 correlated with Ve min (ρ = -0.552, P = 0.022) and Ve entropy (ρ = 0.524, P = 0.031). Ve max correlated with total nucleic area (ρ = 0.483, P = 0.049). Kep max correlated with total area (ρ = -0.51, P = 0.037), and Kep entropy with Ki-67 (ρ = 0.567, P = 0.018). We concluded that the histogram-based parameters skewness, kurtosis, and entropy of Ktrans, Ve, and Kep can be used as markers for proliferation activity, cellularity, and nucleic content in HNSCC. Tumor grading significantly influences associations between perfusion and histopathological parameters.

  1. Entropy of nonrotating isolated horizons in Lovelock theory from loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Bo; Huang, Chao-Guang; Li, Lin

    2016-08-01

    In this paper, the BF theory method is applied to the nonrotating isolated horizons in Lovelock theory. The final entropy matches the Wald entropy formula for this theory. We also confirm the conclusion obtained by Bodendorfer et al. that the entropy is related to the flux operator rather than the area operator in general diffeomorphism-invariant theories. Supported by the National Natural Science Foundation of China (11275207).

  2. Path length entropy analysis of diastolic heart sounds.

    PubMed

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Free energy and entropy of a dipolar liquid by computer simulations

    NASA Astrophysics Data System (ADS)

    Palomar, Ricardo; Sesé, Gemma

    2018-02-01

    Thermodynamic properties for a system composed of dipolar molecules are computed. Free energy is evaluated by means of the thermodynamic integration technique, and it is also estimated by using a perturbation theory approach, in which every molecule is modeled as a hard sphere within a square well, with an electric dipole at its center. The hard sphere diameter, the range and depth of the well, and the dipole moment have been calculated from properties easily obtained in molecular dynamics simulations. Connection between entropy and dynamical properties is explored in the liquid and supercooled states by using instantaneous normal mode calculations. A model is proposed in order to analyze translation and rotation contributions to entropy separately. Both contributions decrease upon cooling, and a logarithmic correlation between excess entropy associated with translation and the corresponding proportion of imaginary frequency modes is encountered. Rosenfeld scaling law between reduced diffusion and excess entropy is tested, and the origin of its failure at low temperatures is investigated.

  5. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  6. Using Rényi parameter to improve the predictive power of singular value decomposition entropy on stock market

    NASA Astrophysics Data System (ADS)

    Jiang, Jiaqi; Gu, Rongbao

    2016-04-01

    This paper generalizes the method of traditional singular value decomposition entropy by incorporating orders q of Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix using Shanghai Composite Index (SCI) and Dow Jones Index (DJI) data in both static and dynamic tests. In the static test on SCI, global Granger causality tests all turn out to be significant regardless of the order selected, but this entropy fails to show much predictability in the American stock market. In the dynamic test, we find that the predictive power can be significantly improved for SCI by our generalized method but not for DJI. This suggests that noises and errors affect SCI more frequently than DJI. Results obtained using different sliding-window lengths also corroborate this finding.
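
    A minimal sketch of the generalized statistic, assuming the usual construction of SVD entropy (trajectory matrix, normalized singular spectrum) with the Shannon entropy replaced by Rényi entropy of order q; some variants normalize squared singular values instead, and the parameter names are ours:

        import numpy as np

        def renyi_svd_entropy(x, m=10, q=2.0):
            # Rényi entropy of the normalized singular values of the
            # trajectory (Hankel) matrix with embedding window m.
            traj = np.lib.stride_tricks.sliding_window_view(x, m)
            s = np.linalg.svd(traj, compute_uv=False)
            p = s / s.sum()                  # normalized singular spectrum
            p = p[p > 0]
            if np.isclose(q, 1.0):           # Shannon limit as q -> 1
                return float(-np.sum(p * np.log(p)))
            return float(np.log(np.sum(p ** q)) / (1.0 - q))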

  7. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    PubMed

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is better than that of ordinary kriging (OK)/BME. The LUR/BME model is the most accurate fine spatial scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.

  8. A proposed methodology for studying the historical trajectory of words' meaning through Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Neuman, Yair; Cohen, Yochai; Israeli, Navot; Tamir, Boaz

    2018-02-01

    The availability of historical textual corpora has led to the study of words' frequency along the historical time line, as representing the public's focus of attention over time. However, the study of the dynamics of words' meaning is still in its infancy. In this paper, we propose a methodology for studying the historical trajectory of words' meaning through Tsallis entropy. First, we present the idea that the meaning of a word may be studied through the entropy of its embedding. Using two historical case studies, we show that this entropy measure is correlated with the intensity with which a word is used. More importantly, we show that using Tsallis entropy with a superadditive entropy index may provide a better estimation of a word's frequency of use than using Shannon entropy. We explain this finding as resulting from an increasing redundancy between the words that comprise the semantic field of the target word, and develop a new measure of redundancy between words. Using this measure, which relies on the Tsallis version of the Kullback-Leibler divergence, we show that the evolving meaning of a word involves the dynamics of increasing redundancy between components of its semantic field. The proposed methodology may enrich the toolkit of researchers who study the dynamics of word senses.

  9. Two-phase thermodynamic model for computing entropies of liquids reanalyzed

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Xian, Jiawei; Zhang, Huai; Zhang, Zhigang; Zhang, Yigang

    2017-11-01

    The two-phase thermodynamic (2PT) model [S.-T. Lin et al., J. Chem. Phys. 119, 11792-11805 (2003)] provides a promising paradigm to efficiently determine the ionic entropies of liquids from molecular dynamics. In this model, the vibrational density of states (VDoS) of a liquid is decomposed into a diffusive gas-like component and a vibrational solid-like component. By treating the diffusive component as hard sphere (HS) gas and the vibrational component as harmonic oscillators, the ionic entropy of the liquid is determined. Here we examine three issues crucial for practical implementations of the 2PT model: (i) the mismatch between the VDoS of the liquid system and that of the HS gas; (ii) the excess entropy of the HS gas; (iii) the partition of the gas-like and solid-like components. Some of these issues have not been addressed before, yet they profoundly change the entropy predicted from the model. Based on these findings, a revised 2PT formalism is proposed and successfully tested in systems with Lennard-Jones potentials as well as many-atom potentials of liquid metals. Aside from being capable of performing quick entropy estimations for a wide range of systems, the formalism also supports fine-tuning to accurately determine entropies at specific thermal states.

  10. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.

  11. Computer program for calculating and fitting thermodynamic functions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1992-01-01

    A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.

  12. Comparison of Texture Analysis Techniques in Both Frequency and Spatial Domains for Cloud Feature Extraction

    DTIC Science & Technology

    1992-01-01

    entropy, energy, variance, skewness, and kurtosis. These parameters are then used as... The co-occurrence matrix method is used in this study to derive texture values of entropy, homogeneity, energy (similar to the GLDV angular...) ...from working with the co-occurrence matrix method. Seven convolution sizes were chosen to derive the texture values of entropy, local homogeneity, and...
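
    As an illustration of one texture measure named in this record, the following sketch computes the entropy of a gray-level co-occurrence matrix for a horizontal one-pixel offset; the quantization level and offset are illustrative choices, not those of the report:

        import numpy as np

        def glcm_entropy(img, levels=16):
            # Shannon entropy of the gray-level co-occurrence matrix for
            # horizontally adjacent pixel pairs (offset dx=1, dy=0).
            g = np.floor(img.astype(float) / (img.max() + 1e-9) * levels)
            g = np.clip(g.astype(int), 0, levels - 1)
            a, b = g[:, :-1].ravel(), g[:, 1:].ravel()  # co-occurring pairs
            M = np.zeros((levels, levels))
            np.add.at(M, (a, b), 1)                     # count pair frequencies
            p = M.ravel() / M.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))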

  13. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  14. Entropy coders for image compression based on binary forward classification

    NASA Astrophysics Data System (ADS)

    Yoo, Hoon; Jeong, Jechang

    2000-12-01

    Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions toward increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
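
    The conservation property claimed here is, in information-theoretic terms, the entropy chain rule: for a deterministic classification C of the source symbols X, H(X) = H(C) + Σ_c p(c)·H(X | C = c), so the classification overhead H(C) is exactly offset by the reduced within-class entropies. A small numeric check on a toy distribution (values ours):

        import numpy as np

        def H(p):
            # Shannon entropy in bits of a probability vector.
            p = np.asarray(p, float)
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        # Toy source over 4 symbols, split into classes {0, 1} and {2, 3}.
        p = np.array([0.5, 0.2, 0.2, 0.1])
        pc = np.array([p[0] + p[1], p[2] + p[3]])   # class probabilities
        h_within = pc[0] * H(p[:2] / pc[0]) + pc[1] * H(p[2:] / pc[1])
        assert np.isclose(H(p), H(pc) + h_within)   # chain rule holds
        print(H(p), H(pc) + h_within)               # both ~1.761 bits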

  15. Inability of the entropy vector method to certify nonclassicality in linelike causal structures

    NASA Astrophysics Data System (ADS)

    Weilenmann, Mirjam; Colbeck, Roger

    2016-10-01

    Bell's theorem shows that our intuitive understanding of causation must be overturned in light of quantum correlations. Nevertheless, quantum mechanics does not permit signaling and hence a notion of cause remains. Understanding this notion is not only important at a fundamental level, but also for technological applications such as key distribution and randomness expansion. It has recently been shown that a useful way to decide which classical causal structures could give rise to a given set of correlations is to use entropy vectors. These are vectors whose components are the entropies of all subsets of the observed variables in the causal structure. The entropy vector method employs causal relationships among the variables to restrict the set of possible entropy vectors. Here, we consider whether the same approach can lead to useful certificates of nonclassicality within a given causal structure. Surprisingly, we find that for a family of causal structures that includes the usual bipartite Bell structure they do not. For all members of this family, no function of the entropies of the observed variables gives such a certificate, in spite of the existence of nonclassical correlations. It is therefore necessary to look beyond entropy vectors to understand cause from a quantum perspective.

  16. Computing a Non-trivial Lower Bound on the Joint Entropy between Two Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S.

    In this report, a non-trivial lower bound on the joint entropy of two non-identical images is developed, which is greater than the individual entropies of the images. The lower bound is the least joint entropy possible among all pairs of images that have the same histograms as those of the given images. New algorithms are presented to compute the joint entropy lower bound with a computation time proportional to S log S, where S is the number of histogram bins of the images. This is faster than the traditional methods of computing the exact joint entropy with a computation time that is quadratic in S.
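
    The report's own algorithm is not reproduced here, but the flavor of the computation can be sketched with a standard greedy heuristic for low-entropy couplings of two histograms. It also runs in time proportional to S log S, though it is not guaranteed to attain the true minimum (minimum-entropy coupling is NP-hard in general):

        import heapq
        from math import log

        def greedy_min_joint_entropy(p, q):
            # Greedily match the largest remaining masses of the two
            # marginals p and q; returns the joint entropy (nats) of the
            # resulting coupling.
            hp = [(-v, i) for i, v in enumerate(p) if v > 0]
            hq = [(-v, j) for j, v in enumerate(q) if v > 0]
            heapq.heapify(hp)
            heapq.heapify(hq)
            h = 0.0
            while hp and hq:
                vp, i = heapq.heappop(hp)
                vq, j = heapq.heappop(hq)
                m = min(-vp, -vq)            # joint mass assigned to (i, j)
                h -= m * log(m)
                if -vp - m > 1e-12:          # push back any residual mass
                    heapq.heappush(hp, (vp + m, i))
                if -vq - m > 1e-12:
                    heapq.heappush(hq, (vq + m, j))
            return h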

  17. Time-series analysis of multiple foreign exchange rates using time-dependent pattern entropy

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryuji; Inoue, Masayoshi

    2018-01-01

    Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in multiple foreign exchange rates. The time-dependent pattern entropy of 7 foreign exchange rates (AUD/USD, CAD/USD, CHF/USD, EUR/USD, GBP/USD, JPY/USD, and NZD/USD) was found to be high in the long period after the Lehman shock and low in the long period after March 2012. We also compared the correlation matrices between exchange rates in periods of high and low time-dependent pattern entropy.
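
    A minimal sketch of the method as described — reduce the series to binary up/down symbols, encode m-symbol patterns, and compute the Shannon entropy of the pattern distribution in a sliding window; the parameter values and names are illustrative, not the authors' settings:

        import numpy as np

        def pattern_entropy(x, m=4, window=100):
            # Sliding-window entropy of m-bit binary patterns built from
            # the up/down moves of the series x.
            s = (np.diff(x) > 0).astype(int)          # binary symbolic dynamics
            win = np.lib.stride_tricks.sliding_window_view(s, m)
            codes = win @ (2 ** np.arange(m))         # pattern -> integer code
            out = []
            for t in range(len(codes) - window + 1):
                counts = np.bincount(codes[t : t + window], minlength=2 ** m)
                p = counts[counts > 0] / window
                out.append(-np.sum(p * np.log(p)))
            return np.array(out)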

  18. Biomathematical modeling of pulsatile hormone secretion: a historical perspective.

    PubMed

    Evans, William S; Farhy, Leon S; Johnson, Michael L

    2009-01-01

    Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.

  19. Chaos in the brain: imaging via chaoticity of EEG/MEG signals

    NASA Astrophysics Data System (ADS)

    Kowalik, Zbigniew J.; Elbert, Thomas; Rockstroh, Brigitte; Hoke, Manfried

    1995-03-01

    Brain electro- (EEG) and magnetoencephalograms (MEG) can be analyzed using methods from nonlinear system theory. We show that even for very short and nonstationary time series it is possible to functionally differentiate various brain activities. Usually such analysis assumes that the analyzed signals are both long and stationary, so that the classic spectral methods can be used. Even more convincing results can be obtained under these circumstances when dimensional analysis or estimation of the Kolmogorov entropy or the Lyapunov exponent is performed. When measuring the spontaneous activity of a human brain, the assumption of stationarity is questionable and `static' methods (correlation dimension, entropy, etc.) are then not adequate. In this case `dynamic' methods like the pointwise-D2 dimension or chaoticity measures should be applied. Predictability measures in the form of local Lyapunov exponents are capable of revealing directly the chaoticity of a given process, and can practically be applied for functional differentiation of brain activity. We exemplify this in cases of apallic syndrome, tinnitus and schizophrenia, showing that average chaoticity in apallic syndrome differentiates brain states both in space and time; that chaoticity changes temporally in schizophrenia (critical jumps of chaoticity); and that chaoticity changes locally in space, i.e., across the cortex, in tinnitus.

  20. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    PubMed Central

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔS_Total = 1.4 kcal·mol⁻¹ per residue at 300 K, with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (Ω_U/Ω_N). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔS_helix−sheet = 0.5 kcal·mol⁻¹), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044

  1. An entropy-assisted musculoskeletal shoulder model.

    PubMed

    Xu, Xu; Lin, Jia-Hua; McGorry, Raymond W

    2017-04-01

    Optimization combined with a musculoskeletal shoulder model has been used to estimate mechanical loading of musculoskeletal elements around the shoulder. Traditionally, the objective function is to minimize the summation of the total activities of the muscles with forces, moments, and stability constraints. Such an objective function, however, tends to neglect antagonist muscle co-contraction. In this study, an objective function including an entropy term is proposed to address muscle co-contractions. A musculoskeletal shoulder model is developed to apply the proposed objective function. To find the optimal weight for the entropy term, an experiment was conducted. In the experiment, participants generated various 3-D shoulder moments in six shoulder postures. The surface EMG of 8 shoulder muscles was measured and compared with the predicted muscle activities based on the proposed objective function, using the Bhattacharyya distance and concordance ratio, under different weights of the entropy term. The results show that a small weight of the entropy term can improve the predictability of the model in terms of muscle activities. Such a result suggests that the concept of entropy could be helpful for further understanding the mechanism of muscle co-contractions as well as developing a shoulder biomechanical model with greater validity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Generation of skeletal mechanism by means of projected entropy participation indices

    NASA Astrophysics Data System (ADS)

    Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica

    2017-11-01

    When the dynamics of reactive systems develop very-slow and very-fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant-pressure, adiabatic, batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.

  3. The progression of the entropy of a five dimensional psychotherapeutic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badalamenti, A.F.; Langs, R.J.

    This paper presents a study of the deterministic and stochastic behavior of the entropy of a 5-dimensional, 2400-state system across each of six psychotherapeutic sessions. The growth of entropy was found to be logarithmic in each session. The stochastic behavior of a moving 600-second estimator of entropy revealed a Box-Jenkins model of type (1,1,0) - that is, the first difference of the entropy series was first-order autoregressive, or prior-state sensitive. In addition, the patient and therapist entropy series exhibited no significant cross correlation across lags of -300 to +300 seconds. Yet all such pairs of series exhibited high coherency past the frequency of .06 (on a full range of 0 to .5). Furthermore, all the patients and therapists were attracted to a geometric center of mass in 5-dimensional space which was different from the geometric center of the region where the system lived. The process significance of the findings and the relationship between the deterministic and stochastic results are discussed. The paper is then placed in the broader context of our efforts to provide new and meaningful quantitative dimensions and mathematical models to psychotherapy research. 59 refs.

  4. Resolution of Closely Spaced Optical Targets Using Maximum Likelihood Estimator and Maximum Entropy Method: A Comparison Study

    DTIC Science & Technology

    1981-03-03

    The capabilities of the maximum likelihood estimator (MLE) and the maximum entropy method (MEM) in resolving closely spaced optical point targets are compared using Monte Carlo simulation results for three different examples. It is found that the MEM is... although no direct comparison was given. The objective of this report is to compare the capabilities of the MLE and the MEM in resolving two optical CSOs.

  5. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  6. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In the crowd security research area, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and to supervise crowd behaviors. In physics, entropy is used to describe the macro state of a self-organizing system, and a change in entropy indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction). An entropy model is then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state, the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state, the entropy is much lower than half of the theoretical maximum entropy. A sudden change in the crowd macro state leads to a change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing the entropy mutation, it is possible to detect the crowd behavior macro state automatically using cameras. The results will provide data support for crowd emergency prevention and manual emergency intervention.
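
    A sketch of the microstate construction described above, binning individual velocities by speed and direction and computing the Shannon entropy of the resulting distribution; the bin counts and speed cap are illustrative assumptions:

        import numpy as np

        def crowd_entropy(vel, n_mag=5, n_dir=8, vmax=3.0):
            # Shannon entropy of the joint (speed, direction) distribution
            # of individual velocities; vel has shape (n_people, 2).
            v = np.asarray(vel, float)
            speed = np.linalg.norm(v, axis=1)
            angle = np.arctan2(v[:, 1], v[:, 0])      # in (-pi, pi]
            i = np.clip((speed / vmax * n_mag).astype(int), 0, n_mag - 1)
            j = ((angle + np.pi) / (2 * np.pi) * n_dir).astype(int) % n_dir
            counts = np.bincount(i * n_dir + j, minlength=n_mag * n_dir)
            p = counts[counts > 0] / len(v)
            return float(-np.sum(p * np.log(p)))

        # theoretical maximum for comparison: np.log(n_mag * n_dir)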

  7. Two aspects of black hole entropy in Lanczos-Lovelock models of gravity

    NASA Astrophysics Data System (ADS)

    Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.

    2012-03-01

    We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.

  8. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

    The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability of T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Dispersion entropy for the analysis of resting-state MEG regularity in Alzheimer's disease.

    PubMed

    Azami, Hamed; Rostaghi, Mostafa; Fernandez, Alberto; Escudero, Javier

    2016-08-01

    Alzheimer's disease (AD) is a progressive degenerative brain disorder affecting memory, thinking, behaviour and emotion. It is the most common form of dementia and a major social problem in western societies. The analysis of brain activity may help to diagnose this disease. Changes measured by entropy methods have been reported useful in research studies to characterize AD. We have recently proposed dispersion entropy (DisEn) as a very fast and powerful tool to quantify the irregularity of time series. The aim of this paper is to evaluate the ability of DisEn, in comparison with fuzzy entropy (FuzEn), sample entropy (SampEn), and permutation entropy (PerEn), to discriminate 36 AD patients from 26 elderly control subjects using resting-state magnetoencephalogram (MEG) signals. The results obtained by DisEn, FuzEn, and SampEn, unlike PerEn, show that the AD patients' signals are more regular than the controls' time series. The p-values obtained with the DisEn-, FuzEn-, SampEn-, and PerEn-based methods demonstrate the superiority of DisEn over FuzEn, SampEn, and PerEn. Moreover, the computation time for the newly proposed DisEn-based method is noticeably less than for the FuzEn-, SampEn-, and PerEn-based approaches.
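
    For reference, a compact implementation of dispersion entropy following the published algorithm (normal-CDF mapping to c classes, embedding dimension m, delay d); the normalization by ln(c^m) is one common convention:

        import numpy as np
        from scipy.stats import norm

        def dispersion_entropy(x, m=2, c=6, d=1):
            # Dispersion entropy: map the signal to c classes via the normal
            # CDF, count dispersion patterns of length m, return normalized
            # Shannon entropy of the pattern distribution.
            x = np.asarray(x, float)
            y = norm.cdf((x - x.mean()) / x.std())            # map to (0, 1)
            z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)  # classes 1..c
            n = len(z) - (m - 1) * d
            patterns = np.stack([z[i * d : i * d + n] for i in range(m)], axis=1)
            codes = (patterns - 1) @ (c ** np.arange(m))      # pattern -> code
            counts = np.bincount(codes, minlength=c ** m)
            p = counts[counts > 0] / n
            return float(-np.sum(p * np.log(p)) / np.log(c ** m))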

  10. Evaluation of spatial and spatiotemporal estimation methods in simulation of precipitation variability patterns

    NASA Astrophysics Data System (ADS)

    Bayat, Bardia; Zahraie, Banafsheh; Taghavi, Farahnaz; Nasseri, Mohsen

    2013-08-01

    Identification of spatial and spatiotemporal precipitation variations plays an important role in different hydrological applications such as missing data estimation. In this paper, the results of Bayesian maximum entropy (BME) and ordinary kriging (OK) are compared for modeling spatial and spatiotemporal variations of annual precipitation with and without incorporating elevation variations. The study area of this research is the Namak Lake watershed, located in the central part of Iran with an area of approximately 90,000 km². The BME and OK methods have been used to model the spatial and spatiotemporal variations of precipitation in this watershed, and their performances have been evaluated using cross-validation statistics. The results of the case study show the superiority of BME over OK in both spatial and spatiotemporal modes, with BME estimates less biased and more accurate than those of OK. The improvements in the BME estimates are mostly related to incorporating hard and soft data in the estimation process, which resulted in more detailed and reliable results.

  11. Fugacity ratio estimations for high-melting rigid aromatic compounds.

    PubMed

    Van Noort, Paul C M

    2004-07-01

    Prediction of the environmental fate of organic compounds requires knowledge of their tendency to stay in the gas and water phases. Vapor pressure and aqueous solubility are commonly used descriptors for these processes. Depending on the type of distribution process, values for either the pure solid state or the (subcooled) liquid state have to be used. Values for the (subcooled) liquid state can be calculated from those for the solid state, and vice versa, using the fugacity ratio. Fugacity ratios are usually calculated from the entropy of fusion and the melting point. For polycyclic aromatic hydrocarbons, chlorobenzenes, chlorodibenzofuranes, and chlorodibenzo(p)dioxins, fugacity ratios calculated using experimental entropies of fusion were systematically less than those obtained from a thermodynamically more rigorous approach using heat capacity data. The deviation was more than 1 order of magnitude at the highest melting point. The use of a universal value for the entropy of fusion of 56 J/(mol·K) resulted in either over- or underestimation by up to more than 1 order of magnitude. A simple correction factor, based on the melting point only, was derived. This correction factor allowed the fugacity ratios to be estimated from experimental entropies of fusion and melting points with an accuracy better than 0.1-0.2 log units. Copyright 2004 Elsevier Ltd.
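
    The baseline calculation the authors improve upon — the fugacity ratio from the entropy of fusion and the melting point, without heat-capacity corrections — is a one-line formula, ln F = −(ΔS_fus/R)(T_m/T − 1). A worked sketch (the melting point below is a hypothetical example value):

        from math import exp

        R = 8.314  # gas constant, J/(mol K)

        def fugacity_ratio(Tm, T=298.15, dS_fus=56.0):
            # Classic estimate ln F = -(dS_fus / R) * (Tm / T - 1); this
            # ignores the heat-capacity correction discussed above.
            return exp(-(dS_fus / R) * (Tm / T - 1.0))

        # e.g. a high-melting aromatic with hypothetical Tm = 550 K:
        print(fugacity_ratio(550.0))   # ~3.4e-3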

  12. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kaixuan; Wang, Jun

    2017-02-01

    In this paper, the recently introduced permutation entropy and sample entropy are further developed to the fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the nonlinear complexity behaviors of the return series of the Potts financial model are compared numerically with those of actual stock markets, and the empirical results confirm the feasibility of the proposed model.

  13. Magnetization and isothermal magnetic entropy change of a mixed spin-1 and spin-2 Heisenberg superlattice

    NASA Astrophysics Data System (ADS)

    Xu, Ping; Du, An

    2017-09-01

    A superlattice composed of spin-1 and spin-2 sublattices with an ABAB... structure was described with a Heisenberg model. The magnetizations and magnetic entropy changes under different magnetic fields were calculated by the Green's function method. The magnetization compensation phenomenon could be observed by altering the intralayer exchange interactions and the single-ion anisotropies of the spins. As the temperature increases, the system without magnetization compensation shows a normal magnetic entropy change with a peak near the critical temperature, whereas the system with magnetization compensation shows a normal magnetic entropy change near the compensation temperature but an inverse magnetic entropy change near the critical temperature. Finally, we illustrate the reasons for the different behaviors of the magnetic entropy change by analyzing the contributions of the two sublattices to the total magnetic entropy change.

  14. Measuring entanglement entropy of a generic many-body system with a quantum switch.

    PubMed

    Abanin, Dmitry A; Demler, Eugene

    2012-07-13

    Entanglement entropy has become an important theoretical concept in condensed matter physics because it provides a unique tool for characterizing quantum mechanical many-body phases and new kinds of quantum order. However, the experimental measurement of entanglement entropy in a many-body system is widely believed to be unfeasible, owing to the nonlocal character of this quantity. Here, we propose a general method to measure the entanglement entropy. The method is based on a quantum switch (a two-level system) coupled to a composite system consisting of several copies of the original many-body system. The state of the switch controls how different parts of the composite system connect to each other. We show that, by studying the dynamics of the quantum switch only, the Rényi entanglement entropy of the many-body system can be extracted. We propose a possible design of the quantum switch, which can be realized in cold atomic systems. Our work provides a route towards testing the scaling of entanglement in critical systems as well as a method for a direct experimental detection of topological order.

  15. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    NASA Astrophysics Data System (ADS)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of risk analysis for Chinese listed firms' mergers and acquisitions is provided to demonstrate the feasibility and practicality of the method.

  16. Binarized cross-approximate entropy in crowdsensing environment.

    PubMed

    Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana

    2017-01-01

    Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analysis methodology, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery-operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
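
    The exact (X)BinEn specification is given in the paper; the sketch below only illustrates the ingredients named above (differential binary coding, m-sized vectors, Hamming distances, an ApEn-style statistic) and should not be read as the authors' implementation:

        import numpy as np

        def binen_sketch(x, y, m=8, r=2):
            # Differential binary coding of both series, non-overlapping
            # m-bit vectors, Hamming-distance similarity within radius r,
            # and an ApEn-style log-count difference. Parameter choices
            # and normalization are illustrative only.
            bx = (np.diff(x) > 0).astype(int)
            by = (np.diff(y) > 0).astype(int)
            n = min(len(bx), len(by)) // m * m
            vx = bx[:n].reshape(-1, m)
            vy = by[:n].reshape(-1, m)
            def phi(k):
                # fraction of template pairs within Hamming distance r,
                # compared over the first k bits of each vector
                d = (vx[:, None, :k] != vy[None, :, :k]).sum(axis=2)
                c = (d <= r).mean(axis=1)
                c = c[c > 0]
                return float(np.mean(np.log(c)))
            return phi(m - 1) - phi(m)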

  17. Event by event analysis and entropy of multiparticle systems

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.

    2000-04-01

    The coincidence method of measuring the entropy of a system, proposed some time ago by Ma, is generalized to include systems out of equilibrium. It is suggested that the method can be adapted to analyze multiparticle states produced in high-energy collisions.
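
    Ma's original equilibrium estimator is simple to state: entropy is estimated from the rate at which independently sampled microstates coincide. A minimal sketch follows; strictly, the coincidence rate yields the collision (Rényi-2) entropy, which coincides with S for a uniform microcanonical ensemble:

        import numpy as np

        def ma_entropy(samples):
            # Ma's coincidence estimate: S ~ ln(sampled pairs / coinciding
            # pairs). samples is a 1-D array of discrete microstate labels.
            s = np.asarray(samples)
            n = len(s)
            pairs = n * (n - 1) / 2
            _, counts = np.unique(s, return_counts=True)
            coincidences = np.sum(counts * (counts - 1) / 2)
            if coincidences == 0:
                raise ValueError("no coincidences observed; need more samples")
            return float(np.log(pairs / coincidences))

        # check on a uniform 1000-state system: estimate approaches ln(1000)
        rng = np.random.default_rng(0)
        print(ma_entropy(rng.integers(0, 1000, size=2000)), np.log(1000))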

  18. Fine structure of the entanglement entropy in the O(2) model.

    PubMed

    Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z Y; Meurice, Y

    2016-01-01

    We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture.

  19. Probing the extensive nature of entropy

    NASA Astrophysics Data System (ADS)

    Salagaram, T.; Chetty, N.

    2013-08-01

    We have devised a general numerical scheme, applied to a system of independent, distinguishable, non-interacting particles, to demonstrate in a direct manner the extensive nature of statistical entropy. Working within the microcanonical ensemble, our methods enable one to directly monitor the approach to the thermodynamic limit (N → ∞) in a manner not previously accessible. We show that (s_N − s_∞) ∝ N^(−α), where s_N is the entropy per particle for N particles and s_∞ is the entropy per particle in the thermodynamic limit. We demonstrate universal behaviour by considering a number of different systems, each defined by its unique single-particle spectrum. Various thermodynamic quantities as a function of N may be computed using our methods; in this paper, we focus on the entropy, the chemical potential and the temperature. Our results are applicable to systems of finite size, e.g. nano-particle systems. Furthermore, we demonstrate a new phenomenon, referred to as entropic interference, which manifests as a cancellation of terms in the thermodynamic limit and which results in the additive nature of entropy.
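
    The approach to the thermodynamic limit can be illustrated with a toy microcanonical system, N distinguishable two-level particles at half filling. This is our own example rather than one of the paper's spectra; here the finite-size correction to s_∞ = ln 2 decays essentially as a power of N:

    ```python
    import numpy as np
    from scipy.special import gammaln

    def s_per_particle(N):
        # Microcanonical entropy per particle at half filling (k_B = 1):
        # s_N = ln C(N, N/2) / N.
        return (gammaln(N + 1) - 2 * gammaln(N / 2 + 1)) / N

    s_inf = np.log(2.0)  # thermodynamic limit
    for N in [10, 100, 1000, 10000]:
        print(N, s_inf - s_per_particle(N))  # correction shrinks roughly like ln(N)/N
    ```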

  20. Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.

    PubMed

    Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei

    2017-01-20

    Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, a novel sensor combination for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct the initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation from the minimum entropy to the expected entropy. Finally, the presented method is evaluated on two data sets that contain tens of millions of points, covering panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and achieves high accuracy and efficiency.
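
    The entropy criterion can be illustrated compactly: project the merged clouds onto a grid, compute the Shannon entropy of the occupancy histogram, and keep the candidate transformation that minimizes it (a well-aligned pair concentrates the histogram). The grid size, the yaw-only search, and the function names below are our assumptions, not the paper's IME implementation:

    ```python
    import numpy as np

    def rot2d(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s], [s, c]])

    def grid_entropy(points, bins=64):
        # Shannon entropy of the 2D occupancy histogram of a point cloud.
        h, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
        p = h.ravel() / h.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def best_yaw(fixed_xy, moving_xy, angles):
        # Keep the rotation that minimizes the merged-cloud entropy.
        return min(angles, key=lambda a: grid_entropy(
            np.vstack([fixed_xy, moving_xy @ rot2d(a).T])))
    ```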

  1. Judging The Effectiveness Of Wool Combing By The Entropy Of The Images Of Wool Slivers

    NASA Astrophysics Data System (ADS)

    Rodrigues, F. Carvalho; Carvalho, Fernando D.; Peixoto, J. Pinto; Silva, M. Santos

    1989-04-01

    In general, the textile industry endeavours to render a bunch of fibers chaotically distributed in space into an ordered spatial distribution. This is independent of the nature of the fibers: the aim of reaching higher-order states in the spatial distribution dictates different industrial processes depending on whether the fibers are wool, cotton or man-made, but the overall effect sought at every step of any of these processes is a more ordered spatial distribution of the fibers. Viewing textile processing as a method of getting order out of chaos, entropy appears as the most appropriate parameter for judging the effectiveness of a step in the chain of an industrial process that produces a regular textile. In fact, entropy is the hidden parameter not only for the textile industry but also for the nonwoven and paper industries. In these industries the state of order is linked to the spatial distribution of fibers, and obtaining an image of a spatial distribution is an easy matter. Computing the image entropy from the grey-level distribution requires only the Shannon formula. In this paper, to illustrate the usefulness of the image-entropy concept for textiles, the evolution of the entropy of wool slivers along the combing process is matched against the state of parallelization of the fibers along the seven steps as measured by the existing method. The advantages of the entropy method over the previous diffraction-based method are also demonstrated.
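
    Computing the image entropy mentioned above takes only a few lines; this is the Shannon formula applied to the grey-level histogram (the generic form, with bits as our choice of unit):

    ```python
    import numpy as np

    def image_entropy(img, levels=256):
        # Shannon entropy (bits) of the grey-level distribution of an image.
        hist, _ = np.histogram(img, bins=levels, range=(0, levels))
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))
    ```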

  2. Entropy-based adaptive attitude estimation

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.

    2018-03-01

    Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.

  3. Critical evaluation of methods to incorporate entropy loss upon binding in high-throughput docking.

    PubMed

    Salaniwal, Sumeet; Manas, Eric S; Alvarez, Juan C; Unwalla, Rayomand J

    2007-02-01

    Proper accounting of the positional/orientational/conformational entropy loss associated with protein-ligand binding is important to obtain reliable predictions of binding affinity. Herein, we critically examine two simplified statistical mechanics-based approaches, namely a constant penalty per rotor method, and a more rigorous method, referred to here as the partition function-based scoring (PFS) method, to account for such entropy losses in high-throughput docking calculations. Our results on the estrogen receptor beta and dihydrofolate reductase proteins demonstrate that, while the constant penalty method over-penalizes molecules for their conformational flexibility, the PFS method behaves in a more "ΔG-like" manner by penalizing different rotors differently depending on their residual entropy in the bound state. Furthermore, in contrast to no entropic penalty or the constant penalty approximation, the PFS method does not exhibit any bias towards either rigid or flexible molecules in the hit list. Preliminary enrichment studies using a lead-like random molecular database suggest that an accurate representation of the "true" energy landscape of the protein-ligand complex is critical for reliable predictions of relative binding affinities by the PFS method. Copyright 2006 Wiley-Liss, Inc.

  4. An exploratory statistical approach to depression pattern identification

    NASA Astrophysics Data System (ADS)

    Feng, Qing Yi; Griffiths, Frances; Parsons, Nick; Gunn, Jane

    2013-02-01

    Depression is a complex phenomenon thought to be due to the interaction of biological, psychological and social factors. Currently depression assessment uses self-reported depressive symptoms but this is limited in the degree to which it can characterise the different expressions of depression emerging from the complex causal pathways that are thought to underlie depression. In this study, we aimed to represent the different patterns of depression with pattern values unique to each individual, where each value combines all the available information about an individual’s depression. We considered the depressed individual as a subsystem of an open complex system, proposed Generalized Information Entropy (GIE) to represent the general characteristics of information entropy of the system, and then implemented Maximum Entropy Estimates to derive equations for depression patterns. We also introduced a numerical simulation method to process the depression related data obtained by the Diamond Cohort Study which has been underway in Australia since 2005 involving 789 people. Unlike traditional assessment, we obtained a unique value for each depressed individual which gives an overall assessment of the depression pattern. Our work provides a novel way to visualise and quantitatively measure the depression pattern of the depressed individual which could be used for pattern categorisation. This may have potential for tailoring health interventions to depressed individuals to maximize health benefit.

  5. Shannon information entropy in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Ma, Yu-Gang

    2018-03-01

    The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. Shannon information entropy quantifies the information carried by a quantity with a specific distribution, and information-entropy-based methods have been developed extensively in many scientific areas, including physics. The dynamical properties of the heavy-ion collision (HIC) process make the nuclear matter and its evolution difficult and complex to study, and here Shannon information entropy theory can provide new methods and observables for understanding the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, are briefly introduced. Typical applications of Shannon information theory in HICs are collected, covering the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on key open questions. It is suggested to further develop information entropy methods in nuclear reaction models, as well as new analysis methods for studying the properties of nuclear matter in HICs, especially the evolution of the dynamical system.

  6. Large time behavior of entropy solutions to one-dimensional unipolar hydrodynamic model for semiconductor devices

    NASA Astrophysics Data System (ADS)

    Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan

    2018-06-01

    We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of Euler-Poisson equations, in a bounded interval. In this paper, we first prove the global existence of entropy solutions by vanishing viscosity and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of the density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data or the doping profile. Moreover, the novelty of this paper is the uniform-in-time bound for the weak solutions of the isentropic Euler-Poisson system.

  7. Symmetry for the duration of entropy-consuming intervals.

    PubMed

    García-García, Reinaldo; Domínguez, Daniel

    2014-05-01

    We introduce the violation fraction υ as the cumulative fraction of time that a mesoscopic system spends consuming entropy along a single trajectory in phase space. We show that the fluctuations of this quantity are described by a symmetry relation reminiscent of fluctuation theorems, involving a function Φ that can be interpreted as an entropy associated with the fluctuations of the violation fraction. The function Φ, when evaluated for arbitrary stochastic realizations of the violation fraction, is odd under the symmetry transformations that are relevant for the associated stochastic entropy production. This fact leads to a detailed fluctuation theorem for the probability density function of Φ. We study the steady-state limit of this symmetry in the paradigmatic case of a colloidal particle dragged by optical tweezers through an aqueous solution. Finally, we briefly discuss possible applications of our results to the estimation of free-energy differences from single-molecule experiments.

  8. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    DTIC Science & Technology

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337. Final report by Jean-Luc Guermond, Texas A&M University; period covered 01-07-2012 to 30-06-2015. Conservation equations can be stabilized by using the so-called entropy viscosity method, and the project proposed to investigate this new technique.

  9. Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images

    PubMed Central

    Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina

    2017-01-01

    We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes' theorem to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of contrast, standard deviation, and sharpness quality attributes. A quality index for a quality attribute is the posterior probability of an entropy region given any corresponding region in a feature image where the quality attribute is observed. Prior belief in each entropy region is determined from the normalized total clique potential (TCP) energy of the slice. For TCP below the predefined threshold, the prior probability for a region is determined by the deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volumes provided by the Alzheimer's Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP-noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and of the global image. Experimental results demonstrate good correlation with subjective opinions of radiologists for different types and levels of quality distortions. PMID:28630885

  10. The entropy of the life table: A reappraisal.

    PubMed

    Fernandez, Oscar E; Beltrán-Sánchez, Hiram

    2015-09-01

    The life table entropy provides useful information for understanding improvements in mortality and survival in a population. In this paper we take a closer look at the life table entropy and use advanced mathematical methods to provide additional insights into how it relates to changes in mortality and survival. By studying the entropy (H) as a functional, we show that changes in the entropy depend on both the relative change in life expectancy lost due to death (e†) and in life expectancy at birth (e0). We also show that changes in the entropy can be further linked to improvements in premature and older deaths. We illustrate our methods with empirical data from Latin American countries, which suggest that at high mortality levels declines in H (which are associated with survival increases) were linked with larger improvements in e0, whereas at low mortality levels e† made larger contributions to H. We additionally show that among countries with low mortality levels, contributions of e† to changes in the life table entropy resulted from averting early deaths. These findings indicate that future increases in overall survival in low-mortality countries will likely result from improvements in e†. Copyright © 2015 Elsevier Inc. All rights reserved.
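
    The dependence described above has a compact form in standard demographic notation (our paraphrase of well-known identities, not the paper's derivation): the life table entropy satisfies H = e†/e0, so its relative change splits into the two named contributions,

    ```latex
    H = \frac{e^{\dagger}}{e_0},
    \qquad
    \frac{\Delta H}{H} \;\approx\; \frac{\Delta e^{\dagger}}{e^{\dagger}} \;-\; \frac{\Delta e_0}{e_0}.
    ```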

  11. Optimization of a Circular Microchannel With Entropy Generation Minimization Method

    NASA Astrophysics Data System (ADS)

    Jafari, Arash; Ghazali, Normah Mohd

    2010-06-01

    New advances at micro and nano scales are being realized, and micro and nano heat dissipation devices are of high importance in this technology development. Past studies showed that microchannel design is governed by thermal resistance and pressure drop. However, entropy generation minimization (EGM), as an optimization theory, states that the rate of entropy generation should also be minimized. The application of EGM in microchannel heat sink design is reviewed and discussed in this paper. Recent principles for deriving the entropy generation relations are discussed to show how this approach can be carried out. An optimization procedure using the EGM method is derived for a circular microchannel heat sink based upon thermal resistance and pressure drop. The equations are solved using MATLAB and the results are compared to similar past studies. The effects of channel diameter, number of channels, heat flux, and pumping power on the entropy generation rate and Reynolds number are investigated. Analytical correlations are utilized for the heat transfer and friction coefficients. A minimum entropy generation is observed for N = 40 channels and a channel hydraulic diameter of 90 μm, for which the circular microchannel heat sink operates at its optimum point with respect to the second law of thermodynamics.
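
    The two-term structure of the EGM objective for heat sinks is standard in this literature, though the exact correlations used in the paper are not reproduced here; a hedged sketch of the assumed form:

    ```python
    def entropy_generation_rate(Q, R_th, T, m_dot, dP, rho):
        """Generic EGM objective (assumed form): thermal irreversibility
        Q^2 * R_th / T^2 plus friction irreversibility m_dot * dP / (rho * T)."""
        return Q**2 * R_th / T**2 + m_dot * dP / (rho * T)
    ```

    Minimizing this sum over channel diameter and channel count trades thermal resistance against pressure drop, which is how an optimum such as N = 40 and D_h = 90 μm can emerge.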

  12. An entropy-based statistic for genomewide association studies.

    PubMed

    Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao

    2005-07-01

    Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.

  13. A method for calculating aerodynamic heating on sounding rocket tangent ogive noses.

    NASA Technical Reports Server (NTRS)

    Wing, L. D.

    1973-01-01

    A method is presented for calculating the aerodynamic heating and shear stresses at the wall for tangent ogive noses that are slender enough to maintain an attached nose shock through that portion of flight during which heat transfer from the boundary layer to the wall is significant. The lower entropy of the attached nose shock combined with the inclusion of the streamwise pressure gradient yields a reasonable estimate of the actual flow conditions. Both laminar and turbulent boundary layers are examined and an approximation of the effects of (up to) moderate angles-of-attack is included in the analysis. The analytical method has been programmed in FORTRAN IV for an IBM 360/91 computer.

  14. A method for calculating aerodynamic heating on sounding rocket tangent ogive noses

    NASA Technical Reports Server (NTRS)

    Wing, L. D.

    1972-01-01

    A method is presented for calculating the aerodynamic heating and shear stresses at the wall for tangent ogive noses that are slender enough to maintain an attached nose shock through that portion of flight during which heat transfer from the boundary layer to the wall is significant. The lower entropy of the attached nose shock combined with the inclusion of the streamwise pressure gradient yields a reasonable estimate of the actual flow conditions. Both laminar and turbulent boundary layers are examined and an approximation of the effects of (up to) moderate angles-of-attack is included in the analysis. The analytical method has been programmed in FORTRAN 4 for an IBM 360/91 computer.

  15. Self-organization and entropy reduction in a living cell.

    PubMed

    Davies, Paul C W; Rieper, Elisabeth; Tuszynski, Jack A

    2013-01-01

    In this paper we discuss the entropy and information aspects of a living cell. Particular attention is paid to the information gain on assembling and maintaining a living state. Numerical estimates of the information and entropy reduction are given and discussed in the context of the cell's metabolic activity. We discuss a solution to an apparent paradox that there is less information content in DNA than in the proteins that are assembled based on the genetic code encrypted in DNA. When energy input required for protein synthesis is accounted for, the paradox is clearly resolved. Finally, differences between biological information and instruction are discussed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Lexical Predictability During Natural Reading: Effects of Surprisal and Entropy Reduction.

    PubMed

    Lowder, Matthew W; Choi, Wonil; Ferreira, Fernanda; Henderson, John M

    2018-06-01

    What are the effects of word-by-word predictability on sentence processing times during the natural reading of a text? Although information complexity metrics such as surprisal and entropy reduction have been useful in addressing this question, these metrics tend to be estimated using computational language models, which require some degree of commitment to a particular theory of language processing. Taking a different approach, this study implemented a large-scale cumulative cloze task to collect word-by-word predictability data for 40 passages and compute surprisal and entropy reduction values in a theory-neutral manner. A separate group of participants read the same texts while their eye movements were recorded. Results showed that increases in surprisal and entropy reduction were both associated with increases in reading times. Furthermore, these effects did not depend on the global difficulty of the text. The findings suggest that surprisal and entropy reduction independently contribute to variation in reading times, as these metrics seem to capture different aspects of lexical predictability. Copyright © 2018 Cognitive Science Society, Inc.
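
    Both metrics can be computed directly from cloze proportions, which is what makes the approach theory-neutral. A sketch of the two definitions (the zero floor on entropy reduction follows the common formulation and is an assumption here):

    ```python
    import numpy as np

    def surprisal(p_word):
        # Surprisal of the word actually read, from its cloze probability.
        return -np.log2(p_word)

    def entropy(dist):
        p = np.asarray(dist, float)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log2(p))

    def entropy_reduction(dist_before, dist_after):
        # Drop in uncertainty over continuations after the word, floored at zero.
        return max(0.0, entropy(dist_before) - entropy(dist_after))
    ```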

  17. Microscopic insights into the NMR relaxation based protein conformational entropy meter

    PubMed Central

    Kasinath, Vignesh; Sharp, Kim A.; Wand, A. Joshua

    2013-01-01

    Conformational entropy is a potentially important thermodynamic parameter contributing to protein function. Quantitative measures of conformational entropy are necessary for an understanding of its role but have been difficult to obtain. An empirical method that utilizes changes in conformational dynamics as a proxy for changes in conformational entropy has recently been introduced. Here we probe the microscopic origins of the link between conformational dynamics and conformational entropy using molecular dynamics simulations. Simulation of seven proteins gave an excellent correlation with measures of side-chain motion derived from NMR relaxation. The simulations show that the motion of methyl-bearing side chains is sufficiently coupled to that of other side chains to serve as an excellent reporter of the overall side-chain conformational entropy. These results tend to validate the use of experimentally accessible measures of methyl motion - the NMR-derived generalized order parameters - as a proxy from which to derive changes in protein conformational entropy. PMID:24007504

  18. Secure self-calibrating quantum random-bit generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorentino, M.; Santori, C.; Spillane, S. M.

    2007-03-15

    Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.
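
    The min-entropy bound that drives the distillation step has a one-line definition. A sketch of an empirical block estimate (the block length and the plug-in estimator are our choices, not the paper's tomographic bound):

    ```python
    import numpy as np
    from collections import Counter

    def min_entropy(bits, block=8):
        # H_min = -log2(max_i p_i) per block: a worst-case randomness bound
        # that tells an extractor how many nearly uniform bits can be distilled.
        blocks = [tuple(bits[i:i + block])
                  for i in range(0, len(bits) - block + 1, block)]
        p_max = max(Counter(blocks).values()) / len(blocks)
        return -np.log2(p_max)
    ```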

  19. Consistent Application of the Boltzmann Distribution to Residual Entropy in Crystals

    ERIC Educational Resources Information Center

    Kozliak, Evguenii I.

    2007-01-01

    Four different approaches to residual entropy (the entropy remaining in crystals comprised of nonsymmetric molecules like CO, N₂O, FClO₃, and H₂O as temperatures approach 0 K) are analyzed and a new method of its calculation is developed based on application of the Boltzmann distribution. The inherent connection…
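
    For orientation, the textbook calculation behind such analyses: a crystal of N molecules, each frozen into one of W energetically near-degenerate orientations, retains

    ```latex
    S_{\text{res}} = k_B \ln W^{N} = nR \ln W,
    \qquad
    S_{\text{res}}(\mathrm{CO}) = R \ln 2 \approx 5.76\ \mathrm{J\,mol^{-1}\,K^{-1}},
    ```

    the classic value for carbon monoxide with its two nearly indistinguishable orientations.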

  20. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks.

    PubMed

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.

  1. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    NASA Astrophysics Data System (ADS)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.

  2. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  3. Modeling Loop Entropy

    PubMed Central

    Chirikjian, Gregory S.

    2011-01-01

    Proteins fold from a highly disordered state into a highly ordered one. Traditionally, the folding problem has been stated as one of predicting ‘the’ tertiary structure from sequential information. However, new evidence suggests that the ensemble of unfolded forms may not be as disordered as once believed, and that the native form of many proteins may not be described by a single conformation, but rather an ensemble of its own. Quantifying the relative disorder in the folded and unfolded ensembles as an entropy difference may therefore shed light on the folding process. One issue that clouds discussions of ‘entropy’ is that many different kinds of entropy can be defined: entropy associated with overall translational and rotational Brownian motion, configurational entropy, vibrational entropy, conformational entropy computed in internal or Cartesian coordinates (which can even be different from each other), conformational entropy computed on a lattice; each of the above with different solvation and solvent models; thermodynamic entropy measured experimentally, etc. The focus of this work is the conformational entropy of coil/loop regions in proteins. New mathematical modeling tools for the approximation of changes in conformational entropy during transition from unfolded to folded ensembles are introduced. In particular, models for computing lower and upper bounds on entropy for polymer models of polypeptide coils both with and without end constraints are presented. The methods reviewed here include kinematics (the mathematics of rigid-body motions), classical statistical mechanics and information theory. PMID:21187223

  4. Entropy Production in Convective Hydrothermal Systems

    NASA Astrophysics Data System (ADS)

    Boersing, Nele; Wellmann, Florian; Niederau, Jan

    2016-04-01

    Exploring hydrothermal reservoirs requires reliable estimates of subsurface temperatures to delineate favorable locations of boreholes. It is therefore of fundamental and practical importance to understand the thermodynamic behavior of the system in order to predict its performance with numerical studies. To this end, the thermodynamic measure of entropy production is considered as a useful abstraction tool to characterize the convective state of a system, since it accounts for dissipative heat processes and gives insight into the system's average behavior in a statistical sense. Solving the underlying conservation principles of a convective hydrothermal system is sensitive to initial and boundary conditions, which in turn are prone to uncertain knowledge of subsurface parameters. There exist multiple numerical solutions to the mathematical description of a convective system, and the prediction becomes even more challenging as the vigor of convection increases. Thus, the variety of possible modes contained in such highly non-linear problems needs to be quantified. A synthetic study is carried out to simulate fluid flow and heat transfer in a finite porous layer heated from below. Various two-dimensional models are created such that their corresponding Rayleigh numbers lie in a range from the sub-critical linear to the supercritical non-linear regime, that is, from purely conductive to convection-dominated systems. Entropy production is found to describe the transient evolution of convective processes fairly well and can be used to identify thermodynamic equilibrium. Additionally, varying the aspect ratio for each Rayleigh number shows that the variety of realized convection modes increases with both larger aspect ratio and higher Rayleigh number. This phenomenon is also reflected by an enlarged spread of entropy production for the realized modes. Consequently, the Rayleigh number can be correlated to the magnitude of entropy production. In cases of moderate Rayleigh number and moderate aspect ratio, entropy production even makes it possible to predict a preferred convection mode for a model with homogeneous parameter distribution. As a general rule, the thermodynamic measure of entropy production can be used to analyze uncertainties that accompany the modelling of convective hydrothermal systems. Without considering any probability distributions of input data, this synthetic study shows that a higher entropy production implies a lower ability to uniquely predict the convection pattern. This in turn means that the uncertainty in estimating subsurface temperatures is higher.

  5. Measurement of entanglement entropy in the two-dimensional Potts model using wavelet analysis.

    PubMed

    Tomita, Yusuke

    2018-05-01

    A method is introduced to measure the entanglement entropy using a wavelet analysis. Using this method, the two-dimensional Haar wavelet transform of a configuration of Fortuin-Kasteleyn (FK) clusters is performed. The configuration represents a direct snapshot of spin-spin correlations since spin degrees of freedom are traced out in FK representation. A snapshot of FK clusters loses image information at each coarse-graining process by the wavelet transform. It is shown that the loss of image information measures the entanglement entropy in the Potts model.

  6. Time-series analysis of foreign exchange rates using time-dependent pattern entropy

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryuji; Inoue, Masayoshi

    2013-08-01

    Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
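
    A compact sketch of the procedure as described (binary symbolic dynamics plus a sliding window); the window length, pattern length, and the use of log2 are our assumptions:

    ```python
    import numpy as np

    def pattern_entropy(series, m=4, window=100):
        # Binarise daily changes, then compute the Shannon entropy of the
        # m-symbol patterns occurring inside each sliding window.
        s = (np.diff(np.asarray(series, float)) > 0).astype(int)
        out = []
        for start in range(len(s) - window + 1):
            w = s[start:start + window]
            pats = np.array([w[i:i + m] for i in range(window - m + 1)])
            _, counts = np.unique(pats, axis=0, return_counts=True)
            p = counts / counts.sum()
            out.append(-np.sum(p * np.log2(p)))
        return np.array(out)
    ```

    High values flag windows where the up/down patterns are close to maximally disordered, which is how the instability around turning points and the Lehman shock would show up.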

  7. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion, the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  8. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.

  9. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    NASA Astrophysics Data System (ADS)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in W^{1,1}_loc for the map S_t, t > 0, that associates with given initial data u_0 ∈ Lip(R^N) the corresponding solution S_t u_0 of a Hamilton-Jacobi equation u_t + H(∇_x u) = 0, t ≥ 0, x ∈ R^N, with a uniformly convex Hamiltonian H = H(p). We provide upper and lower estimates of order 1/ε^N on the Kolmogorov ε-entropy in W^{1,1} of the image through the map S_t of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
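
    For reference, the Kolmogorov ε-entropy used above is the standard covering-number notion (textbook definition, not specific to this paper):

    ```latex
    \mathcal{H}_{\varepsilon}\!\left(K \mid W^{1,1}\right) \;=\; \log_2 N_{\varepsilon}(K),
    ```

    where N_ε(K) is the minimal number of balls of radius ε in W^{1,1} needed to cover K.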

  10. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks

    PubMed Central

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-01-01

    Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series undergoes only small changes. This problem becomes especially severe as the time series becomes short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function scoring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring time series complexity, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496

  11. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks.

    PubMed

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-04-06

    Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series undergoes only small changes. This problem becomes especially severe as the time series becomes short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function scoring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring time series complexity, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis.
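
    The core change relative to sample entropy is replacing the hard 0/1 similarity with a smooth, full-range function. A sketch with a logistic profile (the specific profile, its steepness k, and the self-match handling are our assumptions, not the published FMSE definition):

    ```python
    import numpy as np

    def soft_similarity(d, r, k=5.0):
        # Full-range similarity in (0, 1) instead of the hard threshold d <= r.
        return 1.0 / (1.0 + np.exp(k * (d - r)))

    def flexible_sampen(x, m=2, r=0.2):
        x = np.asarray(x, float)
        tol = r * x.std()

        def total_similarity(mm):
            X = np.array([x[i:i + mm] for i in range(len(x) - m)])
            d = np.abs(X[:, None, :] - X[None, :, :]).max(-1)  # Chebyshev distance
            sim = soft_similarity(d, tol)
            np.fill_diagonal(sim, 0.0)                          # drop self-matches
            return sim.sum()

        return -np.log(total_similarity(m + 1) / total_similarity(m))
    ```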

  12. Develop and Test a Solvent Accessible Surface Area-Based Model in Conformational Entropy Calculations

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2012-01-01

    It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (Molecular Mechanics-Poisson Boltzmann Surface Area) and MM-GBSA (Molecular Mechanics-Generalized Born Surface Area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal mode analysis (NMA), is needed to calculate absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, is obtained by summing up the contributions of all atoms, whether buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parameterized using a large set of small molecules for which conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS, the product of temperature T and conformational entropy S, was calculated in these tests; T was always set to 298.15 K throughout the text. First, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for post-entropy calculations): the mean correlation coefficient squared (R2) was 0.56. For the 20 complexes, the TS changes upon binding, TΔS, were also calculated, and the mean R2 was 0.67 between NMA and WSAS. In the second test, TS was calculated for 12 protein decoy sets (each set has 31 conformations) generated by the Rosetta software package. Again, good correlations were achieved for all decoy sets: the mean, maximum and minimum R2 were 0.73, 0.89 and 0.55, respectively. Finally, binding free energies were calculated for 6 protein systems (the numbers of inhibitors range from 4 to 18) using four scoring functions. Compared to the measured binding free energies, the mean R2 of the six protein systems were 0.51, 0.47, 0.40 and 0.43 for MM-GBSA-WSAS, MM-GBSA-NMA, MM-PBSA-WSAS and MM-PBSA-NMA, respectively. The mean RMS errors of prediction were 1.19, 1.24, 1.41 and 1.29 kcal/mol for the four scoring functions, correspondingly. Therefore, the two scoring functions employing WSAS achieved prediction performance comparable to that of the scoring functions using NMA. It should be emphasized that no minimization was performed prior to the WSAS calculation in the last test. Although WSAS is not as rigorous as physical models such as quasi-harmonic analysis and thermodynamic integration (TI), it is computationally very efficient, as only a surface area calculation is involved and no structural minimization is required. Moreover, WSAS has achieved performance comparable to normal mode analysis. We expect that this model could find applications in fields like high-throughput screening (HTS), molecular docking and rational protein design, where efficiency is crucial since there are a large number of compounds, docking poses or protein models to be evaluated. A list of acronyms and abbreviations used in this work is provided for quick reference. PMID:22497310
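
    Structurally, the WSAS estimate reduces to a weighted sum over atoms; the sketch below captures only that structure (atom typing, the per-type weights, and k are fitted in the paper, and the placeholders here are not the published parameters):

    ```python
    def wsas_TS(atoms, w_sas, w_bsas, k):
        """TS estimate as a weighted sum of per-atom exposed (SAS) and
        buried (BSAS) surface areas; weights are looked up per atom type."""
        ts = 0.0
        for atom_type, sas, bsas in atoms:   # (type, exposed area, buried area)
            ts += w_sas[atom_type] * sas + k * w_bsas[atom_type] * bsas
        return ts
    ```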

  13. SUPERMODEL ANALYSIS OF A1246 AND J255: ON THE EVOLUTION OF GALAXY CLUSTERS FROM HIGH TO LOW ENTROPY STATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fusco-Femiano, R.; Lapi, A., E-mail: roberto.fuscofemiano@iaps.inaf.it

    2015-02-10

    We present an analysis of high-quality X-ray data out to the virial radius for the two galaxy clusters A1246 and GMBCG J255.34805+64.23661 (J255) by means of our entropy-based SuperModel. For A1246 we find that the spherically averaged entropy profile of the intracluster medium (ICM) progressively flattens outward, and that a nonthermal pressure component amounting to ≈20% of the total is required to support hydrostatic equilibrium in the outskirts; there we also estimate a modest value C ≈ 1.6 of the ICM clumping factor. These findings agree with previous analyses on other cool-core, relaxed clusters, and lend further support to the picture by Lapi et al. that relates the entropy flattening, the development of the nonthermal pressure component, and the azimuthal variation of ICM properties to weakening boundary shocks. In this scenario clusters are born in a high-entropy state throughout, and are expected to develop on similar timescales a low-entropy state both at the center due to cooling, and in the outskirts due to weakening shocks. However, the analysis of J255 testifies how such a typical evolutionary course can be interrupted or even reversed by merging especially at intermediate redshift, as predicted by Cavaliere et al. In fact, a merger has rejuvenated the ICM of this cluster at z ≈ 0.45 by reestablishing a high-entropy state in the outskirts, while leaving intact or erasing only partially the low-entropy, cool core at the center.

  14. Radiative entropy generation in a gray absorbing, emitting, and scattering planar medium at radiative equilibrium

    NASA Astrophysics Data System (ADS)

    Sadeghi, Pegah; Safavinejad, Ali

    2017-11-01

    Radiative entropy generation in a gray absorbing, emitting, and scattering planar medium at radiative equilibrium with diffuse-gray walls is investigated. The radiative transfer equation and the radiative entropy generation equations are solved using the discrete ordinates method. Components of the radiative entropy generation are considered for two different boundary conditions: both walls at prescribed temperatures, and mixed boundary conditions in which one wall is at a prescribed temperature and the other at a prescribed heat flux. The effects of wall emissivities, optical thickness, single scattering albedo, and the anisotropic-scattering factor on the entropy generation are investigated in detail. The results reveal that entropy generation in the system mainly arises from irreversible radiative transfer at the wall with the lower temperature. The total entropy generation rate for the system with prescribed wall temperatures increases remarkably as the wall emissivity increases; conversely, for the system with mixed boundary conditions, it decreases slightly. Furthermore, as the optical thickness increases, the total entropy generation rate decreases remarkably for the system with prescribed wall temperatures, but increases for the system with mixed boundary conditions. Variation of the single scattering albedo does not considerably affect the total entropy generation rate. This parametric analysis demonstrates that the optical thickness and the wall emissivities have a significant effect on entropy generation in systems at radiative equilibrium. Accounting for the parameters that significantly affect radiative entropy generation provides an opportunity to optimize the design, or to increase the overall performance and efficiency, of systems at radiative equilibrium by applying entropy minimization techniques.

  15. Numerical investigation for entropy generation in hydromagnetic flow of fluid with variable properties and slip

    NASA Astrophysics Data System (ADS)

    Khan, M. Ijaz; Hayat, Tasawar; Alsaedi, Ahmed

    2018-02-01

    This work models and computes viscous fluid flow with variable properties over a rotating stretchable disk. The rotating flow is generated by a nonlinearly stretching rotating surface. Nonlinear thermal radiation and heat generation/absorption are studied. The fluid is electrically conducting under a constant applied magnetic field; polarization and the induced magnetic field are neglected. Attention is focused on the entropy generation rate and the Bejan number, both of which depend on the velocity and thermal fields. The von Kármán approach is utilized to convert the partial differential expressions into ordinary ones. These expressions are non-dimensionalized, and numerical results are obtained for the flow variables. The effects of the magnetic parameter, Prandtl number, radiative parameter, heat generation/absorption parameter, and slip parameter on the velocity and temperature fields, as well as on the entropy generation rate and Bejan number, are discussed. Drag forces (radial and tangential) and heat transfer rates are calculated and discussed. The entropy generation rate is a decreasing function of the magnetic variable and the Reynolds number, while the magnetic variable has the opposite effect on the Bejan number. Opposite behaviors of the heat transfer are also observed for varying values of the radiative and slip parameters.

  16. Computing the Entropy of Kerr-Newman Black Hole Without Brick Walls Method

    NASA Astrophysics Data System (ADS)

    Zhang, Li-Chun; Wu, Yue-Qin; Li, Huai-Fan; Ren, Zhao

    By using the entanglement entropy method, the statistical entropy of the Bose and Fermi fields in a thin film is calculated and the Bekenstein-Hawking entropy of the Kerr-Newman black hole is obtained. Here, the Bose and Fermi fields are entangled with the quantum states in the Kerr-Newman black hole and are outside of the horizon. The divergence of the brick-wall model is avoided, without any cutoff, by the new equation of state density obtained with the generalized uncertainty principle. The calculation implies that the high-density quantum states near the event horizon are strongly correlated with the quantum states in the black hole. The black hole entropy is a quantum effect; it is an intrinsic characteristic of space-time. The ultraviolet cutoff in the brick-wall model is unreasonable, and the generalized uncertainty principle should be considered for the high-energy quantum field near the event horizon. From the calculation, the constant λ introduced in the generalized uncertainty principle is related to the polar angle θ in an axisymmetric space-time.

  17. Entropy and convexity for nonlinear partial differential equations

    PubMed Central

    Ball, John M.; Chen, Gui-Qiang G.

    2013-01-01

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue. PMID:24249768

  18. A modified belief entropy in Dempster-Shafer framework.

    PubMed

    Zhou, Deyun; Tang, Yongchuan; Jiang, Wen

    2017-01-01

    How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, while the information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods fall short. In this paper, a modified belief entropy is proposed that takes into account the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.

  19. A modified belief entropy in Dempster-Shafer framework

    PubMed Central

    Zhou, Deyun; Jiang, Wen

    2017-01-01

    How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, while the information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods fall short. In this paper, a modified belief entropy is proposed that takes into account the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method. PMID:28481914
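
    The Deng entropy named in the two records above is concrete enough to sketch. The following is a minimal illustration, not the authors' implementation: it computes E_d(m) = -Σ m(A) log2(m(A) / (2^|A| - 1)), which reduces to Shannon entropy when every focal element is a singleton; the mass function shown is an invented example.

      import math

      def deng_entropy(masses):
          """Deng entropy of a mass function.

          `masses` maps each focal element (a frozenset of hypotheses) to its
          mass m(A) > 0, with masses summing to 1. Each term divides m(A) by
          2^|A| - 1, the number of non-empty subsets of A.
          """
          return -sum(m * math.log2(m / (2 ** len(A) - 1))
                      for A, m in masses.items() if m > 0)

      # Invented example on the FOD {a, b, c}: singleton focal elements behave
      # like Shannon entropy, while mass on larger focal elements raises it.
      m = {frozenset("a"): 0.4, frozenset("b"): 0.3, frozenset("ab"): 0.3}
      print(deng_entropy(m))

    The modification proposed in these records adds a further dependence on the FOD scale; its exact form is given in the paper and is not reproduced here.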

  20. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
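
    The chromatogram model above (a concentration vector multiplied by a peak-shape matrix) is easy to prototype. The sketch below is an illustration under stated assumptions, not the authors' algorithm: it builds a synthetic chromatogram y = A @ c and recovers c with an entropy-regularized, non-negative least-squares fit, one common way to realize the maximum entropy idea. The peak width, noise level, and weight lam are all invented.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 200                                    # number of time samples
      t = np.arange(n)
      # Columns of A are Gaussian peak shapes centered at each time index.
      A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 4.0) ** 2)
      c_true = np.zeros(n)
      c_true[[60, 70, 130]] = [1.0, 0.8, 0.5]    # three overlapping injections
      y = A @ c_true + 0.01 * rng.standard_normal(n)

      lam = 1e-3                                 # entropy-regularization weight

      def objective(c):
          r = A @ c - y
          return 0.5 * r @ r + lam * np.sum(c * np.log(c))

      def gradient(c):
          return A.T @ (A @ c - y) + lam * (np.log(c) + 1.0)

      res = minimize(objective, np.full(n, 1e-3), jac=gradient,
                     method="L-BFGS-B", bounds=[(1e-9, None)] * n)
      c_hat = res.x                              # deconvolved concentrations

    Raising lam favors smoother, higher-entropy solutions at the cost of fit quality, which mirrors the noise/resolution trade-off reported above.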

  1. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors in thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put differently, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  2. Logarithmic entropy of Kehagias-Sfetsos black hole with self-gravitation in asymptotically flat IR modified Hořava gravity

    NASA Astrophysics Data System (ADS)

    Liu, Molin; Lu, Junwang

    2011-05-01

    Motivated by the recent logarithmic entropy of Hořava-Lifshitz gravity, we investigate Hawking radiation for the Kehagias-Sfetsos black hole from the tunneling perspective. After considering the effect of self-gravitation, we calculate the emission rate and entropy of quantum tunneling by using the Kraus-Parikh-Wilczek method. Both massless and massive particles are considered in this Letter. Interestingly, the two types of tunneling particles have the same emission rate Γ and entropy S_b, whose analytical formulae are Γ = exp[π(r_in^2 - r_out^2)/2 + (π/α) ln(r_in/r_out)] and S_b = A/4 + (π/α) ln(A/4), respectively. Here, α is the Hořava-Lifshitz field parameter. The results show that the logarithmic entropy of Hořava-Lifshitz gravity can be explained well by self-gravitation, which is totally different from other methods. The study of this semiclassical tunneling process may shed light on understanding Hořava-Lifshitz gravity.

  3. Entropy and convexity for nonlinear partial differential equations.

    PubMed

    Ball, John M; Chen, Gui-Qiang G

    2013-12-28

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue.

  4. Viscous regularization of the full set of nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations

    DOE PAGES

    Delchini, Marc O.; Ragusa, Jean C.; Ferguson, Jim

    2017-02-17

    A viscous regularization technique based on the local entropy residual was proposed by Delchini et al. (2015) to stabilize the nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations. This viscous regularization is modulated by the local entropy production and is consistent with the entropy minimum principle. However, Delchini et al. (2015) based their work only on the hyperbolic parts of the Grey Radiation-Hydrodynamic equations and thus omitted the relaxation and diffusion terms present in the material energy and radiation energy equations. In this paper, we extend the theoretical grounds for the method and derive an entropy minimum principle for the full set of nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations. This further strengthens the applicability of the entropy viscosity method as a stabilization technique for radiation-hydrodynamic shock simulations. Radiative shock calculations using constant and temperature-dependent opacities are compared against semi-analytical reference solutions, and we present a procedure for performing spatial convergence studies of such simulations.
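
    The entropy viscosity idea underlying this regularization can be illustrated on a much simpler system. The sketch below is a rough illustration under invented constants, not the paper's radiation-hydrodynamics formulation: it computes an artificial viscosity field for 1D Burgers flow from the local residual of the entropy pair S = u²/2, F = u³/3, so the viscosity is large only where entropy production is large, e.g. near shocks.

      import numpy as np

      # Entropy-viscosity coefficient for 1D Burgers flow u_t + (u^2/2)_x = 0.
      # The residual R = S_t + F_x of the entropy pair modulates the viscosity;
      # c_e and c_max are illustrative tuning constants.
      def entropy_viscosity(u_old, u_new, dx, dt, c_e=1.0, c_max=0.5):
          S_old, S_new = 0.5 * u_old**2, 0.5 * u_new**2
          F = u_new**3 / 3.0
          R = (S_new - S_old) / dt + np.gradient(F, dx)   # entropy residual
          denom = np.abs(S_new - S_new.mean()).max() + 1e-12
          nu_e = c_e * dx**2 * np.abs(R) / denom          # residual-based viscosity
          nu_max = c_max * dx * np.abs(u_new)             # first-order upper bound
          return np.minimum(nu_e, nu_max)

    Away from shocks the residual vanishes with the discretization error, so the scheme stays high-order; at shocks the first-order bound takes over, which is the behavior the entropy minimum principle licenses.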

  5. Entropy Splitting and Numerical Dissipation

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Vinokur, M.; Djomehri, M. J.

    1999-01-01

    A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting results in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potentially desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored by Gerritsen and Olsson. This paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long-time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation is indeed needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its un-split cousin? An extensive numerical study of the vortex preservation capability of the splitting in conjunction with central schemes for long-time integrations is presented. The third is to study the effect of the non-conservative portion of the splitting in obtaining the correct shock location for high-speed complex shock-turbulence interactions. The fourth is to determine whether this method can be extended to other physical equations of state and other evolutionary equation sets. If numerical dissipation is needed, the Yee, Sandham, and Djomehri (1999) numerical dissipation is employed. The Yee et al. schemes fit in the Olsson and Oliger framework.

  6. Properties of an entropy-based signal receiver with an application to ultrasonic molecular imaging.

    PubMed

    Hughes, M S; McCarthy, J E; Marsh, J N; Arbeit, J M; Neumann, R G; Fuhrhop, R W; Wallace, K D; Znidersic, D R; Maurizi, B N; Baldwin, S L; Lanza, G M; Wickline, S A

    2007-06-01

    Qualitative and quantitative properties of the finite part, H(f), of the Shannon entropy of a continuous waveform f(t) in the continuum limit are derived in order to illuminate its use for waveform characterization. Simple upper and lower bounds on H(f), based on features of f(t), are defined. Quantitative criteria for a priori estimation of the average-case variation of H(f) and log E(f), where E(f) is the signal energy of f(t), are also derived. These provide relative sensitivity estimates that could be used to prospectively choose optimal imaging strategies in real-time ultrasonic imaging machines, where system bandwidth is often pushed to its limits. To demonstrate the utility of these sensitivity relations for this application, a study was performed to assess the feasibility of identifying angiogenic neovasculature targeted with perfluorocarbon nanoparticles that specifically bind to alpha(v)beta3-integrin expression in tumors. The outcome of this study agrees with the prospective sensitivity estimates that were used for the two receivers. Moreover, these data demonstrate the ability of entropy-based signal receivers, when used in conjunction with targeted nanoparticles, to elucidate the presence of alpha(v)beta3 integrins in primordial neovasculature, particularly in acoustically unfavorable environments.

  7. Gas-phase acidities of cysteine-polyalanine peptides I: A(3,4)CSH and HSCA(3,4).

    PubMed

    Ren, Jianhua; Tan, John P; Harper, Robert T

    2009-10-15

    The gas-phase acidities of four cysteine-polyalanine peptides, A(3,4)CSH and HSCA(3,4), were determined using the extended Cooks kinetic method with full entropy analysis. A triple-quadrupole mass spectrometer with an electrospray interface was employed for the experimental study. The ion activation was achieved via collision-induced dissociation (CID) experiments. The deprotonation enthalpies (Delta(acid)H) of the peptides were determined to be 332.2 +/- 2.0 kcal/mol (A(3)CSH), 325.9 +/- 2.0 kcal/mol (A(4)CSH), 319.3 +/- 3.0 kcal/mol (HSCA(3)), and 319.2 +/- 4.0 kcal/mol (HSCA(4)). The deprotonation entropies (Delta(acid)S) of the peptides were estimated based on the entropy term (Delta(DeltaS)) and the deprotonation entropies of the reference acids. By using the deprotonation enthalpies and entropies, the gas-phase acidities (Delta(acid)G) of the peptides were derived: 325.0 +/- 2.0 kcal/mol (A(3)CSH), 320.2 +/- 2.0 kcal/mol (A(4)CSH), 316.3 +/- 3.0 kcal/mol (HSCA(3)), and 315.4 +/- 4.0 kcal/mol (HSCA(4)). Conformations and energetic information of the peptides were calculated through simulated annealing (Tripos), geometry optimization (AM1), and single-point energy calculations (B3LYP/6-31+G(d)). The calculated theoretical deprotonation enthalpies (Delta(acid)H) of 334.2 kcal/mol (A(3)CSH), 327.7 kcal/mol (A(4)CSH), 320.6 kcal/mol (HSCA(3)), and 318.6 kcal/mol (HSCA(4)) are in good agreement with the experimentally determined values. Both the experimental and computational studies suggest that the two N-terminal cysteine peptides, HSCA(3,4), are significantly more acidic than the corresponding C-terminal ones, A(3,4)CSH. The higher acidities of the former are likely due to helical conformational effects, whereby the thiolate anion may be strongly stabilized by interaction with the helix macrodipole.

  8. Advanced Mathematical Tools in Metrology III

    NASA Astrophysics Data System (ADS)

    Ciarlini, P.

    The Table of Contents for the book is as follows:
    * Foreword
    * Invited Papers
    * The ISO Guide to the Expression of Uncertainty in Measurement: A Bridge between Statistics and Metrology
    * Bootstrap Algorithms and Applications
    * The TTRSs: 13 Oriented Constraints for Dimensioning, Tolerancing & Inspection
    * Graded Reference Data Sets and Performance Profiles for Testing Software Used in Metrology
    * Uncertainty in Chemical Measurement
    * Mathematical Methods for Data Analysis in Medical Applications
    * High-Dimensional Empirical Linear Prediction
    * Wavelet Methods in Signal Processing
    * Software Problems in Calibration Services: A Case Study
    * Robust Alternatives to Least Squares
    * Gaining Information from Biomagnetic Measurements
    * Full Papers
    * Increase of Information in the Course of Measurement
    * A Framework for Model Validation and Software Testing in Regression
    * Certification of Algorithms for Determination of Signal Extreme Values during Measurement
    * A Method for Evaluating Trends in Ozone-Concentration Data and Its Application to Data from the UK Rural Ozone Monitoring Network
    * Identification of Signal Components by Stochastic Modelling in Measurements of Evoked Magnetic Fields from Peripheral Nerves
    * High Precision 3D-Calibration of Cylindrical Standards
    * Magnetic Dipole Estimations for MCG-Data
    * Transfer Functions of Discrete Spline Filters
    * An Approximation Method for the Linearization of Tridimensional Metrology Problems
    * Regularization Algorithms for Image Reconstruction from Projections
    * Quality of Experimental Data in Hydrodynamic Research
    * Stochastic Drift Models for the Determination of Calibration Intervals
    * Short Communications
    * Projection Method for Lidar Measurement
    * Photon Flux Measurements by Regularised Solution of Integral Equations
    * Correct Solutions of Fit Problems in Different Experimental Situations
    * An Algorithm for the Nonlinear TLS Problem in Polynomial Fitting
    * Designing Axially Symmetric Electromechanical Systems of Superconducting Magnetic Levitation in Matlab Environment
    * Data Flow Evaluation in Metrology
    * A Generalized Data Model for Integrating Clinical Data and Biosignal Records of Patients
    * Assessment of Three-Dimensional Structures in Clinical Dentistry
    * Maximum Entropy and Bayesian Approaches to Parameter Estimation in Mass Metrology
    * Amplitude and Phase Determination of Sinusoidal Vibration in the Nanometer Range using Quadrature Signals
    * A Class of Symmetric Compactly Supported Wavelets and Associated Dual Bases
    * Analysis of Surface Topography by Maximum Entropy Power Spectrum Estimation
    * Influence of Different Kinds of Errors on Imaging Results in Optical Tomography
    * Application of the Laser Interferometry for Automatic Calibration of Height Setting Micrometer
    * Author Index

  9. Teaching Entropy Analysis in the First-Year High School Course and Beyond

    ERIC Educational Resources Information Center

    Bindel, Thomas H.

    2004-01-01

    A new method is presented that educates and empowers teachers, assists them in incorporating entropy analysis into their curricula, and provides an entropy-analysis unit that can be used in classrooms. Topics that teachers can cover, depending on the ability of the students and the comfort level of the teacher, are included.

  10. Automatic event detection in low SNR microseismic signals based on multi-scale permutation entropy and a support vector machine

    NASA Astrophysics Data System (ADS)

    Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming

    2017-07-01

    Microseismic monitoring is an effective means of providing early warning of rock or coal dynamic disasters, and its first step is microseismic event detection; however, low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper combines multi-scale permutation entropy with a support vector machine to detect low-SNR microseismic events. First, a signal-feature extraction method based on multi-scale permutation entropy is proposed by studying how the scale factor influences the signal's permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by computing the multi-scale permutation entropy of the collected vibration signals and constructing a feature-vector set. Finally, a comparative analysis of microseismic events and noise signals in the experiment shows that multi-scale permutation entropy fully captures the different characteristics of the two. Combined with the support vector machine's high classification accuracy and fast real-time operation, the detection model can meet the requirements of online, real-time extraction of microseismic events.
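
    As a concrete illustration of the feature-extraction step, the sketch below computes multi-scale permutation entropy for a 1D signal. It is a minimal reading of the standard technique, not the authors' code; the order, delay, and scale range are illustrative defaults.

      import math
      import numpy as np

      def coarse_grain(x, scale):
          """Average non-overlapping windows of length `scale` (multi-scale step)."""
          n = len(x) // scale
          return x[: n * scale].reshape(n, scale).mean(axis=1)

      def permutation_entropy(x, order=3, delay=1):
          """Normalized entropy of ordinal patterns: 0 = fully regular, 1 = random."""
          counts = {}
          for i in range(len(x) - (order - 1) * delay):
              pattern = tuple(np.argsort(x[i : i + order * delay : delay]))
              counts[pattern] = counts.get(pattern, 0) + 1
          p = np.array(list(counts.values()), dtype=float)
          p /= p.sum()
          return float(-(p * np.log2(p)).sum() / np.log2(math.factorial(order)))

      def multiscale_pe(x, scales=range(1, 6), order=3):
          """Feature vector of permutation entropies, one per scale."""
          x = np.asarray(x, dtype=float)
          return [permutation_entropy(coarse_grain(x, s), order) for s in scales]

    The resulting feature vectors would then be fed to an SVM classifier (e.g. scikit-learn's SVC) trained on labeled event and noise segments.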

  11. Image construction from the IRAS survey and data fusion

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. R.

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, namely non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector in each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51, with an average spatial resolution of 45 arcseconds, is presented using 60 micron survey data. This exceeds the telescope's diffraction limit of 1 arcminute at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re)construction. An increase in the accuracy of these parameters is expected as a result of this more systematic approach.

  12. Shannon Entropy of the Canonical Genetic Code

    NASA Astrophysics Data System (ADS)

    Nemzer, Louis

    The probability that a non-synonymous point mutation in DNA will adversely affect the functionality of the resultant protein is greatly reduced if the substitution is conservative, that is, if the amino acid coded by the mutated codon has physico-chemical properties similar to the original. Many simplified alphabets, which group the 20 common amino acids into families, have been proposed. To evaluate these schemes objectively, we introduce a novel, quantitative method based on the inherent redundancy in the canonical genetic code. By calculating the Shannon information entropy carried by 1- or 2-bit messages, groupings that best leverage the robustness of the code are identified. The relative importance of properties related to protein folding (such as hydropathy and size) and to function (including side-chain acidity) can also be estimated. In addition, this approach allows us to quantify the average information value of nucleotide codon positions and explore the physiological basis for distinguishing between transition and transversion mutations. Supported by NSU PFRDG Grant #335347.
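
    The information value of codon positions mentioned here can be computed directly from the standard genetic code. The sketch below is a minimal illustration under stated assumptions (uniform codon usage, the standard nuclear code), not the author's method: it measures how many bits each codon position contributes about the encoded amino acid, as the mutual information I(position; amino acid).

      import math
      from collections import Counter

      BASES = "TCAG"
      # Standard genetic code in NCBI table order (first base slowest-varying).
      AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
      CODE = {a + b + c: aa for (a, b, c), aa in
              zip(((x, y, z) for x in BASES for y in BASES for z in BASES), AAS)}

      def entropy(symbols):
          """Shannon entropy (bits) of a sequence of symbols."""
          counts = Counter(symbols)
          total = sum(counts.values())
          return -sum(c / total * math.log2(c / total) for c in counts.values())

      # Entropy of the amino acid (including stop) over all 64 codons.
      H_aa = entropy(CODE.values())

      # Mutual information between each codon position and the amino acid:
      # I(pos; aa) = H(aa) - H(aa | base at pos), bases assumed equally likely.
      for pos in range(3):
          H_cond = sum(entropy([aa for codon, aa in CODE.items() if codon[pos] == b])
                       for b in BASES) / 4
          print(f"position {pos + 1}: I = {H_aa - H_cond:.3f} bits")

    Under these assumptions the third position contributes the least information, reflecting the wobble redundancy that conservative groupings exploit.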

  13. Correntropy-based partial directed coherence for testing multivariate Granger causality in nonlinear processes

    NASA Astrophysics Data System (ADS)

    Kannan, Rohit; Tangirala, Arun K.

    2014-06-01

    Identification of directional influences in multivariate systems is of prime importance in several applications of engineering and the sciences, such as plant topology reconstruction, fault detection and diagnosis, and neuroscience. A spectrum of related directionality measures, ranging from linear measures such as partial directed coherence (PDC) to nonlinear measures such as transfer entropy, has emerged over the past two decades. The PDC-based technique is simple and effective, but being a linear directionality measure it has limited applicability. On the other hand, transfer entropy, despite being a robust nonlinear measure, is computationally intensive and practically implementable only for bivariate processes. The objective of this work is to develop a nonlinear directionality measure, termed KPDC, that possesses the simplicity of PDC but is still applicable to nonlinear processes. The technique is founded on correntropy, a recently proposed generalized correlation measure. The proposed method is equivalent to constructing PDC in a kernel space, where the PDC is estimated using a vector autoregressive model built on correntropy. A consistent estimator of the KPDC is developed and important theoretical results are established. A permutation scheme combined with the sequential Bonferroni procedure is proposed for testing the hypothesis of absence of causality. It is demonstrated through several case studies that the proposed methodology effectively detects Granger causality in nonlinear processes.
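
    Correntropy itself is simple to estimate from samples. The following is a minimal sketch of the usual Gaussian-kernel estimator, not the KPDC construction of the paper; the kernel width sigma is an illustrative choice.

      import numpy as np

      def correntropy(x, y, sigma=1.0):
          """Sample estimate of V(X, Y) = E[k_sigma(X - Y)] with a Gaussian kernel."""
          d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
          return float(np.mean(np.exp(-d**2 / (2.0 * sigma**2))))

    Because the Gaussian kernel expands into a weighted sum of all even moments of X - Y, correntropy is sensitive to nonlinear dependence that ordinary correlation misses; the paper builds its vector autoregressive model on this measure.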

  14. EEG artifacts reduction by multivariate empirical mode decomposition and multiscale entropy for monitoring depth of anaesthesia during surgery.

    PubMed

    Liu, Quan; Chen, Yi-Feng; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2017-08-01

    Electroencephalography (EEG) has been widely utilized to measure the depth of anaesthesia (DOA) during operations. However, EEG signals are usually contaminated by artifacts, which degrade the accuracy of the measured DOA. In this study, an effective filtering algorithm based on multivariate empirical mode decomposition and multiscale entropy (MSE) is proposed for measuring DOA. The mean entropy over MSE scales is used as an index to identify artifact-free intrinsic mode functions. The effect of different levels of artifacts on the performance of the proposed filter is analysed using simulated data. Furthermore, EEG signals from 21 patients are collected and analysed using sample entropy to calculate the complexity for monitoring DOA. The correlation coefficients between entropy and the bispectral index (BIS) are 0.14 ± 0.30 before filtering and 0.63 ± 0.09 after filtering. An artificial neural network (ANN) model is used for range mapping in order to correlate the measurements with BIS, yielding a strong correlation coefficient (0.75 ± 0.08). These results verify that entropy values and BIS are strongly correlated for the purpose of DOA monitoring and that the proposed filtering method can effectively remove artifacts from EEG signals; it also performs better than the commonly used wavelet denoising method. This study provides a fully adaptive and automated EEG filter for measuring DOA more accurately and thus reducing risk related to the maintenance of anaesthetic agents.
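
    Sample entropy, the complexity index used above, is straightforward to sketch. The code below is a minimal illustration under common defaults (template length m = 2, tolerance r = 0.2 times the standard deviation), not the authors' pipeline; the multiscale variant simply applies it to coarse-grained copies of the signal.

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          """SampEn = -ln(A/B): A, B count template matches of length m+1 and m."""
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * x.std()

          def matches(length):
              t = np.array([x[i:i + length] for i in range(len(x) - length)])
              d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev
              return ((d <= r).sum() - len(t)) / 2                 # drop self-matches
          B, A = matches(m), matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def multiscale_entropy(x, scales=range(1, 6), m=2):
          """Sample entropy of coarse-grained copies of the signal, one per scale."""
          x = np.asarray(x, dtype=float)
          out = []
          for s in scales:
              n = len(x) // s
              out.append(sample_entropy(x[: n * s].reshape(n, s).mean(axis=1), m))
          return out

    Averaging the entropies across scales, as the record describes, then gives a single index for ranking intrinsic mode functions as artifact-laden or artifact-free.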

  15. Foreign exchange rate entropy evolution during financial crises

    NASA Astrophysics Data System (ADS)

    Stosic, Darko; Stosic, Dusan; Ludermir, Teresa; de Oliveira, Wilson; Stosic, Tatijana

    2016-05-01

    This paper examines the effects of financial crises on foreign exchange (FX) markets, where entropy evolution is measured for different exchange rates using the time-dependent block entropy method. Empirical results suggest that financial crises are associated with a significant increase in exchange rate entropy, reflecting instability in FX market dynamics. In accordance with phenomenological expectations, it is found that FX markets with large liquidity and large trading volume are more inert: they recover more quickly from a crisis than markets with small liquidity and trading volume. Moreover, our numerical analysis shows that periods of economic uncertainty are preceded by periods of low entropy values, which may serve as a tool for anticipating the onset of financial crises.
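
    Time-dependent block entropy is easy to prototype. The sketch below is a minimal reading of the idea, not the authors' procedure: returns are reduced to binary symbols by sign (an assumed symbolization), block frequencies are counted in a sliding window, and the Shannon entropy of each window is reported; the window, block length, and step are invented defaults.

      import numpy as np
      from collections import Counter

      def block_entropy(symbols, L=3):
          """Shannon entropy (bits) of length-L blocks in a symbol sequence."""
          blocks = Counter(tuple(symbols[i:i + L])
                           for i in range(len(symbols) - L + 1))
          p = np.array(list(blocks.values()), dtype=float)
          p /= p.sum()
          return float(-(p * np.log2(p)).sum())

      def rolling_block_entropy(returns, window=500, L=3, step=20):
          """Entropy evolution: block entropy over a sliding window of returns."""
          s = (np.asarray(returns) > 0).astype(int)   # binary symbolization by sign
          return [block_entropy(s[i:i + window], L)
                  for i in range(0, len(s) - window + 1, step)]

    Plotting the resulting series against time is the kind of entropy-evolution curve on which the crisis-related jumps reported above would appear.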

  16. Entanglement entropy and mutual information production rates in acoustic black holes.

    PubMed

    Giovanazzi, Stefano

    2011-01-07

    A method to investigate acoustic Hawking radiation is proposed, where entanglement entropy and mutual information are measured from the fluctuations of the number of particles. The rate of entropy radiated per one-dimensional (1D) channel is given by S=κ/12, where κ is the sound acceleration on the sonic horizon. This entropy production is accompanied by a corresponding formation of mutual information to ensure the overall conservation of information. The predictions are confirmed using an ab initio analytical approach in transonic flows of 1D degenerate ideal Fermi fluids.

  17. Empirical entropic contributions in computational docking: evaluation in APS reductase complexes.

    PubMed

    Chang, Max W; Belew, Richard K; Carroll, Kate S; Olson, Arthur J; Goodsell, David S

    2008-08-01

    The results from reiterated docking experiments may be used to evaluate an empirical vibrational entropy of binding in ligand-protein complexes. We have tested several methods for evaluating the vibrational contribution to binding of 22 nucleotide analogues to the enzyme APS reductase. These include two cluster size methods that measure the probability of finding a particular conformation, a method that estimates the extent of the local energetic well by looking at the scatter of conformations within clustered results, and an RMSD-based method that uses the overall scatter and clustering of all conformations. We have also directly characterized the local energy landscape by randomly sampling around docked conformations. The simple cluster size method shows the best performance, improving the identification of correct conformations in multiple docking experiments.

  18. Spatial analysis techniques applied to uranium prospecting in Chihuahua State, Mexico

    NASA Astrophysics Data System (ADS)

    Hinojosa de la Garza, Octavio R.; Montero Cabrera, María Elena; Sanín, Luz H.; Reyes Cortés, Manuel; Martínez Meyer, Enrique

    2014-07-01

    To estimate the distribution of uranium minerals in Chihuahua, the advanced statistical model "Maximum Entropy Method" (MaxEnt) was applied. A distinguishing feature of this method is that it can fit complex models even with small datasets, as is the case for the locations of uranium ores in the State of Chihuahua. For georeferencing the uranium ores, a database from the United States Geological Survey and a workgroup of experts in Mexico was used. The main contribution of this paper is the proposal of maximum entropy techniques to obtain the mineral's potential distribution. The model used 24 environmental layers, such as topography, gravimetry, climate (WorldClim), and soil properties, to project the uranium distribution across the study area. The locations predicted by the model were validated by comparison with other research of the Mexican Service of Geological Survey, by direct exploration of specific areas, and through talks with former exploration workers of the enterprise "Uranio de Mexico". Results: new uranium areas predicted by the model were validated, and some relationship was found between the model predictions and geological faults. Conclusions: modeling by spatial analysis provides additional information to the energy and mineral resources sectors.

  19. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
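
    For the unregularized core of the problem, minimizing Csiszár's I-divergence between a non-negative measurement y and a linear model K @ c admits a well-known multiplicative (EM-type) update, which is the asymptotic connection the record mentions. The sketch below shows that bare iteration; the paper's penalized variants (Shannon entropy, L1, Good's roughness with one-step-late updates) are not reproduced here, and the starting point and iteration count are invented.

      import numpy as np

      def min_idivergence(y, K, n_iter=200, eps=1e-12):
          """Multiplicative update minimizing I(y || K c) over non-negative c."""
          c = np.full(K.shape[1], y.sum() / K.shape[1])  # flat positive start
          col_sums = K.sum(axis=0)                       # K^T 1
          for _ in range(n_iter):
              c *= (K.T @ (y / (K @ c + eps))) / (col_sums + eps)
          return c

    Each iteration preserves non-negativity and, without a penalty, decreases the I-divergence monotonically, which is why regularization is needed to keep noisy problems from converging to artefact-ridden estimates.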

  20. Groupwise registration of MR brain images with tumors.

    PubMed

    Tang, Zhenyu; Wu, Yihong; Fan, Yong

    2017-08-04

    A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. In particular, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are registered groupwise to a group center image, guided by a digraph of images so that the total length of the 'image registration paths' is minimized, and the original tumor images are then warped to the group center image using the resulting deformation fields. We have evaluated our method on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and the average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10^-9).
