Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity
2015-10-23
AFRL-AFOSR-VA-TR-2015-0337. Jean-Luc Guermond, Texas A&M University. Final report, dates covered 01-07-2012 to 30-06-2015. Conservation equations can be stabilized by using the so-called entropy viscosity method, and we proposed to investigate this new technique.
Ugarte, Juan P.; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John
2014-01-01
There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping. PMID:25489858
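As a point of reference for the approximate entropy statistic these maps are built from, the following is a minimal NumPy sketch of the standard Pincus-style ApEn computation; the window length m and tolerance factor are common illustrative defaults, not the settings used in this study.

```python
# Minimal sketch of approximate entropy (ApEn); m and r_factor are illustrative
# defaults, not the parameters used in the rotor-mapping study above.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)  # tolerance, commonly 0.1-0.25 times the SD

    def phi(mm):
        # all overlapping templates of length mm
        t = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        # fraction of templates within tolerance (self-matches included, as in ApEn)
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# toy check: a regular signal should score lower than white noise
rng = np.random.default_rng(0)
print(approximate_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))
print(approximate_entropy(rng.standard_normal(500)))
```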
Minimal entropy approximation for cellular automata
NASA Astrophysics Data System (ADS)
Fukś, Henryk
2014-02-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
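For readers who want the reference quantity the approximations are judged against, a direct Monte Carlo estimate of the density response curve for rule 26 can be obtained by brute-force simulation, as sketched below; the lattice size and number of time steps are arbitrary illustrative choices, and this is the benchmark simulation, not the minimal entropy or local structure construction itself.

```python
# Direct-simulation sketch of the density response curve for elementary CA rule 26:
# start from a random configuration with density p of ones, iterate the rule,
# and record the final density.
import numpy as np

RULE = 26  # Wolfram code

def step(cells, rule=RULE):
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right   # neighbourhood index 0..7
    table = (rule >> np.arange(8)) & 1   # rule lookup table
    return table[idx]

def density_response(p, size=10000, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    cells = (rng.random(size) < p).astype(np.int64)
    for _ in range(steps):
        cells = step(cells)
    return cells.mean()

for p in np.linspace(0.1, 0.9, 9):
    print(f"initial density {p:.1f} -> final density {density_response(p):.3f}")
```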
Exploring stability of entropy analysis for signal with different trends
NASA Astrophysics Data System (ADS)
Zhang, Yin; Li, Jin; Wang, Jun
2017-03-01
Considering the effects of environmental disturbances and instrument systems, actual detected signals always carry different trends, which makes it difficult to accurately capture signal complexity, so choosing stable and effective analysis methods is very important. In this paper, we applied entropy measures, base-scale entropy and approximate entropy, to analyze signal complexity, and studied the effect of trends on an ideal signal and on heart rate variability (HRV) signals, namely linear, periodic, and power-law trends, which are likely to occur in actual signals. The results show that approximate entropy is unstable when different trends are embedded into the signals, so it is not suitable for analyzing signals with trends. However, base-scale entropy has preferable stability and accuracy for signals with different trends, so base-scale entropy is an effective method for analyzing actual signals.
Cao, Yuzhen; Cai, Lihui; Wang, Jiang; Wang, Ruofan; Yu, Haitao; Cao, Yibin; Liu, Jing
2015-08-01
In this paper, experimental neurophysiologic recording and statistical analysis are combined to investigate the nonlinear characteristics and the cognitive function of the brain. Fuzzy approximate entropy and fuzzy sample entropy are applied to characterize the model-based simulated series and electroencephalogram (EEG) series of Alzheimer's disease (AD). The effectiveness and advantages of these two kinds of fuzzy entropy are first verified through the simulated EEG series generated by the alpha rhythm model, including stronger relative consistency and robustness. Furthermore, in order to detect the abnormal irregularity and chaotic behavior in the AD brain, complexity features based on these two fuzzy entropies are extracted in the delta, theta, alpha, and beta bands. It is demonstrated that, owing to the introduction of fuzzy set theory, the fuzzy entropies can better distinguish EEG signals of AD from those of normal subjects than approximate entropy and sample entropy. Moreover, the entropy values of AD are significantly decreased in the alpha band, particularly in the temporal brain region, such as electrodes T3 and T4. In addition, fuzzy sample entropy achieves higher group differences in different brain regions and a higher average classification accuracy of 88.1% with a support vector machine classifier. The obtained results show that fuzzy sample entropy may be a powerful tool for characterizing the complexity abnormalities of AD, which could be helpful in further understanding the disease.
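A rough sketch of the fuzzy sample entropy idea referenced above is given below: the hard similarity threshold of sample entropy is replaced by a smooth exponential membership function. The parameters (m, q, and the tolerance factor) are common defaults, not the values used in this EEG study.

```python
# Sketch of fuzzy sample entropy: like sample entropy, but the hard threshold
# d <= r is replaced by a smooth exponential membership exp(-(d/r)**q).
import numpy as np

def fuzzy_sample_entropy(x, m=2, q=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def mean_similarity(mm):
        # templates of length mm with their local mean removed (a usual fuzzy-entropy choice)
        t = np.array([x[i:i + mm] for i in range(n - mm)])
        t = t - t.mean(axis=1, keepdims=True)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        sim = np.exp(-((d / r) ** q))   # smooth membership instead of a hard threshold
        np.fill_diagonal(sim, 0.0)      # exclude self-matches
        return sim.sum() / (len(t) * (len(t) - 1))

    return -np.log(mean_similarity(m + 1) / mean_similarity(m))

rng = np.random.default_rng(1)
print(fuzzy_sample_entropy(rng.standard_normal(400)))   # irregular signal -> larger value
```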
Entropy Analysis of Kinetic Flux Vector Splitting Schemes for the Compressible Euler Equations
NASA Technical Reports Server (NTRS)
Shiuhong, Lui; Xu, Jun
1999-01-01
Flux Vector Splitting (FVS) schemes form one group of approximate Riemann solvers for the compressible Euler equations. In this paper, the discretized entropy condition of the Kinetic Flux Vector Splitting (KFVS) scheme based on gas-kinetic theory is proved. The proof of the entropy condition involves the difference in entropy definition between distinguishable and indistinguishable particles.
Using entropy measures to characterize human locomotion.
Leverick, Graham; Szturm, Tony; Wu, Christine Q
2014-12-01
Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system.
Min, Jianliang; Wang, Ping; Hu, Jianfeng
2017-01-01
Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, i.e., spectral entropy, approximate entropy, sample entropy and fuzzy entropy, as features compared with autoregressive (AR) modeling by four classifiers. Second, we captured four significant channel regions according to weight-based electrodes via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from four channel regions. Twelve healthy subjects performed continuous simulated driving for 1-2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3% and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver.
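As an illustration of one of the fused features, the sketch below computes a spectral entropy for an EEG epoch (the Shannon entropy of the normalized power spectrum); SciPy is assumed to be available, and the sampling rate and band limits are placeholders rather than the study's values.

```python
# Sketch of spectral entropy, one of the fused EEG features described above.
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs=250.0, band=(0.5, 45.0)):
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    keep = (f >= band[0]) & (f <= band[1])
    p = pxx[keep] / pxx[keep].sum()          # normalized power distribution
    h = -np.sum(p * np.log2(p + 1e-12))      # Shannon entropy in bits
    return h / np.log2(len(p))               # optional normalization to [0, 1]

rng = np.random.default_rng(2)
epoch = rng.standard_normal(5 * 250)         # 5 s of surrogate "EEG"
print(spectral_entropy(epoch))
```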
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
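The plug-in estimator discussed above is simple enough to state in a few lines: estimate the word-length-k block probabilities empirically and report H_k / k. The sketch below illustrates only that baseline; the word length k is where the undersampling trade-off mentioned in the abstract appears.

```python
# Sketch of the plug-in (maximum-likelihood) entropy-rate estimator for a binary sequence.
from collections import Counter
import math
import random

def plug_in_entropy_rate(bits, k):
    words = ["".join(map(str, bits[i:i + k])) for i in range(len(bits) - k + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    h_k = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h_k / k   # bits per symbol

random.seed(0)
seq = [random.randint(0, 1) for _ in range(20000)]   # i.i.d. fair bits, true rate = 1
for k in (1, 4, 8, 12):
    print(k, round(plug_in_entropy_rate(seq, k), 3))  # bias grows with k as words get undersampled
```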
Fractal-Based Analysis of the Influence of Music on Human Respiration
NASA Astrophysics Data System (ADS)
Reza Namazi, H.
An important challenge in respiration-related studies is to investigate the influence of external stimuli on human respiration. An auditory stimulus is an important type of stimulus that influences human respiration. However, no trend has previously been reported that relates the characteristics of the auditory stimulus to the characteristics of the respiratory signal. In this paper, we investigate the correlation between auditory stimuli and the respiratory signal from a fractal point of view. We found that the fractal structure of the respiratory signal is correlated with the fractal structure of the applied music. Based on the obtained results, music with a greater fractal dimension results in a respiratory signal with a smaller fractal dimension. In order to verify this result, we use approximate entropy. The results show that the respiratory signal has a smaller approximate entropy when music with a smaller approximate entropy is chosen. The method of analysis could be further investigated to analyze the variations of different physiological time series due to various types of stimuli when complexity is the main concern.
Wavelet entropy characterization of elevated intracranial pressure.
Xu, Peng; Scalzo, Fabien; Bergsneider, Marvin; Vespa, Paul; Miller, Chad; Hu, Xiao
2008-01-01
Intracranial hypertension (ICH) often occurs in patients with traumatic brain injury (TBI), stroke, tumor, etc. The pathology of ICH is still controversial. In this work, we used wavelet entropy and relative wavelet entropy to study, for the first time, the differences between the normal and hypertensive states of ICP. The wavelet entropy revealed findings similar to approximate entropy: entropy during the ICH state is smaller than in the normal state. Moreover, with wavelet entropy, we can see that the ICH state has more focused energy in the low wavelet frequency band (0-3.1 Hz) than the normal state. The relative wavelet entropy shows that the energy distribution across the wavelet bands actually differs between these two states. Based on these results, we suggest that ICH may be formed by the re-allocation of oscillation energy within the brain.
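A minimal sketch of the wavelet entropy computation described above is given below, assuming the PyWavelets package is available; the wavelet family and decomposition level are illustrative choices, not those of the ICP study.

```python
# Sketch of wavelet entropy: discrete wavelet decomposition, relative energy per
# scale, and the Shannon entropy of that energy distribution.
import numpy as np
import pywt

def wavelet_entropy(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()            # relative wavelet energy per band
    return -np.sum(p * np.log(p + 1e-12))    # total wavelet entropy

rng = np.random.default_rng(3)
print(wavelet_entropy(rng.standard_normal(4096)))
```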
NASA Astrophysics Data System (ADS)
Gul, Ahmet; Erman, Burak
2018-03-01
Prediction of peptide binding on specific human leukocyte antigens (HLA) has long been studied with successful results. We herein describe the effects of entropy and dynamics by investigating the binding stabilities of 10 nanopeptides on various HLA Class I alleles using a theoretical model based on molecular dynamics simulations. The fluctuational entropies of the peptides are estimated over a temperature range of 310-460 K. The estimated entropies correlate well with experimental binding affinities of the peptides: peptides that have higher binding affinities have lower entropies compared to non-binders, which have significantly larger entropies. The computation of the entropies is based on a simple model that requires short molecular dynamics trajectories and allows for approximate but rapid determination. The paper draws attention to the long neglected dynamic aspects of peptide binding, and provides a fast computation scheme that allows for rapid scanning of large numbers of peptides on selected HLA antigens, which may be useful in defining the right peptides for personal immunotherapy.
Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu
2018-01-24
Feature recognition and fault diagnosis play an important role in equipment safety and the stable operation of rotating machinery. In order to cope with the complexity of the vibration signals of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from the vibration signals, namely, singular spectrum entropy, power spectrum entropy, and approximate entropy. Then the feature fusion model is constructed to classify and diagnose the fault signals. The proposed approach combines comprehensive information from different aspects and is more sensitive to the fault features. The experimental results on simulated fault signals verified the better performance of the proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher than that of methods using the three kinds of information entropy separately. The new approach is shown to be an effective fault recognition method for rotating machinery.
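For concreteness, here is a sketch of one of the three fused features, singular spectrum entropy, computed from the singular values of a trajectory (Hankel) matrix of the vibration signal; the embedding dimension is an assumed illustrative value.

```python
# Sketch of singular spectrum entropy: Shannon entropy of the normalized singular
# values of the signal's trajectory matrix.
import numpy as np

def singular_spectrum_entropy(x, embed_dim=20):
    x = np.asarray(x, dtype=float)
    n = len(x) - embed_dim + 1
    traj = np.array([x[i:i + embed_dim] for i in range(n)])   # trajectory (Hankel) matrix
    s = np.linalg.svd(traj, compute_uv=False)                 # singular values
    p = s / s.sum()
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(4)
print(singular_spectrum_entropy(rng.standard_normal(2000)))
```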
The pressure and entropy of a unitary Fermi gas with particle-hole fluctuation
NASA Astrophysics Data System (ADS)
Gong, Hao; Ruan, Xiao-Xia; Zong, Hong-Shi
2018-01-01
We calculate the pressure and entropy of a unitary Fermi gas based on universal relations combined with our previous prediction of energy which was calculated within the framework of the non-self-consistent T-matrix approximation with particle-hole fluctuation. The resulting entropy and pressure are compared with the experimental data and the theoretical results without induced interaction. For entropy, we find good agreement between our results with particle-hole fluctuation and the experimental measurements reported by ENS group and MIT experiment. For pressure, our results suffer from a systematic upshift compared to MIT data.
Mauda, R.; Pinchas, M.
2014-01-01
Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low). PMID:24723813
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helinski, Ryan
This Python package provides high-performance implementations of the functions and examples presented in "BiEntropy - The Approximate Entropy of a Finite Binary String" by Grenville J. Croll, presented at ANPA 34 in 2013. https://arxiv.org/abs/1305.0954 According to the paper, BiEntropy is "a simple algorithm which computes the approximate entropy of a finite binary string of arbitrary length" using "a weighted average of the Shannon Entropies of the string and all but the last binary derivative of the string."
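The following sketch implements the description quoted above (Shannon entropies of the string and of all but the last binary derivative, combined with weights 2^k as in Croll's paper); it is written from that description and is not the API of this package.

```python
# Sketch of BiEntropy per Croll's description: weighted average of the Shannon
# entropies of the string and of all but the last of its binary derivatives
# (successive XORs of adjacent bits), with weight 2**k on the k-th derivative.
import math

def shannon_bit_entropy(bits):
    if not bits:
        return 0.0
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binary_derivative(bits):
    return [a ^ b for a, b in zip(bits, bits[1:])]

def bientropy(bits):
    n = len(bits)
    total, norm = 0.0, 0.0
    current = list(bits)
    for k in range(n - 1):   # the string itself and all but the last derivative
        w = 2 ** k
        total += w * shannon_bit_entropy(current)
        norm += w
        current = binary_derivative(current)
    return total / norm

print(bientropy([0, 1, 0, 1, 0, 1, 0, 1]))   # periodic string -> low BiEntropy
print(bientropy([0, 1, 1, 0, 1, 0, 0, 1]))   # less ordered    -> higher BiEntropy
```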
Effect of extreme data loss on heart rate signals quantified by entropy analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao
2015-02-01
The phenomenon of data loss always occurs in the analysis of large databases, so maintaining the stability of analysis results in the event of data loss is very important. In this paper, we used a segmentation approach to generate synthetic signals in which data segments are randomly removed from the original signal according to Gaussian and exponential distributions. The logistic map is then used for verification. Finally, two methods of measuring entropy, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) Two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to the logistic map tests. (2) The calculation results have preferable stability for base-scale entropy analysis, which is not sensitive to data loss. (3) The loss percentage of HRV signals should be controlled below 30% (p = 30%), which can provide useful information in clinical applications.
Ectopic beats in approximate entropy and sample entropy-based HRV assessment
NASA Astrophysics Data System (ADS)
Singh, Butta; Singh, Dilbag; Jaryal, A. K.; Deepak, K. K.
2012-05-01
Approximate entropy (ApEn) and sample entropy (SampEn) are the promising techniques for extracting complex characteristics of cardiovascular variability. Ectopic beats, originating from other than the normal site, are the artefacts contributing a serious limitation to heart rate variability (HRV) analysis. The approaches like deletion and interpolation are currently in use to eliminate the bias produced by ectopic beats. In this study, normal R-R interval time series of 10 healthy and 10 acute myocardial infarction (AMI) patients were analysed by inserting artificial ectopic beats. Then the effects of ectopic beats editing by deletion, degree-zero and degree-one interpolation on ApEn and SampEn have been assessed. Ectopic beats addition (even 2%) led to reduced complexity, resulting in decreased ApEn and SampEn of both healthy and AMI patient data. This reduction has been found to be dependent on level of ectopic beats. Editing of ectopic beats by interpolation degree-one method is found to be superior to other methods.
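A small sketch of the two editing strategies compared above, deletion and degree-one (linear) interpolation of flagged R-R intervals, is given below; the beat annotations and interval values are toy assumptions.

```python
# Sketch of ectopic-beat editing of an R-R interval series: deletion drops the
# flagged intervals, degree-one interpolation replaces them using neighbours.
import numpy as np

def edit_deletion(rr, ectopic_idx):
    mask = np.ones(len(rr), dtype=bool)
    mask[ectopic_idx] = False
    return rr[mask]

def edit_interpolation_degree_one(rr, ectopic_idx):
    rr = np.asarray(rr, dtype=float).copy()
    idx = np.arange(len(rr))
    good = np.setdiff1d(idx, ectopic_idx)
    rr[ectopic_idx] = np.interp(ectopic_idx, good, rr[good])   # linear interpolation
    return rr

rr = np.array([800, 810, 795, 400, 1200, 805, 790], dtype=float)  # ms, with an ectopic pair
ectopic = [3, 4]
print(edit_deletion(rr, ectopic))
print(edit_interpolation_degree_one(rr, ectopic))
```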
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the Cross-Entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates during training, and the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates after being trained to minimize the Cross-Entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
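The core of the argument can be summarized with the standard result below (consistent with, but not copied from, the article): a discriminant function trained to minimize cross-entropy converges to the posterior class probability, which is a strictly increasing function of the likelihood ratio, so thresholding it realizes the NP test.

```latex
% Standard argument: F(x) trained to minimize cross-entropy over hypotheses H_0, H_1
% approximates the posterior probability, a strictly increasing function of the
% likelihood ratio, so comparing F(x) to a threshold realizes the Neyman-Pearson test.
\begin{align}
  F^{*}(x) &= \arg\min_{F}\ \mathbb{E}\!\left[-t\log F(x) - (1-t)\log\bigl(1-F(x)\bigr)\right]
            = P(H_1 \mid x),\\
  P(H_1 \mid x) &= \frac{P_1\,p(x \mid H_1)}{P_1\,p(x \mid H_1) + P_0\,p(x \mid H_0)}
            = \frac{\Lambda(x)}{\Lambda(x) + P_0/P_1},
  \qquad \Lambda(x) = \frac{p(x \mid H_1)}{p(x \mid H_0)},
\end{align}
% so F*(x) > eta is equivalent to Lambda(x) > eta', the NP detector with the
% threshold chosen to fix the probability of false alarm.
```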
NOTE: Entropy-based automated classification of independent components separated from fMCG
NASA Astrophysics Data System (ADS)
Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.
2007-03-01
Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of the fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with the gestational age ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p <0.01. The system performances were compared with those of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall ICs detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against a value of 94.50% obtained with the time-frequency-based system.
Non-linear HRV indices under autonomic nervous system blockade.
Bolea, Juan; Pueyo, Esther; Laguna, Pablo; Bailón, Raquel
2014-01-01
Heart rate variability (HRV) has been studied as a non-invasive technique to characterize the autonomic nervous system (ANS) regulation of the heart. Non-linear methods based on chaos theory have been used during the last decades as markers for risk stratification. However, interpretation of these nonlinear methods in terms of sympathetic and parasympathetic activity is not fully established. In this work we study linear and non-linear HRV indices during ANS blockades in order to assess their relation with sympathetic and parasympathetic activities. Power spectral content in low frequency (0.04-0.15 Hz) and high frequency (0.15-0.4 Hz) bands of HRV, as well as correlation dimension, sample and approximate entropies were computed in a database of subjects during single and dual ANS blockade with atropine and/or propranolol. Parasympathetic blockade caused a significant decrease in the low and high frequency power of HRV, as well as in correlation dimension and sample and approximate entropies. Sympathetic blockade caused a significant increase in approximate entropy. Sympathetic activation due to postural change from supine to standing caused a significant decrease in all the investigated non-linear indices and a significant increase in the normalized power in the low frequency band. The other investigated linear indices did not show significant changes. Results suggest that parasympathetic activity has a direct relation with sample and approximate entropies.
Entropy and climate. I - ERBE observations of the entropy production of the earth
NASA Technical Reports Server (NTRS)
Stephens, G. L.; O'Brien, D. M.
1993-01-01
An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
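For orientation, entropy flux estimates of this kind typically rely on the blackbody-style approximation below, which relates an energy flux F emitted at an effective temperature to its entropy flux; this is the general form of such estimates, not necessarily the exact expression used in the paper.

```latex
% Approximation for the entropy flux carried by radiation with energy flux F at
% effective emission temperature T_eff, and the planetary entropy production
% written as outgoing minus incoming entropy flux over the boundary.
\begin{equation}
  J_s \;\approx\; \frac{4}{3}\,\frac{F}{T_{\mathrm{eff}}},
  \qquad
  \dot{S}_{\mathrm{planet}} \;\approx\; \oint \left( J_s^{\mathrm{out}} - J_s^{\mathrm{in}} \right) dA .
\end{equation}
```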
NASA Astrophysics Data System (ADS)
Nearing, G. S.
2014-12-01
Statistical models consistently out-perform conceptual models in the short term, however to account for a nonstationary future (or an unobserved past) scientists prefer to base predictions on unchanging and commutable properties of the universe - i.e., physics. The problem with physically-based hydrology models is, of course, that they aren't really based on physics - they are based on statistical approximations of physical interactions, and we almost uniformly lack an understanding of the entropy associated with these approximations. Thermodynamics is successful precisely because entropy statistics are computable for homogeneous (well-mixed) systems, and ergodic arguments explain the success of Newton's laws to describe systems that are fundamentally quantum in nature. Unfortunately, similar arguments do not hold for systems like watersheds that are heterogeneous at a wide range of scales. Ray Solomonoff formalized the situation in 1968 by showing that given infinite evidence, simultaneously minimizing model complexity and entropy in predictions always leads to the best possible model. The open question in hydrology is about what happens when we don't have infinite evidence - for example, when the future will not look like the past, or when one watershed does not behave like another. How do we isolate stationary and commutable components of watershed behavior? I propose that one possible answer to this dilemma lies in a formal combination of physics and statistics. In this talk I outline my recent analogue (Solomonoff's theorem was digital) of Solomonoff's idea that allows us to quantify the complexity/entropy tradeoff in a way that is intuitive to physical scientists. I show how to formally combine "physical" and statistical methods for model development in a way that allows us to derive the theoretically best possible model given any given physics approximation(s) and available observations. Finally, I apply an analogue of Solomonoff's theorem to evaluate the tradeoff between model complexity and prediction power.
2013-01-01
Here we present a novel, end-point method using the dead-end-elimination and A* algorithms to efficiently and accurately calculate the change in free energy, enthalpy, and configurational entropy of binding for ligand–receptor association reactions. We apply the new approach to the binding of a series of human immunodeficiency virus (HIV-1) protease inhibitors to examine the effect ensemble reranking has on relative accuracy as well as to evaluate the role of the absolute and relative ligand configurational entropy losses upon binding in affinity differences for structurally related inhibitors. Our results suggest that most thermodynamic parameters can be estimated using only a small fraction of the full configurational space, and we see significant improvement in relative accuracy when using an ensemble versus single-conformer approach to ligand ranking. We also find that using approximate metrics based on the single-conformation enthalpy differences between the global minimum energy configuration in the bound as well as unbound states also correlates well with experiment. Using a novel, additive entropy expansion based on conditional mutual information, we also analyze the source of ligand configurational entropy loss upon binding in terms of both uncoupled per degree of freedom losses as well as changes in coupling between inhibitor degrees of freedom. We estimate entropic free energy losses of approximately +24 kcal/mol, 12 kcal/mol of which stems from loss of translational and rotational entropy. Coupling effects contribute only a small fraction to the overall entropy change (1–2 kcal/mol) but suggest differences in how inhibitor dihedral angles couple to each other in the bound versus unbound states. The importance of accounting for flexibility in drug optimization and design is also discussed. PMID:24250277
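For context, the additive expansion referred to above has the general structure of a mutual-information expansion; the sketch below shows the marginal and pairwise terms (the paper organizes the coupling corrections through conditional mutual information, but the leading structure is of this form).

```latex
% Leading structure of an additive configurational-entropy expansion over internal
% degrees of freedom x_1,...,x_d.
\begin{equation}
  S(x_1,\dots,x_d) \;\approx\; \sum_{i} S(x_i) \;-\; \sum_{i<j} I(x_i;x_j) \;+\; \cdots,
  \qquad
  I(x_i;x_j) = S(x_i) + S(x_j) - S(x_i,x_j),
\end{equation}
% so per-degree-of-freedom losses enter through the marginal terms S(x_i) and
% changes in coupling enter through the (conditional) mutual-information terms.
```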
ERIC Educational Resources Information Center
Rigoldi, Chiara; Cimolin, Veronica; Camerota, Filippo; Celletti, Claudia; Albertini, Giorgio; Mainardi, Luca; Galli, Manuela
2013-01-01
Ligament laxity in Ehlers-Danlos syndrome hypermobility type (EDS-HT) patients can influence the intrinsic information about posture and movement and can have a negative effect on the appropriateness of postural reactions. Several measures have been proposed in literature to describe the planar migration of CoP over the base of support, and the…
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.
Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei
2017-01-20
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic that is of great interest in many domains. This study combines terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, which is a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is firstly calculated with smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct initial transformation parameters based on two criteria: the difference between the average and minimum entropy and the deviation from the minimum entropy to the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases and can achieve high accuracy and efficiency.
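A toy sketch of the entropy criterion at the heart of the method is shown below: the two scans are merged in the horizontal plane and the Shannon entropy of a 2D occupancy histogram is used to score candidate transformations, with lower entropy indicating better coherence. The grid size, search grid, and helper names are assumptions for illustration, not the paper's implementation.

```python
# Toy sketch of a 2D distribution-entropy score for leveled point cloud alignment.
import numpy as np

def distribution_entropy_2d(points_xy, bins=64):
    h, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1], bins=bins)
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def score_yaw(scan_a, scan_b, yaw_deg, offset_xy):
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    r = np.array([[c, -s], [s, c]])
    moved = scan_b[:, :2] @ r.T + offset_xy
    # entropy of the merged planar occupancy: lower when the scans overlap coherently
    return distribution_entropy_2d(np.vstack([scan_a[:, :2], moved]))

# toy usage: pick the yaw candidate with minimum entropy over a coarse grid
rng = np.random.default_rng(5)
scan_a = rng.random((1000, 3)) * 10
scan_b = scan_a.copy()
best = min(range(0, 360, 10), key=lambda yaw: score_yaw(scan_a, scan_b, yaw, np.zeros(2)))
print("best yaw candidate:", best)
```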
Zhao, Yong; Hong, Wen-Xue
2011-11-01
Fast, nondestructive and accurate identification of special-quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbolic entropy to identify near-infrared spectra of special-quality eggs. The authors selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm^-1. Raw spectra were symbolically represented with an aggregation approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that identification of special-quality eggs using near-infrared spectroscopy is feasible and that symbolic entropy can be used as a new feature extraction method for near-infrared spectra.
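A sketch of the symbolization-plus-entropy feature is given below: each spectrum is reduced by piecewise aggregate approximation, mapped to a small alphabet, and summarized by the Shannon entropy of the symbol distribution. The segment count, alphabet size, and breakpoint rule are illustrative assumptions rather than the paper's settings.

```python
# Sketch of a SAX-style aggregation approximation followed by symbol entropy.
import numpy as np

def symbol_entropy(spectrum, n_segments=64, alphabet_size=6):
    x = np.asarray(spectrum, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)
    # piecewise aggregate approximation
    segments = np.array_split(x, n_segments)
    paa = np.array([s.mean() for s in segments])
    # quantile breakpoints define the symbols
    edges = np.quantile(paa, np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.digitize(paa, edges)
    # Shannon entropy of the symbol histogram
    counts = np.bincount(symbols, minlength=alphabet_size)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(6)
fake_spectrum = rng.random(2074)   # stand-in for a 12000-4000 cm^-1 spectrum
print(symbol_entropy(fake_spectrum))
```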
NASA Astrophysics Data System (ADS)
Melchert, O.; Hartmann, A. K.
2015-02-01
In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L^2 = 128^2 for different system temperatures T. The latter were chosen from an interval enclosing the critical point T_c of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
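To make the two flavours of estimate concrete, the sketch below contrasts a black-box compression estimate (compressed size per symbol) with the M-block Shannon entropy rate h_M = H_M - H_{M-1} on a binary sequence; zlib merely stands in for the data-compression utilities mentioned above and is not the estimator used in the paper.

```python
# Compression-based entropy-rate estimate vs. M-block Shannon entropy rate.
import zlib
import math
import random
from collections import Counter

def compression_entropy_rate(bits):
    raw = bytes(bits)                           # one symbol per byte (crude but simple)
    compressed = zlib.compress(raw, level=9)
    return 8.0 * len(compressed) / len(bits)    # bits per symbol, upper-bound flavour

def block_entropy(bits, m):
    words = Counter(tuple(bits[i:i + m]) for i in range(len(bits) - m + 1))
    total = sum(words.values())
    return -sum((c / total) * math.log2(c / total) for c in words.values())

def block_entropy_rate(bits, m):
    return block_entropy(bits, m) - block_entropy(bits, m - 1)

random.seed(0)
spins = [random.randint(0, 1) for _ in range(50000)]   # surrogate high-temperature spin series
print(compression_entropy_rate(spins))                 # should be near 1 bit/symbol
print(block_entropy_rate(spins, 8))
```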
Investigating dynamical complexity in the magnetosphere using various entropy measures
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Daglis, Ioannis A.; Papadimitriou, Constantinos; Kalimeri, Maria; Anastasiadis, Anastasios; Eftaxias, Konstantinos
2009-09-01
The complex system of the Earth's magnetosphere corresponds to an open spatially extended nonequilibrium (input-output) dynamical system. The nonextensive Tsallis entropy has been recently introduced as an appropriate information measure to investigate dynamical complexity in the magnetosphere. The method has been employed for analyzing Dst time series and gave promising results, detecting the complexity dissimilarity among different physiological and pathological magnetospheric states (i.e., prestorm activity and intense magnetic storms, respectively). This paper explores the applicability and effectiveness of a variety of computable entropy measures (e.g., block entropy, Kolmogorov entropy, T complexity, and approximate entropy) to the investigation of dynamical complexity in the magnetosphere. We show that as the magnetic storm approaches there is clear evidence of significant lower complexity in the magnetosphere. The observed higher degree of organization of the system agrees with that inferred previously, from an independent linear fractal spectral analysis based on wavelet transforms. This convergence between nonlinear and linear analyses provides a more reliable detection of the transition from the quiet time to the storm time magnetosphere, thus showing evidence that the occurrence of an intense magnetic storm is imminent. More precisely, we claim that our results suggest an important principle: significant complexity decrease and accession of persistency in Dst time series can be confirmed as the magnetic storm approaches, which can be used as diagnostic tools for the magnetospheric injury (global instability). Overall, approximate entropy and Tsallis entropy yield superior results for detecting dynamical complexity changes in the magnetosphere in comparison to the other entropy measures presented herein. Ultimately, the analysis tools developed in the course of this study for the treatment of Dst index can provide convenience for space weather applications.
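For reference, the nonextensive Tsallis entropy referred to above is defined as follows (Shannon entropy is recovered as q tends to 1); the distribution {p_i} is estimated from the Dst time series.

```latex
% Nonextensive Tsallis entropy for entropic index q; Boltzmann-Gibbs/Shannon
% entropy is recovered in the limit q -> 1.
\begin{equation}
  S_q = \frac{k}{q-1}\left(1 - \sum_{i=1}^{W} p_i^{\,q}\right),
  \qquad
  \lim_{q \to 1} S_q = -k \sum_{i=1}^{W} p_i \ln p_i .
\end{equation}
```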
Sample entropy analysis of cervical neoplasia gene-expression signatures
Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R
2009-01-01
Background: We introduce approximate entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set, we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results: In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion: The success of the approximate sample entropy approach in discerning alterations in complexity from a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
Nguyen, Phuong H; Derreumaux, Philippe
2012-01-14
One challenge in computational biophysics and biology is to develop methodologies able to estimate accurately the configurational entropy of macromolecules. Among many methods, the quasiharmonic approximation (QH) is most widely used as it is simple in both theory and implementation. However, it has been shown that this method becomes inaccurate by overestimating entropy for systems with rugged free energy landscapes. Here, we propose a simple method to improve the QH approximation, i.e., to reduce QH entropy. We approximate the potential energy landscape of the system by an effective harmonic potential, and request that this potential must produce exactly the configurational temperature of the system. Due to this constraint, the force constants associated with the effective harmonic potential are increased, or equivalently, entropy of motion governed by this effective harmonic potential is reduced. We also introduce the effective configurational temperature concept which can be used as an indicator to check the anharmonicity of the free energy landscape. To validate the new method we compare it with the recently developed expansion approximate method by calculating entropy of one simple model system and two peptides with 3 and 16 amino acids either in gas phase or in explicit solvent. We show that the new method appears to be a good choice in practice as it is a compromise between accuracy and computational speed. A modification of the expansion approximate method is also introduced and advantages are discussed in some detail.
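For context, the quasiharmonic entropy being corrected is conventionally computed from the eigenvalues of the mass-weighted covariance matrix of atomic fluctuations via effective harmonic frequencies, as sketched below; this is the standard QH form, with the proposed method effectively increasing the force constants so that the harmonic model reproduces the configurational temperature.

```latex
% Standard quasiharmonic (QH) configurational entropy: lambda_i are the eigenvalues
% of the mass-weighted covariance matrix of atomic fluctuations at temperature T,
% omega_i the corresponding effective frequencies, and the entropy is that of the
% resulting quantum harmonic oscillators.
\begin{equation}
  \omega_i = \sqrt{\frac{k_B T}{\lambda_i}}, \qquad
  S_{\mathrm{QH}} = k_B \sum_i \left[
    \frac{\hbar\omega_i / k_B T}{e^{\hbar\omega_i / k_B T} - 1}
    - \ln\!\left(1 - e^{-\hbar\omega_i / k_B T}\right) \right].
\end{equation}
```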
NASA Technical Reports Server (NTRS)
Bartlett, E. P.; Morse, H. L.; Tong, H.
1971-01-01
Procedures and methods for predicting aerothermodynamic heating to delta orbiter shuttle vehicles were reviewed. A number of approximate methods were found to be adequate for large-scale parameter studies, but are considered inadequate for final design calculations. It is recommended that final design calculations be based on a computer code which accounts for nonequilibrium chemistry, streamline spreading, entropy swallowing, and turbulence. It is further recommended that this code be developed with the intent that it can be directly coupled with an exact inviscid flow field calculation when the latter becomes available. A nonsimilar, equilibrium chemistry computer code (BLIMP) was used to evaluate the effects of entropy swallowing, turbulence, and various three dimensional approximations. These solutions were compared with available wind tunnel data. It was found that, for wind tunnel conditions, the effects of entropy swallowing and three dimensionality are small for laminar boundary layers, but entropy swallowing causes a significant increase in turbulent heat transfer. However, it is noted that even small effects (say, 10-20%) may be important for the shuttle reusability concept.
Sort entropy-based analysis of EEG during anesthesia
NASA Astrophysics Data System (ADS)
Ma, Liang; Huang, Wei-Zhi
2010-08-01
The monitoring of anesthetic depth is an absolutely necessary procedure in the course of surgical operations, and judging and controlling the depth of anesthesia has become a clinical issue that should be resolved urgently. In this paper, the collected EEG is processed with sort entropy. The signal response of the surface of the cerebral cortex is determined for different stages of the course of anesthesia. The EEG is simulated and analyzed with a fast algorithm for sort entropy. The results show that the phasic changes of the EEG are detected very accurately, and that sort entropy has better noise immunity than approximate entropy in detecting the anaesthetized EEG. In conclusion, the sort entropy algorithm requires less computation time, has high efficiency and is strongly resistant to interference.
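The sketch below shows a sort/permutation-entropy style computation in the spirit of the method above: each window is replaced by the permutation that orders it, and the Shannon entropy of the permutation distribution is reported. This follows the usual Bandt-Pompe construction; the paper's exact sort entropy variant may differ in details.

```python
# Sketch of a permutation ("sort") entropy of a signal.
import math
from collections import Counter
import numpy as np

def sort_entropy(x, m=3, delay=1):
    x = np.asarray(x, dtype=float)
    patterns = Counter()
    for i in range(len(x) - (m - 1) * delay):
        window = x[i : i + m * delay : delay]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    total = sum(patterns.values())
    p = np.array([c / total for c in patterns.values()])
    h = -np.sum(p * np.log2(p))
    return h / math.log2(math.factorial(m))        # normalized to [0, 1]

rng = np.random.default_rng(7)
print(sort_entropy(np.sin(np.linspace(0, 30 * np.pi, 2000))))   # regular -> low
print(sort_entropy(rng.standard_normal(2000)))                  # random -> near 1
```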
Optimization and large scale computation of an entropy-based moment closure
NASA Astrophysics Data System (ADS)
Kristopher Garrett, C.; Hauck, Cory; Hill, Judith
2015-12-01
We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
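For readers unfamiliar with the closure, the defining optimization is sketched below in generic form: the reconstructed angular density minimizes the kinetic entropy subject to the moment constraints, yielding an exponential ansatz whose multipliers must be recovered by solving a convex dual problem in every cell at every time step, which is the source of the extra cost relative to P_N.

```latex
% Generic form of the entropy-based closure: among all nonnegative angular
% densities with the prescribed moments u (with respect to the test functions m,
% spherical harmonics here), take the one of minimal kinetic entropy.
\begin{equation}
  \hat\psi_u = \operatorname*{arg\,min}_{\psi \ge 0}
  \left\{ \int_{\mathbb{S}^2} \psi \log\psi \, d\Omega \;:\;
          \int_{\mathbb{S}^2} \mathbf{m}\,\psi \, d\Omega = u \right\}
  \quad\Longrightarrow\quad
  \hat\psi_u = \exp\!\bigl(\boldsymbol{\alpha}(u)\cdot\mathbf{m}\bigr).
\end{equation}
```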
The convergence rate of approximate solutions for nonlinear scalar conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1991-01-01
The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L^2-stability requirement. It is assumed that the approximate solutions are Lip^+-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved, for Lip^+-stable approximate solutions, that their Lip' convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip'-convergence rate is then converted into stronger L^p convergence rate estimates.
Entropy Production in Collisionless Systems. II. Arbitrary Phase-space Occupation Numbers
NASA Astrophysics Data System (ADS)
Barnes, Eric I.; Williams, Liliya L. R.
2012-04-01
We present an analysis of two thermodynamic techniques for determining equilibria of self-gravitating systems. One is the Lynden-Bell (LB) entropy maximization analysis that introduced violent relaxation. Since we do not use the Stirling approximation, which is invalid at small occupation numbers, our systems have finite mass, unlike LB's isothermal spheres. (Instead of Stirling, we utilize a very accurate smooth approximation for ln x!.) The second analysis extends entropy production extremization to self-gravitating systems, also without the use of the Stirling approximation. In addition to the LB statistical family characterized by the exclusion principle in phase space, and designed to treat collisionless systems, we also apply the two approaches to the Maxwell-Boltzmann (MB) families, which have no exclusion principle and hence represent collisional systems. We implicitly assume that all of the phase space is equally accessible. We derive entropy production expressions for both families and give the extremum conditions for entropy production. Surprisingly, our analysis indicates that extremizing entropy production rate results in systems that have maximum entropy, in both LB and MB statistics. In other words, both thermodynamic approaches lead to the same equilibrium structures.
The increase of the functional entropy of the human brain with age.
Yao, Y; Lu, W L; Xu, B; Li, C B; Lin, C P; Waxman, D; Feng, J F
2013-10-09
We use entropy to characterize intrinsic ageing properties of the human brain. Analysis of fMRI data from a large dataset of individuals, using resting state BOLD signals, demonstrated that a functional entropy associated with brain activity increases with age. During an average lifespan, the entropy, which was calculated from a population of individuals, increased by approximately 0.1 bits, due to correlations in BOLD activity becoming more widely distributed. We attribute this to the number of excitatory neurons and the excitatory conductance decreasing with age. Incorporating these properties into a computational model leads to quantitatively similar results to the fMRI data. Our dataset involved males and females and we found significant differences between them. The entropy of males at birth was lower than that of females. However, the entropies of the two sexes increase at different rates, and intersect at approximately 50 years; after this age, males have a larger entropy. PMID:24103922
Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome
NASA Astrophysics Data System (ADS)
Teschendorff, Andrew E.; Enver, Tariq
2017-06-01
The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes. PMID:28569836
Video and accelerometer-based motion analysis for automated surgical skills assessment.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan
2018-03-01
Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features-approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform, for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
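For illustration only (not the implementation used in the study above), a minimal approximate-entropy computation for a one-dimensional time series is sketched below in Python; the embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation are common but assumed parameter choices.

    import numpy as np

    def approximate_entropy(x, m=2, r=None):
        """Approximate entropy ApEn(m, r) of a 1-D series (Pincus-style construction)."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)  # common heuristic: tolerance of 0.2 standard deviations
        def phi(m):
            n = len(x) - m + 1
            templates = np.array([x[i:i + m] for i in range(n)])      # overlapping m-length templates
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            c = np.mean(dist <= r, axis=1)                            # match fractions (self-matches included)
            return np.mean(np.log(c))
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
    noisy = rng.standard_normal(1000)
    print(approximate_entropy(regular), approximate_entropy(noisy))   # the noisy series scores higher

More predictable, regular motion data give lower values, which is the property exploited by the entropy-based skill features described above.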
Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.
Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H
2015-11-01
Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods focused on developing a reliable estimation of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool to understand these methods and realize the practical issues that may arise in such calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann
2015-01-01
Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.
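As a minimal illustration of the quantity these expressions are derived for, the Shannon entropy of an allele-frequency distribution can be estimated from allele counts as follows (a generic sketch with made-up counts, not the authors' estimator).

    import numpy as np

    def shannon_entropy(counts, base=np.e):
        """Shannon entropy of an allele-frequency distribution estimated from allele counts."""
        counts = np.asarray(counts, dtype=float)
        p = counts / counts.sum()        # observed allele frequencies
        p = p[p > 0]                     # 0 * log(0) is taken as 0
        return -np.sum(p * np.log(p)) / np.log(base)

    # a hypothetical locus with four alleles observed 50, 30, 15 and 5 times
    print(shannon_entropy([50, 30, 15, 5]))           # in nats
    print(shannon_entropy([50, 30, 15, 5], base=2))   # in bits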
EEG entropy measures in anesthesia
Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli
2015-01-01
Highlights: ► Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression.► Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states.► Approximate Entropy and Sample Entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthesia drugs' effect is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP), in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), Fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets from sevoflurane-induced and isoflurane-induced anesthesia respectively were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. The multifractal detrended fluctuation analysis (MDFA) as a non-entropy measure was compared. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. Three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R2) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, these entropy measures showed an advantage in computation efficiency compared with MDFA. Conclusion: Each entropy index has its advantages and disadvantages in estimating DoA. Overall, it is suggested that the RPE index was a superior measure. Investigating the advantages and disadvantages of these entropy indices could help improve current clinical indices for monitoring DoA. PMID:25741277
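Since the best-performing indices above are permutation entropies, a compact Shannon permutation entropy sketch is given below; it follows the standard ordinal-pattern construction with illustrative parameters and is not the code used in the study.

    import numpy as np
    from math import factorial

    def permutation_entropy(x, m=3, delay=1, normalize=True):
        """Shannon permutation entropy from ordinal patterns of embedded vectors."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (m - 1) * delay
        patterns = np.array([np.argsort(x[i:i + m * delay:delay]) for i in range(n)])
        _, counts = np.unique(patterns, axis=0, return_counts=True)
        p = counts / counts.sum()
        h = -np.sum(p * np.log2(p))
        return h / np.log2(factorial(m)) if normalize else h

    # white noise gives a normalised permutation entropy close to 1
    print(permutation_entropy(np.random.default_rng(1).standard_normal(5000)))

The Tsallis and Renyi variants compared in the study replace the Shannon sum over pattern probabilities with the corresponding generalised entropy of the same ordinal-pattern distribution.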
Excess Entropy Production in Quantum System: Quantum Master Equation Approach
NASA Astrophysics Data System (ADS)
Nakajima, Satoshi; Tokura, Yasuhiro
2017-12-01
For open systems described by the quantum master equation (QME), we investigate the excess entropy production under quasistatic operations between nonequilibrium steady states. The average entropy production is composed of the time integral of the instantaneous steady entropy production rate and the excess entropy production. We propose to define average entropy production rate using the average energy and particle currents, which are calculated by using the full counting statistics with QME. The excess entropy production is given by a line integral in the control parameter space and its integrand is called the Berry-Sinitsyn-Nemenman (BSN) vector. In the weakly nonequilibrium regime, we show that BSN vector is described by ln \\breve{ρ }_0 and ρ _0 where ρ _0 is the instantaneous steady state of the QME and \\breve{ρ }_0 is that of the QME which is given by reversing the sign of the Lamb shift term. If the system Hamiltonian is non-degenerate or the Lamb shift term is negligible, the excess entropy production approximately reduces to the difference between the von Neumann entropies of the system. Additionally, we point out that the expression of the entropy production obtained in the classical Markov jump process is different from our result and show that these are approximately equivalent only in the weakly nonequilibrium regime.
Entropic Imaging of Cataract Lens: An In Vitro Study
Shung, K. Kirk; Tsui, Po-Hsiang; Fang, Jui; Ma, Hsiang-Yang; Wu, Shuicai; Lin, Chung-Chih
2014-01-01
Phacoemulsification is a common surgical method for treating advanced cataracts. Determining the optimal phacoemulsification energy depends on the hardness of the lens involved. Previous studies have shown that it is possible to evaluate lens hardness via ultrasound parametric imaging based on statistical models that require data to follow a specific distribution. To make the method more system-adaptive, nonmodel-based imaging approach may be necessary in the visualization of lens hardness. This study investigated the feasibility of applying an information theory derived parameter – Shannon entropy from ultrasound backscatter to quantify lens hardness. To determine the physical significance of entropy, we performed computer simulations to investigate the relationship between the signal-to-noise ratio (SNR) based on the Rayleigh distribution and Shannon entropy. Young's modulus was measured in porcine lenses, in which cataracts had been artificially induced by the immersion in formalin solution in vitro. A 35-MHz ultrasound transducer was used to scan the cataract lenses for entropy imaging. The results showed that the entropy is 4.8 when the backscatter data form a Rayleigh distribution corresponding to an SNR of 1.91. The Young's modulus of the lens increased from approximately 8 to 100 kPa when we increased the immersion time from 40 to 160 min (correlation coefficient r = 0.99). Furthermore, the results indicated that entropy imaging seemed to facilitate visualizing different degrees of lens hardening. The mean entropy value increased from 2.7 to 4.0 as the Young's modulus increased from 8 to 100 kPa (r = 0.85), suggesting that entropy imaging may have greater potential than that of conventional statistical parametric imaging in determining the optimal energy to apply during phacoemulsification. PMID:24760103
Testing the mutual information expansion of entropy with multivariate Gaussian distributions.
Goethe, Martin; Fita, Ignacio; Rubi, J Miguel
2017-12-14
The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
Deffeyes, Joan E; Harbourne, Regina T; DeJong, Stacey L; Kyvelidou, Anastasia; Stuberg, Wayne A; Stergiou, Nicholas
2009-01-01
Background By quantifying the information entropy of postural sway data, the complexity of the postural movement of different populations can be assessed, giving insight into pathologic motor control functioning. Methods In this study, developmental delay of motor control function in infants was assessed by analysis of sitting postural sway data acquired from force plate center of pressure measurements. Two types of entropy measures were used: symbolic entropy, including a new asymmetric symbolic entropy measure, and approximate entropy, a more widely used entropy measure. For each method of analysis, parameters were adjusted to optimize the separation of the results from the infants with delayed development from infants with typical development. Results The method that gave the widest separation between the populations was the asymmetric symbolic entropy method, which we developed by modification of the symbolic entropy algorithm. The approximate entropy algorithm also performed well, using parameters optimized for the infant sitting data. The infants with delayed development were found to have less complex patterns of postural sway in the medial-lateral direction, and were found to have different left-right symmetry in their postural sway, as compared to typically developing infants. Conclusion The results of this study indicate that optimization of the entropy algorithm for infant sitting postural sway data can greatly improve the ability to separate the infants with developmental delay from typically developing infants. PMID:19671183
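A rough Python sketch of the general symbolic-entropy idea (thresholding sway increments into symbols and taking the Shannon entropy of fixed-length words) is shown below; the threshold, word length, and alphabet are illustrative assumptions, and the asymmetric variant developed in the study is not reproduced.

    import numpy as np

    def symbolic_entropy(x, word_len=3, threshold=None):
        """Shannon entropy of symbol words built from increments of a sway series.
        Increments are coded 0 (below -threshold), 1 (within), 2 (above)."""
        dx = np.diff(np.asarray(x, dtype=float))
        if threshold is None:
            threshold = 0.5 * np.std(dx)                    # illustrative choice
        symbols = np.digitize(dx, [-threshold, threshold])  # symbols 0, 1, 2
        words = [symbols[i:i + word_len] for i in range(len(symbols) - word_len + 1)]
        _, counts = np.unique(words, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # a random-walk stand-in for a centre-of-pressure trace
    print(symbolic_entropy(np.cumsum(np.random.default_rng(2).standard_normal(2000))))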
NASA Astrophysics Data System (ADS)
Liu, Weixin; Jin, Ningde; Han, Yunfeng; Ma, Jing
2018-06-01
In the present study, a multi-scale entropy algorithm was used to characterise the complex flow phenomena of turbulent droplets in high water-cut oil-water two-phase flow. First, we compared multi-scale weighted permutation entropy (MWPE), multi-scale approximate entropy (MAE), multi-scale sample entropy (MSE) and multi-scale complexity measure (MCM) for typical nonlinear systems. The results show that MWPE presents satisfactory variability with scale and good anti-noise ability. Accordingly, we conducted an experiment of vertical upward oil-water two-phase flow with high water-cut and collected the signals of a high-resolution microwave resonant sensor, from which two indexes, the entropy rate and the mean value of MWPE, were extracted. In addition, the effects of total flow rate and water-cut on these two indexes were analysed. Our research shows that MWPE is an effective method to uncover the dynamic instability of oil-water two-phase flow with high water-cut.
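A hedged sketch of the multi-scale step combined with a weighted permutation entropy (ordinal patterns weighted by the variance of their embedding vectors, one common weighting choice) might look as follows; the parameters are illustrative and this is not the authors' implementation.

    import numpy as np
    from math import factorial

    def coarse_grain(x, scale):
        """Non-overlapping averages of length `scale` (the usual multi-scale step)."""
        x = np.asarray(x, dtype=float)
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def weighted_permutation_entropy(x, m=4, delay=1):
        """Permutation entropy with each ordinal pattern weighted by the variance
        of the embedding vector it comes from (one common weighting choice)."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (m - 1) * delay
        emb = np.array([x[i:i + m * delay:delay] for i in range(n)])
        patterns = np.argsort(emb, axis=1)
        weights = emb.var(axis=1)
        _, inverse = np.unique(patterns, axis=0, return_inverse=True)
        w = np.bincount(inverse.ravel(), weights=weights)
        p = w / w.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p)) / np.log2(factorial(m))

    signal = np.random.default_rng(3).standard_normal(20000)   # stand-in for the sensor signal
    print([round(weighted_permutation_entropy(coarse_grain(signal, s)), 3) for s in range(1, 11)])

The entropy-versus-scale curve produced this way is the kind of object from which scalar indexes such as an entropy rate and a mean value can then be extracted.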
Cuesta, D; Varela, M; Miró, P; Galdós, P; Abásolo, D; Hornero, R; Aboy, M
2007-07-01
Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature registers acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient's condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.
Groups and the Entropy Floor: XMM-Newton Observations of Two Groups
NASA Technical Reports Server (NTRS)
Mushotzky, R. F.; Figueroa-Feliciano, E.; Loewenstein, M.; Snowden, S. L.
2002-01-01
Using XMM-Newton spatially resolved X-ray imaging spectroscopy we obtain the temperature, density, entropy, gas mass, and total mass profiles for two groups of galaxies out to approximately 0.3 R_vir (R_vir being the virial radius). Our density profiles agree well with those derived previously, and the temperature data are broadly consistent with previous results but are considerably more precise. Both of these groups are at the mass scale of 2x10^13 solar masses, but have rather different properties. Both have considerably lower gas mass fractions at r < 0.3 R_vir than the rich clusters. NGC 2563, one of the least luminous groups for its X-ray temperature, has a very low gas mass fraction of approximately 0.004 inside 0.1 R_vir, which increases with radius. NGC 4325, one of the most luminous groups at the same average temperature, has a higher gas mass fraction of 0.02. The entropy profiles and the absolute values of the entropy as a function of virial radius also differ, with NGC 4325 having a value of approximately 100 keV cm^2 and NGC 2563 a value of approximately 300 keV cm^2 at r of approximately 0.1 R_vir. For both groups the profiles rise monotonically with radius and there is no sign of an entropy 'floor'. These results are inconsistent with pre-heating scenarios that have been developed to explain a possible entropy floor in groups, but are broadly consistent with models of structure formation that include the effects of heating and/or the cooling of the gas. The total entropy in these systems provides a strong constraint on all models of galaxy and group formation, and on the poorly defined feedback process that controls the transformation of gas into stars and thus the formation of structure in the universe.
On the convergence of difference approximations to scalar conservation laws
NASA Technical Reports Server (NTRS)
Osher, Stanley; Tadmor, Eitan
1988-01-01
A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
NASA Astrophysics Data System (ADS)
Yan, Hao-Peng; Liu, Wen-Biao
2016-08-01
Using the Parikh-Wilczek tunneling framework, we calculate the tunneling rate from a Schwarzschild black hole under the third-order WKB approximation, and then obtain the expressions for the emission spectrum and black hole entropy to the third-order correction. The entropy contains four terms including the Bekenstein-Hawking entropy, the logarithmic term, the inverse area term, and the square of inverse area term. In addition, we analyse the correlation between sequential emissions under this approximation. It is shown that the entropy is conserved during the process of black hole evaporation, which is consistent with the requirements of quantum mechanics and implies that information is conserved during this process. We also compare the above result with that of the pure thermal spectrum case, and find that the non-thermal correction plays an important role.
EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN
AlSharabi, Khalil; Ibrahim, Sutrisno; Alsuwailem, Abdullah
2017-01-01
Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought and a restricted repertoire of activity and interest. In this work, a new computer aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on discrete wavelet transform (DWT), entropy (En), and artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. ANN classifies the corresponding EEG signal into normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested on a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia. PMID:28484720
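A sketch of the feature-extraction stage described above, wavelet decomposition into sub-bands followed by a Shannon entropy per sub-band, is given below; it assumes the PyWavelets package, a db4 mother wavelet, and entropy of the normalised coefficient energies, all of which are illustrative choices, and it omits the ANN classifier.

    import numpy as np
    import pywt  # PyWavelets, assumed to be available

    def subband_entropy_features(eeg, wavelet="db4", level=4):
        """Shannon entropy of each DWT sub-band (approximation plus details) of one EEG
        channel, computed here from the normalised coefficient energies."""
        coeffs = pywt.wavedec(np.asarray(eeg, dtype=float), wavelet, level=level)
        feats = []
        for c in coeffs:                 # [cA_level, cD_level, ..., cD_1]
            energy = c ** 2
            p = energy / energy.sum()
            p = p[p > 0]
            feats.append(-np.sum(p * np.log2(p)))
        return np.array(feats)

    print(subband_entropy_features(np.random.default_rng(4).standard_normal(1024)))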
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes with the use of an average notebook computer.
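For reference, a plain (not fast) sample-entropy computation is sketched below in Python; it uses standard parameter choices and is not the simultaneous algorithm proposed in the study above.

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Sample entropy SampEn(m, r) = -ln(A/B), self-matches excluded."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)
        def matches(m):
            k = len(x) - m
            t = np.array([x[i:i + m] for i in range(k)])
            d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
            np.fill_diagonal(d, np.inf)          # exclude self-matches
            return np.sum(d <= r)
        b, a = matches(m), matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    print(sample_entropy(np.random.default_rng(5).standard_normal(1000)))

The pairwise distance computation here is the O(N^2) step whose template handling the fast simultaneous algorithm reorganises.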
On the convergence of difference approximations to scalar conservation laws
NASA Technical Reports Server (NTRS)
Osher, S.; Tadmor, E.
1985-01-01
A unified treatment of explicit in time, two-level, second order resolution, total variation diminishing approximations to scalar conservation laws is presented. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced and results in terms of the latter are obtained. The existence of a cell entropy inequality is discussed and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first order accurate in general. Convergence for total variation diminishing-second order resolution schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
Entropy production of a Brownian ellipsoid in the overdamped limit.
Marino, Raffaele; Eichhorn, Ralf; Aurell, Erik
2016-01-01
We analyze the translational and rotational motion of an ellipsoidal Brownian particle from the viewpoint of stochastic thermodynamics. The particle's Brownian motion is driven by external forces and torques and takes place in a heterogeneous thermal environment where friction coefficients and (local) temperature depend on space and time. Our analysis of the particle's stochastic thermodynamics is based on the entropy production associated with single particle trajectories. It is motivated by the recent discovery that the overdamped limit of vanishing inertia effects (as compared to viscous friction) produces a so-called "anomalous" contribution to the entropy production, which has no counterpart in the overdamped approximation, when inertia effects are simply discarded. Here we show that rotational Brownian motion in the overdamped limit generates an additional contribution to the "anomalous" entropy. We calculate its specific form by performing a systematic singular perturbation analysis for the generating function of the entropy production. As a side result, we also obtain the (well-known) equations of motion in the overdamped limit. We furthermore investigate the effects of particle shape and give explicit expressions of the "anomalous entropy" for prolate and oblate spheroids and for near-spherical Brownian particles.
Hu, Jianfeng
2017-01-01
Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared by ten state-of-the-art classifiers. Results indicate that optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and classifier Random Forest (RF). The highest accuracy can reach 96.6%, which is able to meet the needs of real applications. The best combination of channel + features + classifier is subject-specific. In this work, the accuracy (Acc) obtained with FE as the feature is far greater than that of the other features. The Acc using classifier RF is the best, while that of classifier SVM with linear kernel is the worst. The impact of channel selection on the Acc is considerable, and the performance of different channels varies widely. PMID:28255330
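A minimal sketch of the classification stage described above, entropy features fed to a Random Forest, is shown below using scikit-learn; the feature matrix here is synthetic stand-in data, so the printed accuracy is meaningless and only the pipeline shape is illustrated.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # X: one row per EEG epoch, columns = entropy features (e.g. SE, FE, AE, PE) of one channel;
    # here both X and the alert/fatigued labels y are synthetic stand-ins.
    rng = np.random.default_rng(6)
    X = rng.standard_normal((200, 4))
    y = rng.integers(0, 2, 200)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy on the stand-in data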
Binarized cross-approximate entropy in crowdsensing environment.
Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana
2017-01-01
Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analysis methodology, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series, as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery operated, self-attached sensing devices, with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
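The following is only a rough sketch of the general idea (binarised, differentially encoded series compared with a Hamming-distance match criterion in a cross-ApEn-like construction); it is not the published (X)BinEn procedure, and all parameter choices are assumptions.

    import numpy as np

    def binarize(x):
        """Differential binary coding: 1 where the series increases, 0 otherwise."""
        return (np.diff(np.asarray(x, dtype=float)) > 0).astype(int)

    def binary_cross_entropy_rate(x, y, m=3, r=0):
        """Cross-ApEn-style statistic on binarised series: m-bit vectors of one series are
        matched against the other using the Hamming distance (match if distance <= r)."""
        bx, by = binarize(x), binarize(y)
        n = min(len(bx), len(by))
        bx, by = bx[:n], by[:n]
        def phi(m):
            k = n - m + 1
            vx = np.array([bx[i:i + m] for i in range(k)])
            vy = np.array([by[i:i + m] for i in range(k)])
            hamming = np.sum(vx[:, None, :] != vy[None, :, :], axis=2)
            c = np.mean(hamming <= r, axis=1)
            c = np.where(c > 0, c, 1.0 / k)      # guard against log(0)
            return np.mean(np.log(c))
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(7)
    x = np.cumsum(rng.standard_normal(3000))
    y = x + 0.5 * rng.standard_normal(3000)      # a correlated second series
    print(binary_cross_entropy_rate(x, y))

Working on bits with Hamming distances is what keeps the per-comparison cost low on constrained devices, which is the motivation stated above.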
NASA Astrophysics Data System (ADS)
Onate, C. A.; Onyeaju, M. C.; Ikot, A. N.; Ebomwonyi, O.
2017-11-01
By using the supersymmetric approach, we studied the approximate analytic solutions of the three-dimensional Schrödinger equation with the Hellmann potential by applying a suitable approximation scheme to the centrifugal term. The solutions of other useful potentials, such as the Coulomb potential and the Yukawa potential, are obtained by transformation of variables from the Hellmann potential. Finally, we calculated the Tsallis entropy and Rényi entropy both in position and momentum spaces under the Hellmann potential using the integral method. The dependence of these entropies on the angular momentum quantum number is investigated in detail.
Gu, Q; Ding, Y S; Zhang, T L
2010-05-01
We use approximate entropy and hydrophobicity patterns to predict G-protein-coupled receptors. An Adaboost classifier is adopted as the prediction engine. A low-homology dataset is used to validate the proposed method. Compared with previously reported results, the success rate is encouraging. The source code is written in Matlab.
Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio
2017-08-01
This paper evaluates the performance of first-generation entropy metrics, represented by the well known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution from these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises and assesses the robustness of these metrics against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing the best, and the noise and muscular artifacts are the most confounding factors. In contrast, there is a wide variability as regards initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
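For illustration, a minimal fuzzy-entropy computation in the style referred to above (an exponential membership function in place of a hard threshold) is sketched below; parameter values are assumed, and this is not the optimised implementation evaluated in the study.

    import numpy as np

    def fuzzy_entropy(x, m=2, r=None, n_fuzzy=2):
        """FuzzyEn: SampEn-like statistic with an exponential membership exp(-(d/r)^n)
        instead of a hard threshold; templates are mean-centred first."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)
        def phi(m):
            k = len(x) - m
            t = np.array([x[i:i + m] for i in range(k)])
            t = t - t.mean(axis=1, keepdims=True)            # remove the local baseline
            d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
            sim = np.exp(-(d ** n_fuzzy) / (r ** n_fuzzy))   # fuzzy membership of each pair
            np.fill_diagonal(sim, 0.0)                       # exclude self-matches
            return sim.sum() / (k * (k - 1))
        return np.log(phi(m) / phi(m + 1))

    print(fuzzy_entropy(np.random.default_rng(8).standard_normal(1000)))

The smooth membership function is what makes FuzzyEn less sensitive to the tolerance parameter than the hard-threshold ApEn and SampEn counts.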
Numerical viscosity and the entropy condition for conservative difference schemes
NASA Technical Reports Server (NTRS)
Tadmor, E.
1983-01-01
Consider a scalar, nonlinear conservative difference scheme satisfying the entropy condition. It is shown that difference schemes containing more numerical viscosity will necessarily converge to the unique, physically relevant weak solution of the approximated conservation equation. In particular, entropy satisfying convergence follows for E schemes - those containing more numerical viscosity than Godunov's scheme.
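As a concrete example of a scheme with more numerical viscosity than Godunov's (hence an E-scheme in the sense above), a Lax-Friedrichs discretisation of Burgers' equation on a periodic grid is sketched below; grid size, CFL number, and initial data are illustrative.

    import numpy as np

    def lax_friedrichs_burgers(u0, dx, t_end, cfl=0.45):
        """March Burgers' equation u_t + (u^2/2)_x = 0 with the Lax-Friedrichs scheme,
        an E-scheme (more numerical viscosity than Godunov), on a periodic grid."""
        u = np.asarray(u0, dtype=float).copy()
        t = 0.0
        while t < t_end:
            dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
            lam = dt / dx
            f = 0.5 * u ** 2
            # u_j^{n+1} = (u_{j+1} + u_{j-1})/2 - (lam/2) * (f_{j+1} - f_{j-1})
            u = 0.5 * (np.roll(u, -1) + np.roll(u, 1)) - 0.5 * lam * (np.roll(f, -1) - np.roll(f, 1))
            t += dt
        return u

    x = np.linspace(0.0, 1.0, 400, endpoint=False)
    u0 = np.sin(2 * np.pi * x)            # smooth data that steepens into a shock
    print(lax_friedrichs_burgers(u0, x[1] - x[0], 0.5).max())

Because its numerical viscosity dominates that of Godunov's scheme, the computed solution converges to the entropy solution, at the price of strong smearing of the shock.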
Fine structure of the entanglement entropy in the O(2) model.
Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z Y; Meurice, Y
2016-01-01
We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture.
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
On cell entropy inequality for discontinuous Galerkin methods
NASA Technical Reports Server (NTRS)
Jiang, Guangshan; Shu, Chi-Wang
1993-01-01
We prove a cell entropy inequality for a class of high order discontinuous Galerkin finite element methods approximating conservation laws, which implies convergence for the one dimensional scalar convex case.
Engoren, Milo; Brown, Russell R; Dubovoy, Anna
2017-01-01
Acute anemia is associated with both cerebral dysfunction and acute kidney injury and is often treated with red blood cell transfusion. We sought to determine if blood transfusion changed the cerebral oximetry entropy, a measure of the complexity or irregularity of the oximetry values, and if this change was associated with subsequent acute kidney injury. This was a retrospective, case-control study of patients undergoing cardiac surgery with cardiopulmonary bypass at a tertiary care hospital, comparing those who received a red blood cell transfusion to those who did not. Acute kidney injury was defined as a perioperative increase in serum creatinine by ⩾26.4 μmol/L or by ⩾50% increase. Entropy was measured using approximate entropy, sample entropy, forbidden word entropy and basescale4 entropy in 500-point sets. Forty-four transfused patients were matched to 88 randomly selected non-transfused patients. All measures of entropy had small changes in the transfused group, but increased in the non-transfused group (p<0.05, for all comparisons). Thirty-five of 132 patients (27%) suffered acute kidney injury. Based on preoperative factors, patients who suffered kidney injury were similar to those who did not, including baseline cerebral oximetry levels. After analysis with hierarchical logistic regression, the change in basescale4 entropy (odds ratio = 1.609, 95% confidence interval = 1.057-2.450, p = 0.027) and the interaction between basescale entropy and transfusion were significantly associated with subsequent development of acute kidney injury. The transfusion of red blood cells was associated with a smaller rise in entropy values compared to non-transfused patients, suggesting a change in the regulation of cerebral oxygenation, and these changes in cerebral oxygenation are also associated with acute kidney injury.
Campbell's Rule for Estimating Entropy Changes
ERIC Educational Resources Information Center
Jensen, William B.
2004-01-01
Campbell's rule for estimating entropy changes is discussed in relation to an earlier article by Norman Craig, where it was proposed that the approximate value of the entropy of reaction was related to net moles of gas consumed or generated. It was seen that the average for Campbell's data set was lower than that for Craig's data set and…
Renyi entropy measures of heart rate Gaussianity.
Lake, Douglas E
2006-01-01
Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggests that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
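A minimal histogram-based estimate of the Renyi entropy of order q (with q = 1 treated as the Shannon limit) is sketched below; this is a generic illustration on synthetic RR intervals, not the Burg-theorem-based estimator developed in the paper.

    import numpy as np

    def renyi_entropy(x, q=2, bins=50):
        """Renyi entropy of order q from a histogram of the data (q = 1 is the Shannon limit)."""
        counts, _ = np.histogram(np.asarray(x, dtype=float), bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        if q == 1:
            return -np.sum(p * np.log(p))
        return np.log(np.sum(p ** q)) / (1.0 - q)

    rr = np.random.default_rng(10).normal(0.8, 0.05, 5000)   # synthetic RR intervals in seconds
    print(renyi_entropy(rr, q=1), renyi_entropy(rr, q=2))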
Third-order dissipative hydrodynamics from the entropy principle
NASA Astrophysics Data System (ADS)
El, Andrej; Xu, Zhe; Greiner, Carsten
2010-06-01
We review the entropy-based derivation of third-order hydrodynamic equations and compare their solutions in one-dimensional boost-invariant geometry with calculations by the partonic cascade BAMPS. We demonstrate that Grad's approximation, which underlies the derivation of both Israel-Stewart and third-order equations, describes the transverse spectra from BAMPS with high accuracy. At the same time, solutions of the third-order equations are much closer to the BAMPS results than solutions of the Israel-Stewart equations. Introducing a resummation scheme for all higher-order corrections to the one-dimensional hydrodynamic equation, we demonstrate the importance of higher-order terms if the Knudsen number is large.
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ||b - a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
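Assuming the functional is the entropy H(p) integrated along the arc length of the path (one reading of the description above), the straight-line path from a to b gives a simple numerical upper bound on the minimal-entropy distance; the sketch below is only that illustrative bound, not the Newton/boundary-value solver or the elementary distance function of the paper.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def straight_line_entropy_length(a, b, steps=1000):
        """Entropy-weighted length of the straight-line path from a to b: an upper bound
        on the minimal-entropy path distance, assuming the functional integrates H(p)
        with respect to arc length (an interpretation made for this sketch)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        t = np.linspace(0.0, 1.0, steps + 1)
        pts = (1 - t)[:, None] * a + t[:, None] * b              # points along the path
        h = np.array([entropy(p) for p in pts])
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)       # arc-length increments
        return np.sum(0.5 * (h[:-1] + h[1:]) * seg)              # trapezoid rule

    a = np.array([0.4, 0.3, 0.2, 0.1])        # base frequencies (A, C, G, T) of profile a
    b = np.array([0.25, 0.25, 0.25, 0.25])    # profile b
    print(straight_line_entropy_length(a, b), np.linalg.norm(b - a))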
Critical evaluation of methods to incorporate entropy loss upon binding in high-throughput docking.
Salaniwal, Sumeet; Manas, Eric S; Alvarez, Juan C; Unwalla, Rayomand J
2007-02-01
Proper accounting of the positional/orientational/conformational entropy loss associated with protein-ligand binding is important to obtain reliable predictions of binding affinity. Herein, we critically examine two simplified statistical mechanics-based approaches, namely a constant penalty per rotor method, and a more rigorous method, referred to here as the partition function-based scoring (PFS) method, to account for such entropy losses in high-throughput docking calculations. Our results on the estrogen receptor beta and dihydrofolate reductase proteins demonstrate that, while the constant penalty method over-penalizes molecules for their conformational flexibility, the PFS method behaves in a more "ΔG-like" manner by penalizing different rotors differently depending on their residual entropy in the bound state. Furthermore, in contrast to no entropic penalty or the constant penalty approximation, the PFS method does not exhibit any bias towards either rigid or flexible molecules in the hit list. Preliminary enrichment studies using a lead-like random molecular database suggest that an accurate representation of the "true" energy landscape of the protein-ligand complex is critical for reliable predictions of relative binding affinities by the PFS method. Copyright 2006 Wiley-Liss, Inc.
Roelfsema, Ferdinand; Pereira, Alberto M; Adriaanse, Ria; Endert, Erik; Fliers, Eric; Romijn, Johannes A; Veldhuis, Johannes D
2010-02-01
Twenty-four-hour TSH secretion profiles in primary hypothyroidism have been analyzed with methods no longer in use. The insights afforded by earlier methods are limited. We studied TSH secretion in patients with primary hypothyroidism (eight patients with severe and eight patients with mild hypothyroidism) with up-to-date analytical tools and compared the results with outcomes in 38 healthy controls. Patients and controls underwent a 24-h study with 10-min blood sampling. TSH data were analyzed with a newly developed automated deconvolution program, approximate entropy, spikiness assessment, and cosinor regression. Both basal and pulsatile TSH secretion rates were increased in hypothyroid patients, the latter by increased burst mass with unchanged frequency. Secretory regularity (approximate entropy) was diminished, and spikiness was increased only in patients with severe hypothyroidism. A diurnal TSH rhythm was present in all but two patients, although with an earlier acrophase in severe hypothyroidism. The estimated slow component of the TSH half-life was shortened in all patients. Increased TSH concentrations in hypothyroidism are mediated by amplification of basal secretion and burst size. Secretory abnormalities quantitated by approximate entropy and spikiness were only present in patients with severe disease and thus are possibly related to the increased thyrotrope cell mass.
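A generic single-component cosinor fit of the kind referred to above can be done by linear least squares; the sketch below uses a synthetic 24-hour profile with made-up values and is not the authors' analysis software.

    import numpy as np

    def cosinor_fit(t_hours, y, period=24.0):
        """Single-component cosinor fit y = M + A*cos(w*t + phi) by linear least squares.
        Returns the mesor M, amplitude A and acrophase phi (radians)."""
        w = 2 * np.pi / period
        X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
        mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
        return mesor, np.hypot(beta, gamma), np.arctan2(-gamma, beta)

    # synthetic 24-h profile sampled every 10 minutes; all values are made up
    t = np.arange(0, 24, 1 / 6)
    rng = np.random.default_rng(11)
    tsh = 2.0 + 0.8 * np.cos(2 * np.pi / 24 * t + 1.0) + 0.1 * rng.standard_normal(t.size)
    print(cosinor_fit(t, tsh))

The acrophase returned by such a fit is the quantity reported above as occurring earlier in severe hypothyroidism.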
Statistical Entropy of Dirac Field Outside RN Black Hole and Modified Density Equation
NASA Astrophysics Data System (ADS)
Cao, Fei; He, Feng
2012-02-01
The statistical entropy of the Dirac field in the Reissner-Nordstrom black hole space-time is computed using the state density equation corrected by the generalized uncertainty principle to all orders in the Planck length, together with the WKB approximation. The result shows that the statistical entropy is proportional to the horizon area, and the present result is convergent without any artificial cutoff.
Discrete gravity on random tensor network and holographic Rényi entropy
NASA Astrophysics Data System (ADS)
Han, Muxin; Huang, Shilin
2017-11-01
In this paper we apply discrete gravity and Regge calculus to tensor networks and the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence. We construct the boundary many-body quantum state |Ψ〉 using random tensor networks as the holographic mapping, applied to the Wheeler-DeWitt wave function of bulk Euclidean discrete gravity in 3 dimensions. The entanglement Rényi entropy of |Ψ〉 is shown to holographically relate to the on-shell action of Einstein gravity on a branch-cover bulk manifold. The resulting Rényi entropy S_n of |Ψ〉 approximates with high precision the Rényi entropy of the ground state in 2-dimensional conformal field theory (CFT). In particular, it reproduces the correct n dependence. Our results develop the framework of realizing the AdS3/CFT2 correspondence on random tensor networks, and provide a new proposal to approximate the CFT ground state.
Entropy for the Complexity of Physiological Signal Dynamics.
Zhang, Xiaohua Douglas
2017-01-01
Recently, the rapid development of large data storage technologies, mobile network technology, and portable medical devices has made it possible to measure, record, store, and track biological dynamics. Portable noninvasive medical devices are crucial to capture individual characteristics of biological dynamics. The wearable noninvasive medical devices and the analysis/management of related digital medical data will revolutionize the management and treatment of diseases, subsequently resulting in the establishment of a new healthcare system. One of the key features that can be extracted from the data obtained by wearable noninvasive medical devices is the complexity of physiological signals, which can be represented by the entropy of the biological dynamics contained in the physiological signals measured by these continuous monitoring medical devices. Thus, in this chapter I present the major concepts of entropy that are commonly used to measure the complexity of biological dynamics. The concepts include Shannon entropy, Kolmogorov entropy, Renyi entropy, approximate entropy, sample entropy, and multiscale entropy. I also demonstrate an example of using entropy for the complexity of glucose dynamics.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and physical and chemical interactions between groundwater and the porous medium make solute transport in the medium more complicated still. An appropriate method to describe this complexity is essential when studying solute transport and transformation in porous media. Because information entropy can measure uncertainty and disorder, we used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between information entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increased, which resulted in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line source was higher than that of the point source, and the solute entropy obtained from continuous input was higher than that from instantaneous input. As the average lithofacies length increased, the continuity of the medium increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the information entropy reflected changes in the solute distribution. Information entropy thus appears to be a tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1972-01-01
A relatively simple method is presented for including the effect of variable entropy at the boundary-layer edge in a heat transfer method developed previously. For each inviscid surface streamline an approximate shock-wave shape is calculated using a modified form of Maslen's method for inviscid axisymmetric flows. The entropy for the streamline at the edge of the boundary layer is determined by equating the mass flux through the shock wave to that inside the boundary layer. Approximations used in this technique allow the heating rates along each inviscid surface streamline to be calculated independently of the other streamlines. The shock standoff distances computed by the present method are found to compare well with those computed by Maslen's asymmetric method. Heating rates are presented for blunted circular and elliptical cones and a typical space shuttle orbiter at angles of attack. Variable-entropy effects are found to increase heating rates downstream of the nose to values significantly higher than those computed using normal-shock entropy, and turbulent heating rates increase more than laminar rates. Effects of Reynolds number and angle of attack are also shown.
Approximate reversibility in the context of entropy gain, information gain, and complete positivity
NASA Astrophysics Data System (ADS)
Buscemi, Francesco; Das, Siddhartha; Wilde, Mark M.
2016-06-01
There are several inequalities in physics which limit how well we can process physical systems to achieve some intended goal, including the second law of thermodynamics, entropy bounds in quantum information theory, and the uncertainty principle of quantum mechanics. Recent results provide physically meaningful enhancements of these limiting statements, determining how well one can attempt to reverse an irreversible process. In this paper, we apply and extend these results to give strong enhancements to several entropy inequalities, having to do with entropy gain, information gain, entropic disturbance, and complete positivity of open quantum systems dynamics. Our first result is a remainder term for the entropy gain of a quantum channel. This result implies that a small increase in entropy under the action of a subunital channel is a witness to the fact that the channel's adjoint can be used as a recovery map to undo the action of the original channel. We apply this result to pure-loss, quantum-limited amplifier, and phase-insensitive quantum Gaussian channels, showing how a quantum-limited amplifier can serve as a recovery from a pure-loss channel and vice versa. Our second result regards the information gain of a quantum measurement, both without and with quantum side information. We find here that a small information gain implies that it is possible to undo the action of the original measurement if it is efficient. The result also has operational ramifications for the information-theoretic tasks known as measurement compression without and with quantum side information. Our third result shows that the loss of Holevo information caused by the action of a noisy channel on an input ensemble of quantum states is small if and only if the noise can be approximately corrected on average. We finally establish that the reduced dynamics of a system-environment interaction are approximately completely positive and trace preserving if and only if the data processing inequality holds approximately.
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-times series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
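Since approximate entropy (ApEn) recurs throughout this collection as a regularity measure for concentration-time series, a minimal Pincus-style sketch may be useful for orientation. The function name and the common defaults m = 2 and r = 0.2·SD are illustrative conventions, not parameters taken from this article; the implementation is an O(N^2) sketch, not the authors' software.

import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series; r defaults to 0.2 * std(x). Sketch only."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # embed the series into overlapping template vectors of length m
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between all pairs of template vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # fraction of vectors within tolerance r (self-matches included)
        c = (dist <= r).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# example: a noisy sine is more regular (lower ApEn) than white noise
t = np.linspace(0, 8 * np.pi, 500)
print(approximate_entropy(np.sin(t) + 0.1 * np.random.randn(500)))
print(approximate_entropy(np.random.randn(500)))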
Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species
NASA Astrophysics Data System (ADS)
Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar
2018-02-01
The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
The Correlation of Standard Entropy with Enthalpy Supplied from 0 to 298.15 K
ERIC Educational Resources Information Center
Lambert, Frank L.; Leff, Harvey S.
2009-01-01
As a substance is heated at constant pressure from near 0 K to 298 K, each incremental enthalpy increase, dH, alters entropy by dH/T, bringing it from approximately zero to its standard molar entropy S°. Using heat capacity data for 32 solids and CODATA results for another 45, we found a roughly linear relationship between S° and…
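A small worked sketch of the relation implied above, S° ≈ ∫(Cp/T) dT from near 0 K to 298.15 K (neglecting phase transitions), follows. The heat-capacity curve below is a made-up Debye-like toy, not the calorimetric or CODATA data used in the article.

import numpy as np

# Illustrative only: integrate S = int Cp/T dT and H = int Cp dT up to 298.15 K
T = np.linspace(1.0, 298.15, 2000)      # K (start slightly above 0 K)
theta_D = 300.0                         # toy "Debye temperature", K
Cp = 3 * 8.314 * (T / theta_D) ** 3 / (1.0 + (T / theta_D) ** 3)  # toy J/(mol K)

dT = np.diff(T)
S_standard = np.sum(0.5 * (Cp[1:] / T[1:] + Cp[:-1] / T[:-1]) * dT)  # J/(mol K)
H_supplied = np.sum(0.5 * (Cp[1:] + Cp[:-1]) * dT)                   # J/mol
print(f"S ~ {S_standard:.1f} J/(mol K), H supplied ~ {H_supplied/1000:.2f} kJ/mol")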
Finite-temperature Gutzwiller approximation from the time-dependent variational principle
NASA Astrophysics Data System (ADS)
Lanatà, Nicola; Deng, Xiaoyu; Kotliar, Gabriel
2015-08-01
We develop an extension of the Gutzwiller approximation to finite temperatures based on the Dirac-Frenkel variational principle. Our method does not rely on any entropy inequality, and is substantially more accurate than the approaches proposed in previous works. We apply our theory to the single-band Hubbard model at different fillings, and show that our results compare quantitatively well with dynamical mean field theory in the metallic phase. We discuss potential applications of our technique within the framework of first-principle calculations.
Self-Similar Random Process and Chaotic Behavior In Serrated Flow of High Entropy Alloys
Chen, Shuying; Yu, Liping; Ren, Jingli; Xie, Xie; Li, Xueping; Xu, Ying; Zhao, Guangfeng; Li, Peizhen; Yang, Fuqian; Ren, Yang; Liaw, Peter K.
2016-01-01
The statistical and dynamic analyses of the serrated-flow behavior in the nanoindentation of a high-entropy alloy, Al0.5CoCrCuFeNi, at various holding times and temperatures, are performed to reveal the hidden order associated with the seemingly-irregular intermittent flow. Two distinct types of dynamics are identified in the high-entropy alloy, which are based on the chaotic time-series, approximate entropy, fractal dimension, and Hurst exponent. The dynamic plastic behavior at both room temperature and 200 °C exhibits a positive Lyapunov exponent, suggesting that the underlying dynamics is chaotic. The fractal dimension of the indentation depth increases with the increase of temperature, and there is an inflection at the holding time of 10 s at the same temperature. A large fractal dimension suggests the concurrent nucleation of a large number of slip bands. In particular, for the indentation with the holding time of 10 s at room temperature, the slip process evolves as a self-similar random process with a weak negative correlation similar to a random walk. PMID:27435922
Self-similar random process and chaotic behavior in serrated flow of high entropy alloys
Chen, Shuying; Yu, Liping; Ren, Jingli; ...
2016-07-20
Here, the statistical and dynamic analyses of the serrated-flow behavior in the nanoindentation of a high-entropy alloy, Al0.5CoCrCuFeNi, at various holding times and temperatures, are performed to reveal the hidden order associated with the seemingly-irregular intermittent flow. Two distinct types of dynamics are identified in the high-entropy alloy, which are based on the chaotic time-series, approximate entropy, fractal dimension, and Hurst exponent. The dynamic plastic behavior at both room temperature and 200 °C exhibits a positive Lyapunov exponent, suggesting that the underlying dynamics is chaotic. The fractal dimension of the indentation depth increases with the increase of temperature, and there is an inflection at the holding time of 10 s at the same temperature. A large fractal dimension suggests the concurrent nucleation of a large number of slip bands. In particular, for the indentation with the holding time of 10 s at room temperature, the slip process evolves as a self-similar random process with a weak negative correlation similar to a random walk.
Self-Similar Random Process and Chaotic Behavior In Serrated Flow of High Entropy Alloys.
Chen, Shuying; Yu, Liping; Ren, Jingli; Xie, Xie; Li, Xueping; Xu, Ying; Zhao, Guangfeng; Li, Peizhen; Yang, Fuqian; Ren, Yang; Liaw, Peter K
2016-07-20
The statistical and dynamic analyses of the serrated-flow behavior in the nanoindentation of a high-entropy alloy, Al0.5CoCrCuFeNi, at various holding times and temperatures, are performed to reveal the hidden order associated with the seemingly-irregular intermittent flow. Two distinct types of dynamics are identified in the high-entropy alloy, which are based on the chaotic time-series, approximate entropy, fractal dimension, and Hurst exponent. The dynamic plastic behavior at both room temperature and 200 °C exhibits a positive Lyapunov exponent, suggesting that the underlying dynamics is chaotic. The fractal dimension of the indentation depth increases with the increase of temperature, and there is an inflection at the holding time of 10 s at the same temperature. A large fractal dimension suggests the concurrent nucleation of a large number of slip bands. In particular, for the indentation with the holding time of 10 s at room temperature, the slip process evolves as a self-similar random process with a weak negative correlation similar to a random walk.
Discrimination of coherent features in turbulent boundary layers by the entropy method
NASA Technical Reports Server (NTRS)
Corke, T. C.; Guezennec, Y. G.
1984-01-01
Entropy in information theory is defined as the expected or mean value of the measure of the amount of self-information contained in the ith point of a distribution series x_i, based on its probability of occurrence p(x_i). If p(x_i) is the probability of the ith state of the system in probability space, then the entropy, E(X) = -Σ_i p(x_i) log p(x_i), is a measure of the disorder in the system. Based on this concept, a method was devised which sought to minimize the entropy in a time series in order to construct the signature of the most coherent motions. The constrained minimization was performed using a Lagrange multiplier approach which resulted in the solution of a simultaneous set of non-linear coupled equations to obtain the coherent time series. The application of the method to space-time data taken by a rake of sensors in the near-wall region of a turbulent boundary layer was presented. The results yielded coherent velocity motions made up of locally decelerated or accelerated fluid having a streamwise scale of approximately 100 ν/u_τ, which is in qualitative agreement with the results from other less objective discrimination methods.
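The entropy definition quoted above can be estimated from sampled data by histogramming the signal amplitudes. The sketch below illustrates only that estimate (a coherent two-level signal versus broadband noise); it does not implement the constrained Lagrange-multiplier minimization of the paper, and the bin count is an arbitrary choice.

import numpy as np

def series_entropy(x, bins=32):
    """Estimate E(X) = -sum_i p(x_i) log p(x_i) for a sampled series by
    histogramming its amplitudes (plug-in estimate, sketch only)."""
    counts, _ = np.histogram(np.asarray(x, dtype=float), bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

# a highly ordered (coherent) signal has lower amplitude entropy than noise
t = np.linspace(0, 20, 2000)
print(series_entropy(np.sign(np.sin(t))))      # two-level, near-coherent signal
print(series_entropy(np.random.randn(2000)))   # broadband noise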
Efficient optimization of the quantum relative entropy
NASA Astrophysics Data System (ADS)
Fawzi, Hamza; Fawzi, Omar
2018-04-01
Many quantum information measures can be written as an optimization of the quantum relative entropy between sets of states. For example, the relative entropy of entanglement of a state is the minimum relative entropy to the set of separable states. The various capacities of quantum channels can also be written in this way. We propose a unified framework to numerically compute these quantities using off-the-shelf semidefinite programming solvers, exploiting the approximation method proposed in Fawzi, Saunderson and Parrilo (2017 arXiv: 1705.00812). As a notable application, this method allows us to provide numerical counterexamples for a proposed lower bound on the quantum conditional mutual information in terms of the relative entropy of recovery.
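For orientation, the quantity being optimized above is the quantum relative entropy D(ρ‖σ) = Tr[ρ(log ρ − log σ)]. The sketch below merely evaluates this definition numerically for full-rank states via eigendecomposition; it is not the semidefinite-programming approximation proposed in the paper, and the function name and eigenvalue clipping are illustrative choices.

import numpy as np

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)] in nats (sketch)."""
    def safe_log(m):
        w, v = np.linalg.eigh(m)
        w = np.clip(w, 1e-12, None)          # avoid log(0) for near-singular states
        return (v * np.log(w)) @ v.conj().T
    return float(np.real(np.trace(rho @ (safe_log(rho) - safe_log(sigma)))))

# example: a slightly mixed qubit state versus the maximally mixed state
rho = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = np.eye(2) / 2
print(quantum_relative_entropy(rho, sigma))   # = ln 2 - H(0.9, 0.1) ~ 0.368 nats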
Entanglement entropy in a boundary impurity model.
Levine, G C
2004-12-31
Boundary impurities are known to dramatically alter certain bulk properties of (1+1)-dimensional strongly correlated systems. The entanglement entropy of a zero temperature Luttinger liquid bisected by a single impurity is computed using a novel finite size scaling or bosonization scheme. For a Luttinger liquid of length 2L and UV cutoff ε, the boundary impurity correction δS_imp to the logarithmic entanglement entropy (S_ent ∝ ln(L/ε)) scales as δS_imp ≈ y_r ln(L/ε), where y_r is the renormalized backscattering coupling constant. In this way, the entanglement entropy within a region is related to scattering through the region's boundary. In the repulsive case (g<1), δS_imp diverges (negatively), suggesting that the entropy vanishes. Our results are consistent with the recent conjecture that entanglement entropy decreases irreversibly along renormalization group flow.
Nonequilibrium Entropy in a Shock
Margolin, Len G.
2017-07-19
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.
Nonequilibrium Entropy in a Shock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, Len G.
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.
Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation
NASA Astrophysics Data System (ADS)
Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro
2018-02-01
We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.
Quantum entanglement in strong-field ionization
NASA Astrophysics Data System (ADS)
Majorosi, Szilárd; Benedict, Mihály G.; Czirják, Attila
2017-10-01
We investigate the time evolution of quantum entanglement between an electron, liberated by a strong few-cycle laser pulse, and its parent ion core. Since the standard procedure is numerically prohibitive in this case, we propose a method to quantify the quantum correlation in such a system: we use the reduced density matrices of the directional subspaces along the polarization of the laser pulse and along the transverse directions as building blocks for an approximate entanglement entropy. We present our results, based on accurate numerical simulations, in terms of several of these entropies, for selected values of the peak electric-field strength and the carrier-envelope phase difference of the laser pulse. The time evolution of the mutual entropy of the electron and the ion-core motion along the direction of the laser polarization is similar to our earlier results based on a simple one-dimensional model. However, taking into account also the dynamics perpendicular to the laser polarization reveals a surprisingly different entanglement dynamics above the laser intensity range corresponding to pure tunneling: the quantum entanglement decreases with time in the over-the-barrier ionization regime.
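The entanglement entropies discussed above are von Neumann entropies of reduced density matrices. The toy sketch below computes this entropy for a small bipartite pure state via the Schmidt (singular-value) spectrum; it is an assumption-laden illustration of the general definition, not the grid-based directional reduced density matrices used in the paper.

import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy of subsystem A for a pure bipartite state psi
    of length dim_a * dim_b (sketch only)."""
    psi = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    # singular values of the coefficient matrix give the Schmidt spectrum
    s = np.linalg.svd(psi, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

# example: a partially entangled two-qubit state sqrt(0.8)|00> + sqrt(0.2)|11>
psi = np.array([np.sqrt(0.8), 0, 0, np.sqrt(0.2)])
print(entanglement_entropy(psi, 2, 2))   # ~ 0.500 nats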
NASA Astrophysics Data System (ADS)
Zingan, Valentin Nikolaevich
This work develops a discontinuous Galerkin finite element discretization of non-linear hyperbolic conservation equations with efficient and robust high order stabilization built on an entropy-based artificial viscosity approximation. The solutions of equations are represented by elementwise polynomials of an arbitrary degree p > 0 which are continuous within each element but discontinuous on the boundaries. The discretization of equations in time is done by means of high order explicit Runge-Kutta methods identified with respective Butcher tableaux. To stabilize a numerical solution in the vicinity of shock waves and simultaneously preserve the smooth parts from smearing, we add some reasonable amount of artificial viscosity in accordance with the physical principle of entropy production in the interior of shock waves. The viscosity coefficient is proportional to the local size of the residual of an entropy equation and is bounded from above by the first-order artificial viscosity defined by a local wave speed. Since the residual of an entropy equation is supposed to be vanishingly small in smooth regions (of the order of the Local Truncation Error) and arbitrarily large in shocks, the entropy viscosity is almost zero everywhere except the shocks, where it reaches the first-order upper bound. One- and two-dimensional benchmark test cases are presented for nonlinear hyperbolic scalar conservation laws and the system of compressible Euler equations. These tests demonstrate the satisfactory stability properties of the method and optimal convergence rates as well. All numerical solutions to the test problems agree well with the reference solutions found in the literature. We conclude that the new method developed in the present work is a valuable alternative to currently existing techniques of viscous stabilization.
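The core idea sketched above (a viscosity proportional to the entropy residual, capped by a first-order viscosity built from the local wave speed) can be illustrated in one dimension for the inviscid Burgers equation with entropy η = u²/2 and entropy flux q = u³/3. The constants c_E and c_max, the finite-difference residual, and the normalization below are illustrative choices of this sketch, not the formulation of the thesis.

import numpy as np

def entropy_viscosity(u, dx, dt, u_old, c_E=1.0, c_max=0.5):
    """1D sketch of an entropy-viscosity coefficient for Burgers' equation."""
    eta, eta_old = 0.5 * u ** 2, 0.5 * u_old ** 2
    q = u ** 3 / 3.0
    # discrete residual of the entropy equation  d(eta)/dt + d(q)/dx
    residual = (eta - eta_old) / dt + np.gradient(q, dx)
    # normalisation so the coefficient has the units of a viscosity
    norm = np.max(np.abs(eta - eta.mean())) + 1e-14
    nu_entropy = c_E * dx ** 2 * np.abs(residual) / norm
    # first-order upper bound based on the local wave speed
    nu_max = c_max * dx * np.abs(u)
    return np.minimum(nu_entropy, nu_max)

# example: a moving step profile triggers large viscosity only near the jump
x = np.linspace(0, 1, 200)
u_old = np.where(x < 0.50, 1.0, 0.0)
u = np.where(x < 0.52, 1.0, 0.0)
print(entropy_viscosity(u, x[1] - x[0], 1e-3, u_old).max())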
Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo
2015-01-01
Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.
From Maximum Entropy Models to Non-Stationarity and Irreversibility
NASA Astrophysics Data System (ADS)
Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar
The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. 1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. 2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is determinant for a more rigorous characterization of first- and higher-order phase transitions. 3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC advanced Grant "Bridges", BC: KEOPS ANR-CONICYT, Renvision and CM: CONICYT-FONDECYT No. 3140572.
Anomalous thermodynamics at the microscale.
Celani, Antonio; Bo, Stefano; Eichhorn, Ralf; Aurell, Erik
2012-12-28
Particle motion at the microscale is an incessant tug-of-war between thermal fluctuations and applied forces on one side and the strong resistance exerted by fluid viscosity on the other. Friction is so strong that completely neglecting inertia--the overdamped approximation--gives an excellent effective description of the actual particle mechanics. In sharp contrast to this result, here we show that the overdamped approximation dramatically fails when thermodynamic quantities such as the entropy production in the environment are considered, in the presence of temperature gradients. In the limit of vanishingly small, yet finite, inertia, we find that the entropy production is dominated by a contribution that is anomalous, i.e., has no counterpart in the overdamped approximation. This phenomenon, which we call an entropic anomaly, is due to a symmetry breaking that occurs when moving to the small, finite inertia limit. Anomalous entropy production is traced back to futile phase-space cyclic trajectories displaying a fast downgradient sweep followed by a slow upgradient return to the original position.
The specific entropy of elliptical galaxies: an explanation for profile-shape distance indicators?
NASA Astrophysics Data System (ADS)
Lima Neto, G. B.; Gerbal, D.; Márquez, I.
1999-10-01
Dynamical systems in equilibrium have a stationary entropy; we suggest that elliptical galaxies, as stellar systems in a stage of quasi-equilibrium, may have in principle a unique specific entropy. This uniqueness, a priori unknown, should be reflected in correlations between the fundamental parameters describing the mass (light) distribution in galaxies. Following recent photometrical work on elliptical galaxies by Caon et al., Graham & Colless and Prugniel & Simien, we use the Sérsic law to describe the light profile and an analytical approximation to its three-dimensional deprojection. The specific entropy is then calculated, supposing that the galaxy behaves as a spherical, isotropic, one-component system in hydrostatic equilibrium, obeying the ideal-gas equations of state. We predict a relation between the three parameters of the Sérsic law linked to the specific entropy, defining a surface in the parameter space, an `Entropic Plane', by analogy with the well-known Fundamental Plane. We have analysed elliptical galaxies in two rich clusters of galaxies (Coma and ABCG 85) and a group of galaxies (associated with NGC 4839, near Coma). We show that, for a given cluster, the galaxies follow closely a relation predicted by the constant specific entropy hypothesis with a typical dispersion (one standard deviation) of 9.5 per cent around the mean value of the specific entropy. Moreover, assuming that the specific entropy is also the same for galaxies of different clusters, we are able to derive relative distances between Coma, ABCG 85, and the group of NGC 4839. If the errors are due only to the determination of the specific entropy (about 10 per cent), then the error in the relative distance determination should be less than 20 per cent for rich clusters. We suggest that the unique specific entropy may provide a physical explanation for the distance indicators based on the Sérsic profile put forward by Young & Currie and recently discussed by Binggeli & Jerjen.
Robust Multimodal Cognitive Load Measurement
2014-03-26
Nonlinear features (approximate entropies, wavelet-based complexity measures, correlation dimension, Hurst exponent) of electroencephalogram (EEG) signals were used to evaluate changes in working memory load during the performance of a cognitive task…
Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding
2018-01-01
Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669
Spectral entropy in monitoring anesthetic depth.
Escontrela Rodríguez, B; Gago Martínez, A; Merino Julián, I; Martínez Ruiz, A
2016-10-01
Monitoring the brain response to hypnotics in general anesthesia, with the interaction of nociceptive and hemodynamic stimuli, has been a subject of intense investigation for many years. Nowadays, depth-of-anesthesia monitors are based on the electroencephalogram processed by different algorithms, some of them undisclosed, to obtain a simplified numerical parameter that approximates the state of brain activity at each moment. In this review we evaluate whether spectral entropy suitably reflects the electrical behavior of the brain in response to hypnotics and to nociceptive stimuli of different intensities during a surgical procedure. Copyright © 2015 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.
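Spectral entropy of this kind is essentially the Shannon entropy of a normalized power spectrum over a chosen frequency band. The sketch below computes such a band-limited, normalized spectral entropy; the band limits (0.8-32 Hz), the normalization, and the function name are illustrative assumptions and do not reproduce any particular commercial monitor's algorithm.

import numpy as np

def spectral_entropy(x, fs, fmin=0.8, fmax=32.0):
    """Normalised Shannon entropy of the power spectrum of x within a band,
    scaled to [0, 1] (sketch only)."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    p = psd[band] / psd[band].sum()
    p = p[p > 0]
    h = -(p * np.log(p)).sum()
    return h / np.log(band.sum())

# example: a quasi-periodic signal scores lower than broadband noise
fs = 256
t = np.arange(0, 10, 1.0 / fs)
print(spectral_entropy(np.sin(2 * np.pi * 10 * t), fs))
print(spectral_entropy(np.random.randn(len(t)), fs))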
Modified Dispersion Relations: from Black-Hole Entropy to the Cosmological Constant
NASA Astrophysics Data System (ADS)
Garattini, Remo
2012-07-01
Quantum Field Theory is plagued by divergences in the attempt to calculate physical quantities. Standard techniques of regularization and renormalization are used to keep such a problem under control. In this paper we use a different scheme based on Modified Dispersion Relations (MDR) to remove the infinities appearing in the one-loop approximation, in contrast to what happens in conventional approaches. In particular, we apply the MDR regularization to the computation of the entropy of a Schwarzschild black hole on one side and the Zero Point Energy (ZPE) of the graviton on the other side. The graviton ZPE is connected to the cosmological constant by means of the Wheeler-DeWitt equation.
NASA Astrophysics Data System (ADS)
Vermersch, B.; Elben, A.; Dalmonte, M.; Cirac, J. I.; Zoller, P.
2018-02-01
We present a general framework for the generation of random unitaries based on random quenches in atomic Hubbard and spin models, forming approximate unitary n-designs, and their application to the measurement of Rényi entropies. We generalize our protocol presented in Elben et al. [Phys. Rev. Lett. 120, 050406 (2018), 10.1103/PhysRevLett.120.050406] to a broad class of atomic and spin-lattice models. We further present an in-depth numerical and analytical study of experimental imperfections, including the effect of decoherence and statistical errors, and discuss connections of our approach with many-body quantum chaos.
An entropy method for induced drag minimization
NASA Technical Reports Server (NTRS)
Greene, George C.
1989-01-01
A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed form solution is obtained for several wing configurations including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportional to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
Wu, Yunfeng; Chen, Pinnan; Luo, Xin; Huang, Hui; Liao, Lifang; Yao, Yuchen; Wu, Meihong; Rangayyan, Rangaraj M
2016-07-01
Injury of knee joint cartilage may result in pathological vibrations between the articular surfaces during extension and flexion motions. The aim of this paper is to analyze and quantify vibroarthrographic (VAG) signal irregularity associated with articular cartilage degeneration and injury in the patellofemoral joint. The symbolic entropy (SyEn), approximate entropy (ApEn), fuzzy entropy (FuzzyEn), and the mean, standard deviation, and root-mean-squared (RMS) values of the envelope amplitude, were utilized to quantify the signal fluctuations associated with articular cartilage pathology of the patellofemoral joint. The quadratic discriminant analysis (QDA), generalized logistic regression analysis (GLRA), and support vector machine (SVM) methods were used to perform signal pattern classifications. The experimental results showed that the patients with cartilage pathology (CP) possess larger SyEn and ApEn, but smaller FuzzyEn, over the statistical significance level of the Wilcoxon rank-sum test (p<0.01), than the healthy subjects (HS). The mean, standard deviation, and RMS values computed from the amplitude difference between the upper and lower signal envelopes are also consistently and significantly larger (p<0.01) for the group of CP patients than for the HS group. The SVM based on the entropy and envelope amplitude features can provide superior classification performance as compared with QDA and GLRA, with an overall accuracy of 0.8356, sensitivity of 0.9444, specificity of 0.8, Matthews correlation coefficient of 0.6599, and an area of 0.9212 under the receiver operating characteristic curve. The SyEn, ApEn, and FuzzyEn features can provide useful information about pathological VAG signal irregularity based on different entropy metrics. The statistical parameters of signal envelope amplitude can be used to characterize the temporal fluctuations related to the cartilage pathology. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A minimum entropy principle in the gas dynamics equations
NASA Technical Reports Server (NTRS)
Tadmor, E.
1986-01-01
Let u(x̄,t) be a weak solution of the Euler equations, governing the inviscid polytropic gas dynamics; in addition, u(x̄,t) is assumed to respect the usual entropy conditions connected with the conservative Euler equations. We show that such entropy solutions of the gas dynamics equations satisfy a minimum entropy principle, namely, that the spatial minimum of their specific entropy, ess inf_x s(u(x,t)), is an increasing function of time. This principle equally applies to discrete approximations of the Euler equations such as the Godunov-type and Lax-Friedrichs schemes. Our derivation of this minimum principle makes use of the fact that there is a family of generalized entropy functions connected with the conservative Euler equations.
NASA Astrophysics Data System (ADS)
Shakirov, T.; Paul, W.
2018-04-01
What is the thermodynamic driving force for the crystallization of melts of semiflexible polymers? We try to answer this question by employing stochastic approximation Monte Carlo simulations to obtain the complete thermodynamic equilibrium information for a melt of short, semiflexible polymer chains with purely repulsive nonbonded interactions. The thermodynamics is obtained based on the density of states of our coarse-grained model, which varies by up to 5600 orders of magnitude. We show that our polymer melt undergoes a first-order crystallization transition upon increasing the chain stiffness at fixed density. This crystallization can be understood by the interplay of the maximization of different entropy contributions in different spatial dimensions. At sufficient stiffness and density, the three-dimensional orientational interactions drive the orientational ordering transition, which is accompanied by a two-dimensional translational ordering transition in the plane perpendicular to the chains resulting in a hexagonal crystal structure. While the three-dimensional ordering can be understood in terms of Onsager theory, the two-dimensional transition can be understood in terms of the liquid-hexatic transition of hard disks. Due to the domination of lateral two-dimensional translational entropy over the one-dimensional translational entropy connected with columnar displacements, the chains form a lamellar phase. Based on this physical understanding, orientational ordering and translational ordering should be separable for polymer melts. A phenomenological theory based on this understanding predicts a qualitative phase diagram as a function of volume fraction and stiffness in good agreement with results from the literature.
Intra-Tumour Signalling Entropy Determines Clinical Outcome in Breast and Lung Cancer
Banerji, Christopher R. S.; Severini, Simone; Caldas, Carlos; Teschendorff, Andrew E.
2015-01-01
The cancer stem cell hypothesis, that a small population of tumour cells are responsible for tumorigenesis and cancer progression, is becoming widely accepted and recent evidence has suggested a prognostic and predictive role for such cells. Intra-tumour heterogeneity, the diversity of the cancer cell population within the tumour of an individual patient, is related to cancer stem cells and is also considered a potential prognostic indicator in oncology. The measurement of cancer stem cell abundance and intra-tumour heterogeneity in a clinically relevant manner however, currently presents a challenge. Here we propose signalling entropy, a measure of signalling pathway promiscuity derived from a sample’s genome-wide gene expression profile, as an estimate of the stemness of a tumour sample. By considering over 500 mixtures of diverse cellular expression profiles, we reveal that signalling entropy also associates with intra-tumour heterogeneity. By analysing 3668 breast cancer and 1692 lung adenocarcinoma samples, we further demonstrate that signalling entropy correlates negatively with survival, outperforming leading clinical gene expression based prognostic tools. Signalling entropy is found to be a general prognostic measure, valid in different breast cancer clinical subgroups, as well as within stage I lung adenocarcinoma. We find that its prognostic power is driven by genes involved in cancer stem cells and treatment resistance. In summary, by approximating both stemness and intra-tumour heterogeneity, signalling entropy provides a powerful prognostic measure across different epithelial cancers. PMID:25793737
Beyond Atomic Sizes and Hume-Rothery Rules: Understanding and Predicting High-Entropy Alloys
Troparevsky, M. Claudia; Morris, James R.; Daene, Markus; ...
2015-09-03
High-entropy alloys constitute a new class of materials that provide an excellent combination of strength, ductility, thermal stability, and oxidation resistance. Although they have attracted extensive attention due to their potential applications, little is known about why these compounds are stable or how to predict which combination of elements will form a single phase. Here, we present a review of the latest research done on these alloys focusing on the theoretical models devised during the last decade. We discuss semiempirical methods based on the Hume-Rothery rules and stability criteria based on enthalpies of mixing and size mismatch. To provide insights into the electronic and magnetic properties of high-entropy alloys, we show the results of first-principles calculations of the electronic structure of the disordered solid-solution phase based on both Korringa-Kohn-Rostoker coherent potential approximation and large supercell models of example face-centered cubic and body-centered cubic systems. Furthermore, we discuss in detail a model based on enthalpy considerations that can predict which elemental combinations are most likely to form a single-phase high-entropy alloy. The enthalpies are evaluated via first-principles high-throughput density functional theory calculations of the energies of formation of binary compounds, and therefore it requires no experimental or empirically derived input. Finally, the model correctly accounts for the specific combinations of metallic elements that are known to form single-phase alloys while rejecting similar combinations that have been tried and shown not to be single phase.
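For context, the "high entropy" in these alloys is most often quantified by the textbook ideal configurational entropy of mixing, ΔS_mix = -R Σ c_i ln c_i. The small sketch below evaluates that estimate; it is not the enthalpy-based first-principles screening model discussed in the review.

import numpy as np

R = 8.314  # J/(mol K)

def ideal_mixing_entropy(concentrations):
    """Ideal configurational entropy of mixing, -R * sum c_i ln c_i (sketch)."""
    c = np.asarray(concentrations, dtype=float)
    c = c / c.sum()
    return -R * np.sum(c * np.log(c))

# equiatomic five-component alloy: S_mix = R ln 5 ~ 13.4 J/(mol K)
print(ideal_mixing_entropy([1, 1, 1, 1, 1]))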
Statistical mechanical theory of liquid entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, D.C.
The multiparticle correlation expansion for the entropy of a classical monatomic liquid is presented. This entropy expresses the physical picture in which there is no free particle motion, but rather, each atom moves within a cage formed by its neighbors. The liquid expansion, including only pair correlations, gives an excellent account of the experimental entropy of most liquid metals, of liquid argon, and the hard sphere liquid. The pair correlation entropy is well approximated by a universal function of temperature. Higher order correlation entropy, due to n-particle irreducible correlations for n ≥ 3, is significant in only a few liquid metals, and its occurrence suggests the presence of n-body forces. When the liquid theory is applied to the study of melting, the author discovers the important classification of normal and anomalous melting, according to whether there is not or is a significant change in the electronic structure upon melting, and he discovers the universal disordering entropy for melting of a monatomic crystal. Interesting directions for future research are: extension to include orientational correlations of molecules, theoretical calculation of the entropy of water, application to the entropy of the amorphous state, and correlational entropy of compressed argon. The author clarifies the relation among different entropy expansions in the recent literature.
Gravitational entropy and the cosmological no-hair conjecture
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2018-04-01
The gravitational entropy and no-hair conjectures seem to predict contradictory future states of our Universe. The growth of the gravitational entropy is associated with the growth of inhomogeneity, while the no-hair conjecture argues that a universe dominated by dark energy should asymptotically approach a homogeneous and isotropic de Sitter state. The aim of this paper is to study these two conjectures. The investigation is based on the Simsilun simulation, which simulates the universe using the approximation of the Silent Universe. The Silent Universe is a solution to the Einstein equations that assumes irrotational, nonviscous, and insulated dust, with vanishing magnetic part of the Weyl curvature. The initial conditions for the Simsilun simulation are sourced from the Millennium simulation, which results in a realistically appearing but relativistic-from-the-outset simulation of a universe. The Simsilun simulation is evolved from the early universe (t = 25 Myr) until the far future (t = 1000 Gyr). The results of this investigation show that both conjectures are correct. On global scales, a universe with a positive cosmological constant and nonpositive spatial curvature does indeed approach the de Sitter state. At the same time it keeps generating gravitational entropy.
Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures to finite single values within an observation window, thus not being able to characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with cardiac heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
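The instantaneous indices described above generalize the conventional window-based approximate and sample entropy. As a point of reference, a minimal sketch of conventional SampEn(m, r) follows; the common defaults m = 2 and r = 0.2·SD, and the simplified template counting, are choices of this sketch and differ from the inhomogeneous point-process definitions proposed in the paper.

import numpy as np

def sample_entropy(x, m=2, r=None):
    """Conventional SampEn(m, r) = -ln(A/B), where B and A count template
    pairs of length m and m+1 within tolerance r (self-matches excluded).
    Simplified window-based sketch."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return (dist <= r).sum() - len(emb)     # drop self-matches

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# example: white noise is more irregular than a smooth sine
print(sample_entropy(np.random.randn(400)))
print(sample_entropy(np.sin(np.linspace(0, 40 * np.pi, 400))))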
Hacisuleyman, Aysima; Erman, Burak
2017-06-01
A fast and approximate method of generating allosteric communication landscapes in proteins is presented by using Schreiber's entropy transfer concept in combination with the Gaussian Network Model of proteins. Predictions of the model and the allosteric communication landscapes generated show that information transfer in proteins does not necessarily take place along a single path, but an ensemble of pathways is possible. The model emphasizes that knowledge of entropy only is not sufficient for determining allosteric communication and additional information based on time delayed correlations should be introduced, which leads to the presence of causality in proteins. The model provides a simple tool for mapping entropy sink-source relations into pairs of residues. By this approach, residues that should be manipulated to control protein activity may be determined. This should be of great importance for allosteric drug design and for understanding the effects of mutations on function. The model is applied to determine allosteric communication in three proteins, Ubiquitin, Pyruvate Kinase, and the PDZ domain. Predictions are in agreement with molecular dynamics simulations and experimental evidence. Proteins 2017; 85:1056-1064. © 2017 Wiley Periodicals, Inc.
Entropy Stable Wall Boundary Conditions for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.
2014-01-01
Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite volume, finite difference, discontinuous Galerkin, and flux reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.
NASA Technical Reports Server (NTRS)
Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.
2015-01-01
Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the three-dimensional compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators on unstructured grids are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite difference, finite volume, discontinuous Galerkin, and flux reconstruction/correction procedure via reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.
Jennings, Robert C; Zucchelli, Giuseppe
2014-01-01
We examine ergodicity and configurational entropy for a dilute pigment solution and for a suspension of plant photosystem particles in which both ground and excited state pigments are present. It is concluded that the pigment solution, due to the extreme brevity of the excited state lifetime, is non-ergodic and the configurational entropy approaches zero. Conversely, due to the rapid energy transfer among pigments, each photosystem is ergodic and the configurational entropy is positive. This decreases the free energy of the single photosystem pigment array by a small amount. On the other hand, the suspension of photosystems is non-ergodic and the configurational entropy approaches zero. The overall configurational entropy which, in principle, includes contributions from both the single excited photosystems and the suspension which contains excited photosystems, also approaches zero. Thus the configurational entropy upon photon absorption by either a pigment solution or a suspension of photosystem particles is approximately zero. Copyright © 2014 Elsevier B.V. All rights reserved.
Random versus maximum entropy models of neural population activity
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry
2017-04-01
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
Lower bounds on entropy for polymer chains on a square and a cubic lattice
NASA Astrophysics Data System (ADS)
Gujrati, P. D.
1982-07-01
Rigorous lower bounds on the entropy per particle as a function of the fraction g of gauche bonds of a system of semiflexible polymer chains are obtained in the thermodynamic limit. Only square and cubic lattices are considered. For the case of a single chain having l monomers, the bound is obtained for all g ⩽ 2/3. For the case of p > 1 chains, each having l monomers, where l is a multiple of 4, the bound is obtained for all g ⩽ 13/90. In both cases, it is shown that the entropy is always nonzero for all 0 < g
Fast estimate of Hartley entropy in image sharpening
NASA Astrophysics Data System (ADS)
Krbcová, Zuzana; Kukal, Jaromír; Svihlik, Jan; Fliegel, Karel
2016-09-01
Two classes of linear IIR filters, Laplacian of Gaussian (LoG) and Difference of Gaussians (DoG), are frequently used as high pass filters for contextual vision and edge detection. They are also used for image sharpening when linearly combined with the original image. The resulting sharpening filters are radially symmetric in the spatial and frequency domains. Our approach is based on a radial approximation of the unknown optimal filter, which is designed as a weighted sum of Gaussian filters with various radii. The novel filter is designed for MRI image enhancement, where the image intensity represents anatomical structure plus additive noise. We prefer the gradient norm of the Hartley entropy of the whole image intensity as a measure which has to be maximized for the best sharpening. The entropy estimation procedure is as fast as the FFT included in the filter, but the estimate is a continuous function of the enhanced image intensities. A physically motivated heuristic is used for optimum sharpening filter design by tuning its parameters. Our approach is compared with the Wiener filter on MRI images.
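To make the ingredients concrete, the sketch below applies a simple DoG-based sharpening and a crude Hartley-entropy estimate (log2 of the number of occupied intensity bins). The sigmas, weight, bin count, and function names are illustrative assumptions; this is neither the paper's radially weighted filter design nor its fast continuous entropy estimator.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_sharpen(img, sigma1=1.0, sigma2=2.0, weight=1.5):
    """Sharpen by adding a Difference-of-Gaussians high-pass back to the image."""
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
    return img + weight * dog

def hartley_entropy(img, bins=256):
    """Hartley (max-) entropy estimate: log2 of the number of occupied bins."""
    counts, _ = np.histogram(img, bins=bins)
    return np.log2(np.count_nonzero(counts))

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((128, 128)), 3) + 0.02 * rng.standard_normal((128, 128))
print(hartley_entropy(img), hartley_entropy(dog_sharpen(img)))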
Memory behaviors of entropy production rates in heat conduction
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2018-02-01
Based on the relaxation time approximation and first-order expansion, memory behaviors in heat conduction are found between the macroscopic and Boltzmann-Gibbs-Shannon (BGS) entropy production rates with exponentially decaying memory kernels. In the frameworks of classical irreversible thermodynamics (CIT) and BGS statistical mechanics, the memory dependence on the integrated history is unidirectional, while for the extended irreversible thermodynamics (EIT) and BGS entropy production rates, the memory dependences are bidirectional and coexist with the linear terms. When the macroscopic and microscopic relaxation times satisfy a specific relationship, the entropic memory dependences are eliminated. There also exist initial effects in entropic memory behaviors, which decay exponentially. The second-order term is also discussed, which can be understood as the global non-equilibrium degree. The effects of the second-order term consist of three parts: memory dependence, initial value, and linear term. The corresponding memory kernels are still exponential, and the initial effects of the global non-equilibrium degree also decay exponentially.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
Bayesian approach to spectral function reconstruction for Euclidean quantum field theories.
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map
NASA Astrophysics Data System (ADS)
Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad
2016-06-01
In recent years, there has been increasing interest in the security of digital images. This study focuses on gray scale image encryption using dynamic harmony search (DHS). In this approach, a chaotic map is first used to create cipher images, and then the maximum entropy and minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, diffusion of the plain image is performed using DHS to maximize the entropy as the fitness function. In the second step, horizontal and vertical permutations are applied to the best cipher image obtained in the previous step, with DHS used to minimize the correlation coefficient as the fitness function. The simulation results have shown that by using the proposed method, the maximum entropy and the minimum correlation coefficient, which are approximately 7.9998 and 0.0001, respectively, have been obtained.
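The two fitness functions named above (image entropy to be maximized, adjacent-pixel correlation to be minimized) can be computed as in the sketch below. The function names and the random "cipher-like" test image are illustrative assumptions; this is only the evaluation of the metrics, not the DHS optimization or the chaotic map of the paper.

import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image; ~8 bits indicates a
    near-uniform histogram, the target for a good cipher image."""
    counts = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

def adjacent_correlation(img, axis=1):
    """Correlation coefficient between horizontally (axis=1) or vertically
    (axis=0) adjacent pixel pairs; near zero is desired for cipher images."""
    img = np.asarray(img, dtype=float)
    a = img.take(np.arange(img.shape[axis] - 1), axis=axis).ravel()
    b = img.take(np.arange(1, img.shape[axis]), axis=axis).ravel()
    return float(np.corrcoef(a, b)[0, 1])

cipher_like = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(image_entropy(cipher_like), adjacent_correlation(cipher_like))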
Perić-Hassler, Lovorka; Hansen, Halvor S; Baron, Riccardo; Hünenberger, Philippe H
2010-08-16
Explicit-solvent molecular dynamics (MD) simulations of the 11 glucose-based disaccharides in water at 300K and 1bar are reported. The simulations were carried out with the GROMOS 45A4 force-field and the sampling along the glycosidic dihedral angles phi and psi was artificially enhanced using the local elevation umbrella sampling (LEUS) method. The trajectories are analyzed in terms of free-energy maps, stable and metastable conformational states (relative free energies and estimated transition timescales), intramolecular H-bonds, single molecule configurational entropies, and agreement with experimental data. All disaccharides considered are found to be characterized either by a single stable (overwhelmingly populated) state ((1-->n)-linked disaccharides with n=1, 2, 3, or 4) or by two stable (comparably populated and differing in the third glycosidic dihedral angle omega ; gg or gt) states with a low interconversion barrier ((1-->6)-linked disaccharides). Metastable (anti-phi or anti-psi) states are also identified with relative free energies in the range of 8-22 kJ mol(-1). The 11 compounds can be classified into four families: (i) the alpha(1-->1)alpha-linked disaccharide trehalose (axial-axial linkage) presents no metastable state, the lowest configurational entropy, and no intramolecular H-bonds; (ii) the four alpha(1-->n)-linked disaccharides (n=1, 2, 3, or 4; axial-equatorial linkage) present one metastable (anti-psi) state, an intermediate configurational entropy, and two alternative intramolecular H-bonds; (iii) the four beta(1-->n)-linked disaccharides (n=1, 2, 3, or 4; equatorial-equatorial linkage) present two metastable (anti-phi and anti-psi) states, an intermediate configurational entropy, and one intramolecular H-bond; (iv) the two (1-->6)-linked disaccharides (additional glycosidic dihedral angle) present no (isomaltose) or a pair of (gentiobiose) metastable (anti-phi) states, the highest configurational entropy, and no intramolecular H-bonds. The observed conformational preferences appear to be dictated by four main driving forces (ring conformational preferences, exo-anomeric effect, steric constraints, and possible presence of a third glycosidic dihedral angle), leaving a secondary role to intramolecular H-bonding and specific solvation effects. In spite of the weak conformational driving force attributed to solvent-exposed H-bonds in water (highly polar protic solvent), intramolecular H-bonds may still have a significant influence on the physico-chemical properties of the disaccharide by decreasing its hydrophilicity. Along with previous work, the results also complete the suggestion of a spectrum of approximate transition timescales for carbohydrates up to the disaccharide level, namely: approximately 30 ps (hydroxyl groups), approximately 1 ns (free lactol group, free hydroxymethyl groups, glycosidic dihedral angleomega in (1-->6)-linked disaccharides), approximately 10 ns to 2 micros (ring conformation, glycosidic dihedral angles phi and psi). The calculated average values of the glycosidic torsional angles agree well with the available experimental data, providing validation for the force-field and simulation methodology employed. Copyright 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wintermeyer, Niklas; Winters, Andrew R.; Gassner, Gregor J.; Kopriva, David A.
2017-07-01
We design an arbitrary high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme for a partial break of a curved dam test problem.
Wu, Wei; Liu, Yangang
2010-05-12
A new one-dimensional radiative equilibrium model is built to analytically evaluate the vertical profile of the Earth's atmospheric radiation entropy flux under the assumption that atmospheric longwave radiation emission behaves as a greybody and shortwave radiation as a diluted blackbody. Results show that both the atmospheric shortwave and net longwave radiation entropy fluxes increase with altitude, and the latter is about one order of magnitude greater than the former. The vertical profile of the atmospheric net radiation entropy flux approximately follows that of the atmospheric net longwave radiation entropy flux. A sensitivity study further reveals that a 'darker' atmosphere with a larger overall atmospheric longwave optical depth exhibits a smaller net radiation entropy flux at all altitudes, suggesting an intrinsic connection between the atmospheric net radiation entropy flux and the overall atmospheric longwave optical depth. These results indicate that the overall strength of the atmospheric irreversible processes at all altitudes, as determined by the corresponding atmospheric net entropy flux, is closely related to the amount of greenhouse gases in the atmosphere.
Recoverability in quantum information theory
NASA Astrophysics Data System (ADS)
Wilde, Mark
The fact that the quantum relative entropy is non-increasing with respect to quantum physical evolutions lies at the core of many optimality theorems in quantum information theory and has applications in other areas of physics. In this work, we establish improvements of this entropy inequality in the form of physically meaningful remainder terms. One of the main results can be summarized informally as follows: if the decrease in quantum relative entropy between two quantum states after a quantum physical evolution is relatively small, then it is possible to perform a recovery operation, such that one can perfectly recover one state while approximately recovering the other. This can be interpreted as quantifying how well one can reverse a quantum physical evolution. Our proof method is elementary, relying on the method of complex interpolation, basic linear algebra, and the recently introduced Renyi generalization of a relative entropy difference. The theorem has a number of applications in quantum information theory, which have to do with providing physically meaningful improvements to many known entropy inequalities. This is based on arXiv:1505.04661, now accepted for publication in Proceedings of the Royal Society A. I acknowledge support from startup funds from the Department of Physics and Astronomy at LSU, the NSF under Award No. CCF-1350397, and the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019.
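For readers who want to experiment with the quantity whose monotonicity is being refined, the sketch below computes the quantum relative entropy D(ρ‖σ) = Tr[ρ(log ρ − log σ)] for full-rank density matrices and checks monotonicity under a simple dephasing channel. It illustrates the basic entropy inequality only, not the recoverability bound of the talk; the two example states are arbitrary.

```python
import numpy as np

def quantum_relative_entropy(rho, sigma):
    """D(rho||sigma) = Tr[rho (log rho - log sigma)] in nats; assumes full-rank density matrices."""
    def logm_psd(m):
        w, v = np.linalg.eigh(m)          # m = V diag(w) V^dagger
        return (v * np.log(w)) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm_psd(rho) - logm_psd(sigma)))))

def dephase(m):
    """A simple quantum channel: complete dephasing in the computational basis."""
    return np.diag(np.diag(m))

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
sigma = np.array([[0.5, 0.1], [0.1, 0.5]], dtype=complex)

# Data processing: relative entropy is non-increasing under the channel.
print(quantum_relative_entropy(rho, sigma)
      >= quantum_relative_entropy(dephase(rho), dephase(sigma)))   # True
```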
Chirikjian, Gregory S.
2011-01-01
Proteins fold from a highly disordered state into a highly ordered one. Traditionally, the folding problem has been stated as one of predicting ‘the’ tertiary structure from sequential information. However, new evidence suggests that the ensemble of unfolded forms may not be as disordered as once believed, and that the native form of many proteins may not be described by a single conformation, but rather an ensemble of its own. Quantifying the relative disorder in the folded and unfolded ensembles as an entropy difference may therefore shed light on the folding process. One issue that clouds discussions of ‘entropy’ is that many different kinds of entropy can be defined: entropy associated with overall translational and rotational Brownian motion, configurational entropy, vibrational entropy, conformational entropy computed in internal or Cartesian coordinates (which can even be different from each other), conformational entropy computed on a lattice; each of the above with different solvation and solvent models; thermodynamic entropy measured experimentally, etc. The focus of this work is the conformational entropy of coil/loop regions in proteins. New mathematical modeling tools for the approximation of changes in conformational entropy during transition from unfolded to folded ensembles are introduced. In particular, models for computing lower and upper bounds on entropy for polymer models of polypeptide coils both with and without end constraints are presented. The methods reviewed here include kinematics (the mathematics of rigid-body motions), classical statistical mechanics and information theory. PMID:21187223
NASA Astrophysics Data System (ADS)
Sposetti, C. N.; Manuel, L. O.; Roura-Bas, P.
2016-08-01
The Anderson impurity model is studied by means of the self-consistent hybridization expansions in its noncrossing (NCA) and one-crossing (OCA) approximations. We have found that for the one-channel spin-1/2 particle-hole symmetric Anderson model, the NCA results are qualitatively wrong for any temperature, even when the approximation gives the exact threshold exponents of the ionic states. Actually, the NCA solution describes an overscreened Kondo effect, because it is the same as for the two-channel infinite-U single-level Anderson model. We explicitly show that the NCA is unable to distinguish between these two very different physical systems, independently of temperature. Using the impurity entropy as an example, we show that the low-temperature values of the NCA entropy for the symmetric case yield the limit S_imp(T=0) → ln√2, which corresponds to the zero-temperature entropy of the overscreened Kondo model. Similar pathologies are predicted for any other thermodynamic property. On the other hand, we have found that the OCA approach lifts the artificial mapping between the models and restores correct properties of the ground state, for instance, a vanishing entropy at low enough temperatures, S_imp(T=0) → 0. Our results indicate that the very well known NCA should be used with caution close to the symmetric point of the Anderson model.
Detection of cracks in shafts with the Approximated Entropy algorithm
NASA Astrophysics Data System (ADS)
Sampaio, Diego Luchesi; Nicoletti, Rodrigo
2016-05-01
The Approximate Entropy is a statistical measure used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and of the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
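The parameters p and f above correspond to the template length and the tolerance of the standard Approximate Entropy definition, so a minimal reference implementation helps make their roles concrete. The sketch below (plain NumPy, synthetic signals, self-matches included as in the original definition) is purely illustrative and is not the authors' code.

```python
import numpy as np

def approximate_entropy(u, m=2, r_frac=0.2):
    """Approximate Entropy ApEn(m, r, N) of a 1-D signal u.
    m      : template length (analogous to the paper's parameter p).
    r_frac : tolerance as a fraction of the signal's standard deviation (the paper's f)."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    r = r_frac * np.std(u)

    def phi(m):
        # All overlapping templates of length m (N - m + 1 of them).
        x = np.array([u[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates; self-matches are included,
        # so every count is strictly positive and the logarithm is well defined.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# Regular signals give low ApEn, irregular ones high ApEn (illustrative check).
t = np.arange(1000)
rng = np.random.default_rng(1)
print(approximate_entropy(np.sin(0.1 * t)), approximate_entropy(rng.standard_normal(1000)))
```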
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1988-01-01
A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^{-1}_{loc}(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tadmor, E.
1988-07-01
A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^{-1}_{loc}(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln n! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
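As a quick numerical illustration of the point, the short script below compares ln n! (via the log-gamma function) with the two-term form n ln n − n and with the version that keeps the √(2πn) prefactor; the absolute error of the two-term form grows slowly with n even as its relative error shrinks, which is exactly the kind of behaviour that can trip up naive applications. The chosen values of n are arbitrary.

```python
import math

# Compare ln(n!) with two forms of Stirling's approximation.
for n in (5, 10, 50, 100):
    exact = math.lgamma(n + 1)                       # ln(n!) computed without overflow
    naive = n * math.log(n) - n                      # ln n! ~ n ln n - n
    full = naive + 0.5 * math.log(2 * math.pi * n)   # keeps the sqrt(2*pi*n) prefactor
    print(f"n={n:4d}  ln n!={exact:10.4f}  naive={naive:10.4f}  "
          f"with prefactor={full:10.4f}  naive error={exact - naive:7.4f}")
```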
Liu, Jian; Pedroza, Luana S; Misch, Carissa; Fernández-Serra, Maria V; Allen, Philip B
2014-07-09
We present total energy and force calculations for the (GaN)1-x(ZnO)x alloy. Site-occupancy configurations are generated from Monte Carlo (MC) simulations, on the basis of a cluster expansion model proposed in a previous study. Local atomic coordinate relaxations of surprisingly large magnitude are found via density-functional calculations using a 432-atom periodic supercell, for three representative configurations at x = 0.5. These are used to generate bond-length distributions. The configurationally averaged composition- and temperature-dependent short-range order (SRO) parameters of the alloys are discussed. The entropy is approximated in terms of pair distribution statistics and thus related to SRO parameters. This approximate entropy is compared with accurate numerical values from MC simulations. An empirical model for the dependence of the bond length on the local chemical environments is proposed.
Steepest entropy ascent for two-state systems with slowly varying Hamiltonians
NASA Astrophysics Data System (ADS)
Militello, Benedetto
2018-05-01
The steepest entropy ascent approach is considered and applied to two-state systems. When the Hamiltonian of the system is time-dependent, the principle of maximum entropy production can still be exploited; arguments to support this fact are given. In the limit of slowly varying Hamiltonians, which allows for the adiabatic approximation for the unitary part of the dynamics, the system exhibits significant robustness to the thermalization process. Specific examples such as a spin in a rotating field and a generic two-state system undergoing an avoided crossing are considered.
NASA Astrophysics Data System (ADS)
Virtanen, P.; Vischi, F.; Strambini, E.; Carrega, M.; Giazotto, F.
2017-12-01
We discuss the quasiparticle entropy and heat capacity of a dirty superconductor/normal metal/superconductor junction. In the case of short junctions, the inverse proximity effect extending in the superconducting banks plays a crucial role in determining the thermodynamic quantities. In this case, commonly used approximations can violate thermodynamic relations between supercurrent and quasiparticle entropy. We provide analytical and numerical results as a function of different geometrical parameters. Quantitative estimates for the heat capacity can be relevant for the design of caloritronic devices or radiation sensor applications.
Martínez-Zarzuela, Mario; Gómez, Carlos; Díaz-Pernas, Francisco Javier; Fernández, Alberto; Hornero, Roberto
2013-10-01
Cross-Approximate Entropy (Cross-ApEn) is a useful measure to quantify the statistical dissimilarity of two time series. In spite of the advantage of Cross-ApEn over its one-dimensional counterpart (Approximate Entropy), only a few studies have applied it to biomedical signals, mainly due to its high computational cost. In this paper, we propose a fast GPU-based implementation of the Cross-ApEn that makes its use over a large amount of multidimensional data feasible. The scheme followed is fully scalable and thus maximizes the use of the GPU regardless of the number of neural signals being processed. The approach consists in processing many trials or epochs simultaneously, independently of their origin. In the case of MEG data, these trials can come from different input channels or subjects. The proposed implementation achieves an average speedup greater than 250× against a CPU parallel version running on a processor containing six cores. A dataset of 30 subjects containing 148 MEG channels (49 epochs of 1024 samples per channel) can be analyzed using our development in about 30 min. The same processing takes 5 days on six cores and 15 days when running on a single core. The speedup is much larger if compared to a basic sequential Matlab(®) implementation, which would need 58 days per subject. To our knowledge, this is the first contribution of Cross-ApEn measure computation using GPUs. This study demonstrates that this hardware is, to date, the best option for the signal processing of biomedical data with Cross-ApEn. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
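For reference, a direct CPU definition of Cross-ApEn is short; the GPU work above accelerates exactly this kind of computation across many epochs. The sketch below standardizes both series, uses template length m and a tolerance expressed as a fraction of the (unit) standard deviation, and assumes every template finds at least one match; the coupled noisy sine waves are hypothetical stand-ins for a pair of MEG channels, not the paper's data.

```python
import numpy as np

def cross_apen(u, v, m=2, r_frac=0.2):
    """Cross-ApEn(m, r, N) of two equal-length series (v conditioned on u).
    Both series are standardized, so r_frac is the tolerance in units of the standard deviation.
    Assumes every u-template finds at least one v-template within r (otherwise log(0))."""
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)
    N = len(u)

    def phi(m):
        xu = np.array([u[i:i + m] for i in range(N - m + 1)])
        xv = np.array([v[j:j + m] for j in range(N - m + 1)])
        d = np.max(np.abs(xu[:, None, :] - xv[None, :, :]), axis=2)  # Chebyshev distances
        C = np.mean(d <= r_frac, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# Two coupled noisy sine waves as stand-ins for a pair of MEG channels (hypothetical data).
rng = np.random.default_rng(2)
t = np.arange(1024)
a = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)
b = np.sin(0.05 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
print(cross_apen(a, b))
```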
NASA Astrophysics Data System (ADS)
Pattabhiraman, Harini; Gantapara, Anjan P.; Dijkstra, Marjolein
2015-10-01
Using computer simulations, we study the phase behavior of a model system of colloidal hard disks with a diameter σ and a soft corona of width 1.4σ. The particles interact with a hard core and a repulsive square-shoulder potential. We calculate the free energy of the random-tiling quasicrystal and its crystalline approximants using the Frenkel-Ladd method. We explicitly account for the configurational entropy associated with the number of distinct configurations of the random-tiling quasicrystal. We map out the phase diagram and find that the random tiling dodecagonal quasicrystal is stabilised by entropy at finite temperatures with respect to the crystalline approximants that we considered, and its stability region seems to extend to zero temperature as the energies of the defect-free quasicrystal and the crystalline approximants are equal within our statistical accuracy.
Guan, Yue; Li, Weifeng; Jiang, Zhuoran; Chen, Ying; Liu, Song; He, Jian; Zhou, Zhengyang; Ge, Yun
2016-12-01
This study aimed to develop whole-lesion apparent diffusion coefficient (ADC)-based entropy-related parameters of cervical cancer to preliminarily assess intratumoral heterogeneity of this lesion in comparison to adjacent normal cervical tissues. A total of 51 women (mean age, 49 years) with cervical cancers confirmed by biopsy underwent 3-T pelvic diffusion-weighted magnetic resonance imaging with b values of 0 and 800 s/mm^2 prospectively. ADC-based entropy-related parameters, including first-order entropy and second-order entropies, were derived from the whole tumor volume as well as adjacent normal cervical tissues. Intraclass correlation coefficient, Wilcoxon test with Bonferroni correction, Kruskal-Wallis test, and receiver operating characteristic curve were used for statistical analysis. All the parameters showed excellent interobserver agreement (all intraclass correlation coefficients > 0.900). Entropy, entropy(H)_0, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean were significantly higher, whereas entropy(H)_range and entropy(H)_std were significantly lower in cervical cancers compared to adjacent normal cervical tissues (all P < .0001). The Kruskal-Wallis test showed that there were no significant differences among the values of the various second-order entropies, including entropy(H)_0, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean. All second-order entropies had a larger area under the receiver operating characteristic curve than first-order entropy in differentiating cervical cancers from adjacent normal cervical tissues. Further, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean had the same largest area under the receiver operating characteristic curve of 0.867. Whole-lesion ADC-based entropy-related parameters of cervical cancers were developed successfully, showing initial potential for characterizing intratumoral heterogeneity in comparison to adjacent normal cervical tissues. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
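The first-order entropy used here is simply the Shannon entropy of the histogram of voxel values in the whole-lesion volume; the second-order entropies additionally require gray-level co-occurrence matrices at the stated angles and are omitted from this sketch. The bin count and the fixed ADC range below are assumed analysis choices, and the ADC samples are hypothetical, intended only to show why a heterogeneous lesion scores higher.

```python
import numpy as np

def first_order_entropy(values, bins=64, value_range=(0.0, 3.0)):
    """First-order (histogram) Shannon entropy, in bits, of voxel values in a whole-lesion ROI.
    value_range fixes the histogram support (here 0-3 x10^-3 mm^2/s, an assumed normalization)."""
    hist, _ = np.histogram(np.asarray(values, float), bins=bins, range=value_range)
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical whole-lesion ADC samples: a homogeneous lesion concentrates in a few bins,
# a heterogeneous (bimodal) lesion spreads over many bins and gives higher entropy.
rng = np.random.default_rng(3)
homogeneous = rng.normal(1.0, 0.05, 5000)
heterogeneous = np.concatenate([rng.normal(0.8, 0.15, 2500), rng.normal(1.4, 0.2, 2500)])
print(first_order_entropy(homogeneous), first_order_entropy(heterogeneous))
```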
NASA Astrophysics Data System (ADS)
Liu, Yong; Shu, Chi-Wang; Zhang, Mengping
2018-02-01
We present a discontinuous Galerkin (DG) scheme with suitable quadrature rules [15] for the ideal compressible magnetohydrodynamic (MHD) equations on structured meshes. The semi-discrete scheme is shown to be entropy stable by using the symmetrizable version of the equations as introduced by Godunov [32], the entropy stable DG framework with suitable quadrature rules [15], the entropy conservative flux of [14] inside each cell, and an entropy dissipative approximate Godunov-type numerical flux at cell interfaces. The main difficulty in the generalization of the results in [15] is the appearance of the non-conservative "source terms" added in the modified MHD model introduced by Godunov [32], which do not exist in the general hyperbolic system studied in [15]. Special care must be taken to discretize these "source terms" adequately so that the resulting DG scheme satisfies entropy stability. Total variation diminishing / bounded (TVD/TVB) limiters and bound-preserving limiters are applied to control spurious oscillations. We demonstrate the accuracy and robustness of this new scheme on standard MHD examples.
Reduced nocturnal ACTH-driven cortisol secretion during critical illness
Boonen, Eva; Meersseman, Philippe; Vervenne, Hilke; Meyfroidt, Geert; Guïza, Fabian; Wouters, Pieter J.; Veldhuis, Johannes D.
2014-01-01
Recently, during critical illness, cortisol metabolism was found to be reduced. We hypothesize that such reduced cortisol breakdown may suppress pulsatile ACTH and cortisol secretion via feedback inhibition. To test this hypothesis, nocturnal ACTH and cortisol secretory profiles were constructed by deconvolution analysis from plasma concentration time series in 40 matched critically ill patients and eight healthy controls, excluding diseases or drugs that affect the hypothalamic-pituitary-adrenal axis. Blood was sampled every 10 min between 2100 and 0600 to quantify plasma concentrations of ACTH and (free) cortisol. Approximate entropy, an estimation of process irregularity, cross-approximate entropy, a measure of ACTH-cortisol asynchrony, and ACTH-cortisol dose-response relationships were calculated. Total and free plasma cortisol concentrations were higher at all times in patients than in controls (all P < 0.04). Pulsatile cortisol secretion was 54% lower in patients than in controls (P = 0.005), explained by reduced cortisol burst mass (P = 0.03), whereas cortisol pulse frequency (P = 0.35) and nonpulsatile cortisol secretion (P = 0.80) were unaltered. Pulsatile ACTH secretion was 31% lower in patients than in controls (P = 0.03), again explained by a lower ACTH burst mass (P = 0.02), whereas ACTH pulse frequency (P = 0.50) and nonpulsatile ACTH secretion (P = 0.80) were unchanged. ACTH-cortisol dose response estimates were similar in patients and controls. ACTH and cortisol approximate entropy were higher in patients (P ≤ 0.03), as was ACTH-cortisol cross-approximate entropy (P ≤ 0.001). We conclude that hypercortisolism during critical illness coincided with suppressed pulsatile ACTH and cortisol secretion and a normal ACTH-cortisol dose response. Increased irregularity and asynchrony of the ACTH and cortisol time series supported non-ACTH-dependent mechanisms driving hypercortisolism during critical illness. PMID:24569590
Entropy-based adaptive attitude estimation
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.
2018-03-01
Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.
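The relative entropy used for the adaptation can be made concrete for the Gaussian approximation filters in question: between two multivariate Gaussians it has the closed form implemented below. The sketch and its example covariances (an assumed innovation covariance versus one estimated over a sliding window) are illustrative values, not the paper's algorithm.

```python
import numpy as np

def gaussian_relative_entropy(mu0, cov0, mu1, cov1):
    """Relative entropy (KL divergence) D( N(mu0, cov0) || N(mu1, cov1) ) in nats."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    cov0, cov1 = np.asarray(cov0, float), np.asarray(cov1, float)
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - k + logdet1 - logdet0)

# Divergence between an empirical innovation covariance (from a sliding window) and the
# covariance assumed by the filter; the numbers are hypothetical.
assumed = np.diag([1e-4, 1e-4, 1e-4])
empirical = np.diag([4e-4, 1.5e-4, 1e-4])
print(gaussian_relative_entropy(np.zeros(3), empirical, np.zeros(3), assumed))
```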
Liu, Jian; Miller, William H
2008-09-28
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
How unitary cosmology generalizes thermodynamics and solves the inflationary entropy problem
NASA Astrophysics Data System (ADS)
Tegmark, Max
2012-06-01
We analyze cosmology assuming unitary quantum mechanics, using a tripartite partition into system, observer, and environment degrees of freedom. This generalizes the second law of thermodynamics to “The system’s entropy cannot decrease unless it interacts with the observer, and it cannot increase unless it interacts with the environment.” The former follows from the quantum Bayes theorem we derive. We show that because of the long-range entanglement created by cosmological inflation, the cosmic entropy decreases exponentially rather than linearly with the number of bits of information observed, so that a given observer can reduce entropy by much more than the amount of information her brain can store. Indeed, we argue that as long as inflation has occurred in a non-negligible fraction of the volume, almost all sentient observers will find themselves in a post-inflationary low-entropy Hubble volume, and we humans have no reason to be surprised that we do so as well, which solves the so-called inflationary entropy problem. An arguably worse problem for unitary cosmology involves gamma-ray-burst constraints on the “big snap,” a fourth cosmic doomsday scenario alongside the “big crunch,” “big chill,” and “big rip,” where an increasingly granular nature of expanding space modifies our life-supporting laws of physics. Our tripartite framework also clarifies when the popular quantum gravity approximation Gμν≈8πG⟨Tμν⟩ is valid, and how problems with recent attempts to explain dark energy as gravitational backreaction from superhorizon scale fluctuations can be understood as a failure of this approximation.
Scaling of the entropy budget with surface temperature in radiative-convective equilibrium
NASA Astrophysics Data System (ADS)
Singh, Martin S.; O'Gorman, Paul A.
2016-09-01
The entropy budget of the atmosphere is examined in simulations of radiative-convective equilibrium with a cloud-system resolving model over a wide range of surface temperatures from 281 to 311 K. Irreversible phase changes and the diffusion of water vapor account for more than half of the irreversible entropy production within the atmosphere, even in the coldest simulation. As the surface temperature is increased, the atmospheric radiative cooling rate increases, driving a greater entropy sink that must be matched by greater irreversible entropy production. The entropy production resulting from irreversible moist processes increases at a similar fractional rate as the entropy sink and at a lower rate than that implied by Clausius-Clapeyron scaling. This allows the entropy production from frictional drag on hydrometeors and on the atmospheric flow to also increase with warming, in contrast to recent results for simulations with global climate models in which the work output decreases with warming. A set of approximate scaling relations is introduced for the terms in the entropy budget as the surface temperature is varied, and many of the terms are found to scale with the mean surface precipitation rate. The entropy budget provides some insight into changes in frictional dissipation in response to warming or changes in model resolution, but it is argued that frictional dissipation is not closely linked to other measures of convective vigor.
On the sufficiency of pairwise interactions in maximum entropy models of networks
NASA Astrophysics Data System (ADS)
Nemenman, Ilya; Merchan, Lina
Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of some densely interacting networks in certain regimes, and not necessarily as a special property of living systems. This work was supported in part by James S. McDonnell Foundation Grant No. 220020321.
Large deviation analysis of a simple information engine
NASA Astrophysics Data System (ADS)
Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.
2015-11-01
Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehesa, J.S.; Martinez-Finkelshtein, A.; Sorokin, V.N.
The asymptotics of the Boltzmann-Shannon information entropy as well as the Renyi entropy for the quantum probability density of a single-particle system with a confining (i.e., bounded below) power-type potential V(x) = x^{2k}, with k ∈ N and x ∈ R, is investigated in the position and momentum spaces within the semiclassical (WKB) approximation. It is found that for highly excited states both physical entropies, as well as their sum, have a logarithmic dependence on the quantum number not only when k=1 (harmonic oscillator), but also for any fixed k. As a by-product, the extremal case k → ∞ (the infinite well potential) is also rigorously analyzed. It is shown that not only the position-space entropy has the same constant value for all quantum states, which is a known result, but also that the momentum-space entropy is constant for highly excited states.
Stokes-Einstein relation and excess entropy in Al-rich Al-Cu melts
NASA Astrophysics Data System (ADS)
Pasturel, A.; Jakse, N.
2016-07-01
We investigate the conditions for the validity of the Stokes-Einstein relation that connects diffusivity to viscosity in melts using entropy-scaling relationships developed by Rosenfeld. Employing ab initio molecular dynamics simulations to determine transport and structural properties of liquid Al1-xCux alloys (with composition x ≤ 0.4), we first show that reduced self-diffusion coefficients and viscosities, according to Rosenfeld's formulation, scale with the two-body approximation of the excess entropy except the reduced viscosity for x = 0.4. Then, we use our findings to evidence that the Stokes-Einstein relation using effective atomic radii is not valid in these alloys while its validity can be related to the temperature dependence of the partial pair-excess entropies of both components. Finally, we derive a relation between the ratio of the self-diffusivities of the components and the ratio of their pair excess entropies.
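Rosenfeld's scaling uses the two-body approximation of the excess entropy, s2/k_B = −2πρ ∫ [g(r) ln g(r) − g(r) + 1] r² dr, which is straightforward to evaluate from a pair correlation function. The sketch below uses a schematic damped-oscillation model of g(r) rather than the paper's ab initio data, so the number it prints is purely illustrative.

```python
import numpy as np

def pair_excess_entropy(r, g, rho):
    """Two-body approximation of the excess entropy per particle, in units of k_B:
       s2/k_B = -2*pi*rho * Integral [ g ln g - g + 1 ] r^2 dr."""
    g = np.asarray(g, float)
    # Where g = 0 the integrand reduces to r^2 (the g*ln(g) term vanishes in the limit).
    integrand = np.where(g > 0, g * np.log(g) - g + 1.0, 1.0) * r**2
    dr = np.diff(r)
    return -2.0 * np.pi * rho * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dr)

# Schematic liquid-like g(r): hard-core exclusion plus damped oscillations (not ab initio data).
r = np.linspace(0.0, 15.0, 3000)          # angstrom
sigma = 2.6                               # assumed effective core diameter
g = np.where(r > sigma,
             1.0 + (sigma / r) * np.exp(-(r - sigma) / 2.0) * np.cos(2.3 * (r - sigma)),
             0.0)
print(pair_excess_entropy(r, g, rho=0.05))   # rho is the number density in angstrom^-3
```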
Yao, Hongwei; Qiao, Jun-Wei; Gao, Michael; ...
2016-05-19
Guided by CALPHAD (Calculation of Phase Diagrams) modeling, the refractory medium-entropy alloy MoNbTaV was synthesized by vacuum arc melting under a high-purity argon atmosphere. A body-centered cubic solid solution phase was experimentally confirmed in the as-cast ingot using X-ray diffraction and scanning electron microscopy. The measured lattice parameter of the alloy (3.208 Å) obeys the rule of mixtures (ROM), but the Vickers microhardness (4.95 GPa) and the yield strength (1.5 GPa) are about 4.5 and 4.6 times those estimated from the ROM, respectively. Using a simple model of solid solution strengthening predicts a yield strength of approximately 1.5 GPa. In conclusion, thermodynamic analysis shows that the total entropy of the alloy is more than three times the configurational entropy at room temperature, and the entropy of mixing exhibits a small negative departure from ideal mixing.
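The two quantities invoked here, the ideal configurational entropy and the rule of mixtures, are both one-liners. The sketch below computes S_conf = −R Σ x_i ln x_i for the equimolar alloy (R ln 4 ≈ 11.5 J mol⁻¹ K⁻¹) and a ROM estimate of the lattice parameter from approximate literature values for the bcc elements; those elemental values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

def ideal_mixing_entropy(x):
    """Ideal configurational entropy of mixing, S_conf = -R * sum x_i ln x_i."""
    x = np.asarray(x, float)
    return -R * np.sum(x * np.log(x))

def rule_of_mixtures(x, prop):
    """Simple rule-of-mixtures (composition-weighted average) estimate of a property."""
    return float(np.dot(x, prop))

x = np.full(4, 0.25)                      # equimolar MoNbTaV
print(ideal_mixing_entropy(x))            # R*ln(4) ~ 11.5 J/(mol K)

# ROM check of the lattice parameter using approximate literature values for the bcc
# elements Mo, Nb, Ta, V (angstrom); these inputs are illustrative, not from the paper.
a_elements = np.array([3.15, 3.30, 3.30, 3.02])
print(rule_of_mixtures(x, a_elements))    # ~3.19 angstrom, close to the measured 3.208
```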
Jiang, Xiaoying; Wei, Rong; Zhao, Yanjun; Zhang, Tongliang
2008-05-01
Knowledge of subnuclear localization in eukaryotic cells is essential for understanding the life function of the nucleus, and developing prediction methods and tools for protein subnuclear localization has become an important research field in protein science because of the special characteristics of the cell nucleus. In this study, a novel approach is proposed to predict protein subnuclear localization. Each protein sample is represented by a Pseudo Amino Acid (PseAA) composition based on the approximate entropy (ApEn) concept, which reflects the complexity of a time series. A novel ensemble classifier is designed incorporating three AdaBoost classifiers. The base classifiers of the three AdaBoost learners are decision stumps, a fuzzy K-nearest-neighbors classifier, and radial-basis-function support vector machines, respectively. Different PseAA compositions are used as inputs to the different AdaBoost classifiers in the ensemble. A genetic algorithm is used to optimize the dimension and weight factor of the PseAA composition. Two datasets often used in published works are used to validate the performance of the proposed approach. The jackknife cross-validation results are higher and more balanced than those of other methods on the same datasets. These promising results indicate that the proposed approach is effective and practical, and it might become a useful tool for protein subnuclear localization. The software in Matlab and supplementary materials are available freely by contacting the corresponding author.
Thermodynamic characterization of networks using graph polynomials
NASA Astrophysics Data System (ADS)
Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.
2015-09-01
In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
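The construction described, a Boltzmann partition function built from the normalized Laplacian from which average energy and entropy follow, can be sketched directly for a small graph. The snippet below uses one plausible convention (eigenvalues of the normalized Laplacian as energy levels, k_B = 1); the paper's Taylor-series/trace approximations would avoid the explicit diagonalization used here, and the cycle-graph example is arbitrary.

```python
import numpy as np

def network_thermodynamics(A, beta=1.0):
    """Average energy, entropy, and partition function of a graph, treating the eigenvalues
    of its normalized Laplacian as energy levels of a Boltzmann ensemble (k_B = 1).
    This is one convention consistent with the paper's description; details may differ."""
    A = np.asarray(A, float)
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt      # normalized Laplacian
    lam = np.linalg.eigvalsh(L)
    w = np.exp(-beta * lam)
    Z = w.sum()                                            # partition function
    U = float((lam * w).sum() / Z)                         # average energy
    S = float(np.log(Z) + beta * U)                        # entropy
    return U, S, Z

# Small example: a 6-node cycle graph.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
print(network_thermodynamics(A))
```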
High resolution schemes and the entropy condition
NASA Technical Reports Server (NTRS)
Osher, S.; Chakravarthy, S.
1983-01-01
A systematic procedure for constructing semidiscrete, second order accurate, variation diminishing, five point band width, approximations to scalar conservation laws, is presented. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, convergence is proven to be the unique physically correct solution. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first order accurate scheme, and show (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy ( q-entropy) and Kaniadakis entropy ( κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
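Since the abstract's examples are the Tsallis q-entropy and the Kaniadakis κ-entropy, a short sketch of both (with the Shannon limit recovered as q → 1 and κ → 0) makes the family of entropy functions concrete; the probability vector used is arbitrary.

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis_entropy(p, q):
    """Tsallis q-entropy; recovers the Shannon entropy (in nats) in the limit q -> 1."""
    p = np.asarray(p, float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def kaniadakis_entropy(p, kappa):
    """Kaniadakis kappa-entropy; recovers the Shannon entropy in the limit kappa -> 0."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * (p ** kappa - p ** (-kappa)) / (2.0 * kappa))

p = np.array([0.5, 0.25, 0.125, 0.125])
print(shannon_entropy(p), tsallis_entropy(p, q=1.001), kaniadakis_entropy(p, kappa=0.001))
# The last two values are close to the Shannon value, as expected near the limiting parameters.
```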
Time dependence of Hawking radiation entropy
NASA Astrophysics Data System (ADS)
Page, Don N.
2013-09-01
If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM_0^2, or about 7.509 M_0^2 ≈ 6.268 × 10^76 (M_0/M_solar)^2, using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018 M_0^2 ≈ 1.254 × 10^77 (M_0/M_solar)^2, and then decreases back down to 4πM_0^2 = 1.049 × 10^77 (M_0/M_solar)^2.
Holographic entanglement entropy in Suzuki-Trotter decomposition of spin systems.
Matsueda, Hiroaki
2012-03-01
In quantum spin chains at criticality, two types of scaling for the entanglement entropy exist: one comes from conformal field theory (CFT), and the other is for entanglement support of the matrix product state (MPS) approximation. On the other hand, quantum spin-chain models can be mapped onto two-dimensional (2D) classical ones by the Suzuki-Trotter decomposition. Motivated by the scaling and the mapping, we introduce an information entropy for 2D classical spin configurations, as well as a spectrum, and examine their basic properties in the Ising and the three-state Potts models on the square lattice. They are defined by the singular values of the reduced density matrix for a Monte Carlo snapshot. We find scaling relations of the entropy compatible with the CFT and the MPS results. Thus, we propose that the entropy is a kind of "holographic" entanglement entropy. At T_c, the spin configuration is fractal, and various sizes of ordered clusters coexist. Then, the singular values automatically decompose the original snapshot into a set of images with different length scales, respectively. This is the origin of the scaling. In contrast to the MPS scaling, long-range spin correlation can be described by only a few singular values. Furthermore, the spectrum, which is a set of logarithms of the singular values, also seems to be a holographic entanglement spectrum. We find multiple gaps in the spectrum, and in contrast to the topological phases, the low-lying levels below the gap represent spontaneous symmetry breaking. These contrasts are strong evidence of the dual nature of the holography. Based on these observations, we discuss the amount of information contained in one snapshot.
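A minimal version of the snapshot construction is easy to reproduce: take the singular values of a 2D spin configuration, normalize their squares, and compute the Shannon entropy of that spectrum. The sketch below is one plausible reading of the definition (the paper works with reduced density matrices of Monte Carlo snapshots at criticality; here two trivial synthetic snapshots simply bracket the ordered and disordered limits).

```python
import numpy as np

def snapshot_entropy(snapshot):
    """Entanglement-like entropy of a 2-D spin snapshot: Shannon entropy of the normalized
    squared singular values (one plausible reading of the construction in the paper)."""
    s = np.linalg.svd(np.asarray(snapshot, float), compute_uv=False)
    lam = s**2 / np.sum(s**2)
    lam = lam[lam > 0]
    return -np.sum(lam * np.log(lam))

rng = np.random.default_rng(4)
L = 64
ordered = np.ones((L, L))                              # fully ordered: a single singular value
disordered = rng.choice([-1.0, 1.0], size=(L, L))      # random snapshot: many singular values
print(snapshot_entropy(ordered), snapshot_entropy(disordered))   # ~0 versus a large value
```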
An entropy maximization problem related to optical communication
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Swanson, L.
1986-01-01
In relation to a problem in optical communication, the paper considers the general problem of maximizing the entropy of a stationary random process that is subject to an average transition cost constraint. By using a recent result of Justesen and Hoholdt, an exact solution to the problem is presented, and a class of finite state encoders that give a good approximation to the exact solution is suggested.
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches, which we combine through joint entropy for nonlinear dynamic complexity analysis. Two global static methods (the symbolic transformations of Wessel's symbolic entropy and of base-scale entropy) and two local dynamic ones (the symbolizations of permutation entropy and of differential entropy) yield four double symbolic joint entropies, all of which detect complexity accurately in the chaotic logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, the heartbeat series of healthy young subjects have higher complexity than those of healthy elderly subjects, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by the double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results show that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
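The core construction, symbolize a series once with a global static scheme and once with a local dynamic scheme, then take the Shannon joint entropy of the paired symbols, can be sketched in a few lines. The version below pairs ordinal (permutation) symbols with quantile-bin symbols as stand-ins for the four symbolizations studied in the paper; the embedding length, bin count, and test signals are assumptions for illustration.

```python
import numpy as np
from collections import Counter
from itertools import permutations

def permutation_symbols(x, m=3):
    """Local dynamic symbolization: ordinal-pattern index of each length-m window."""
    patterns = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([patterns[tuple(int(v) for v in np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])

def static_symbols(x, m=3, n_bins=4):
    """Global static symbolization: quantile bin of the sample starting each window."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x[: len(x) - m + 1], edges)

def joint_entropy(a, b):
    """Shannon joint entropy (nats) of two aligned symbol sequences."""
    counts = Counter(zip(a.tolist(), b.tolist()))
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(5)
t = np.arange(2000)
regular = np.sin(0.1 * t)                 # low-complexity signal
irregular = rng.standard_normal(2000)     # high-complexity signal
for x in (regular, irregular):
    print(joint_entropy(permutation_symbols(x), static_symbols(x)))
```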
Solti, Imre; Cooke, Colin R; Xia, Fei; Wurfel, Mark M
2009-11-01
This paper compares the performance of keyword and machine learning-based chest x-ray report classification for Acute Lung Injury (ALI). ALI mortality is approximately 30 percent. High mortality is, in part, a consequence of delayed manual chest x-ray classification. An automated system could reduce the time to recognize ALI and lead to reductions in mortality. For our study, 96 and 857 chest x-ray reports in two corpora were labeled by domain experts for ALI. We developed a keyword and a Maximum Entropy-based classification system. Word unigram and character n-grams provided the features for the machine learning system. The Maximum Entropy algorithm with character 6-gram achieved the highest performance (Recall=0.91, Precision=0.90 and F-measure=0.91) on the 857-report corpus. This study has shown that for the classification of ALI chest x-ray reports, the machine learning approach is superior to the keyword based system and achieves comparable results to highest performing physician annotators.
Solti, Imre; Cooke, Colin R.; Xia, Fei; Wurfel, Mark M.
2010-01-01
This paper compares the performance of keyword and machine learning-based chest x-ray report classification for Acute Lung Injury (ALI). ALI mortality is approximately 30 percent. High mortality is, in part, a consequence of delayed manual chest x-ray classification. An automated system could reduce the time to recognize ALI and lead to reductions in mortality. For our study, 96 and 857 chest x-ray reports in two corpora were labeled by domain experts for ALI. We developed a keyword and a Maximum Entropy-based classification system. Word unigram and character n-grams provided the features for the machine learning system. The Maximum Entropy algorithm with character 6-gram achieved the highest performance (Recall=0.91, Precision=0.90 and F-measure=0.91) on the 857-report corpus. This study has shown that for the classification of ALI chest x-ray reports, the machine learning approach is superior to the keyword based system and achieves comparable results to highest performing physician annotators. PMID:21152268
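A maximum entropy classifier over character n-gram features corresponds, in practice, to multinomial logistic regression on n-gram counts. The sketch below shows that pipeline with scikit-learn; the four toy "reports" and labels are invented placeholders rather than the study's corpora, and a real system would be trained on hundreds of expert-labeled reports.

```python
# Minimal sketch of a maximum-entropy (logistic regression) chest x-ray report classifier
# with character 6-gram features; the data below are hypothetical placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "bilateral diffuse airspace opacities consistent with pulmonary edema",
    "clear lungs, no focal consolidation or effusion",
    "patchy bilateral infiltrates, findings may represent ALI/ARDS",
    "no acute cardiopulmonary abnormality",
]
labels = [1, 0, 1, 0]   # 1 = report suggestive of ALI, 0 = not

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(6, 6), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(reports, labels)
print(clf.predict(["diffuse bilateral opacities suggestive of ALI"]))
```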
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
Namazi, Hamidreza; Akrami, Amin; Nazeri, Sina; Kulish, Vladimir V
2016-01-01
An important challenge in brain research is to determine the relation between the features of olfactory stimuli and the electroencephalogram (EEG) signal. Yet, no relation between the structures of olfactory stimuli and the EEG signal has been established. This study investigates the relation between the structure of the EEG signal and the olfactory stimulus (odorant). We show that the complexity of the EEG signal is coupled with the molecular complexity of the odorant: a more structurally complex odorant produces a less fractal EEG signal. Also, an odorant with higher entropy causes the EEG signal to have lower approximate entropy. The method discussed here can be applied and investigated in patients with brain diseases for rehabilitation purposes.
Akrami, Amin; Nazeri, Sina
2016-01-01
An important challenge in brain research is to determine the relation between the features of olfactory stimuli and the electroencephalogram (EEG) signal. Yet, no relation between the structures of olfactory stimuli and the EEG signal has been established. This study investigates the relation between the structure of the EEG signal and the olfactory stimulus (odorant). We show that the complexity of the EEG signal is coupled with the molecular complexity of the odorant: a more structurally complex odorant produces a less fractal EEG signal. Also, an odorant with higher entropy causes the EEG signal to have lower approximate entropy. The method discussed here can be applied and investigated in patients with brain diseases for rehabilitation purposes. PMID:27699169
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wintermeyer, Niklas; Winters, Andrew R., E-mail: awinters@math.uni-koeln.de; Gassner, Gregor J.
We design an arbitrary high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme for a partial break of a curved dam test problem.
Anomalous Thermodynamics at the Microscale
NASA Astrophysics Data System (ADS)
Celani, Antonio; Bo, Stefano; Eichhorn, Ralf; Aurell, Erik
2012-12-01
Particle motion at the microscale is an incessant tug-of-war between thermal fluctuations and applied forces on one side and the strong resistance exerted by fluid viscosity on the other. Friction is so strong that completely neglecting inertia—the overdamped approximation—gives an excellent effective description of the actual particle mechanics. In sharp contrast to this result, here we show that the overdamped approximation dramatically fails when thermodynamic quantities such as the entropy production in the environment are considered, in the presence of temperature gradients. In the limit of vanishingly small, yet finite, inertia, we find that the entropy production is dominated by a contribution that is anomalous, i.e., has no counterpart in the overdamped approximation. This phenomenon, which we call an entropic anomaly, is due to a symmetry breaking that occurs when moving to the small, finite inertia limit. Anomalous entropy production is traced back to futile phase-space cyclic trajectories displaying a fast downgradient sweep followed by a slow upgradient return to the original position.
Statistical mechanics of letters in words
Stephens, Greg J.; Bialek, William
2013-01-01
We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ~92% of the multi-information in four-letter words and even “discovering” words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ~68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon. PMID:20866490
Mixing and electronic entropy contributions to thermal energy storage in low melting point alloys
NASA Astrophysics Data System (ADS)
Shamberger, Patrick J.; Mizuno, Yasushi; Talapatra, Anjana A.
2017-07-01
Melting of crystalline solids is associated with an increase in entropy due to an increase in configurational, rotational, and other degrees of freedom of a system. However, the magnitude of chemical mixing and electronic degrees of freedom, two significant contributions to the entropy of fusion, remain poorly constrained, even in simple 2 and 3 component systems. Here, we present experimentally measured entropies of fusion in the Sn-Pb-Bi and In-Sn-Bi ternary systems, and decouple mixing and electronic contributions. We demonstrate that electronic effects remain the dominant contribution to the entropy of fusion in multi-component post-transition metal and metalloid systems, and that excess entropy of mixing terms can be equal in magnitude to ideal mixing terms, causing regular solution approximations to be inadequate in the general case. Finally, we explore binary eutectic systems using mature thermodynamic databases, identifying eutectics containing at least one semiconducting intermetallic phase as promising candidates to exceed the entropy of fusion of monatomic endmembers, while simultaneously maintaining low melting points. These results have significant implications for engineering high-thermal conductivity metallic phase change materials to store thermal energy.
A Critical Look at Entropy-Based Gene-Gene Interaction Measures.
Lee, Woojoo; Sjölander, Arvid; Pawitan, Yudi
2016-07-01
Several entropy-based measures for detecting gene-gene interaction have been proposed recently. It has been argued that entropy-based measures are preferable because entropy can better capture the nonlinear relationships between genotypes and traits, so they could be useful for detecting gene-gene interactions in complex diseases. These suggested measures look reasonable at an intuitive level, but so far there has been no detailed characterization of the interactions captured by them. Here we study analytically, and in detail, the properties of some entropy-based measures for detecting gene-gene interactions. The relationship between interactions captured by the entropy-based measures and those of logistic regression models is clarified. In general, we find that the entropy-based measures can suffer from a lack of specificity in terms of target parameters, i.e., they can detect uninteresting signals as interactions. Numerical studies are carried out to confirm the theoretical findings. © 2016 WILEY PERIODICALS, INC.
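To make concrete what such an entropy-based statistic computes, the sketch below implements one commonly used variant, the information gain IG = I(G1,G2;Y) − I(G1;Y) − I(G2;Y), from empirical counts. The XOR-style binary genotypes are hypothetical and chosen so that the pairwise signal is purely interactive; the measure is shown only for illustration, since the paper's point is precisely that such measures need careful interpretation.

```python
import numpy as np
from collections import Counter

def entropy_of(columns):
    """Shannon entropy (nats) of the joint empirical distribution of one or more columns."""
    counts = Counter(zip(*columns))
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

def mutual_information(x, y):
    return entropy_of([x]) + entropy_of([y]) - entropy_of([x, y])

def interaction_gain(g1, g2, y):
    """One common entropy-based gene-gene 'interaction' measure:
       IG = I(G1,G2; Y) - I(G1; Y) - I(G2; Y)."""
    joint = list(zip(g1, g2))
    return mutual_information(joint, y) - mutual_information(g1, y) - mutual_information(g2, y)

# Hypothetical binary genotypes and a trait with a purely XOR-like dependence:
# marginally neither gene is informative, but the interaction gain is close to ln(2).
rng = np.random.default_rng(6)
g1 = rng.integers(0, 2, 5000)
g2 = rng.integers(0, 2, 5000)
y = g1 ^ g2
print(interaction_gain(g1.tolist(), g2.tolist(), y.tolist()))
```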
On the entanglement entropy of quantum fields in causal sets
NASA Astrophysics Data System (ADS)
Belenchia, Alessio; Benincasa, Dionigi M. T.; Letizia, Marco; Liberati, Stefano
2018-04-01
In order to understand the detailed mechanism by which a fundamental discreteness can provide a finite entanglement entropy, we consider the entanglement entropy of two classes of free massless scalar fields on causal sets that are well approximated by causal diamonds in Minkowski spacetime of dimensions 2, 3 and 4. The first class is defined from discretised versions of the continuum retarded Green functions, while the second uses the causal set's retarded nonlocal d'Alembertians parametrised by a length scale l_k. In both cases we provide numerical evidence that the area law is recovered when the double-cutoff prescription proposed in Sorkin and Yazdi (2016 Entanglement entropy in causal set theory (arXiv:1611.10281)) is imposed. We discuss in detail the need for this double cutoff by studying the effect of the two cutoffs on the quantum field and, in particular, on the entanglement entropy, in isolation. In so doing, we obtain a novel interpretation of why these two cutoffs are necessary, and of the different roles they play in making the entanglement entropy on causal sets finite.
Brain Entropy Mapping Using fMRI
Wang, Ze; Li, Yin; Childress, Anna Rose; Detre, John A.
2014-01-01
Entropy is an important trait for life as well as the human brain. Characterizing brain entropy (BEN) may provide an informative tool to assess brain states and brain functions. Yet little is known about the distribution and regional organization of BEN in normal brain. The purpose of this study was to examine the whole brain entropy patterns using a large cohort of normal subjects. A series of experiments were first performed to validate an approximate entropy measure regarding its sensitivity, specificity, and reliability using synthetic data and fMRI data. Resting state fMRI data from a large cohort of normal subjects (n = 1049) from multi-sites were then used to derive a 3-dimensional BEN map, showing a sharp low-high entropy contrast between the neocortex and the rest of brain. The spatial heterogeneity of resting BEN was further studied using a data-driven clustering method, and the entire brain was found to be organized into 7 hierarchical regional BEN networks that are consistent with known structural and functional brain parcellations. These findings suggest BEN mapping as a physiologically and functionally meaningful measure for studying brain functions. PMID:24657999
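For concreteness, a minimal sketch of the approximate entropy calculation that underlies such voxel-wise BEN estimates (the generic Pincus definition; the embedding dimension m and tolerance r defaults below are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D series, generic Pincus form."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # common heuristic tolerance

    def phi(mm):
        # all templates of length mm
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# a regular signal yields lower ApEn than white noise
rng = np.random.default_rng(0)
print(approximate_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))
print(approximate_entropy(rng.standard_normal(500)))
```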
A positive and entropy-satisfying finite volume scheme for the Baer-Nunziato model
NASA Astrophysics Data System (ADS)
Coquel, Frédéric; Hérard, Jean-Marc; Saleh, Khaled
2017-02-01
We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer-Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer-Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer-Nunziato model, namely Schwendeman-Wahle-Kapila's Godunov-type scheme [39] and Tokareva-Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.
A positive and entropy-satisfying finite volume scheme for the Baer–Nunziato model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coquel, Frédéric, E-mail: frederic.coquel@cmap.polytechnique.fr; Hérard, Jean-Marc, E-mail: jean-marc.herard@edf.fr; Saleh, Khaled, E-mail: saleh@math.univ-lyon1.fr
We present a relaxation scheme for approximating the entropy dissipating weak solutions of the Baer–Nunziato two-phase flow model. This relaxation scheme is straightforwardly obtained as an extension of the relaxation scheme designed in [16] for the isentropic Baer–Nunziato model and consequently inherits its main properties. To our knowledge, this is the only existing scheme for which the approximated phase fractions, phase densities and phase internal energies are proven to remain positive without any restrictive condition other than a classical fully computable CFL condition. For ideal gas and stiffened gas equations of state, real values of the phasic speeds of sound are also proven to be maintained by the numerical scheme. It is also the only scheme for which a discrete entropy inequality is proven, under a CFL condition derived from the natural sub-characteristic condition associated with the relaxation approximation. This last property, which ensures the non-linear stability of the numerical method, is satisfied for any admissible equation of state. We provide a numerical study for the convergence of the approximate solutions towards some exact Riemann solutions. The numerical simulations show that the relaxation scheme compares well with two of the most popular existing schemes available for the Baer–Nunziato model, namely Schwendeman–Wahle–Kapila's Godunov-type scheme [39] and Tokareva–Toro's HLLC scheme [44]. The relaxation scheme also shows a higher precision and a lower computational cost (for comparable accuracy) than a standard numerical scheme used in the nuclear industry, namely Rusanov's scheme. Finally, we assess the good behavior of the scheme when approximating vanishing phase solutions.
Noise and complexity in human postural control: interpreting the different estimations of entropy.
Rhea, Christopher K; Silver, Tobin A; Hong, S Lee; Ryu, Joong Hyun; Studenka, Breanna E; Hughes, Charmayne M L; Haddad, Jeffrey M
2011-03-17
Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses.
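A compact sketch of sample entropy next to ApEn helps make the comparison above concrete; the defining difference is that SampEn excludes self-matches, which reduces the bias that makes ApEn sensitive to record length (parameter defaults below are assumptions):

```python
import numpy as np

def _match_pairs(x, m, r, n_templates):
    # number of template pairs of length m within tolerance r (self-matches excluded)
    templates = np.array([x[i:i + m] for i in range(n_templates)])
    dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
    return (np.sum(dist <= r) - n_templates) / 2.0

def sample_entropy(x, m=2, r=None):
    """Sample entropy (Richman & Moorman), SampEn = -ln(A/B)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)
    n_templates = n - m                          # same template count for both lengths
    b = _match_pairs(x, m, r, n_templates)       # matched pairs of length m
    a = _match_pairs(x, m + 1, r, n_templates)   # matched pairs of length m + 1
    return -np.log(a / b)
```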
Time dependence of Hawking radiation entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Page, Don N., E-mail: profdonpage@gmail.com
2013-09-01
If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM_0^2, or about 7.509 M_0^2 ≈ 6.268 × 10^76 (M_0/M_sun)^2, using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018 M_0^2 ≈ 1.254 × 10^77 (M_0/M_sun)^2, and then decreases back down to 4πM_0^2 = 1.049 × 10^77 (M_0/M_sun)^2.
Meirovitch, Hagai
2010-01-01
The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability P_i^B, while the value of P_i^B is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ≈ -ln P_i^B, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method, which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact, and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently, HSMD is being extended for calculating the absolute and relative free energies of ligand-enzyme binding. We describe the whole approach and discuss future directions.
Fractal Based Analysis of the Influence of Odorants on Heart Activity
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.
2016-12-01
An important challenge in heart research is to relate the features of external stimuli to heart activity. Olfactory stimulation is an important type of stimulation that affects heart activity, which is reflected in the electrocardiogram (ECG) signal. Yet no relation between the structure of olfactory stimuli and the ECG signal has been established. This study investigates the relation between the structure of the heart rate and the olfactory stimulus (odorant). We show that the complexity of the heart rate is coupled with the molecular complexity of the odorant: a more structurally complex odorant produces a less fractal heart rate, and an odorant with higher entropy produces a heart rate with lower approximate entropy. The method discussed here can also be applied and investigated in patients with heart disease for rehabilitation purposes.
NASA Astrophysics Data System (ADS)
Sher Akbar, Noreen; Wahid Butt, Adil
2017-05-01
The study of heat transfer is of significant importance in many biological and biomedical industry problems. This investigation comprises the study of entropy generation in blood flow through arteries with permeable walls. Convection through the flow is studied along with the entropy generation. The governing problem is formulated and solved under the low-Reynolds-number and long-wavelength approximations. Exact analytical solutions have been obtained and are analyzed graphically. It is seen that the temperature for pure water is lower than that for copper-water nanofluid, and that the temperature increases with the slip parameter.
Configurational entropy and ρ and ϕ mesons production in QCD
NASA Astrophysics Data System (ADS)
Karapetyan, G.
2018-06-01
In the present work, the electroproduction of diffractive ρ and ϕ mesons is studied within the AdS/QCD correspondence and the Color Glass Condensate (CGC) approximation, through the associated dipole cross section, whose parameters are analysed in the framework of the configurational entropy. Our results suggest different quantum states of nuclear matter, showing that the extremal points of the nuclear configurational entropy are able to provide a faithful description of ρ and ϕ meson production, using current data on light quark masses. The parameters obtained in the fitting procedure agree with the experimental values to within ∼0.1%.
Quantile based Tsallis entropy in residual lifetime
NASA Astrophysics Data System (ADS)
Khammar, A. H.; Jahanshahi, S. M. A.
2018-02-01
Tsallis entropy is a one-parameter generalization of the Shannon entropy of order α that, unlike the Shannon entropy, is nonadditive. The Shannon entropy may be negative for some distributions, but the Tsallis entropy can always be made nonnegative by choosing an appropriate value of α. In this paper, we derive the quantile form of this nonadditive entropy in the residual lifetime, namely the residual quantile Tsallis entropy (RQTE), and obtain bounds for it in terms of the Rényi residual quantile entropy. We also obtain a relationship between the RQTE and the proportional hazards model in the quantile setup. Based on the new measure, we propose a stochastic order and aging classes, and study their properties. Finally, we prove characterization theorems for some well-known lifetime distributions. It is shown that the RQTE uniquely determines the parent distribution, unlike the residual Tsallis entropy.
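For reference, the continuous Tsallis entropy of order α and its residual version, from which the quantile form is obtained, read (standard definitions; the notation here is ours):

```latex
S_\alpha(X) = \frac{1}{\alpha-1}\Bigl(1-\int_0^\infty f^{\alpha}(x)\,dx\Bigr),
\qquad
S_\alpha(X;t) = \frac{1}{\alpha-1}\Bigl(1-\int_t^\infty \Bigl(\frac{f(x)}{\bar F(t)}\Bigr)^{\alpha} dx\Bigr),
\qquad \alpha>0,\ \alpha\neq 1 .
```

The Shannon case is recovered as α → 1, and the quantile form follows from the substitution x = Q(u), for which f(Q(u)) = 1/q(u) with q = Q' the quantile density.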
Design of two-dimensional zero reference codes with cross-entropy method.
Chen, Jung-Chieh; Wen, Chao-Kai
2010-06-20
We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs), used to generate a zero reference signal for a grating measurement system and thus establish an absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combinatorial optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desired zero reference signal. Computer simulation results indicate reductions of 15.38% and 14.29% in the second maximum value for the 16 × 16 grating system with n_1 = 64 and the 100 × 100 grating system with n_1 = 300, respectively, where n_1 is the number of transparent pixels, compared with those of the conventional genetic algorithm.
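A generic sketch of the cross-entropy method for binary design vectors is shown below; the objective function, sample sizes and smoothing constant are illustrative placeholders, and the constraint on the number of transparent pixels n_1 used in the actual ZRC problem is omitted:

```python
import numpy as np

def cross_entropy_binary(score, n_bits, n_samples=200, elite_frac=0.1,
                         n_iter=50, smooth=0.7, rng=None):
    """Cross-entropy method for minimizing `score` over binary vectors."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.full(n_bits, 0.5)                      # Bernoulli sampling probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    best, best_val = None, np.inf
    for _ in range(n_iter):
        samples = (rng.random((n_samples, n_bits)) < p).astype(int)
        values = np.array([score(s) for s in samples])
        elite = samples[np.argsort(values)[:n_elite]]        # keep the best candidates
        p = smooth * elite.mean(axis=0) + (1 - smooth) * p   # smoothed parameter update
        if values.min() < best_val:
            best_val, best = values.min(), samples[values.argmin()].copy()
    return best, best_val

# toy usage with a placeholder objective (not the autocorrelation criterion)
best, val = cross_entropy_binary(lambda s: abs(int(s.sum()) - 5), n_bits=16)
```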
Stokes–Einstein relation and excess entropy in Al-rich Al-Cu melts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasturel, A.; Jakse, N.
We investigate the conditions for the validity of the Stokes-Einstein relation that connects diffusivity to viscosity in melts using entropy-scaling relationships developed by Rosenfeld. Employing ab initio molecular dynamics simulations to determine transport and structural properties of liquid Al_{1-x}Cu_x alloys (with composition x ≤ 0.4), we first show that reduced self-diffusion coefficients and viscosities, according to Rosenfeld's formulation, scale with the two-body approximation of the excess entropy, except the reduced viscosity for x = 0.4. Then, we use our findings to show that the Stokes-Einstein relation using effective atomic radii is not valid in these alloys, while its validity can be related to the temperature dependence of the partial pair-excess entropies of both components. Finally, we derive a relation between the ratio of the self-diffusivities of the components and the ratio of their pair excess entropies.
Statistical mechanics of monatomic liquids
NASA Astrophysics Data System (ADS)
Wallace, Duane C.
1997-10-01
Two key experimental properties of elemental liquids, together with an analysis of the condensed-system potential-energy surface, lead us logically to the dynamical theory of monatomic liquids. Experimentally, the ion motional specific heat is approximately 3Nk for N ions, implying the normal modes of motion are approximately 3N independent harmonic oscillators. This implies the potential surface contains nearly harmonic valleys. The equilibrium configuration at the bottom of each valley is a "structure." Structures are crystalline or amorphous, and amorphous structures can have a remnant of local crystal symmetry, or can be random. The random structures are by far the most numerous, and hence dominate the statistical mechanics of the liquid state, and their macroscopic properties are uniform over the structure class, for large-N systems. The Hamiltonian for any structural valley is the static structure potential, a sum of harmonic normal modes, and an anharmonic correction. Again from experiment, the constant-density entropy of melting contains a universal disordering contribution of NkΔ, suggesting the random structural valleys are of universal number w^N, where ln w = Δ. Our experimental estimate for Δ is 0.80. In quasiharmonic approximation, the liquid theory for entropy agrees with experiment, for all currently analyzable experimental data at elevated temperatures, to within 1-2% of the total entropy. Further testable predictions of the theory are mentioned.
Minimum relative entropy distributions with a large mean are Gaussian
NASA Astrophysics Data System (ADS)
Smerlak, Matteo
2016-12-01
Entropy optimization principles are versatile tools with wide-ranging applications from statistical physics to engineering to ecology. Here we consider the following constrained problem: given a prior probability distribution q, find the posterior distribution p minimizing the relative entropy (also known as the Kullback-Leibler divergence) with respect to q under the constraint that the mean of p is fixed and large. We show that solutions to this problem are approximately Gaussian. We discuss two applications of this result. In the context of dissipative dynamics, the equilibrium distribution of a Brownian particle confined in a strong external field is independent of the shape of the confining potential. We also derive an H-type theorem for evolutionary dynamics: the entropy of the (standardized) distribution of fitness of a population evolving under natural selection is eventually increasing in time.
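The structure of the problem can be made explicit with a Lagrange multiplier: the minimizer of the relative entropy under a fixed mean is the exponentially tilted prior, a standard result; the paper's contribution is the Gaussian shape of this tilt when the prescribed mean μ is large.

```latex
p^{*}(x) = \frac{q(x)\,e^{\lambda x}}{Z(\lambda)},
\qquad
Z(\lambda) = \int q(x)\,e^{\lambda x}\,dx,
\qquad
\frac{d}{d\lambda}\ln Z(\lambda) = \mu ,
```

where λ is chosen so that the mean of p* equals the prescribed value μ.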
Zhang, Tong-Liang; Ding, Yong-Sheng; Chou, Kuo-Chen
2008-01-07
Compared with the conventional amino acid (AA) composition, the pseudo-amino acid (PseAA) composition, originally introduced for protein subcellular location prediction, can incorporate much more information from a protein sequence and thereby remarkably enhance the power of a discrete model to predict various attributes of a protein. In this study, based on the concept of PseAA composition, the approximate entropy and hydrophobicity pattern of a protein sequence are used to characterize the PseAA components. The immune genetic algorithm (IGA) is applied to search for the optimal weight factors in generating the PseAA composition. Thus, for a given protein sequence sample, a 27-dimensional PseAA composition is generated as its descriptor. The fuzzy K nearest neighbors (FKNN) classifier is adopted as the prediction engine. The results obtained in predicting protein structural classification are quite encouraging, indicating that the current approach may also be used to improve the prediction quality of other protein attributes, or at least can play a complementary role to the existing methods in the relevant areas. Our algorithm is written in Matlab and is available by contacting the corresponding author.
Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.
Kahl, Lorenz; Hofmann, Ulrich G
2016-11-01
This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was obtained experimentally from upper arm contractions at three different load levels from twelve volunteers. The fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared with respect to the disturbances encountered in fatiguing situations and to their ability to differentiate the load levels. Furthermore, we investigated the influence of the electrode locations on fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small number of subjects in this study, significant differences could not be found. In terms of disturbances, the SMR algorithm showed a slight tendency to outperform the others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Saurav, E-mail: saurav7188@gmail.com, E-mail: cyz118212@chemistry.iitd.ac.in; Chakravarty, Charusita
Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl-H2O and bulk SPC/E water spanning the concentration range 0.025-0.300 mole fraction of LiCl at 1 atm and pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both the systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl-H2O. This allows for the alignment of pair entropy isotherms and allows for a simple mapping of pressure onto composition. Rosenfeld-scaling implies that pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl-H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O-O-O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and mechanisms for the breakdown of tetrahedral order by pressure and electrolytes. Increasing pressure increases orientational disorder within the first neighbour shell while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules get incorporated in ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.
NASA Astrophysics Data System (ADS)
Prasad, Saurav; Chakravarty, Charusita
2016-06-01
Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl-H2O and bulk SPC/E water spanning the concentration range 0.025-0.300 mole fraction of LiCl at 1 atm and pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both the systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl-H2O. This allows for the alignment of pair entropy isotherms and allows for a simple mapping of pressure onto composition. Rosenfeld-scaling implies that pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl-H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O-O-O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and mechanisms for the breakdown of tetrahedral order by pressure and electrolytes. Increasing pressure increases orientational disorder within the first neighbour shell while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules get incorporated in ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
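For the unconstrained continuous case the closed form can be written out explicitly (our restatement of the standard maximum-entropy calculation): the optimal density is a generalized Gaussian, and the maximum differential entropy is linear in the logarithm of the L_p norm with unit slope.

```latex
f^{*}(x) = c\,e^{-\lambda |x|^{p}},\qquad
\lambda = \frac{1}{p\,\mathbb{E}|X|^{p}},\qquad
c = \frac{\lambda^{1/p}}{2\,\Gamma(1+1/p)},
\qquad
h_{\max} = \ln \lVert X\rVert_{p} + \frac{1}{p}\ln p + \frac{1}{p} + \ln\bigl(2\,\Gamma(1+1/p)\bigr),
\quad
\lVert X\rVert_{p} = \bigl(\mathbb{E}|X|^{p}\bigr)^{1/p}.
```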
Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker-Jarvis, James
This paper is a study of entropy and its evolution in the time and frequency domains upon application of electromagnetic fields to materials. An understanding of entropy and its evolution in electromagnetic interactions bridges the boundaries between electromagnetism and thermodynamics. The approach used here is a Liouville-based statistical-mechanical theory. I show that the microscopic entropy is reversible and the macroscopic entropy satisfies an H theorem. The spectral entropy development can be very useful for studying the frequency response of materials. Using a projection-operator based nonequilibrium entropy, different equations are derived for the entropy and entropy production and are applied to the polarization, magnetization, and macroscopic fields. I begin by proving an exact H theorem for the entropy, progress to application of time-dependent entropy in electromagnetics, and then apply the theory to relevant applications in electromagnetics. The paper concludes with a discussion of the relationship of the frequency-domain form of the entropy to the permittivity, permeability, and impedance.
Relating different quantum generalizations of the conditional Rényi entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomamichel, Marco; School of Physics, The University of Sydney, Sydney 2006; Berta, Mario
2014-08-15
Recently a new quantum generalization of the Rényi divergence and the corresponding conditional Rényi entropies was proposed. Here, we report on a surprising relation between conditional Rényi entropies based on this new generalization and conditional Rényi entropies based on the quantum relative Rényi entropy that was used in previous literature. Our result generalizes the well-known duality relation H(A|B) + H(A|C) = 0 of the conditional von Neumann entropy for tripartite pure states to Rényi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Rényi entropies and derive a new entropic uncertainty relation.
Classical many-particle systems with unique disordered ground states
NASA Astrophysics Data System (ADS)
Zhang, G.; Stillinger, F. H.; Torquato, S.
2017-10-01
Classical ground states (global energy-minimizing configurations) of many-particle systems are typically unique crystalline structures, implying zero enumeration entropy of distinct patterns (aside from trivial symmetry operations). By contrast, the few previously known disordered classical ground states of many-particle systems are all high-entropy (highly degenerate) states. Here we show computationally that our recently proposed "perfect-glass" many-particle model [Sci. Rep. 6, 36963 (2016), 10.1038/srep36963] possesses disordered classical ground states with zero entropy: a highly counterintuitive situation. For all of the system sizes, parameters, and space dimensions that we have numerically investigated, the disordered ground states are unique such that they can always be superposed onto each other or their mirror image. At low energies, the density of states obtained from simulations matches that calculated from the harmonic approximation near a single ground state, further confirming ground-state uniqueness. Our discovery provides singular examples in which entropy and disorder are at odds with one another. The zero-entropy ground states provide a unique perspective on the celebrated Kauzmann entropy crisis, in which the extrapolated entropy of a supercooled liquid drops below that of the crystal. We expect our disordered unique patterns to be of value in fields beyond glass physics, including applications in cryptography as pseudorandom functions with tunable computational complexity.
Vibrational entropy of a protein: large differences between distinct conformations.
Goethe, Martin; Fita, Ignacio; Rubi, J Miguel
2015-01-13
In this article, it is investigated whether vibrational entropy (VE) is an important contribution to the free energy of globular proteins at ambient conditions. VE represents the major configurational-entropy contribution of these proteins. By definition, it is an average of the configurational entropies of the protein within single minima of the energy landscape, weighted by their occupation probabilities. A large part of it originates from the thermal motion of flexible torsion angles, giving rise to the finite peak widths observed in torsion angle distributions. While VE may affect the equilibrium properties of proteins, it is usually neglected in numerical calculations because its consideration is difficult. Moreover, it is sometimes believed that all well-packed conformations of a globular protein have similar VE anyway. Here, we measure explicitly the VE for six different conformations from simulation data of a test protein. Estimates are obtained using the quasi-harmonic approximation for three coordinate sets: Cartesian, bond-angle-torsion (BAT), and a new set we term rotamer-degeneracy-lifted BAT coordinates. The new set gives improved estimates as it overcomes a known shortcoming of the quasi-harmonic approximation caused by multiply populated rotamer states, and it may serve for VE estimation of macromolecules in a very general context. The obtained VE values depend considerably on the type of coordinates used. However, for all coordinate sets we find large entropy differences between the conformations, of the order of the overall stability of the protein. This result may have important implications for the choice of free energy expressions used in software for protein structure prediction, protein design, and NMR refinement.
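For orientation, the per-mode harmonic-oscillator entropy that underlies quasi-harmonic VE estimates is the standard expression below; in the quasi-harmonic approximation the frequencies are taken from the eigenvalues λ_i of the mass-weighted covariance matrix of the chosen coordinates (how those eigenvalues are obtained differs between the three coordinate sets):

```latex
S_{\mathrm{vib}} = k_{B}\sum_{i}\left[\frac{\hbar\omega_{i}/k_{B}T}{e^{\hbar\omega_{i}/k_{B}T}-1}
- \ln\!\bigl(1-e^{-\hbar\omega_{i}/k_{B}T}\bigr)\right],
\qquad
\omega_{i} = \sqrt{\frac{k_{B}T}{\lambda_{i}}} .
```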
Entropy generation of nanofluid flow in a microchannel heat sink
NASA Astrophysics Data System (ADS)
Manay, Eyuphan; Akyürek, Eda Feyza; Sahin, Bayram
2018-06-01
The present study aims to investigate the effect of nano-sized TiO2 particles suspended in the base fluid on the entropy generation rate in a microchannel heat sink. Pure water was chosen as the base fluid, and TiO2 particles were suspended in it at five particle volume fractions: 0.25%, 0.5%, 1.0%, 1.5% and 2.0%. Under laminar, steady-state flow and constant heat flux boundary conditions, the thermal, frictional and total entropy generation rates and the entropy generation number ratios of the nanofluids were experimentally analyzed in microchannel flow for channel heights of 200 μm, 300 μm, 400 μm and 500 μm. Frictional and total entropy generation rates increased, while the thermal entropy generation rate decreased, with increasing particle volume fraction. In microchannel flows, thermal entropy generation could be neglected because its rate, smaller than 1.10 × 10^-7, is negligible within the total entropy generation. Larger channel heights produced higher thermal entropy generation rates; increasing the channel height yielded an increase of 30% to 52% in thermal entropy generation, while decreasing the channel height produced an increase of 66%-98% in frictional entropy generation. Adding TiO2 nanoparticles to the base fluid decreased thermal entropy generation by about 1.8%-32.4% and increased frictional entropy generation by about 3.3%-21.6%.
NASA Astrophysics Data System (ADS)
Jamshed, Wasim; Aziz, Asim
2018-06-01
In the present research, a simplified mathematical model is presented to study the heat transfer and entropy generation of a thermal system containing a hybrid nanofluid. The nanofluid occupies the space over an infinite horizontal surface and the flow is induced by the non-linear stretching of the surface. A uniform transverse magnetic field, the Cattaneo-Christov heat flux model and thermal radiation effects are also included. The similarity technique is employed to reduce the governing non-linear partial differential equations to a set of ordinary differential equations. The Keller box numerical scheme is then used to approximate the solutions for the thermal analysis. Results are presented for conventional copper oxide-ethylene glycol (CuO-EG) and hybrid titanium-copper oxide/ethylene glycol (TiO2-CuO/EG) nanofluids. Spherical, hexahedron, tetrahedron, cylindrical, and lamina-shaped nanoparticles are considered in the present analysis. The significant findings of the study are the enhanced heat transfer capability of hybrid nanofluids over conventional nanofluids, the greatest heat transfer rate for the smallest value of the shape factor parameter, and the increase in the overall entropy of the system with increasing Reynolds number and Brinkman number.
NASA Astrophysics Data System (ADS)
Hong, S. Lee; Bodfish, James W.; Newell, Karl M.
2006-03-01
We investigated the relationship between macroscopic entropy and microscopic complexity of the dynamics of body rocking and sitting still in adults with stereotyped movement disorder and mental retardation (profound and severe), compared with controls matched for age, height, and weight. This analysis was performed through the examination of center of pressure (COP) motion in the mediolateral (side-to-side) and anteroposterior (fore-aft) dimensions and the entropy of the relative phase between the two dimensions of motion. Intentional body rocking and stereotypical body rocking produced similar slopes for their respective frequency spectra, but differences were revealed during maintenance of sitting postures. The dynamics of sitting in the control group produced lower spectral slopes and higher complexity (approximate entropy). In the controls, the higher complexity found in each dimension of motion was related to a weaker coupling between dimensions. Information entropy of the relative phase between the two dimensions of COP motion and the irregularity (complexity) of their respective motions fitted a power-law function, revealing a relationship between macroscopic entropy and microscopic complexity across both groups and behaviors. This power-law relation supports the postulation that the organization of movement and posture dynamics occurs as a fractal process.
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
Chapman Enskog-maximum entropy method on time-dependent neutron transport equation
NASA Astrophysics Data System (ADS)
Abdou, M. A.
2006-09-01
The time-dependent neutron transport equation in semi-infinite and infinite media with linear anisotropic and Rayleigh scattering is considered. The problem is solved by means of the flux-limited Chapman-Enskog maximum-entropy method to obtain the solution of the time-dependent neutron transport equation. The solution gives the neutron distribution density function, which is used to compute numerically the radiant energy density E(x,t), the net flux F(x,t) and the reflectivity Rf. The behaviour of the approximate flux-limited maximum-entropy neutron density function is compared with that found by other theories. Numerical results for the radiant energy, net flux and reflectivity of the proposed medium are presented at different times and positions.
Entropy-based goodness-of-fit test: Application to the Pareto distribution
NASA Astrophysics Data System (ADS)
Lequesne, Justine
2013-08-01
Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
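In outline (our restatement; the spacing-based entropy estimator below is one common choice rather than necessarily the one used in [13]): for the maximum-entropy density f_θ in the constrained class, the Pythagorean equality turns the Kullback-Leibler statistic into an entropy difference, which can be estimated from the sample by a Vasicek-type estimator and compared with a critical value obtained by simulation.

```latex
K(f\,\Vert\,f_{\theta}) = H(f_{\theta}) - H(f),
\qquad
\widehat{H}_{m,n} = \frac{1}{n}\sum_{i=1}^{n}\ln\!\Bigl[\frac{n}{2m}\bigl(X_{(i+m)}-X_{(i-m)}\bigr)\Bigr],
```

with the conventions X_{(i)} = X_{(1)} for i < 1 and X_{(i)} = X_{(n)} for i > n; the maximum-entropy model is rejected when the estimated entropy difference is large.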
RED: a set of molecular descriptors based on Renyi entropy.
Delgado-Soler, Laura; Toral, Raul; Tomás, M Santos; Rubio-Martinez, Jaime
2009-11-01
New molecular descriptors, RED (Renyi entropy descriptors), based on the generalized entropies introduced by Renyi are presented. Topological descriptors based on molecular features have proven to be useful for describing molecular profiles. Renyi entropy is used as a variability measure to contract a feature-pair distribution composing the descriptor vector. The performance of RED descriptors was tested for the analysis of different sets of molecular distances, virtual screening, and pharmacological profiling. A free parameter of the Renyi entropy has been optimized for all the considered applications.
Husimi-cactus approximation study on the diluted spin ice
NASA Astrophysics Data System (ADS)
Otsuka, Hiromi; Okabe, Yutaka; Nefedev, Konstantin
2018-04-01
We investigate dilution effects on classical spin-ice materials such as Ho2Ti2O7 and Dy2Ti2O7. In particular, we derive a formula for the thermodynamic quantities as functions of the temperature and the nonmagnetic-ion concentration based on a Husimi-cactus approximation. We find that the formula predicts a dilution-induced crossover from the cooperative to the conventional paramagnet in the ground state, and that it also reproduces the "generalized Pauling's entropy" given by Ke et al. To verify the formula numerically, we compare these results with Monte Carlo simulation data and find good agreement for all parameter values.
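In the undiluted limit (zero nonmagnetic-ion concentration), any such generalized formula should reduce to the standard Pauling estimate for spin ice, quoted here for reference (this is the textbook result, not the generalized expression of Ke et al.):

```latex
S_{\mathrm{Pauling}} = \frac{R}{2}\ln\frac{3}{2} \approx 1.68\ \mathrm{J\,mol^{-1}\,K^{-1}}
\quad\text{per mole of spins},
\qquad
W = 2^{N}\Bigl(\tfrac{6}{16}\Bigr)^{N/2},
```

where W counts the configurations of N spins that respect the two-in, two-out ice rule on each of the N/2 tetrahedra.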
Approximate Entropy in the Electroencephalogram During Wake and Sleep
Burioka, Naoto; Miyata, Masanori; Cornélissen, Germaine; Halberg, Franz; Takeshima, Takao; Kaplan, Daniel T.; Suyama, Hisashi; Endo, Masanori; Maegaki, Yoshihiro; Nomura, Takashi; Tomita, Yutaka; Nakashima, Kenji; Shimizu, Eiji
2006-01-01
Entropy measurement can discriminate among complex systems, including deterministic, stochastic and composite systems. We evaluated the changes of approximate entropy (ApEn) in signals of the electroencephalogram (EEG) during sleep. EEG signals were recorded from eight healthy volunteers during nightly sleep. We estimated the values of ApEn in EEG signals in each sleep stage. The ApEn values for EEG signals (mean ± SD) were 0.896 ± 0.264 during eyes-closed waking state, 0.738 ± 0.089 during Stage I, 0.615 ± 0.107 during Stage II, 0.487 ± 0.101 during Stage III, 0.397 ± 0.078 during Stage IV and 0.789 ± 0.182 during REM sleep. The ApEn values were found to differ with statistical significance among the six different stages of consciousness (ANOVA, p<0.001). ApEn of EEG was statistically significantly lower during Stage IV and higher during wake and REM sleep. We conclude that ApEn measurement can be useful to estimate sleep stages and the complexity in brain activity. PMID:15683194
Estimating the Aqueous Solubility of Pharmaceutical Hydrates
Franklin, Stephen J.; Younis, Usir S.; Myrdal, Paul B.
2016-01-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, these models typically consider the anhydrous crystal form, which is not always the most stable crystal form in water. In this study, an equation that predicts the aqueous solubility of a hydrate is presented. This research extends the utility of the ideal solubility equation by incorporating the energetics of desolvation of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the entropy of dehydration by the entropy of vaporization of water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models are evaluated; the most accurate model approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) of water, and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of the solubility of 14 compounds was 0.32 log units. PMID:27238488
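A hedged reading of this recipe, starting from the ideal solubility equation, is sketched below; the specific additive form of the dehydration term is our interpretation of the abstract, not necessarily the authors' final regression model, while the substitution ΔS_d ≈ ΔS_vap(H2O) is the approximation stated above.

```latex
\ln x \;\approx\; -\frac{\Delta S_{m}\,(T_{m}-T)}{RT} \;-\; \frac{\Delta S_{d}\,(T_{d}-T)}{RT},
\qquad
\Delta S_{d} \approx \Delta S_{\mathrm{vap}}(\mathrm{H_2O}) \approx 109\ \mathrm{J\,mol^{-1}\,K^{-1}},
```

with T_d the onset dehydration temperature; as in Yalkowsky's general solubility equation, log S_w = 0.5 - 0.01(T_m - 25) - log P, the octanol-water partition coefficient then corrects the crystal (ideal) term for non-ideality in water.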
Ab initio calculation of thermodynamic potentials and entropies for superionic water
DOE Office of Scientific and Technical Information (OSTI.GOV)
French, Martin; Desjarlais, Michael P.; Redmer, Ronald
We construct thermodynamic potentials for two superionic phases of water [with body-centered cubic (bcc) and face-centered cubic (fcc) oxygen lattice] using a combination of density functional theory (DFT) and molecular dynamics simulations (MD). For this purpose, a generic expression for the free energy of warm dense matter is developed and parametrized with equation of state data from the DFT-MD simulations. A second central aspect is the accurate determination of the entropy, which is done using an approximate two-phase method based on the frequency spectra of the nuclear motion. The boundary between the bcc superionic phase and the ices VII and X calculated with thermodynamic potentials from DFT-MD is consistent with that directly derived from the simulations. As a result, differences in the physical properties of the bcc and fcc superionic phases and their impact on interior modeling of water-rich giant planets are discussed.
Ab initio calculation of thermodynamic potentials and entropies for superionic water
French, Martin; Desjarlais, Michael P.; Redmer, Ronald
2016-02-25
We construct thermodynamic potentials for two superionic phases of water [with body-centered cubic (bcc) and face-centered cubic (fcc) oxygen lattice] using a combination of density functional theory (DFT) and molecular dynamics simulations (MD). For this purpose, a generic expression for the free energy of warm dense matter is developed and parametrized with equation of state data from the DFT-MD simulations. A second central aspect is the accurate determination of the entropy, which is done using an approximate two-phase method based on the frequency spectra of the nuclear motion. The boundary between the bcc superionic phase and the ices VII and X calculated with thermodynamic potentials from DFT-MD is consistent with that directly derived from the simulations. As a result, differences in the physical properties of the bcc and fcc superionic phases and their impact on interior modeling of water-rich giant planets are discussed.
Information theory analysis of Australian humpback whale song.
Miksis-Olds, Jennifer L; Buck, John R; Noad, Michael J; Cato, Douglas H; Stokes, M Dale
2008-10-01
Songs produced by migrating whales were recorded off the coast of Queensland, Australia, over six consecutive weeks in 2003. Forty-eight independent song sessions were analyzed using information theory techniques. The average length of the songs estimated by correlation analysis was approximately 100 units, with song sessions lasting from 300 to over 3100 units. Song entropy, a measure of structural constraints, was estimated using three different methodologies: (1) the independently identically distributed model, (2) a first-order Markov model, and (3) the nonparametric sliding window match length (SWML) method, as described by Suzuki et al. [(2006). "Information entropy of humpback whale song," J. Acoust. Soc. Am. 119, 1849-1866]. The analysis finds that the song sequences of migrating Australian whales are consistent with the hierarchical structure proposed by Payne and McVay [(1971). "Songs of humpback whales," Science 173, 587-597], and recently supported mathematically by Suzuki et al. (2006) for singers on the Hawaiian breeding grounds. Both the SWML entropy estimates and the song lengths for the Australian singers in 2003 were lower than that reported by Suzuki et al. (2006) for Hawaiian whales in 1976-1978; however, song redundancy did not differ between these two populations separated spatially and temporally. The average total information in the sequence of units in Australian song was approximately 35 bits/song. Aberrant songs (8%) yielded entropies similar to the typical songs.
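A small sketch of the first two entropy estimators mentioned above, applied to a symbolic unit sequence (the toy sequence is hypothetical, and the sliding window match length estimator is omitted):

```python
import numpy as np
from collections import Counter

def iid_entropy(seq):
    """Zeroth-order (i.i.d.) entropy in bits per unit."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def markov_entropy_rate(seq):
    """First-order Markov entropy rate in bits per unit."""
    symbols = sorted(set(seq))
    idx = {s: k for k, s in enumerate(symbols)}
    counts = np.zeros((len(symbols), len(symbols)))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[idx[a], idx[b]] += 1
    pi = counts.sum(axis=1) / counts.sum()                       # distribution of current unit
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)  # transition matrix
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi[:, None] * P * logP)

song = list("ABCABCABDABCABC")    # toy unit sequence, not real whale data
print(iid_entropy(song), markov_entropy_rate(song))
```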
Content Based Image Retrieval and Information Theory: A General Approach.
ERIC Educational Resources Information Center
Zachary, John; Iyengar, S. S.; Barhen, Jacob
2001-01-01
Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…
Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.
2015-12-01
An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
Noise and Complexity in Human Postural Control: Interpreting the Different Estimations of Entropy
Rhea, Christopher K.; Silver, Tobin A.; Hong, S. Lee; Ryu, Joong Hyun; Studenka, Breanna E.; Hughes, Charmayne M. L.; Haddad, Jeffrey M.
2011-01-01
Background Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. Methods and Findings The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. Conclusions The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses. PMID:21437281
Innovative techniques to analyze time series of geomagnetic activity indices
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Papadimitriou, Constantinos; Daglis, Ioannis A.; Potirakis, Stelios M.; Eftaxias, Konstantinos
2016-04-01
Magnetic storms are undoubtedly among the most important phenomena in space physics and also a central subject of space weather. The non-extensive Tsallis entropy has recently been introduced as an effective complexity measure for the analysis of the geomagnetic activity Dst index. The Tsallis entropy sensitively reveals the complexity dissimilarity among different "physiological" (normal) and "pathological" states (intense magnetic storms). More precisely, the Tsallis entropy implies the emergence of two distinct patterns: (i) a pattern associated with intense magnetic storms, characterized by a higher degree of organization, and (ii) a pattern associated with normal periods, characterized by a lower degree of organization. Other entropy measures such as Block Entropy, T-Complexity, Approximate Entropy, Sample Entropy and Fuzzy Entropy verify this result. Importantly, wavelet spectral analysis in terms of the Hurst exponent, H, also shows the existence of two different patterns: (i) a pattern associated with intense magnetic storms, characterized by fractional Brownian persistent behavior, and (ii) a pattern associated with normal periods, characterized by fractional Brownian anti-persistent behavior. Finally, we observe universality in magnetic storm and earthquake dynamics, on the basis of a modified form of the Gutenberg-Richter law for Tsallis statistics. This finding suggests a common approach to the interpretation of both phenomena in terms of the same driving physical mechanism. Signatures of discrete scale invariance in Dst time series further support this proposal.
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang G.; Schrecker, Matthew R. I.
2018-04-01
We are concerned with globally defined entropy solutions to the Euler equations for compressible fluid flows in transonic nozzles with general cross-sectional areas. Such nozzles include the de Laval nozzles and other more general nozzles whose cross-sectional area functions are allowed at the nozzle ends to be either zero (closed ends) or infinity (unbounded ends). To achieve this, in this paper, we develop a vanishing viscosity method to construct globally defined approximate solutions and then establish essential uniform estimates in weighted L^p norms for the whole range of physical adiabatic exponents γ ∈ (1, ∞), so that the viscosity approximate solutions satisfy the general L^p compensated compactness framework. The viscosity method is designed to incorporate artificial viscosity terms with the natural Dirichlet boundary conditions to ensure the uniform estimates. Then such estimates lead to both the convergence of the approximate solutions and the existence theory of globally defined finite-energy entropy solutions to the Euler equations for transonic flows that may have different end-states in the class of nozzles with general cross-sectional areas for all γ ∈ (1, ∞). The approach and techniques developed here apply to other problems with similar difficulties. In particular, we successfully apply them to construct globally defined spherically symmetric entropy solutions to the Euler equations for all γ ∈ (1, ∞).
Estimation of conformational entropy in protein-ligand interactions: a computational perspective.
Polyansky, Anton A; Zubac, Ruben; Zagrovic, Bojan
2012-01-01
Conformational entropy is an important component of the change in free energy upon binding of a ligand to its target protein. As a consequence, development of computational techniques for reliable estimation of conformational entropies is currently receiving an increased level of attention in the context of computational drug design. Here, we review the most commonly used techniques for conformational entropy estimation from classical molecular dynamics simulations. Although by and large still not directly used in practical drug design, these techniques provide a gold standard for developing other, computationally less demanding methods for such applications, in addition to furthering our understanding of protein-ligand interactions in general. In particular, we focus on the quasi-harmonic approximation and discuss different approaches that can be used to go beyond it, most notably when it comes to treating anharmonic and/or correlated motions. In addition to reviewing basic theoretical formalisms, we provide a concrete set of steps required to successfully calculate conformational entropy from molecular dynamics simulations, as well as discuss a number of practical issues that may arise in such calculations.
NASA Astrophysics Data System (ADS)
Gary, S. Peter; Zhao, Yinjian; Hughes, R. Scott; Wang, Joseph; Parashar, Tulasi N.
2018-06-01
Three-dimensional particle-in-cell simulations of the forward cascade of decaying turbulence in the relatively short-wavelength kinetic range have been carried out as initial-value problems on collisionless, homogeneous, magnetized electron-ion plasma models. The simulations have addressed both whistler turbulence at β_i = β_e = 0.25 and kinetic Alfvén turbulence at β_i = β_e = 0.50, computing the species energy dissipation rates as well as the increase of the Boltzmann entropies for both ions and electrons as functions of the initial dimensionless fluctuating magnetic field energy density ε_o in the range 0 ≤ ε_o ≤ 0.50. This study shows that electron and ion entropies display similar rates of increase and that all four entropy rates increase approximately as ε_o, consistent with the assumption that the quasilinear premise is valid for the initial conditions assumed for these simulations. The simulations further predict that the time rates of ion entropy increase should be substantially greater for kinetic Alfvén turbulence than for whistler turbulence.
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression
ERIC Educational Resources Information Center
Weiss, Brandi A.; Dardick, William
2016-01-01
This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify…
Information Entropy Analysis of the H1N1 Genetic Code
NASA Astrophysics Data System (ADS)
Martwick, Andy
2010-03-01
During the current H1N1 pandemic, viral samples are being obtained from large numbers of infected people world-wide and are being sequenced on the NCBI Influenza Virus Resource Database. The information entropy of the sequences was computed from the probability of occurrence of each nucleotide base at every position of each set of sequences using Shannon's definition of information entropy, H = Σ_b p_b log_2(1/p_b), where H is the observed information entropy at each nucleotide position and p_b is the probability of occurrence of the bases A, C, G, U. The information entropy of the current H1N1 pandemic is compared to reference human and swine H1N1 entropy. As expected, the current H1N1 entropy is in a low entropy state and has a very large mutation potential. Using the entropy method on mature genes we can identify low entropy regions of nucleotides that generally correlate with critical protein function.
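As a hedged illustration of the per-position calculation described above (a sketch assuming aligned, equal-length sequences; not the authors' code), the Shannon entropy at each nucleotide position can be computed as follows:

```python
import math
from collections import Counter

def positional_entropy(sequences):
    """Shannon information entropy (bits) at each position of a set of
    aligned, equal-length nucleotide sequences."""
    length = len(sequences[0])
    entropies = []
    for pos in range(length):
        counts = Counter(seq[pos] for seq in sequences)
        total = sum(counts.values())
        h = 0.0
        for base, count in counts.items():
            p = count / total
            h += p * math.log2(1.0 / p)   # equivalently -p * log2(p)
        entropies.append(h)
    return entropies

# Toy example: three aligned 5-nucleotide reads (hypothetical data)
seqs = ["ACGUA", "ACGUU", "ACCUA"]
print(positional_entropy(seqs))  # low values mark conserved positions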
NASA Astrophysics Data System (ADS)
Li, Jin; Zhang, Xian; Gong, Jinzhe; Tang, Jingtian; Ren, Zhengyong; Li, Guang; Deng, Yanli; Cai, Jin
A new technique is proposed for signal-noise identification and targeted de-noising of magnetotelluric (MT) signals. The method is based on fractal and entropy features combined with a clustering algorithm, which automatically identifies signal sections corrupted by common interference (square, triangle and pulse waves), enabling targeted de-noising and preventing the loss of useful information in filtering. To implement the technique, four characteristic parameters — fractal box dimension (FBD), Higuchi fractal dimension (HFD), fuzzy entropy (FuEn) and approximate entropy (ApEn) — are extracted from the MT time series. The fuzzy c-means (FCM) clustering technique is used to analyze the characteristic parameters and automatically distinguish signal sections with strong interference from the rest. The wavelet threshold (WT) de-noising method is used only to suppress the identified strong interference in the selected signal sections. The technique is validated on signal samples with known interference before being applied to a set of field-measured MT/Audio Magnetotelluric (AMT) data. Compared with the conventional de-noising strategy that blindly applies the filter to the overall dataset, the proposed method can automatically identify and purposefully suppress the intermittent interference in the MT/AMT signal. The resulting apparent resistivity-phase curve is more continuous and smooth, and the slowly varying trend in the low-frequency range is more precisely preserved. Moreover, the characteristics of the target-filtered MT/AMT signal are close to the essential characteristics of the natural field, and the result more accurately reflects the inherent electrical structure information of the measured site.
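The clustering step can be sketched with a from-scratch fuzzy c-means routine (an illustrative sketch under assumptions: the FBD/HFD/FuEn/ApEn feature extraction is taken as already done, and the feature values and parameters below are hypothetical, not from the study):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic fuzzy c-means. X: (n_samples, n_features); returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical feature matrix: rows = signal sections, columns = (FBD, HFD, FuEn, ApEn)
features = np.array([[1.2, 1.3, 0.4, 0.5],
                     [1.8, 1.9, 1.1, 1.2],
                     [1.1, 1.3, 0.5, 0.4],
                     [1.9, 2.0, 1.2, 1.3]])
centers, memberships = fuzzy_c_means(features, c=2)
print(memberships.round(2))  # sections dominated by one cluster would get targeted de-noising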
NASA Astrophysics Data System (ADS)
Godin, Paul
2005-09-01
We consider smooth three-dimensional spherically symmetric Eulerian flows of ideal polytropic gases with variable entropy, whose initial data are obtained by adding a small smooth perturbation with compact support to a constant state. Under a natural assumption, we obtain precise information on the asymptotic behavior of their lifespan when the size of the initial perturbation tends to 0. This is achieved by the construction and estimate of a suitable approximate flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, K.; Smith, J. C.; Grabowski, P. E.
Universal exact conditions guided the construction of most ground-state density functional approximations in use today. Here, we derive the relation between the entropy and Mermin free energy density functionals for thermal density functional theory. Both the entropy and sum of kinetic and electron-electron repulsion functionals are shown to be monotonically increasing with temperature, while the Mermin functional is concave downwards. Analogous relations are found for both exchange and correlation. The importance of these conditions is illustrated in two extremes: the Hubbard dimer and the uniform gas.
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
NASA Astrophysics Data System (ADS)
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. This could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
Microscopic theory for the time irreversibility and the entropy production
NASA Astrophysics Data System (ADS)
Chun, Hyun-Myung; Noh, Jae Dong
2018-02-01
In stochastic thermodynamics, the entropy production of a thermodynamic system is defined by the irreversibility measured by the logarithm of the ratio of the path probabilities in the forward and reverse processes. We derive the relation between the irreversibility and the entropy production starting from the deterministic equations of motion of the whole system consisting of a physical system and a surrounding thermal environment. The derivation assumes the Markov approximation that the environmental degrees of freedom equilibrate instantaneously. Our approach provides a guideline for the choice of the proper reverse process to a given forward process, especially when there exists a velocity-dependent force. We demonstrate our idea with an example of a charged particle in the presence of a time-varying magnetic field.
Entropy production in a Glauber–Ising irreversible model with dynamical competition
NASA Astrophysics Data System (ADS)
Barbosa, Oscar A.; Tomé, Tânia
2018-06-01
An out-of-equilibrium Glauber–Ising model, evolving in accordance with an irreversible and stochastic Markovian dynamics, is analyzed in order to improve our understanding of critical behavior and phase transitions in nonequilibrium systems. To this end, a lattice model governed by the competition between two Glauber dynamics acting on interlaced square lattices is proposed. Previous results have shown how the entropy production provides information about irreversibility and criticality. Mean-field approximations and Monte Carlo simulations were used in the analysis. The results obtained here show a continuous phase transition, reflected in the entropy production as a logarithmic divergence of its derivative, which suggests a shared universality class with the irreversible models invariant under the symmetry operations of the Ising model.
Upper entropy axioms and lower entropy axioms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jin-Li, E-mail: phd5816@163.com; Suo, Qi
2015-04-15
The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of the Shannon–Khinchin axioms and the Tsallis axioms, while these conditions are stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.
Coherence and entanglement measures based on Rényi relative entropies
NASA Astrophysics Data System (ADS)
Zhu, Huangjun; Hayashi, Masahito; Chen, Lin
2017-11-01
We study systematically resource measures of coherence and entanglement based on Rényi relative entropies, which include the logarithmic robustness of coherence, geometric coherence, and conventional relative entropy of coherence together with their entanglement analogues. First, we show that each Rényi relative entropy of coherence is equal to the corresponding Rényi relative entropy of entanglement for any maximally correlated state. By virtue of this observation, we establish a simple operational connection between entanglement measures and coherence measures based on Rényi relative entropies. We then prove that all these coherence measures, including the logarithmic robustness of coherence, are additive. Accordingly, all these entanglement measures are additive for maximally correlated states. In addition, we derive analytical formulas for Rényi relative entropies of entanglement of maximally correlated states and bipartite pure states, which reproduce a number of classic results on the relative entropy of entanglement and logarithmic robustness of entanglement in a unified framework. Several nontrivial bounds for Rényi relative entropies of coherence (entanglement) are further derived, which improve over results known previously. Moreover, we determine all states whose relative entropy of coherence is equal to the logarithmic robustness of coherence. As an application, we provide an upper bound for the exact coherence distillation rate, which is saturated for pure states.
Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag
2016-06-01
Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of time scale τ. The tolerance threshold 'r' for approximate entropy (ApEn) is selected based on either: (1) arbitrary selection in the recommended range of 0.1-0.25 times the standard deviation of the time series; (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold; or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of the time series, respectively. None of these methods is a gold standard for the selection of 'r'. In our previous study [1], an adaptive procedure for the selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e., 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX (P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower than that of SBPV in the upright position; (ii) ApEnbin of HRV decreases from supine, i.e., 1.7324 ± 0.112 (mean ± SD), to upright, 1.4916 ± 0.108, due to vagal inhibition; (iii) ApEnbin of SBPV increases from supine, i.e., 1.5535 ± 0.098, to upright, i.e., 1.6241 ± 0.101, due to sympathetic activation; (iv) individual and cross complexities of RRi and systolic blood pressure (SBP) series depend on the time scale under consideration; (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26, in steps of 0.02) at each time scale in the supine and upright positions, and it is concluded that ApEn0.26 has the highest correlation at most scales; and (vi) the choice of 'r' is critical in interpreting interactions between RRi and SBP and in ascertaining the true complexity of the individual RRi and SBP series.
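As a rough, hedged sketch of the underlying ApEn computation with the conventional tolerance r = 0.2 × SD and a simple coarse-graining step for the multiscale extension (the binning-based MAEbin/XMAEbin variants above are not reproduced; the data below are synthetic), one might write:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy of a 1-D series; r defaults to 0.2 * SD of the series."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # embed the series into overlapping template vectors of length m
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between all pairs of template vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = (dist <= r).mean(axis=1)      # includes self-matches, as in Pincus' definition
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (the multiscale step)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
rr = rng.normal(0.8, 0.05, 300)                 # synthetic RR-interval-like series
for tau in (1, 2, 3):
    print(tau, approximate_entropy(coarse_grain(rr, tau)))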
Damage detection in rotating machinery by means of entropy-based parameters
NASA Astrophysics Data System (ADS)
Tocarciuc, Alexandru; Bereteu, Liviu; Drăgănescu, Gheorghe Eugen
2014-11-01
The paper proposes two new entropy-based parameters, namely the Rényi Entropy Index (REI) and the Sharma-Mittal Entropy Index (SMEI), for detecting the presence of failures (or damage) in rotating machinery, such as: belt structural damage, belt wheel misalignment, failure of the bolt fixing the machine to its baseplate, and eccentricities (e.g., due to a small piece of material detaching or poor mounting of the rotating components of the machine). The algorithms used to obtain the proposed entropy-based parameters are described, and test data are used to assess their sensitivity. A vibration test bench is used to measure vibration levels while damage is artificially induced. The deviation of the two entropy-based parameters is compared between two states of the vibration test bench: undamaged and damaged. At the end of the study, their sensitivity is compared to that of the Shannon Entropic Index.
An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization
NASA Astrophysics Data System (ADS)
Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc
2002-09-01
A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with the entropy defined on the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and the straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME—Automated phase Correction based on Minimization of Entropy.
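The core idea can be sketched in a few lines (a simplified illustration under stated assumptions, not the published ACME code): apply zero- and first-order phase terms to the complex spectrum, build a probability-like distribution from the absolute first derivative of the real part, and minimize its Shannon entropy, optionally penalizing negative intensities. The penalty weight and the synthetic test line are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_objective(phases, spectrum):
    """Shannon-type entropy of the normalized derivative of the phased real spectrum."""
    ph0, ph1 = phases
    n = len(spectrum)
    # zero-order (ph0) and first-order (ph1, linear across the spectrum) correction, in radians
    correction = np.exp(1j * (ph0 + ph1 * np.arange(n) / n))
    real = (spectrum * correction).real
    deriv = np.abs(np.diff(real))
    h = deriv / (deriv.sum() + 1e-12)               # normalize to a probability-like distribution
    entropy = -np.sum(h * np.log(h + 1e-12))
    penalty = np.sum(np.minimum(real, 0.0) ** 2)    # discourage negative (dispersive) peaks
    return entropy + 1e-4 * penalty

def auto_phase(spectrum):
    """Return (ph0, ph1) minimizing the entropy objective via a simple Nelder-Mead search."""
    res = minimize(entropy_objective, x0=[0.0, 0.0], args=(spectrum,),
                   method="Nelder-Mead")
    return res.x

# Synthetic example: a complex Lorentzian line deliberately mis-phased by 0.5 rad
x = np.linspace(-5, 5, 1024)
line = 1.0 / (1.0 + 1j * x) * np.exp(-1j * 0.5)
print(auto_phase(line))   # recovered ph0 should be roughly +0.5 (mod 2*pi)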
Schoenenberger, A W; Erne, P; Ammann, S; Perrig, M; Bürgi, U; Stuck, A E
2008-01-01
Approximate entropy (ApEn) of blood pressure (BP) can be easily measured based on software analysing 24-h ambulatory BP monitoring (ABPM), but the clinical value of this measure is unknown. In a prospective study we investigated whether ApEn of BP predicts, in addition to average and variability of BP, the risk of hypertensive crisis. In 57 patients with known hypertension we measured ApEn, average and variability of systolic and diastolic BP based on 24-h ABPM. Eight of these fifty-seven patients developed hypertensive crisis during follow-up (mean follow-up duration 726 days). In bivariate regression analysis, ApEn of systolic BP (P<0.01), average of systolic BP (P=0.02) and average of diastolic BP (P=0.03) were significant predictors of hypertensive crisis. The incidence rate ratio of hypertensive crisis was 14.0 (95% confidence interval (CI) 1.8, 631.5; P<0.01) for high ApEn of systolic BP as compared to low values. In multivariable regression analysis, ApEn of systolic (P=0.01) and average of diastolic BP (P<0.01) were independent predictors of hypertensive crisis. A combination of these two measures had a positive predictive value of 75%, and a negative predictive value of 91%, respectively. ApEn, combined with other measures of 24-h ABPM, is a potentially powerful predictor of hypertensive crisis. If confirmed in independent samples, these findings have major clinical implications since measures predicting the risk of hypertensive crisis define patients requiring intensive follow-up and intensified therapy.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
Existence regimes for shocks in inhomogeneous magneto-plasmas having entropy
NASA Astrophysics Data System (ADS)
Iqbal, Javed; Yaqub Khan, M.
2018-04-01
The connection of plasma density and temperature with entropy provides an incentive to study different plasma models with respect to entropy. Nonlinear dissipative one- and two-dimensional structures (shocks) are investigated in a nonuniform magnetized plasma with respect to entropy. Dissipation enters the medium through ion-neutral collisions. The linear dispersion relation is derived. The Korteweg-de Vries-Burgers and Kadomtsev-Petviashvili-Burgers equations are derived for nonlinear drift waves in 1-D and 2-D by employing the drift approximation. It is found that v_d/u (where v_d is the diamagnetic drift velocity and u is the velocity of the nonlinear structure) plays a significant role in shock formation. It is also found that entropy has a significant effect on the strength of the shocks. It is noticed that v_d/u determines the rarefactive or compressive nature of the shocks. It is observed that upper and lower bounds exist for the shock velocity. It is also observed that the existence regimes for both one- and two-dimensional shocks with kappa-distributed electrons are different from those for shocks with Cairns-distributed electrons. Both rarefactive and compressive shocks are found for the 1-D drift waves with kappa-distributed electrons. Interestingly, it is noticed that entropy enhances the strength of one- and two-dimensional shocks.
NASA Astrophysics Data System (ADS)
Karamanos, K.; Mistakidis, S. I.; Massart, T. J.; Mistakidis, I. S.
2015-06-01
The entropy production and the variational functional of a Laplacian diffusional field around the first four fractal iterations of a linear self-similar tree (von Koch curve) are studied analytically and detailed predictions are stated. In a next stage, these predictions are confronted with results from numerical resolution of the Laplace equation by means of finite element computations. After a brief review of the existing results, the range of distances near the geometric irregularity, the so-called "Near Field", a situation never studied in the past, is treated exhaustively. We notice here that in the Near Field, the usual notion of the active zone approximation introduced by Sapoval et al. [M. Filoche and B. Sapoval, Transfer across random versus deterministic fractal interfaces, Phys. Rev. Lett. 84(25) (2000) 5776; B. Sapoval, M. Filoche, K. Karamanos and R. Brizzi, Can one hear the shape of an electrode? I. Numerical study of the active zone in Laplacian transfer, Eur. Phys. J. B. Condens. Matter Complex Syst. 9(4) (1999) 739-753.] is strictly inapplicable. The basic new result is that the validity of the active-zone approximation based on irreversible thermodynamics is confirmed in this limit, and this implies a new interpretation of this notion for Laplacian diffusional fields.
Valenza, Gaetano; Allegrini, Paolo; Lanatà, Antonio; Scilingo, Enzo Pasquale
2012-01-01
In this work we characterized the non-linear complexity of Heart Rate Variability (HRV) in short time series. The complexity of the HRV signal was evaluated during emotional visual elicitation by using Dominant Lyapunov Exponents (DLEs) and Approximate Entropy (ApEn). We adopted a simplified model of emotion derived from the Circumplex Model of Affects (CMA), in which emotional mechanisms are conceptualized in two dimensions, valence and arousal. Following the CMA model, a set of visual stimuli standardized in terms of arousal and valence, gathered from the International Affective Picture System (IAPS), was administered to a group of 35 healthy volunteers. The experiment consisted of eight sessions alternating neutral images with high-arousal images. Several works in the literature have shown chaotic dynamics of HRV during rest or relaxed conditions. The outcomes of this work showed a clear switching mechanism between regular and chaotic dynamics when passing from neutral to arousal elicitation. Accordingly, the mean ApEn decreased with statistical significance during arousal elicitation and the DLE became negative. Results showed a clear distinction between the neutral and the arousal elicitation and could be profitably exploited to improve the accuracy of emotion recognition systems based on HRV time series analysis. PMID:22393320
Bolea, Juan; Pueyo, Esther; Orini, Michele; Bailón, Raquel
2016-01-01
The purpose of this study is to characterize and attenuate the influence of mean heart rate (HR) on nonlinear heart rate variability (HRV) indices (correlation dimension, sample, and approximate entropy), which arises because HR is the intrinsic sampling rate of the HRV signal. This influence can notably alter nonlinear HRV indices and lead to biased information regarding autonomic nervous system (ANS) modulation. First, a simulation study was carried out to characterize the dependence of nonlinear HRV indices on HR assuming similar ANS modulation. Second, two HR-correction approaches were proposed: one based on regression formulas and another based on interpolating the RR time series. Finally, standard and HR-corrected HRV indices were studied in a body position change database. The simulation study showed the HR-dependence of nonlinear indices as a sampling rate effect, as well as the ability of the proposed HR-corrections to attenuate the influence of mean HR. Analysis in the body position change database shows that correlation dimension was reduced around 21% in median values in standing with respect to supine position (p < 0.05), concomitant with a 28% increase in mean HR (p < 0.05). After HR-correction, correlation dimension decreased around 18% in standing with respect to supine position, with the decrease still being significant. Sample and approximate entropy showed similar trends. HR-corrected nonlinear HRV indices could represent an improvement in their applicability as markers of ANS modulation when mean HR changes.
NASA Astrophysics Data System (ADS)
Han, Keesook J.; Hodge, Matthew; Ross, Virginia W.
2011-06-01
For monitoring network traffic, there is an enormous cost in collecting, storing, and analyzing network traffic datasets. Data-mining-based network traffic analysis is of growing interest to the cyber security community, but is computationally expensive for finding correlations between attributes in massive network traffic datasets. To lower the cost and reduce computational complexity, it is desirable to perform feasible statistical processing on effective reduced datasets instead of on the original full datasets. Because of the dynamic behavior of network traffic, traffic traces exhibit mixtures of heavy-tailed statistical distributions or overdispersion. Heavy-tailed network traffic characterization and visualization are important and essential tasks for measuring network performance for Quality of Service. However, heavy-tailed distributions are limited in their ability to characterize real-time network traffic due to the difficulty of parameter estimation. The Entropy-Based Heavy Tailed Distribution Transformation (EHTDT) was developed to convert the heavy-tailed distribution into a transformed distribution in order to find a linear approximation. The EHTDT linearization has the advantage of being amenable to characterizing and aggregating the overdispersion of network traffic in real time. Results of applying the EHTDT for innovative visual analytics to real network traffic data are presented.
Salient target detection based on pseudo-Wigner-Ville distribution and Rényi entropy.
Xu, Yuannan; Zhao, Yuan; Jin, Chenfei; Qu, Zengfeng; Liu, Liping; Sun, Xiudong
2010-02-15
We present what we believe to be a novel method based on the pseudo-Wigner-Ville distribution (PWVD) and Rényi entropy for salient target detection. Building on a study of the statistical properties of Rényi entropy computed via the PWVD, a residual-entropy-based saliency map of an input image can be obtained. From the saliency map, target detection is completed by simple and convenient threshold segmentation. Experimental results demonstrate that the proposed method can detect targets effectively in complex ground scenes.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis to the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Entropy coders for image compression based on binary forward classification
NASA Astrophysics Data System (ADS)
Yoo, Hoon; Jeong, Jechang
2000-12-01
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
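As an illustration of the Golomb-Rice stage mentioned above (the BFC classification itself is not reproduced, and the residual values below are hypothetical), a minimal Rice coder with power-of-two divisor 2^k writes each nonnegative integer as a unary quotient plus a k-bit remainder:

```python
def rice_encode(values, k):
    """Golomb-Rice encode nonnegative integers with divisor M = 2**k; returns a bit string."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                 # quotient in unary, terminated by a 0
        # remainder as k-bit binary (nothing to write when k == 0)
        bits.append(format(r, "0{}b".format(k)) if k else "")
    return "".join(bits)

def rice_decode(bitstring, k, count):
    """Decode `count` values from a Rice-coded bit string."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":
            q += 1
            i += 1
        i += 1                                     # skip the terminating 0
        r = int(bitstring[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

residuals = [0, 3, 1, 7, 2]          # e.g. prediction residuals after classification (toy data)
code = rice_encode(residuals, k=2)
print(code, rice_decode(code, k=2, count=len(residuals)))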
NASA Astrophysics Data System (ADS)
Yao, Lei; Wang, Zhenpo; Ma, Jun
2015-10-01
This paper proposes an entropy-based method for detecting connection faults in lithium-ion battery packs for electric vehicles. During electric vehicle operation, factors such as road conditions, driving habits and vehicle performance subject the batteries to vibration, which can easily cause loose or intermittent connections between batteries. Voltage fluctuation data were obtained from battery charging and discharging experiments in a vibration environment. Meanwhile, a discrete cosine filtering method is adopted to analyze the characteristics of the system noise, based on voltage records collected while the batteries operate at different vibration frequencies. The filtered experimental data are analyzed using local Shannon entropy, ensemble Shannon entropy and sample entropy, and the most suitable entropy-based approach for detecting connection faults in lithium-ion batteries for electric vehicles is identified. The experimental data show that ensemble Shannon entropy can predict the time and location of battery connection failure in real time. Besides the electric-vehicle industry, this method can also be used in other areas with complex vibration environments.
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression
Weiss, Brandi A.; Dardick, William
2015-01-01
This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression.
Weiss, Brandi A; Dardick, William
2016-12-01
This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data-model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data-model fit to assess how well logistic regression models classify cases into observed categories.
Estimating the Aqueous Solubility of Pharmaceutical Hydrates.
Franklin, Stephen J; Younis, Usir S; Myrdal, Paul B
2016-06-01
Estimation of crystalline solute solubility is well documented throughout the literature. However, the anhydrous crystal form is typically considered in these models, which is not always the most stable crystal form in water. In this study, an equation which predicts the aqueous solubility of a hydrate is presented. This research attempts to extend the utility of the ideal solubility equation by incorporating the desolvation energetics of the hydrated crystal. Similar to the ideal solubility equation, which accounts for the energetics of melting, this model approximates the energy of dehydration by the entropy of vaporization of water. Aqueous solubilities, dehydration and melting temperatures, and log P values were collected experimentally and from the literature. The data set includes different hydrate types and a range of log P values. Three models are evaluated; the most accurate model approximates the entropy of dehydration (ΔSd) by the entropy of vaporization (ΔSvap) of water and utilizes onset dehydration and melting temperatures in combination with log P. With this model, the average absolute error for the prediction of the solubility of 14 compounds was 0.32 log units.
Heat, temperature and Clausius inequality in a model for active Brownian particles
Marconi, Umberto Marini Bettolo; Puglisi, Andrea; Maggi, Claudio
2017-01-01
Methods of stochastic thermodynamics and hydrodynamics are applied to a recently introduced model of active particles. The model consists of an overdamped particle subject to Gaussian coloured noise. Inspired by stochastic thermodynamics, we derive from the system’s Fokker-Planck equation the average exchanges of heat and work with the active bath and the associated entropy production. We show that a Clausius inequality holds, with the local (non-uniform) temperature of the active bath replacing the uniform temperature usually encountered in equilibrium systems. Furthermore, by restricting the dynamical space to the first velocity moments of the local distribution function we derive a hydrodynamic description where local pressure, kinetic temperature and internal heat fluxes appear and are consistent with the previous thermodynamic analysis. The procedure also shows under which conditions one obtains the unified coloured noise approximation (UCNA): such an approximation neglects the fast relaxation to the active bath and therefore yields detailed balance and zero entropy production. In the last part, by using multiple time-scale analysis, we provide a constructive method (alternative to UCNA) to determine the solution of the Kramers equation and go beyond the detailed balance condition determining negative entropy production. PMID:28429787
Heat, temperature and Clausius inequality in a model for active Brownian particles.
Marconi, Umberto Marini Bettolo; Puglisi, Andrea; Maggi, Claudio
2017-04-21
Methods of stochastic thermodynamics and hydrodynamics are applied to a recently introduced model of active particles. The model consists of an overdamped particle subject to Gaussian coloured noise. Inspired by stochastic thermodynamics, we derive from the system's Fokker-Planck equation the average exchanges of heat and work with the active bath and the associated entropy production. We show that a Clausius inequality holds, with the local (non-uniform) temperature of the active bath replacing the uniform temperature usually encountered in equilibrium systems. Furthermore, by restricting the dynamical space to the first velocity moments of the local distribution function we derive a hydrodynamic description where local pressure, kinetic temperature and internal heat fluxes appear and are consistent with the previous thermodynamic analysis. The procedure also shows under which conditions one obtains the unified coloured noise approximation (UCNA): such an approximation neglects the fast relaxation to the active bath and therefore yields detailed balance and zero entropy production. In the last part, by using multiple time-scale analysis, we provide a constructive method (alternative to UCNA) to determine the solution of the Kramers equation and go beyond the detailed balance condition determining negative entropy production.
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
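The abstract does not give the exact form of the test statistic, so the following is only a generic, hypothetical illustration of an entropy-based contrast between case and control haplotype frequencies, not the published test:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (nats) of a frequency distribution given integer counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical haplotype counts at a candidate locus
cases    = [120, 60, 15, 5]     # haplotypes h1..h4 among cases
controls = [80, 80, 30, 10]     # same haplotypes among controls

delta_H = shannon_entropy(cases) - shannon_entropy(controls)
print(delta_H)   # a nonlinear contrast in haplotype frequencies; significance would need
                 # to be calibrated by permutation or asymptotic theory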
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract a target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization is proposed. First, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then, the intermediate variables in the formulae for Arimoto entropy dual-threshold selection were calculated recursively to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. A fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, noticeably accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method can segment targets more quickly and accurately, with a superior segmentation effect. It proves to be a fast and effective method for image segmentation.
A Phase-Locked Loop Epilepsy Network Emulator.
Watson, P D; Horecka, K M; Cohen, N J; Ratnam, R
2016-10-15
Most seizure forecasting employs statistical learning techniques that lack a representation of the network interactions that give rise to seizures. We present an epilepsy network emulator (ENE) that uses a network of interconnected phase-locked loops (PLLs) to model synchronous, circuit-level oscillations between electrocorticography (ECoG) electrodes. Using ECoG data from a canine-epilepsy model (Davis et al. 2011) and a physiological entropy measure (approximate entropy or ApEn, Pincus 1995), we demonstrate that the entropy of the emulator phases increases dramatically during ictal periods across all ECoG recording sites and across all animals in the sample. Further, this increase precedes the observable voltage spikes that characterize seizure activity in the ECoG data. These results suggest that the ENE is sensitive to phase-domain information in the neural circuits measured by ECoG and that an increase in the entropy of this measure coincides with an increasing likelihood of seizure activity. Understanding this unpredictable phase-domain electrical activity present in ECoG recordings may provide a target for seizure detection and feedback control.
Validity of the Stokes-Einstein relation in liquids: simple rules from the excess entropy.
Pasturel, A; Jakse, N
2016-12-07
It is becoming common practice to consider that the Stokes-Einstein relation D/T ∼ η⁻¹ usually works for liquids above their melting temperatures, although there is also experimental evidence for its failure. Here we numerically investigate this commonly invoked assumption for simple liquid metals as well as for their liquid alloys. Using ab initio molecular dynamics simulations we show how entropy scaling relationships developed by Rosenfeld can be used to predict the conditions for the validity of the Stokes-Einstein relation in the liquid phase. Specifically, we demonstrate that the Stokes-Einstein relation may break down in the liquid phase of some liquid alloys, mainly due to the presence of local structural ordering as evidenced in their partial two-body excess entropies. Our findings shed new light on the understanding of transport properties of liquid materials and will trigger more experimental and theoretical studies, since the excess entropy and its two-body approximation are readily obtainable from standard experiments and simulations.
A pairwise maximum entropy model accurately describes resting-state human brain networks
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
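As a toy sketch (not the authors' fitting procedure, which handles larger networks), a pairwise maximum entropy model can be fitted to binarized activity of a few regions by exact enumeration, iteratively matching the empirical means and pairwise co-activations; the data and learning settings below are assumptions:

```python
import numpy as np
from itertools import product

def fit_pairwise_maxent(data, lr=0.1, n_iter=2000):
    """Fit h_i and J_ij of P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j)
    to binarized activity `data` of shape (n_samples, n_units), s in {0, 1}.
    Uses exact enumeration, so keep n_units small (<= ~15)."""
    n = data.shape[1]
    states = np.array(list(product([0, 1], repeat=n)), dtype=float)   # all 2^n states
    emp_mean = data.mean(axis=0)
    emp_corr = (data.T @ data) / data.shape[0]
    h = np.zeros(n)
    J = np.zeros((n, n))
    for _ in range(n_iter):
        energies = states @ h + 0.5 * np.einsum("ki,ij,kj->k", states, J, states)
        p = np.exp(energies - energies.max())
        p /= p.sum()
        model_mean = p @ states
        model_corr = (states * p[:, None]).T @ states
        h += lr * (emp_mean - model_mean)            # match first moments
        dJ = lr * (emp_corr - model_corr)
        np.fill_diagonal(dJ, 0.0)
        J += dJ                                      # match pairwise moments
    return h, J

rng = np.random.default_rng(0)
activity = (rng.random((500, 6)) < 0.3).astype(float)   # toy binarized ROI activity
h, J = fit_pairwise_maxent(activity)
print(h.round(2))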
Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory
NASA Astrophysics Data System (ADS)
Rahimi, A.; Zhang, L.
2012-12-01
Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence between rainfall and runoff needs to be studied. Conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of dependence or fix the form of the marginal distributions. This paper therefore presents an approach to derive an entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, which includes two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) defining the constraints based on the univariate analysis and the dependence structure; and (iii) deriving and validating the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from small agricultural experimental watersheds in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The univariate analysis shows that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow a mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that: (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions reproduce the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution captures dependence structures that conventional bivariate joint distributions cannot.
[Figure: Joint rainfall-runoff entropy-based PDF, with corresponding marginal PDFs and histograms, for the W12 watershed. Table: K-S test results and RMSE for the univariate distributions derived from the maximum entropy-based joint probability distribution.]
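To make the entropy-maximization step concrete in one dimension (the bivariate case with the covariance constraint is analogous but omitted), the maximum entropy density on (0, ∞) under constraints on E[X] and E[ln X] has the gamma form; the sketch below, an illustration rather than the study's code, solves for the gamma parameters that match sample moments:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def maxent_gamma(sample):
    """Maximum entropy density on (0, inf) subject to matching E[X] and E[ln X]:
    p(x) ~ x^(a-1) exp(-x/theta), i.e. a gamma density. Returns (shape a, scale theta)."""
    x = np.asarray(sample, dtype=float)
    mean_x, mean_lnx = x.mean(), np.log(x).mean()
    s = np.log(mean_x) - mean_lnx               # always > 0 by Jensen's inequality
    # solve ln(a) - digamma(a) = s for the shape parameter a
    f = lambda a: digamma(a) - np.log(a) + s
    a = brentq(f, 1e-6, 1e6)
    theta = mean_x / a
    return a, theta

rng = np.random.default_rng(0)
runoff = rng.gamma(shape=1.8, scale=3.0, size=2000)   # synthetic runoff-like data
print(maxent_gamma(runoff))                            # should be roughly (1.8, 3.0)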
NASA Astrophysics Data System (ADS)
Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling
2017-05-01
To monitor the operating status of rolling bearings with casings efficiently and accurately in real time, a fusion method based on the n-dimensional characteristic parameter distance (n-DCPD) was proposed for rolling bearing fault diagnosis using two types of signals, vibration and acoustic emission. The n-DCPD was investigated based on four information entropies (singular spectrum entropy in the time domain, power spectrum entropy in the frequency domain, and wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in the time-frequency domain), and the basic idea of the fusion information entropy fault diagnosis method with n-DCPD is given. Using a rotor simulation test rig, the vibration and acoustic emission signals of six rolling bearing conditions (ball fault, inner race fault, outer race fault, inner-ball faults, inner-outer faults and normal) were collected under different operating conditions, with emphasis on rotation speeds from 800 rpm to 2000 rpm. Using the proposed fusion information entropy method with n-DCPD, the diagnosis of rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method achieves high precision in recognizing rolling bearing faults. This study provides a novel and useful methodology for the fault diagnosis of aeroengine rolling bearings.
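As one concrete ingredient, the frequency-domain member of the four information entropies (power spectrum entropy) can be sketched as follows (a generic illustration with synthetic signals, not the authors' implementation):

```python
import numpy as np

def power_spectrum_entropy(x):
    """Shannon entropy of the normalized power spectrum of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 2000.0)
healthy = np.sin(2 * np.pi * 97 * t) + 0.05 * rng.standard_normal(t.size)
faulty = healthy + 0.5 * rng.standard_normal(t.size)   # broadband noise from a hypothetical defect
print(power_spectrum_entropy(healthy), power_spectrum_entropy(faulty))  # faulty signal has higher entropy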
Entropy Based Genetic Association Tests and Gene-Gene Interaction Tests
de Andrade, Mariza; Wang, Xin
2011-01-01
In the past few years, several entropy-based tests have been proposed for testing either single-SNP association or gene-gene interaction. These tests are mainly based on Shannon entropy and have higher statistical power when compared to standard χ2 tests. In this paper, we extend some of these tests using a more generalized entropy definition, Rényi entropy, of which Shannon entropy is the special case of order 1. The order λ (>0) of Rényi entropy weights the events (genotypes/haplotypes) according to their probabilities (frequencies). Higher λ places more emphasis on higher-probability events, while smaller λ (close to 0) tends to assign weights more equally. Thus, by properly choosing λ, one can potentially increase the power of the tests or the p-value level of significance. We conducted simulation as well as real data analyses to assess the impact of the order λ and the performance of these generalized tests. The results showed that for the dominant model the order 2 test was more powerful, and for the multiplicative model orders 1 and 2 had similar power. The analyses indicate that the choice of λ depends on the underlying genetic model and that Shannon entropy is not necessarily the most powerful entropy measure for constructing genetic association or interaction tests. PMID:23089811
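For reference, the Rényi entropy of order λ reduces to Shannon entropy as λ → 1; a small sketch (the genotype frequencies used here are hypothetical):

```python
import numpy as np

def renyi_entropy(p, lam):
    """Rényi entropy of order lam (> 0) of a probability vector p;
    approaches Shannon entropy as lam -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(lam - 1.0) < 1e-8:
        return float(-(p * np.log(p)).sum())          # Shannon limit
    return float(np.log((p ** lam).sum()) / (1.0 - lam))

genotype_freqs = [0.49, 0.42, 0.09]                   # e.g. AA, Aa, aa frequencies
for lam in (0.5, 1.0, 2.0):
    print(lam, renyi_entropy(genotype_freqs, lam))    # higher lam emphasizes the common genotypes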
Studies on pressure-gain combustion engines
NASA Astrophysics Data System (ADS)
Matsutomi, Yu
Various aspects of the pressure-gain combustion engine are investigated analytically and experimentally in the current study. A lumped parameter model is developed to characterize the operation of a valveless pulse detonation engine. The model identified how the flame quenching process operates through the gas dynamic process. By adjusting fuel manifold pressure and geometries, the duration of the air buffer can be effectively varied. The parametric study with the lumped parameter model has shown that an engine frequency of up to approximately 15 Hz is attainable. However, the required upstream air pressure increases significantly with higher engine frequency. The higher pressure requirement indicates pressure loss in the system and lower overall engine performance. The loss of performance due to the pressure loss is a critical issue for integrated pressure-gain combustors. Two types of transitional methods are examined using entropy-based models. An accumulator-based transition has an obvious loss due to sudden area expansion, but it can be minimized by utilizing the gas dynamics in the combustion tube. An ejector-type transition has the potential to achieve performance beyond the limit specified by a single-flow-path Humphrey cycle. The performance of an ejector was discussed in terms of apparent entropy and mixed-flow entropy. Through an ideal ejector, the apparent part of the entropy increases due to the reduction in flow unsteadiness, but the entropy of the mixed flow remains constant. The method is applied to a CFD simulation with a simple manifold for qualitative evaluation. The operation of the wave rotor constant volume combustion rig is experimentally examined. The rig has shown versatility of operation for a wide range of conditions. Large pressure rises in the rotor channel and in a section of the exhaust duct are observed even with relatively large leakage gaps on the rotor. The simplified analysis indicated that inconsistent combustion is likely due to insufficient fuel near the ignition source. However, it is difficult to draw conclusions about the fuel distribution with the current setup. Additional measurements near the rotor interfaces and better fuel control are required for future tests.
NASA Astrophysics Data System (ADS)
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2015-11-01
In economics and the social sciences, inequality measures such as the Gini index and the Pietra index are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income. The most widely known of these models are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their relative performance with respect to the generalized beta of the second kind family is compared.
Conformational Entropy of Intrinsically Disordered Proteins from Amino Acid Triads
Baruah, Anupaul; Rani, Pooja; Biswas, Parbati
2015-01-01
This work quantitatively characterizes intrinsic disorder in proteins in terms of sequence composition and backbone conformational entropy. Analysis of the normalized relative composition of the amino acid triads highlights a distinct boundary between globular and disordered proteins. The conformational entropy is calculated from the dihedral angles of the middle amino acid in the amino acid triad for the conformational ensemble of the globular, partially and completely disordered proteins relative to the non-redundant database. Both Monte Carlo (MC) and Molecular Dynamics (MD) simulations are used to characterize the conformational ensemble of the representative proteins of each group. The results show that the globular proteins span approximately half of the allowed conformational states in the Ramachandran space, while the amino acid triads in disordered proteins sample the entire range of the allowed dihedral angle space following Flory’s isolated-pair hypothesis. Therefore, only the sequence information in terms of the relative amino acid triad composition may be sufficient to predict protein disorder and the backbone conformational entropy, even in the absence of well-defined structure. The predicted entropies are found to agree with those calculated using mutual information expansion and the histogram method. PMID:26138206
The predictive power of singular value decomposition entropy for stock market dynamics
NASA Astrophysics Data System (ADS)
Caraiani, Petre
2014-01-01
We use a correlation-based approach to analyze financial data from the US stock market, both daily and monthly observations from the Dow Jones. We compute the entropy based on the singular value decomposition of the correlation matrix for the components of the Dow Jones Industrial Index. Based on a moving window, we derive time varying measures of entropy for both daily and monthly data. We find that the entropy has a predictive ability with respect to stock market dynamics as indicated by the Granger causality tests.
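A minimal sketch of the moving-window SVD entropy described above (normalization conventions vary across studies, and the window length and toy return data here are assumptions rather than the paper's settings):

```python
import numpy as np

def svd_entropy(returns_window):
    """Entropy of the normalized singular values of the correlation matrix of a
    (window_length, n_assets) block of returns."""
    corr = np.corrcoef(returns_window, rowvar=False)
    s = np.linalg.svd(corr, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
n_days, n_assets, window = 500, 30, 60
returns = rng.standard_normal((n_days, n_assets)) * 0.01      # toy daily returns

entropy_series = [svd_entropy(returns[t - window:t]) for t in range(window, n_days)]
print(entropy_series[:5])   # lower values indicate a more concentrated (highly correlated) market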
Hunt, Brian R; Ott, Edward
2015-09-01
In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call "expansion entropy," and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.
Entropy information of heart rate variability and its power spectrum during day and night
NASA Astrophysics Data System (ADS)
Jin, Li; Jun, Wang
2013-07-01
Physiologic systems generate complex fluctuations in their output signals that reflect the underlying dynamics. We employed the base-scale entropy method and power spectral analysis to study 24-hour heart rate variability (HRV) signals. The results show that profound circadian-, age- and pathology-dependent changes are accompanied by changes in base-scale entropy and in the power spectral distribution. Moreover, the base-scale entropy changes reflect the corresponding changes in autonomic nerve outflow. With the suppression of vagal tone and the dominance of sympathetic tone in congestive heart failure (CHF) subjects, there is more variability in the data fluctuation mode, so the higher base-scale entropy belongs to the CHF subjects. With the decrease of sympathetic tone, and with respiratory sinus arrhythmia (RSA) becoming more pronounced during the slower breathing of sleep, the base-scale entropy drops in CHF subjects. The HRV series of the two healthy groups have the same diurnal/nocturnal trend as the CHF series. The fluctuation dynamics trend of the data in the three groups can be described as an "HF effect".
Throat quantization of the Schwarzschild-Tangherlini(-AdS) black hole
NASA Astrophysics Data System (ADS)
Maeda, Hideki
2018-01-01
By the throat quantization pioneered by Louko and Mäkelä, we derive the mass and area/entropy spectra for the Schwarzschild-Tangherlini-type asymptotically flat or AdS vacuum black hole in arbitrary dimensions. Using the WKB approximation for black holes with large mass, we show that area/entropy is equally spaced for asymptotically flat black holes, while mass is equally spaced for asymptotically AdS black holes. Exact spectra can be obtained for toroidal AdS black holes in arbitrary dimensions including the three-dimensional BTZ black hole.
Exact conditions on the temperature dependence of density functionals
Burke, K.; Smith, J. C.; Grabowski, P. E.; ...
2016-05-15
Universal exact conditions guided the construction of most ground-state density functional approximations in use today. Here, we derive the relation between the entropy and Mermin free energy density functionals for thermal density functional theory. Both the entropy and sum of kinetic and electron-electron repulsion functionals are shown to be monotonically increasing with temperature, while the Mermin functional is concave downwards. Analogous relations are found for both exchange and correlation. The importance of these conditions is illustrated in two extremes: the Hubbard dimer and the uniform gas.
Entropy in molecular recognition by proteins
Caro, José A.; Harpole, Kyle W.; Kasinath, Vignesh; Lim, Jackwee; Granja, Jeffrey; Valentine, Kathleen G.; Sharp, Kim A.
2017-01-01
Molecular recognition by proteins is fundamental to molecular biology. Dissection of the thermodynamic energy terms governing protein–ligand interactions has proven difficult, with determination of entropic contributions being particularly elusive. NMR relaxation measurements have suggested that changes in protein conformational entropy can be quantitatively obtained through a dynamical proxy, but the generality of this relationship has not been shown. Twenty-eight protein–ligand complexes are used to show a quantitative relationship between measures of fast side-chain motion and the underlying conformational entropy. We find that the contribution of conformational entropy can range from favorable to unfavorable, which demonstrates the potential of this thermodynamic variable to modulate protein–ligand interactions. For about one-quarter of these complexes, the absence of conformational entropy would render the resulting affinity biologically meaningless. The dynamical proxy for conformational entropy or “entropy meter” also allows for refinement of the contributions of solvent entropy and the loss in rotational-translational entropy accompanying formation of high-affinity complexes. Furthermore, structure-based application of the approach can also provide insight into long-lived specific water–protein interactions that escape the generic treatments of solvent entropy based simply on changes in accessible surface area. These results provide a comprehensive and unified view of the general role of entropy in high-affinity molecular recognition by proteins. PMID:28584100
Entropy in molecular recognition by proteins.
Caro, José A; Harpole, Kyle W; Kasinath, Vignesh; Lim, Jackwee; Granja, Jeffrey; Valentine, Kathleen G; Sharp, Kim A; Wand, A Joshua
2017-06-20
Molecular recognition by proteins is fundamental to molecular biology. Dissection of the thermodynamic energy terms governing protein-ligand interactions has proven difficult, with determination of entropic contributions being particularly elusive. NMR relaxation measurements have suggested that changes in protein conformational entropy can be quantitatively obtained through a dynamical proxy, but the generality of this relationship has not been shown. Twenty-eight protein-ligand complexes are used to show a quantitative relationship between measures of fast side-chain motion and the underlying conformational entropy. We find that the contribution of conformational entropy can range from favorable to unfavorable, which demonstrates the potential of this thermodynamic variable to modulate protein-ligand interactions. For about one-quarter of these complexes, the absence of conformational entropy would render the resulting affinity biologically meaningless. The dynamical proxy for conformational entropy or "entropy meter" also allows for refinement of the contributions of solvent entropy and the loss in rotational-translational entropy accompanying formation of high-affinity complexes. Furthermore, structure-based application of the approach can also provide insight into long-lived specific water-protein interactions that escape the generic treatments of solvent entropy based simply on changes in accessible surface area. These results provide a comprehensive and unified view of the general role of entropy in high-affinity molecular recognition by proteins.
Information-Based Analysis of Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.
2013-12-01
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
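A minimal illustration of the entropy bookkeeping described above, using discretized (histogram) distributions; the discretization and the synthetic state/observation pair are illustrative assumptions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discretized probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=20):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) between discretized samples of model
    states x and observations y; per the abstract, this is the upper bound on
    the entropy reduction any filter can achieve."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

# Synthetic example: observation = state + noise
rng = np.random.default_rng(2)
state = rng.normal(size=50000)
obs = state + 0.5 * rng.normal(size=50000)
print(mutual_information(state, obs))   # potential information gain (bits)
```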
NASA Astrophysics Data System (ADS)
Brustein, R.
I review some basic facts about entropy bounds in general and about cosmological entropy bounds. Then I review the causal entropy bound, the conditions for its validity and its application to the study of cosmological singularities. This article is based on joint work with Gabriele Veneziano and subsequent related research.
NASA Astrophysics Data System (ADS)
Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin
2017-12-01
Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibration and verification of hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, with the use of three model evaluation measures, including the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.
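For reference, the identity that (as I read the cited methodology) underlies copula entropy-based mutual information estimation is that mutual information equals the negative entropy of the copula density; the multivariate directional-information-transfer extension used in the paper is not reproduced here:

```latex
I(X;Y) \;=\; -H_c(U,V) \;=\; \int_{[0,1]^2} c(u,v)\,\ln c(u,v)\,du\,dv,
\qquad u = F_X(x),\; v = F_Y(y),
```

so estimating MI reduces to estimating the entropy of the rank-transformed (copula) sample, which avoids the binning choices of the joint histogram approach.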
Max Planck and the birth of the quantum hypothesis
NASA Astrophysics Data System (ADS)
Nauenberg, Michael
2016-09-01
Based on the functional dependence of entropy on energy, and on Wien's distribution for black-body radiation, Max Planck obtained a formula for this radiation by an interpolation relation that fitted the experimental measurements of thermal radiation at the Physikalisch-Technische Reichsanstalt (PTR) in Berlin in the late 19th century. Surprisingly, his purely phenomenological result turned out to be not just an approximation, as would have been expected, but an exact relation. To obtain a physical interpretation for his formula, Planck then turned to Boltzmann's 1877 paper on the statistical interpretation of entropy, which led him to introduce the fundamental concept of energy discreteness into physics. A novel aspect of our account, which has been missed in previous historical studies of Planck's discovery, is to show that Planck could have found his phenomenological formula partially derived in Boltzmann's paper in terms of a variational parameter. But the dependence of this parameter on temperature is not contained in that paper, and it was first derived by Planck.
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292
Design of high-strength refractory complex solid-solution alloys
Singh, Prashant; Sharma, Aayush; Smirnov, A. V.; ...
2018-03-28
Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for higher temperature strength and corrosion resistance. Yet, complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over a five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations, validated from our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K over commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.
Design of high-strength refractory complex solid-solution alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Prashant; Sharma, Aayush; Smirnov, A. V.
Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for higher temperature strength and corrosion resistance. Yet, complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over a five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations, validated from our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K over commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.
NASA Technical Reports Server (NTRS)
Fessler, T. E.
1977-01-01
A computer program subroutine, FLUID, was developed to calculate thermodynamic and transport properties of pure fluid substances. It provides for determining the thermodynamic state from assigned values for temperature-density, pressure-density, temperature-pressure, pressure-entropy, or pressure-enthalpy. Liquid or two-phase (liquid-gas) conditions are considered as well as the gas phase. A van der Waals model is used to obtain approximate state values; these values are then corrected for real gas effects by model-correction factors obtained from tables based on experimental data. Saturation conditions, specific heat, entropy, and enthalpy data are included in the tables for each gas. Since these tables are external to the FLUID subroutine itself, FLUID can implement any gas for which a set of tables has been generated. (A setup phase is used to establish pointers dynamically to the tables for a specific gas.) Data-table preparation is described. FLUID is available in both SFTRAN and FORTRAN.
Mittal, Jeetain; Errington, Jeffrey R; Truskett, Thomas M
2007-08-30
Static measures such as density and entropy, which are intimately connected to structure, have featured prominently in modern thinking about the dynamics of the liquid state. Here, we explore the connections between self-diffusivity, density, and excess entropy for two of the most widely used model "simple" liquids, the equilibrium Lennard-Jones and square-well fluids, in both bulk and confined environments. We find that the self-diffusivity data of the Lennard-Jones fluid can be approximately collapsed onto a single curve (i) versus effective packing fraction and (ii) in appropriately reduced form versus excess entropy, as suggested by two well-known scaling laws. Similar data collapse does not occur for the square-well fluid, a fact that can be understood on the basis of the nontrivial effects that temperature has on its static structure. Nonetheless, we show that the implications of confinement for the self-diffusivity of both of these model fluids, over a broad range of equilibrium conditions, can be predicted on the basis of knowledge of the bulk fluid behavior and either the effective packing fraction or the excess entropy of the confined fluid. Excess entropy is perhaps the most preferable route due to its superior predictive ability and because it is a standard, unambiguous thermodynamic quantity that can be readily predicted via classical density functional theories of inhomogeneous fluids.
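For orientation, the two well-known scaling laws referred to above are usually written as follows (a sketch of the standard Rosenfeld form; the prefactors are empirical constants, and the Dzugutov variant instead reduces the diffusivity by a structure-derived collision frequency):

```latex
D^{*} \;=\; D\,\frac{\rho^{1/3}}{\sqrt{k_B T / m}} \;\approx\; A\,\exp\!\big(B\, s_{\mathrm{ex}}\big),
\qquad
s_{\mathrm{ex}} \;=\; \frac{S - S_{\mathrm{ideal}}}{N k_B} \;\le\; 0,
```

so that the reduced self-diffusivity of different state points collapses onto a single curve when plotted against the excess entropy per particle.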
Gender-specific heart rate dynamics in severe intrauterine growth-restricted fetuses.
Gonçalves, Hernâni; Bernardes, João; Ayres-de-Campos, Diogo
2013-06-01
Management of intrauterine growth restriction (IUGR) remains a major issue in perinatology. The objective of this paper was the assessment of gender-specific fetal heart rate (FHR) dynamics as a diagnostic tool in severe IUGR. FHR was analyzed in the antepartum period in 15 severe IUGR fetuses and 18 controls, matched for gestational age, in relation to fetal gender. Linear and entropy measures were computed, including mean FHR (mFHR), low- (LF), high- (HF) and movement-frequency (MF) components, and approximate, sample and multiscale entropy. Sensitivities and specificities were estimated using Fisher linear discriminant analysis and the leave-one-out method. Overall, IUGR fetuses presented significantly lower mFHR and entropy compared with controls. However, gender-specific analysis showed that significantly lower mFHR was only evident in IUGR males and lower entropy in IUGR females. In addition, lower LF/(MF+HF) was patent in IUGR females compared with controls, but not in males. Rather high sensitivities and specificities were achieved in the detection of the FHR recordings related to IUGR male fetuses when gender-specific analysis was performed at gestational ages of less than 34 weeks. Severe IUGR fetuses present gender-specific linear and entropy FHR changes compared with controls, characterized by significantly lower entropy and sympathetic-vagal balance in females than in males. These findings need to be considered in order to achieve better diagnostic results.
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering its validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets, and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets having high asynchrony, the entropy values do not decrease with increasing scale factor; sometimes they tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
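For context, a minimal sketch of the conventional MSE baseline that MSEBSS modifies (coarse-graining followed by sample entropy); the symbolic-representation-and-similarity step of the proposed method is not reproduced, and m, r are illustrative parameter choices:

```python
import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping averages of length tau (the MSE coarse-graining step)."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.15):
    """Standard sample entropy SampEn(m, r) with Chebyshev distance and
    tolerance r = r_frac * std(x). Returns inf if no matches are found,
    which is the 'undefined entropy' problem MSEBSS aims to reduce."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templates)   # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_tau=10):
    """Conventional MSE curve: SampEn of the coarse-grained series per scale."""
    return [sample_entropy(coarse_grain(x, tau)) for tau in range(1, max_tau + 1)]

print(multiscale_entropy(np.random.default_rng(3).normal(size=1000), max_tau=5))
```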
Excess entropy scaling for the segmental and global dynamics of polyethylene melts.
Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C
2014-11-28
The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and the torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the self-associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contrasts with physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantified the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; however, MSE works well for long signals. To overcome this drawback of the original MSE, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we have proposed multiscale normalized corrected Shannon entropy (MNCSE), in which, instead of sample entropy, the symbolic entropy measure NCSE is used as the entropy estimate. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features. PMID:29771977
Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantified the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; however, MSE works well for long signals. To overcome this drawback of the original MSE, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we have proposed multiscale normalized corrected Shannon entropy (MNCSE), in which, instead of sample entropy, the symbolic entropy measure NCSE is used as the entropy estimate. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE-based features lead to higher classification accuracies in comparison with MSE-based features.
Entropy-based link prediction in weighted networks
NASA Astrophysics Data System (ADS)
Xu, Zhongqi; Pu, Cunlai; Ramiz Sharafat, Rajput; Li, Lunbo; Yang, Jian
2017-01-01
Information entropy has been proved to be an effective tool to quantify the structural importance of complex networks. In previous work (Xu et al., 2016), we measured the contribution of a path in link prediction with information entropy. In this paper, we further quantify the contribution of a path with both path entropy and path weight, and propose a weighted prediction index based on the contributions of paths, namely Weighted Path Entropy (WPE), to improve the prediction accuracy in weighted networks. Empirical experiments on six weighted real-world networks show that WPE achieves higher prediction accuracy than three typical weighted indices.
Entropy Is Simple, Qualitatively.
ERIC Educational Resources Information Center
Lambert, Frank L.
2002-01-01
Suggests that qualitatively, entropy is simple. Entropy increase from a macro viewpoint is a measure of the dispersal of energy from localized to spread out at a temperature T. Fundamentally based on statistical and quantum mechanics, this approach is superior to the non-fundamental "disorder" as a descriptor of entropy change. (MM)
NASA Astrophysics Data System (ADS)
Kabeel, A. E.; Abdelgaied, Mohamed
2016-08-01
Nano-fluids are used to improve heat transfer rates in heat exchangers, especially the shell-and-tube heat exchanger, which is considered one of the most important types of heat exchanger. In the present study, an experimental loop is constructed to study the thermal characteristics of the shell-and-tube heat exchanger at different concentrations of Al2O3 nonmetallic particles (0.0, 2, 4, and 6 %); these are volume concentrations in pure water as the base fluid. The effects of nano-fluid concentration on the performance of the shell-and-tube heat exchanger have been evaluated in terms of the overall heat transfer coefficient, the friction factor, the tube-side pressure drop, and the entropy generation rate. The experimental results show that the highest heat transfer coefficient is obtained at a nano-fluid concentration of 4 % on the shell side. On the shell side, the maximum percentage increase in the overall heat transfer coefficient reached 29.8 % for a nano-fluid concentration of 4 %, relative to the case of the base fluid (water) at the same tube-side Reynolds number. However, on the tube side the maximum relative increases in pressure drop were 12, 28 and 48 % for nano-material concentrations of 2, 4 and 6 %, respectively, relative to the case without nano-fluid, at an approximate Reynolds number of 56,000. The entropy generation decreases with increasing nonmetallic particle volume fraction at the same flow rates; increasing the particle volume fraction from 0.0 to 6 % decreases the rate of entropy generation by 10 %.
Approximate entropy analysis of event-related potentials in patients with early vascular dementia.
Xu, Jin; Sheng, Hengsong; Lou, Wutao; Zhao, Songzhen
2012-06-01
This study investigated differences in event-related potential (ERP) parameters among early vascular dementia (VD) patients, healthy elder controls (ECs), and young controls (YCs). A visual "oddball" color identification task was performed while individuals' electroencephalograms (EEGs) were recorded. Approximate entropy (ApEn), a nonlinear measure, along with P300 latencies and amplitudes were used to analyze ERP data and compare these three groups. The patients with VD showed more complex ERP waveforms and higher ApEn values than did ECs while performing the visual task. It was further found that patients with VD showed reduced P300 amplitudes and increased latencies. The results indicate that patients with VD have fewer attention resources to devote to processing stimuli, lower speed of stimulus classification, and lower synchrony in their cortical activity during the response period. We suggest that ApEn, as a measure of ERP complexity, is a promising marker for early diagnosis of VD.
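A minimal sketch of the approximate entropy statistic used above, assuming a single-channel signal; the embedding dimension and tolerance are illustrative defaults rather than the values used in the study:

```python
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal, with tolerance
    r = r_frac * std(x) and Chebyshev distance between template vectors."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def phi(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-matches included, as in ApEn)
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

# Example: white noise yields a higher ApEn than a regular sine of the same length
rng = np.random.default_rng(4)
t = np.linspace(0, 10, 1000)
print(approximate_entropy(np.sin(2 * np.pi * t)),
      approximate_entropy(rng.normal(size=1000)))
```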
García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan
2009-02-01
An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz
2014-01-01
In this paper, differences between multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of some paintings and at resting condition using approximate entropy (ApEn). It was found that ApEn is significantly higher for artists during the visual perception and the mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases for the two groups during the visual perception due to increasing mental load; however, their variation patterns are different. This difference may be used for measuring progress in novice artists. In addition, it was found that ApEn is significantly lower during the visual perception than the mental imagery in some of the channels, suggesting that visual perception task requires more cerebral efforts.
Quantum darwinism in a mixed environment.
Zwolak, Michael; Quan, H T; Zurek, Wojciech H
2009-09-11
Quantum Darwinism recognizes that we, the observers, acquire our information about the "systems of interest" indirectly from their imprints on the environment. Here, we show that information about a system can be acquired from a mixed-state, or hazy, environment, but the storage capacity of an environment fragment is suppressed by its initial entropy. In the case of good decoherence, the mutual information between the system and the fragment is given solely by the fragment's entropy increase. For fairly mixed environments, this means a reduction by a factor 1-h, where h is the haziness of the environment, i.e., the initial entropy of an environment qubit. Thus, even such hazy environments eventually reveal the state of the system, although now the intercepted environment fragment must be larger by a factor of approximately (1-h)^{-1} to gain the same information about the system.
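Schematically, the quantity at issue is the quantum mutual information between the system S and an environment fragment F, which in the good-decoherence regime described above reduces to the fragment's entropy increase (a restatement of the abstract's claim, not a derivation):

```latex
I(S\!:\!F) \;=\; H_S + H_F - H_{SF}
\;\longrightarrow\; \Delta H_F \quad\text{(good decoherence)},
```

and for a hazy environment of qubits with initial entropy h per qubit, the information gained per intercepted qubit is suppressed roughly by the factor (1-h), hence the fragment size must grow by about (1-h)^{-1}.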
On the asymptotic behavior of a subcritical convection-diffusion equation with nonlocal diffusion
NASA Astrophysics Data System (ADS)
Cazacu, Cristian M.; Ignat, Liviu I.; Pazoto, Ademir F.
2017-08-01
In this paper we consider a subcritical model that involves nonlocal diffusion and a classical convective term. In spite of the nonlocal diffusion, we obtain an Oleinik type estimate similar to the case when the diffusion is local. First we prove that the entropy solution can be obtained by adding a small viscous term μ u_{xx} and letting μ → 0. Then, by using uniform Oleinik estimates for the viscous approximation we are able to prove the well-posedness of the entropy solutions with L^1 initial data. Using a scaling argument and hyperbolic estimates given by Oleinik's inequality, we obtain the first term in the asymptotic behavior of the nonnegative solutions. Finally, the large time behavior of changing sign solutions is proved using the classical flux-entropy method and estimates for the nonlocal operator.
The Matter-Gravity Entanglement Hypothesis
NASA Astrophysics Data System (ADS)
Kay, Bernard S.
2018-03-01
I outline some of my work and results (some dating back to 1998, some more recent) on my matter-gravity entanglement hypothesis, according to which the entropy of a closed quantum gravitational system is equal to the system's matter-gravity entanglement entropy. The main arguments presented are: (1) that this hypothesis is capable of resolving what I call the second-law puzzle, i.e. the puzzle as to how the entropy increase of a closed system can be reconciled with the assumption of unitary time-evolution; (2) that the black hole information loss puzzle may be regarded as a special case of this second law puzzle and that therefore the same resolution applies to it; (3) that the black hole thermal atmosphere puzzle (which I recall) can be resolved by adopting a radically different-from-usual description of quantum black hole equilibrium states, according to which they are total pure states, entangled between matter and gravity in such a way that the partial states of matter and gravity are each approximately thermal equilibrium states (at the Hawking temperature); (4) that the Susskind-Horowitz-Polchinski string-theoretic understanding of black hole entropy as the logarithm of the degeneracy of a long string (which is the weak string coupling limit of a black hole) cannot be quite correct but should be replaced by a modified understanding according to which it is the entanglement entropy between a long string and its stringy atmosphere, when in a total pure equilibrium state in a suitable box, which (in line with (3)) goes over, at strong-coupling, to a black hole in equilibrium with its thermal atmosphere. The modified understanding in (4) is based on a general result, which I also describe, which concerns the likely state of a quantum system when it is weakly coupled to an energy-bath and the total state is a random pure state with a given energy. This result generalizes Goldstein et al.'s 'canonical typicality' result to systems which are not necessarily small.
The Matter-Gravity Entanglement Hypothesis
NASA Astrophysics Data System (ADS)
Kay, Bernard S.
2018-05-01
I outline some of my work and results (some dating back to 1998, some more recent) on my matter-gravity entanglement hypothesis, according to which the entropy of a closed quantum gravitational system is equal to the system's matter-gravity entanglement entropy. The main arguments presented are: (1) that this hypothesis is capable of resolving what I call the second-law puzzle, i.e. the puzzle as to how the entropy increase of a closed system can be reconciled with the assumption of unitary time-evolution; (2) that the black hole information loss puzzle may be regarded as a special case of this second law puzzle and that therefore the same resolution applies to it; (3) that the black hole thermal atmosphere puzzle (which I recall) can be resolved by adopting a radically different-from-usual description of quantum black hole equilibrium states, according to which they are total pure states, entangled between matter and gravity in such a way that the partial states of matter and gravity are each approximately thermal equilibrium states (at the Hawking temperature); (4) that the Susskind-Horowitz-Polchinski string-theoretic understanding of black hole entropy as the logarithm of the degeneracy of a long string (which is the weak string coupling limit of a black hole) cannot be quite correct but should be replaced by a modified understanding according to which it is the entanglement entropy between a long string and its stringy atmosphere, when in a total pure equilibrium state in a suitable box, which (in line with (3)) goes over, at strong-coupling, to a black hole in equilibrium with its thermal atmosphere. The modified understanding in (4) is based on a general result, which I also describe, which concerns the likely state of a quantum system when it is weakly coupled to an energy-bath and the total state is a random pure state with a given energy. This result generalizes Goldstein et al.'s 'canonical typicality' result to systems which are not necessarily small.
Distribution entropy analysis of epileptic EEG signals.
Li, Peng; Yan, Chang; Karmakar, Chandan; Liu, Changchun
2015-01-01
It is an open-ended challenge to accurately detect epileptic seizures from electroencephalogram (EEG) signals. Recently published studies have made elaborate attempts to distinguish between normal and epileptic EEG signals with advanced nonlinear entropy methods, such as approximate entropy, sample entropy, fuzzy entropy, and permutation entropy. Most recently, a novel distribution entropy (DistEn) has been reported to have superior performance compared with the conventional entropy methods, especially for short data. We thus aimed, in the present study, to show the potential of DistEn in the analysis of epileptic EEG signals. The publicly accessible Bonn database, which consists of normal, interictal, and ictal EEG signals, was used in this study. Three different measurement protocols were set for better understanding the performance of DistEn: i) calculate the DistEn of a specific EEG signal using the full recording; ii) calculate the DistEn by averaging the results for all its possible non-overlapped 5 second segments; and iii) calculate it by averaging the DistEn values for all possible non-overlapped segments of 1 second length. Results for all three protocols indicated a statistically significantly increased DistEn for the ictal class compared with both the normal and interictal classes. Besides, the results obtained under the third protocol, which used only very short segments (1 s) of EEG recordings, showed a significantly (p < 0.05) increased DistEn for the interictal class in comparison with the normal class, whereas both analyses using relatively long EEG signals failed to track this difference between them, which may be due to a nonstationarity effect on the entropy algorithm. The capability of discriminating between normal and interictal EEG signals is of great clinical relevance since it may provide helpful tools for the detection of seizure onset. Therefore, our study suggests that DistEn analysis of EEG signals is very promising for clinical and even portable EEG monitoring.
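A minimal sketch of the distribution entropy estimator as described in the DistEn literature cited above (the embedding dimension m and bin count M are illustrative choices, not the values used in this study):

```python
import numpy as np

def distribution_entropy(x, m=2, M=512):
    """DistEn: normalized Shannon entropy of the histogram (M bins) of all
    pairwise Chebyshev distances between m-dimensional embedding vectors."""
    x = np.asarray(x, dtype=float)
    vecs = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
    d = d[np.triu_indices_from(d, k=1)]           # unique pairs only
    hist, _ = np.histogram(d, bins=M)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)   # normalized to [0, 1]

rng = np.random.default_rng(5)
print(distribution_entropy(rng.normal(size=1000)))
```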
Study on corrosion resistance of high - entropy alloy in medium acid liquid and chemical properties
NASA Astrophysics Data System (ADS)
Florea, I.; Buluc, G.; Florea, R. M.; Soare, V.; Carcea, I.
2015-11-01
High-entropy alloys are a new class of alloy that differs from traditional alloys. Work on high entropy alloys was initiated at Tsing Hua University in Taiwan in 1995 by Yeh et al. They consist of a variety of elements, each element occupying a similar proportion compared with the other alloying elements, so as to form a high entropy. High entropy alloys can be defined as alloys of approximately equal concentrations made up of a group of 5 to 11 major elements; in general, the content of each element does not exceed 35% by weight of the alloy. Investigations have shown that these alloys have high hardness, corrosion resistance, strength and good thermal stability. Experimentally, scientists have used different routes, including traditional casting, mechanical alloying, sputtering and splat-quenching, to obtain high entropy alloys with different alloying elements and then to investigate the corresponding microstructures and mechanical, chemical, thermal, and electronic performance. The present study aims to investigate the corrosion resistance in different acid media and to characterize the mechanical properties. Because of the wide composition range and the enormous number of alloy systems, the mechanical properties of high entropy alloys can vary significantly. In terms of hardness, the most critical factors are the hardness/strength of each composing phase in the alloy and the distribution of the composing phases. The corrosion resistance of a high entropy alloy was evaluated in acid solutions such as 10%HNO3-3%HF, 10%H2SO4 and 5%HCl, and investigated with weight loss experiments. The weight loss test was carried out by immersing the samples in the acid solution for corrosion, with the solution maintained at a constant room temperature. The liquid formulations used for the tests were 3% hydrofluoric acid with 10% nitric acid, 10% sulphuric acid, and 5% hydrochloric acid. The weight loss of the samples was measured with an electronic scale.
Jin, Ke; Sales, Brian C.; Stocks, George Malcolm; ...
2016-02-01
Equiatomic alloys (e.g. high entropy alloys) have recently attracted considerable interest due to their exceptional properties, which might be closely related to the extreme disorder induced by their chemical complexity. To understand the effects of chemical complexity on their fundamental physical properties, a family of (eight) Ni-based, face-centered-cubic (FCC), equiatomic alloys, extending from elemental Ni to quinary high entropy alloys, has been synthesized, and their electrical, thermal, and magnetic properties are systematically investigated in the range of 4–300 K by combining experiments with ab initio Korringa-Kohn-Rostoker coherent-potential-approximation (KKR-CPA) calculations. The scattering of electrons is significantly increased due to the chemical (especially magnetic) disorder. It has weak correlation with the number of elements but strongly depends on the type of elements. Thermal conductivities of the alloys are much lower than those of pure metals, primarily because the high electrical resistivity suppresses the electronic thermal conductivity. Moreover, the temperature dependence of the electrical and thermal transport properties is further discussed, and the magnetization of five alloys containing three or more elements is measured in magnetic fields up to 4 T.
Application of Renyi entropy for ultrasonic molecular imaging.
Hughes, M S; Marsh, J N; Arbeit, J M; Neumann, R G; Fuhrhop, R W; Wallace, K D; Thomas, L; Smith, J; Agyem, K; Lanza, G M; Wickline, S A; McCarthy, J E
2009-05-01
Previous work has demonstrated that a signal receiver based on a limiting form of the Shannon entropy is, in certain settings, more sensitive to subtle changes in scattering architecture than conventional energy-based signal receivers [M. S. Hughes et al., J. Acoust. Soc. Am. 121, 3542-3557 (2007)]. In this paper new results are presented demonstrating further improvements in sensitivity using a signal receiver based on the Renyi entropy.
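For reference, the Rényi entropy of order α, whose α → 1 limit recovers the Shannon entropy; the specific limiting form used as the imaging receiver, and the density estimate it is applied to, are described in the cited references:

```latex
H_\alpha \;=\; \frac{1}{1-\alpha}\,\ln\!\sum_i p_i^{\alpha},
\qquad
\lim_{\alpha\to 1} H_\alpha \;=\; -\sum_i p_i \ln p_i .
```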
A Phase-Locked Loop Epilepsy Network Emulator
Watson, P.D.; Horecka, K. M.; Cohen, N.J.; Ratnam, R.
2015-01-01
Most seizure forecasting employs statistical learning techniques that lack a representation of the network interactions that give rise to seizures. We present an epilepsy network emulator (ENE) that uses a network of interconnected phase-locked loops (PLLs) to model synchronous, circuit-level oscillations between electrocorticography (ECoG) electrodes. Using ECoG data from a canine-epilepsy model (Davis et al. 2011) and a physiological entropy measure (approximate entropy or ApEn, Pincus 1995), we demonstrate that the entropy of the emulator phases increases dramatically during ictal periods across all ECoG recording sites and across all animals in the sample. Further, this increase precedes the observable voltage spikes that characterize seizure activity in the ECoG data. These results suggest that the ENE is sensitive to phase-domain information in the neural circuits measured by ECoG and that an increase in the entropy of this measure coincides with increasing likelihood of seizure activity. Understanding this unpredictable phase-domain electrical activity present in ECoG recordings may provide a target for seizure detection and feedback control. PMID:26664133
Multibody local approximation: Application to conformational entropy calculations on biomolecules
NASA Astrophysics Data System (ADS)
Suárez, Ernesto; Suárez, Dimas
2012-08-01
Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.
Multibody local approximation: application to conformational entropy calculations on biomolecules.
Suárez, Ernesto; Suárez, Dimas
2012-08-28
Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.
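Schematically, the mutual information expansion referred to above decomposes the configurational entropy of degrees of freedom x_1, ..., x_n as

```latex
S(x_1,\dots,x_n) \;\approx\; \sum_i S_1(x_i) \;-\; \sum_{i<j} I_2(x_i,x_j) \;+\; \sum_{i<j<k} I_3(x_i,x_j,x_k) \;-\;\cdots,
\qquad
I_2(x_i,x_j) \;=\; S_1(x_i) + S_1(x_j) - S_2(x_i,x_j),
```

and the reformulation proposed here replaces the fixed truncation order with a locality cutoff that captures all-order correlations among nearby degrees of freedom.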
Diffusivity anomaly in modified Stillinger-Weber liquids
NASA Astrophysics Data System (ADS)
Sengupta, Shiladitya; Vasisht, Vishwas V.; Sastry, Srikanth
2014-01-01
By modifying the tetrahedrality (the strength of the three body interactions) in the well-known Stillinger-Weber model for silicon, we study the diffusivity of a series of model liquids as a function of tetrahedrality and temperature at fixed pressure. Previous work has shown that at constant temperature, the diffusivity exhibits a maximum as a function of tetrahedrality, which we refer to as the diffusivity anomaly, in analogy with the well-known anomaly in water upon variation of pressure at constant temperature. We explore to what extent the structural and thermodynamic changes accompanying changes in the interaction potential can help rationalize the diffusivity anomaly, by employing the Rosenfeld relation between diffusivity and the excess entropy (over the ideal gas reference value), and the pair correlation entropy, which provides an approximation to the excess entropy in terms of the pair correlation function. We find that in the modified Stillinger-Weber liquids, the Rosenfeld relation works well above the melting temperatures but exhibits deviations below, with the deviations becoming smaller for smaller tetrahedrality. Further we find that both the excess entropy and the pair correlation entropy at constant temperature go through maxima as a function of the tetrahedrality, thus demonstrating the close relationship between structural, thermodynamic, and dynamical anomalies in the modified Stillinger-Weber liquids.
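For reference, the excess entropy and its two-body (pair correlation) approximation invoked above are, in standard per-particle form (in units of k_B),

```latex
s_{\mathrm{ex}} \;=\; \frac{S - S_{\mathrm{ideal}}}{N k_B},
\qquad
s_2 \;=\; -\,2\pi\rho \int_0^{\infty} \big[\,g(r)\ln g(r) - g(r) + 1\,\big]\, r^2\, dr ,
```

and the Rosenfeld relation posits an approximately exponential dependence of the reduced diffusivity on s_ex, which is the relation tested for the modified Stillinger-Weber liquids.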
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larche, Michael R.; Prowant, Matthew S.; Bruillard, Paul J.
This study compares different approaches for imaging the internal architecture of graphite/epoxy composites using backscattered ultrasound. Two cases are studied. In the first, near-surface defects in thin graphite/epoxy plates are imaged. The same backscattered waveforms were used to produce peak-to-peak, logarithm of signal energy, and entropy images of different types. All of the entropy images exhibit better border delineation and defect contrast than either the peak-to-peak or the logarithm of signal energy images. The best results are obtained using the joint entropy of the backscattered waveforms with a reference function. Two different references are examined. The first is a reflection of the insonifying pulse from a stainless steel reflector. The second is an approximate optimum obtained from an iterative parametric search. The joint entropy images produced using this reference exhibit three times the contrast obtained in previous studies. These plates were later destructively analyzed to determine the size and location of near-surface defects, and the results were found to agree with the defect location and shape indicated by the entropy images. In the second study, images of long carbon graphite fibers (50% by weight) in polypropylene thermoplastic are obtained as a first step toward ultrasonic determination of the distributions of fiber position and orientation.
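A minimal sketch of a joint-entropy pixel value computed from a backscattered waveform and a reference waveform via a 2D amplitude histogram; this is a generic joint-entropy estimate, not the specific receiver or reference-optimization procedure of the cited work:

```python
import numpy as np

def joint_entropy(waveform, reference, bins=64):
    """Joint Shannon entropy (bits) of two equal-length waveforms, estimated
    from their 2D amplitude histogram. One such value would be computed per
    scan position to build an image."""
    joint, _, _ = np.histogram2d(waveform, reference, bins=bins)
    p = joint / joint.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic example: a weak echo plus noise, compared against a clean reference pulse
rng = np.random.default_rng(6)
t = np.linspace(0, 1, 2048)
reference = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
waveform = 0.3 * reference + 0.1 * rng.normal(size=t.size)
print(joint_entropy(waveform, reference))
```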
Prediction of Protein Configurational Entropy (Popcoen).
Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel
2018-03-13
A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast, compared to previous approaches, because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure predictions, proteins design, NMR and X-ray refinement, docking, and mutation effect predictions. Integrating the predicted entropy can yield a significant accuracy increase as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/ .
Statistical physics inspired energy-efficient coded-modulation for optical communications.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2012-04-15
Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system.
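The opening claim can be made explicit with a standard derivation sketch: starting from Boltzmann's entropy for N symbols with occupation numbers n_i and applying Stirling's approximation ln N! ≈ N ln N - N,

```latex
S \;=\; k_B \ln W, \qquad W \;=\; \frac{N!}{\prod_i n_i!},
\qquad
\frac{S}{N k_B} \;\approx\; -\sum_i \frac{n_i}{N}\ln\frac{n_i}{N} \;=\; -\sum_i p_i \ln p_i ,
```

which is Shannon's entropy of the symbol probabilities p_i = n_i/N; this correspondence is what lets thermodynamic energy-minimization heuristics carry over to constellation design.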
Logarithmic black hole entropy corrections and holographic Rényi entropy
NASA Astrophysics Data System (ADS)
Mahapatra, Subhash
2018-01-01
The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.
NASA Astrophysics Data System (ADS)
Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.
2008-12-01
Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we thus introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant “scale-evolutivity” over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
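A minimal way to write the "parabolic scaling" case (notation assumed for illustration, not taken from the paper): a constant scale-evolutivity β over the scale range means the scale-dependent fractal dimension is linear in the logarithm of the scale l,

    D(l) = D_0 + \beta \ln(l / l_0),

and, analogously, a constant scale-evolutivity in wall-distance space gives D(y) = D_0' + \beta_w \ln(y / y_0) for wall distance y.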
Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.
Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori
2016-12-13
The configurational entropy of solute molecules is a crucially important quantity to study various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of the configurational entropy upon temperature change. Notably, we focused on the choice of the coordinate systems (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all the six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and anharmonicity of proper torsions in proteins is identified to be the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.
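For background, here is a minimal sketch of a common Cartesian quasi-harmonic estimator (the quantum harmonic-oscillator form built on the mass-weighted covariance matrix; this is generic material, not the BQH method or any particular MD-package implementation from the paper). Prior alignment of the trajectory and removal of overall translation/rotation are assumed.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def quasi_harmonic_entropy(coords, masses, temperature=300.0):
    """Quasi-harmonic configurational entropy (J/K) from an MD ensemble.

    coords : (n_frames, n_atoms, 3) aligned Cartesian coordinates in meters
    masses : (n_atoms,) atomic masses in kg
    """
    n_frames, n_atoms, _ = coords.shape
    x = coords.reshape(n_frames, 3 * n_atoms)
    x = x - x.mean(axis=0)                      # fluctuations about the mean structure
    w = np.sqrt(np.repeat(masses, 3))           # per-coordinate mass weights
    cov = np.cov((x * w).T)                     # mass-weighted covariance matrix
    lam = np.linalg.eigvalsh(cov)
    lam = lam[lam > lam.max() * 1e-8]           # drop numerically-zero (external) modes
    omega = np.sqrt(KB * temperature / lam)     # quasi-harmonic mode frequencies (rad/s)
    a = HBAR * omega / (KB * temperature)
    s_per_mode = a / np.expm1(a) - np.log1p(-np.exp(-a))
    return KB * s_per_mode.sum()
```

Internal-coordinate variants, such as the BQH method discussed above, apply analogous ideas to bond, angle, and torsion fluctuations rather than Cartesian ones.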
Application of Renyi entropy for ultrasonic molecular imaging
Hughes, M. S.; Marsh, J. N.; Arbeit, J. M.; Neumann, R. G.; Fuhrhop, R. W.; Wallace, K. D.; Thomas, L.; Smith, J.; Agyem, K.; Lanza, G. M.; Wickline, S. A.; McCarthy, J. E.
2009-01-01
Previous work has demonstrated that a signal receiver based on a limiting form of the Shannon entropy is, in certain settings, more sensitive to subtle changes in scattering architecture than conventional energy-based signal receivers [M. S. Hughes et al., J. Acoust. Soc. Am. 121, 3542–3557 (2007)]. In this paper new results are presented demonstrating further improvements in sensitivity using a signal receiver based on the Renyi entropy. PMID:19425656
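For reference, the Rényi entropy underlying this family of receivers is the standard one-parameter generalization of Shannon's entropy: for a density p and order q ≠ 1,

    H_q[p] = \frac{1}{1 - q} \log \int p(x)^q \, dx,

which tends to the Shannon entropy -∫ p log p as q → 1; in this line of work the density is typically estimated from the digitized backscattered waveform values before the entropy is evaluated.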
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.
Bergeron, Dominic; Tremblay, A-M S
2016-08-01
Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ^{2} with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation
NASA Astrophysics Data System (ADS)
Bergeron, Dominic; Tremblay, A.-M. S.
2016-08-01
Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
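As a compact statement of the optimization that both records above describe (standard maximum-entropy analytic continuation; notation assumed): given data G with covariance C, a kernel K relating the spectrum to the data, and a default model D, one minimizes over non-negative spectra A(ω)

    Q[A] = \tfrac{1}{2}\,\chi^2[A] - \alpha S[A], \qquad
    \chi^2[A] = (G - K A)^{\mathsf{T}} C^{-1} (G - K A), \qquad
    S[A] = \int d\omega \left[ A(\omega) - D(\omega) - A(\omega) \ln \frac{A(\omega)}{D(\omega)} \right],

with the entropy weight α selected from the behavior of χ² as a function of α, i.e., from the crossover between the noise-fitting and information-fitting regimes mentioned above.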
Financial time series analysis based on effective phase transfer entropy
NASA Astrophysics Data System (ADS)
Yang, Pengbo; Shang, Pengjian; Lin, Aijing
2017-02-01
Transfer entropy is a powerful technique which is able to quantify the impact of one dynamic system on another system. In this paper, we propose the effective phase transfer entropy method based on the transfer entropy method. We use simulated data to test the performance of this method, and the experimental results confirm that the proposed approach is capable of detecting the information transfer between the systems. We also explore the relationship between effective phase transfer entropy and some variables, such as data size, coupling strength and noise. The effective phase transfer entropy is positively correlated with the data size and the coupling strength. Even in the presence of a large amount of noise, it can detect the information transfer between systems, and it is very robust to noise. Moreover, this measure is indeed able to accurately estimate the information flow between systems compared with phase transfer entropy. In order to reflect the application of this method in practice, we apply this method to financial time series and gain new insight into the interactions between systems. It is demonstrated that the effective phase transfer entropy can be used to detect some economic fluctuations in the financial market. To summarize, the effective phase transfer entropy method is a very efficient tool to estimate the information flow between systems.
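For reference, the (first-order) transfer entropy that the method builds on is standardly defined as

    T_{X \to Y} = \sum_{y_{t+1},\, y_t,\, x_t} p(y_{t+1}, y_t, x_t)\, \log \frac{p(y_{t+1} \mid y_t, x_t)}{p(y_{t+1} \mid y_t)},

i.e., the reduction in uncertainty about the next value of Y provided by the past of X beyond what Y's own past provides. The phase variant applies this to instantaneous phases extracted from the series, and an "effective" version is commonly obtained by subtracting the value computed on surrogate (shuffled) data.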
Calculating the Entropy of Solid and Liquid Metals, Based on Acoustic Data
NASA Astrophysics Data System (ADS)
Tekuchev, V. V.; Kalinkin, D. P.; Ivanova, I. V.
2018-05-01
The entropies of iron, cobalt, rhodium, and platinum are studied for the first time, based on acoustic data and using the Debye theory and rigid-sphere model, from 298 K up to the boiling point. A formula for the melting entropy of metals is validated. Good agreement between the research results and the literature data is obtained.
Controlling the Shannon Entropy of Quantum Systems
Xing, Yifan; Wu, Jun
2013-01-01
This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819
Controlling the Shannon entropy of quantum systems.
Xing, Yifan; Wu, Jun
2013-01-01
This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.
Accuracy of topological entanglement entropy on finite cylinders.
Jiang, Hong-Chen; Singh, Rajiv R P; Balents, Leon
2013-09-06
Topological phases are unique states of matter which support nonlocal excitations which behave as particles with fractional statistics. A universal characterization of gapped topological phases is provided by the topological entanglement entropy (TEE). We study the finite size corrections to the TEE by focusing on systems with a Z2 topological ordered state using density-matrix renormalization group and perturbative series expansions. We find that extrapolations of the TEE based on the Renyi entropies with a Renyi index of n≥2 suffer from much larger finite size corrections than do extrapolations based on the von Neumann entropy. In particular, when the circumference of the cylinder is about ten times the correlation length, the TEE obtained using von Neumann entropy has an error of order 10^{-3}, while for Renyi entropies it can even exceed 40%. We discuss the relevance of these findings to previous and future searches for topological ordered phases, including quantum spin liquids.
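For reference, the extrapolation in question uses the standard area-law form on a cylinder of circumference L,

    S_n(L) = \alpha_n L - \gamma,

where γ is the topological entanglement entropy (γ = ln 2 for a Z2 topologically ordered state); γ is read off as minus the intercept of a linear fit in L, and the result above is that intercepts obtained from Rényi entropies with n ≥ 2 converge far more slowly with L than the von Neumann (n → 1) one.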
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach helps to estimate model parameters using measured sediment-concentration data, which shows the advantage of using entropy theory. Finally, model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
Statistical mechanical theory for steady state systems. VI. Variational principles
NASA Astrophysics Data System (ADS)
Attard, Phil
2006-12-01
Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretation and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.
Maximum entropy deconvolution of the optical jet of 3C 273
NASA Technical Reports Server (NTRS)
Evans, I. N.; Ford, H. C.; Hui, X.
1989-01-01
The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
Photometric Mapping of Two Kepler Eclipsing Binaries: KIC11560447 and KIC8868650
NASA Astrophysics Data System (ADS)
Senavci, Hakan Volkan; Özavci, I.; Isik, E.; Hussain, G. A. J.; O'Neal, D. O.; Yilmaz, M.; Selam, S. O.
2018-04-01
We present the surface maps of two eclipsing binary systems KIC11560447 and KIC8868650, using the Kepler light curves covering approximately 4 years. We use the code DoTS, which is based on maximum entropy method in order to reconstruct the surface maps. We also perform numerical tests of DoTS to check the ability of the code in terms of tracking phase migration of spot clusters. The resulting latitudinally averaged maps of KIC11560447 show that spots drift towards increasing orbital longitudes, while the overall behaviour of spots on KIC8868650 drifts towards decreasing latitudes.
Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy
Yu, Bing; Liu, Dongdong; Zhang, Tianhong
2011-01-01
Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources and this need can’t be satisfied in some occasions. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on the wavelet theory, wavelet decomposition is utilized to decompose the signal in different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734
Fault diagnosis for micro-gas turbine engine sensors via wavelet entropy.
Yu, Bing; Liu, Dongdong; Zhang, Tianhong
2011-01-01
Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources and this need can't be satisfied in some occasions. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on the wavelet theory, wavelet decomposition is utilized to decompose the signal in different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient.
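A minimal sketch of a wavelet energy entropy feature of the kind both records describe (a generic Shannon entropy of the per-scale energy distribution; the paper's instantaneous IWEE/IWSE definitions may differ in detail). It assumes the PyWavelets package, and the signal-loading step is hypothetical.

```python
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db4", level=5):
    """Shannon entropy of the relative wavelet energies of a 1-D signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()                          # relative energy per scale
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# usage sketch: compare a healthy sensor trace against a faulty one
# healthy, faulty = load_sensor_signals()   # hypothetical loader
# print(wavelet_energy_entropy(healthy), wavelet_energy_entropy(faulty))
```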
Stochastic approach to equilibrium and nonequilibrium thermodynamics.
Tomé, Tânia; de Oliveira, Mário J
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
Compressibility Corrections to Closure Approximations for Turbulent Flow Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutman, L D
2003-02-01
We summarize some modifications to the usual closure approximations for statistical models of turbulence that are necessary for use with compressible fluids at all Mach numbers. We concentrate here on the gradient-flux approximation for the turbulent heat flux, on the buoyancy production of turbulence kinetic energy, and on a modification of the Smagorinsky model to include buoyancy. In all cases, there are pressure gradient terms that do not appear in the incompressible models and are usually omitted in compressible-flow models. Omission of these terms allows unphysical rates of entropy change.
Hydration entropy change from the hard sphere model.
Graziano, Giuseppe; Lee, Byungkook
2002-12-10
The gas to liquid transfer entropy change for a pure non-polar liquid can be calculated quite accurately using a hard sphere model that obeys the Carnahan-Starling equation of state. The same procedure fails to produce a reasonable value for hydrogen bonding liquids such as water, methanol and ethanol. However, the size of the molecules increases when the hydrogen bonds are turned off to produce the hard sphere system, and the volume packing density rises. We show here that the hard sphere system that has this increased packing density reproduces the experimental transfer entropy values rather well. The gas to water transfer entropy values for small non-polar hydrocarbons are also not reproduced by a hard sphere model, whether one uses the normal (2.8 Å diameter) or the increased (3.2 Å) size for water. At least part of the reason that the hard sphere model with 2.8 Å water produces too small an entropy change is that this size is too small for a system without hydrogen bonds. The reason that the 3.2 Å model also produces entropy values that are too small is that this is an overly crowded system, and the free volume introduced by the addition of a solute molecule provides too much relief of this crowding. A hard sphere model in which the free volume increase is limited, by requiring that the average surface-to-surface distance between the solute and water molecules is the same as that between the increased-size water molecules, does approximately reproduce the experimental hydration entropy values. Copyright 2002 Elsevier Science B.V.
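For reference, the hard-sphere ingredient is standard: with packing fraction η (= πρσ³/6 for diameter σ and number density ρ), the Carnahan-Starling equation of state

    Z = \frac{P}{\rho k_B T} = \frac{1 + \eta + \eta^2 - \eta^3}{(1 - \eta)^3}

corresponds to a purely entropic excess free energy, so the excess entropy per particle is

    \frac{S_{\mathrm{ex}}}{N k_B} = -\frac{\eta (4 - 3\eta)}{(1 - \eta)^2}.

Increasing the effective hard-sphere diameter of water (e.g., from 2.8 Å to 3.2 Å when hydrogen bonds are switched off) raises η and makes this excess entropy more negative, which is the packing effect exploited above.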
Exact solutions for the entropy production rate of several irreversible processes.
Ross, John; Vlad, Marcel O
2005-11-24
We investigate thermal conduction described by Newton's law of cooling and by Fourier's transport equation and chemical reactions based on mass action kinetics where we detail a simple example of a reaction mechanism with one intermediate. In these cases we derive exact expressions for the entropy production rate and its differential. We show that at a stationary state the entropy production rate is an extremum if and only if the stationary state is a state of thermodynamic equilibrium. These results are exact and independent of any expansions of the entropy production rate. In the case of thermal conduction we compare our exact approach with the conventional approach based on the expansion of the entropy production rate near equilibrium. If we expand the entropy production rate in a series and keep terms up to the third order in the deviation variables and then differentiate, we find out that the entropy production rate is not an extremum at a nonequilibrium steady state. If there is a strict proportionality between fluxes and forces, then the entropy production rate is an extremum at the stationary state even if the stationary state is far away from equilibrium.
A new entropy based on a group-theoretical structure
NASA Astrophysics Data System (ADS)
Curado, Evaldo M. F.; Tempesta, Piergiulio; Tsallis, Constantino
2016-03-01
A multi-parametric version of the nonadditive entropy S_q is introduced. This new entropic form, denoted by S_{a,b,r}, possesses many interesting statistical properties, and it reduces to the entropy S_q for b = 0, a = r := 1 - q (hence Boltzmann-Gibbs entropy S_BG for b = 0, a = r → 0). The construction of the entropy S_{a,b,r} is based on a general group-theoretical approach recently proposed by one of us, Tempesta (2016). Indeed, essentially all the properties of this new entropy are obtained as a consequence of the existence of a rational group law, which expresses the structure of S_{a,b,r} with respect to the composition of statistically independent subsystems. Depending on the choice of the parameters, the entropy S_{a,b,r} can be used to cover a wide range of physical situations, in which the measure of the accessible phase space increases, say, exponentially with the number of particles N of the system, or even stabilizes, by increasing N, to a limiting value. This paves the way to the use of this entropy in contexts where the size of the phase space does not increase as fast as the number of its constituting particles (or subsystems) increases.
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-01-01
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
Population entropies estimates of proteins
NASA Astrophysics Data System (ADS)
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate variability of amino acids sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus, correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented and an empirical study on how well this new algorithm performs on simulated dataset of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-10-22
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
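A minimal sketch of the least-significant-bit concatenation idea described in the two records above (illustrative only: read_sensor and the parameter choices are hypothetical, and a real deployment would still need the statistical fine tuning and health testing the authors discuss).

```python
import math
from collections import Counter

def read_sensor():
    """Hypothetical raw ADC reading from an on-board sensor (e.g., temperature)."""
    raise NotImplementedError  # platform specific

def harvest_bytes(n_bytes, bits_per_sample=2):
    """Concatenate the least-significant bits of successive sensor samples."""
    out, acc, nbits = bytearray(), 0, 0
    while len(out) < n_bytes:
        acc = (acc << bits_per_sample) | (read_sensor() & ((1 << bits_per_sample) - 1))
        nbits += bits_per_sample
        if nbits >= 8:
            out.append(acc & 0xFF)   # emit 8 accumulated bits
            acc >>= 8
            nbits -= 8
    return bytes(out)

def shannon_entropy_per_byte(data):
    """Empirical Shannon entropy in bits per byte (8.0 is the ideal)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```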
NASA Astrophysics Data System (ADS)
Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.
2013-10-01
The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory
NASA Astrophysics Data System (ADS)
Taylor, Jamie M.
2016-09-01
This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Gregory H.
2003-08-06
In this paper we present a general iterative method for the solution of the Riemann problem for hyperbolic systems of PDEs. The method is based on the multiple shooting method for free boundary value problems. We demonstrate the method by solving one-dimensional Riemann problems for hyperelastic solid mechanics. Even for conditions representative of routine laboratory experiments and military ballistics, dramatic differences are seen between the exact and approximate Riemann solution. The greatest discrepancy arises from misallocation of energy between compressional and thermal modes by the approximate solver, resulting in nonphysical entropy and temperature estimates. Several pathological conditions arise in common practice, and modifications to the method to handle these are discussed. These include points where genuine nonlinearity is lost, degeneracies, and eigenvector deficiencies that occur upon melting.
Generalized Entanglement Entropy and Holography
NASA Astrophysics Data System (ADS)
Obregón, O.
2018-04-01
A nonextensive statistical mechanics entropy that depends only on the probability distribution is proposed in the framework of superstatistics. It is based on a Γ(χ^2) distribution that depends on β and also on p_l. The corresponding modified von Neumann entropy is constructed; it is shown that it can also be obtained from a generalized replica trick. We address the question of whether the generalized entanglement entropy can play a role in the gauge/gravity duality. We pay attention to 2d CFTs and their gravity duals. The correction terms to the von Neumann entropy turn out to be more relevant than the usual UV ones (for c = 1) and also than those due to the area-dependent AdS_3 entropy, which are comparable to the UV ones. The correction terms due to the new entropy would therefore modify the Ryu-Takayanagi identification between the CFT entanglement entropy and the AdS entropy in a different manner than the UV corrections or the corrections to the area-dependent AdS_3 entropy.
Characterizing time series via complexity-entropy curves
NASA Astrophysics Data System (ADS)
Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.
2017-06-01
The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q entropy) and after considering the proper generalization of the statistical complexity (q complexity), we build up a parametric curve (the q -complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.
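For reference, the monoparametric entropy used to build the q-complexity-entropy curve is the Tsallis entropy,

    S_q(p) = \frac{1}{q - 1}\left(1 - \sum_i p_i^q\right) \;\xrightarrow{\; q \to 1 \;}\; -\sum_i p_i \ln p_i,

evaluated (typically) on the Bandt-Pompe ordinal/permutation distribution of the time series; tracing the normalized entropy against the corresponding q-complexity as q varies yields the open or closed curves used above to characterize the dynamics.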
Secondary structural entropy in RNA switch (Riboswitch) identification.
Manzourolajdad, Amirhossein; Arnold, Jonathan
2015-04-28
RNA regulatory elements play a significant role in gene regulation. Riboswitches, a widespread group of regulatory RNAs, are vital components of many bacterial genomes. These regulatory elements generally function by forming a ligand-induced alternative fold that controls access to ribosome binding sites or other regulatory sites in RNA. Riboswitch-mediated mechanisms are ubiquitous across bacterial genomes. A typical class of riboswitch has its own unique structural and biological complexity, making de novo riboswitch identification a formidable task. Traditionally, riboswitches have been identified through comparative genomics based on sequence and structural homology. The limitations of structural-homology-based approaches, coupled with the assumption that there is a great diversity of undiscovered riboswitches, suggests the need for alternative methods for riboswitch identification, possibly based on features intrinsic to their structure. As of yet, no such reliable method has been proposed. We used structural entropy of riboswitch sequences as a measure of their secondary structural dynamics. Entropy values of a diverse set of riboswitches were compared to that of their mutants, their dinucleotide shuffles, and their reverse complement sequences under different stochastic context-free grammar folding models. Significance of our results was evaluated by comparison to other approaches, such as the base-pairing entropy and energy landscapes dynamics. Classifiers based on structural entropy optimized via sequence and structural features were devised as riboswitch identifiers and tested on Bacillus subtilis, Escherichia coli, and Synechococcus elongatus as an exploration of structural entropy based approaches. The unusually long untranslated region of the cotH in Bacillus subtilis, as well as upstream regions of certain genes, such as the sucC genes were associated with significant structural entropy values in genome-wide examinations. Various tests show that there is in fact a relationship between higher structural entropy and the potential for the RNA sequence to have alternative structures, within the limitations of our methodology. This relationship, though modest, is consistent across various tests. Understanding the behavior of structural entropy as a fairly new feature for RNA conformational dynamics, however, may require extensive exploratory investigation both across RNA sequences and folding models.
Yan, Xin-Zhong
2011-07-01
The discrete Fourier transform is approximated by summing over a subset of the terms with corresponding weights. The approximation significantly reduces the requirement for computer memory storage and enhances the numerical computation efficiency by several orders of magnitude without losing accuracy. As an example, we apply the algorithm to study the three-dimensional interacting electron gas under the renormalized-ring-diagram approximation, where the Green's function needs to be solved self-consistently. We present the results for the chemical potential, compressibility, free energy, entropy, and specific heat of the system. The ground-state energy obtained by the present calculation is compared with the existing results of Monte Carlo simulation and the random-phase approximation.
Device-Independent Tests of Entropy
NASA Astrophysics Data System (ADS)
Chaves, Rafael; Brask, Jonatan Bohr; Brunner, Nicolas
2015-09-01
We show that the entropy of a message can be tested in a device-independent way. Specifically, we consider a prepare-and-measure scenario with classical or quantum communication, and develop two different methods for placing lower bounds on the communication entropy, given observable data. The first method is based on the framework of causal inference networks. The second technique, based on convex optimization, shows that quantum communication provides an advantage over classical communication, in the sense of requiring a lower entropy to reproduce given data. These ideas may serve as a basis for novel applications in device-independent quantum information processing.
Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals.
Hu, Jianfeng
2017-01-01
Purpose: Driving fatigue has become one of the important causes of road accidents, and many studies have analyzed driver fatigue. EEG is becoming increasingly useful in measuring the fatigue state. Manual interpretation of EEG signals is impractical, so an effective method for automatic detection from EEG signals is crucially needed. Method: In order to evaluate the complex, unstable, and non-linear characteristics of EEG signals, four feature sets were computed from EEG signals, in which fuzzy entropy (FE), sample entropy (SE), approximate entropy (AE), spectral entropy (PE), and combined entropies (FE + SE + AE + PE) were included. All these feature sets were used as the input vectors of an AdaBoost classifier, a boosting method which is fast and highly accurate. To assess our method, several experiments including parameter setting and classifier comparison were conducted on 28 subjects. For comparison, Decision Tree (DT), Support Vector Machine (SVM) and Naive Bayes (NB) classifiers were used. Results: The proposed method (combination of FE and AdaBoost) yields superior performance compared with the other schemes. Using the FE feature extractor, AdaBoost achieves an area under the receiver operating curve (AUC) of 0.994, error rate (ERR) of 0.024, Precision of 0.969, Recall of 0.984, F1 score of 0.976, and Matthews correlation coefficient (MCC) of 0.952, compared to SVM (ERR of 0.035, Precision of 0.957, Recall of 0.974, F1 score of 0.966, and MCC of 0.930 with AUC of 0.990), DT (ERR of 0.142, Precision of 0.857, Recall of 0.859, F1 score of 0.966, and MCC of 0.716 with AUC of 0.916) and NB (ERR of 0.405, Precision of 0.646, Recall of 0.434, F1 score of 0.519, and MCC of 0.203 with AUC of 0.606). This shows that the FE feature set and the combined feature set outperform the other feature sets. AdaBoost also appears more robust to changes in the ratio of test samples to all samples and in the number of subjects, which might therefore aid in the real-time detection of driver fatigue through the classification of EEG signals. Conclusion: By using the combination of FE features and the AdaBoost classifier to detect EEG-based driver fatigue, this work provides a basis for exploring the inherent physiological mechanisms and for wearable applications.
Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals
Hu, Jianfeng
2017-01-01
Purpose: Driving fatigue has become one of the important causes of road accidents, and many studies have analyzed driver fatigue. EEG is becoming increasingly useful in measuring the fatigue state. Manual interpretation of EEG signals is impractical, so an effective method for automatic detection from EEG signals is crucially needed. Method: In order to evaluate the complex, unstable, and non-linear characteristics of EEG signals, four feature sets were computed from EEG signals, in which fuzzy entropy (FE), sample entropy (SE), approximate entropy (AE), spectral entropy (PE), and combined entropies (FE + SE + AE + PE) were included. All these feature sets were used as the input vectors of an AdaBoost classifier, a boosting method which is fast and highly accurate. To assess our method, several experiments including parameter setting and classifier comparison were conducted on 28 subjects. For comparison, Decision Tree (DT), Support Vector Machine (SVM) and Naive Bayes (NB) classifiers were used. Results: The proposed method (combination of FE and AdaBoost) yields superior performance compared with the other schemes. Using the FE feature extractor, AdaBoost achieves an area under the receiver operating curve (AUC) of 0.994, error rate (ERR) of 0.024, Precision of 0.969, Recall of 0.984, F1 score of 0.976, and Matthews correlation coefficient (MCC) of 0.952, compared to SVM (ERR of 0.035, Precision of 0.957, Recall of 0.974, F1 score of 0.966, and MCC of 0.930 with AUC of 0.990), DT (ERR of 0.142, Precision of 0.857, Recall of 0.859, F1 score of 0.966, and MCC of 0.716 with AUC of 0.916) and NB (ERR of 0.405, Precision of 0.646, Recall of 0.434, F1 score of 0.519, and MCC of 0.203 with AUC of 0.606). This shows that the FE feature set and the combined feature set outperform the other feature sets. AdaBoost also appears more robust to changes in the ratio of test samples to all samples and in the number of subjects, which might therefore aid in the real-time detection of driver fatigue through the classification of EEG signals. Conclusion: By using the combination of FE features and the AdaBoost classifier to detect EEG-based driver fatigue, this work provides a basis for exploring the inherent physiological mechanisms and for wearable applications. PMID:28824409
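A minimal sketch of the classification stage that both records describe (scikit-learn's AdaBoost on per-epoch entropy features; the feature extractor and the data loader are placeholders, and the hyperparameters are illustrative rather than the paper's settings).

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def extract_entropy_features(epochs):
    """Placeholder: return an (n_epochs, n_features) array of entropy features
    (e.g., fuzzy, sample, approximate and spectral entropy per EEG channel)."""
    raise NotImplementedError

# eeg_epochs, labels = load_driver_dataset()   # hypothetical loader: 0 = alert, 1 = fatigued
# X = extract_entropy_features(eeg_epochs)
# clf = AdaBoostClassifier(n_estimators=200, learning_rate=0.5)
# print(cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean())
```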
Lagrangian particle method for compressible fluid dynamics
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-06-01
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
NASA Technical Reports Server (NTRS)
Mostrel, M. M.
1988-01-01
New shock-capturing finite difference approximations for solving two scalar conservation law nonlinear partial differential equations describing inviscid, isentropic, compressible flows of aerodynamics at transonic speeds are presented. A global linear stability theorem is applied to these schemes in order to derive a necessary and sufficient condition for the finite element method. A technique is proposed to render the described approximations total variation-stable by applying the flux limiters to the nonlinear terms of the difference equation dimension by dimension. An entropy theorem applying to the approximations is proved, and an implicit, forward Euler-type time discretization of the approximation is presented. Results of some numerical experiments using the approximations are reported.
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2015-01-01
The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.
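Two of the quantities listed above, written out for reference (standard definitions): the Kullback-Leibler divergence between densities p and q, and the Maximum Entropy Principle as a constrained optimization,

    D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \ln \frac{p(x)}{q(x)}\, dx, \qquad
    p_{\mathrm{ME}} = \arg\max_{p} \left\{ -\int p \ln p \;:\; \int p = 1,\; \int \phi_k(x)\, p(x)\, dx = \mu_k \right\},

the latter yielding exponential-family solutions p_ME(x) ∝ exp(-Σ_k λ_k φ_k(x)) with the Lagrange multipliers λ_k fixed by the moment constraints.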
Entropy of electromyography time series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Zurcher, Ulrich; Sung, Paul S.
2007-12-01
A nonlinear analysis based on Renyi entropy is applied to electromyography (EMG) time series from back muscles. The time dependence of the entropy of the EMG signal exhibits a crossover from a subdiffusive regime at short times to a plateau at longer times. We argue that this behavior characterizes complex biological systems. The plateau value of the entropy can be used to differentiate between healthy and low back pain individuals.
The third law of thermodynamics and the fractional entropies
NASA Astrophysics Data System (ADS)
Baris Bagci, G.
2016-08-01
We consider the fractal calculus based Ubriaco and Machado entropies and investigate whether they conform to the third law of thermodynamics. The Ubriaco entropy satisfies the third law of thermodynamics in the interval 0 < q ≤ 1 exactly where it is also thermodynamically stable. The Machado entropy, on the other hand, yields diverging inverse temperature in the region 0 < q ≤ 1, albeit with non-vanishing negative entropy values. Therefore, despite the divergent inverse temperature behavior, the Machado entropy fails the third law of thermodynamics. We also show that the aforementioned results are also supported by the one-dimensional Ising model with no external field.
Entropy as an indicator of cerebral perfusion in patients with increased intracranial pressure.
Khan, James; Mariappan, Ramamani; Venkatraghavan, Lashmi
2014-07-01
Changes in electroencephalogram (EEG) patterns correlate well with changes in cerebral perfusion pressure (CPP) and hence entropy and bispectral index values may also correlate with CPP. To highlight the potential application of entropy, an EEG-based anesthetic depth monitor, on indicating cerebral perfusion in patients with increased intracranial pressure (ICP), we report two cases of emergency neurosurgical procedure in patients with raised ICP where anesthesia was titrated to entropy values and the entropy values suddenly increased after cranial decompression, reflecting the increase in CPP. Maintaining systemic blood pressure in order to maintain the CPP is the anesthetic goal while managing patients with raised ICP. EEG-based anesthetic depth monitors may hold valuable information on guiding anesthetic management in patients with decreased CPP for better neurological outcome.
Formulating the shear stress distribution in circular open channels based on the Renyi entropy
NASA Astrophysics Data System (ADS)
Khozani, Zohreh Sheikh; Bonakdari, Hossein
2018-01-01
The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance enhanced with increasing flow depth (mean relative error (RE) of 0.0414) and only deteriorated slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depth. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise and predicted the shear stress distribution in a circular channel with RE of 0.0480 and in a circular channel with a flat bed with RE of 0.0488.
2010-01-01
Using phase-space reconstruction techniques for one-dimensional and multi-dimensional time series, quantitative criteria for system chaos, and a neural network, analyses, computations and classification are carried out on electroencephalogram (EEG) signals of five kinds of human consciousness activities (relaxation, mental arithmetic of multiplication, mental composition of a letter, visualizing a 3-dimensional object being revolved about an axis, and visualizing numbers being written or erased on a blackboard). Through comparative studies on the determinism, the phase graph, the power spectra, the approximate entropy, the correlation dimension and the Lyapunov exponent of the EEG signals of the 5 kinds of consciousness activities, the following conclusions are drawn: (1) The statistical results of the determinism computation indicate that chaotic characteristics may be present in human consciousness activities, and the central tendency measure (CTM) is consistent with the phase graph, so it can be used as a way of characterizing the EEG attractor. (2) The power spectrum analyses show that the spectra of a single subject are almost identical across activities, but the frequency channels of different consciousness activities differ slightly. (3) The approximate entropy differs between subjects; under the same conditions, the larger a subject's approximate entropy is, the better the subject's innovation ability. (4) The results of the correlation dimension and the Lyapunov exponent indicate that the activities of the human brain exist on attractors with fractional dimensions. (5) A nonlinear quantitative criterion rule, combined with the neural network, can classify different kinds of consciousness activities well. In this paper, the classification results indicate that the arithmetic consciousness activity is distinguished better than the abstract ones. PMID:20420714
Wang, Xingyuan; Meng, Juan; Tan, Guilin; Zou, Lixian
2010-04-27
Using phase-space reconstruction techniques for one-dimensional and multi-dimensional time series, quantitative criteria for system chaos, and a neural network, analyses, computations and classification are carried out on electroencephalogram (EEG) signals of five kinds of human consciousness activities (relaxation, mental arithmetic of multiplication, mental composition of a letter, visualizing a 3-dimensional object being revolved about an axis, and visualizing numbers being written or erased on a blackboard). Through comparative studies on the determinism, the phase graph, the power spectra, the approximate entropy, the correlation dimension and the Lyapunov exponent of the EEG signals of the 5 kinds of consciousness activities, the following conclusions are drawn: (1) The statistical results of the determinism computation indicate that chaotic characteristics may be present in human consciousness activities, and the central tendency measure (CTM) is consistent with the phase graph, so it can be used as a way of characterizing the EEG attractor. (2) The power spectrum analyses show that the spectra of a single subject are almost identical across activities, but the frequency channels of different consciousness activities differ slightly. (3) The approximate entropy differs between subjects; under the same conditions, the larger a subject's approximate entropy is, the better the subject's innovation ability. (4) The results of the correlation dimension and the Lyapunov exponent indicate that the activities of the human brain exist on attractors with fractional dimensions. (5) A nonlinear quantitative criterion rule, combined with the neural network, can classify different kinds of consciousness activities well. In this paper, the classification results indicate that the arithmetic consciousness activity is distinguished better than the abstract ones.
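For completeness, a compact reference implementation of the approximate entropy statistic both records rely on (the standard Pincus definition; the defaults m = 2 and r = 0.2·std are common conventions, not values taken from the paper).

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus, 1991)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # all overlapping length-m templates
        templ = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)   # fraction of templates within r (self-matches included)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```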
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Thanks to recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, so as to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate the posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, the continuous wavelet transform, the discrete wavelet transform, wavelet packets, multiwavelets, the empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
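As a rough, heavily simplified illustration of the iterative idea described above (not the authors' algorithm), the sketch below keeps a discrete prior over candidate band-pass centre frequencies, uses the kurtosis of the filtered envelope as an artificial observation, and renormalizes the posterior at each iteration. The filter design, frequency grid, and exponential likelihood are all assumptions made for the example.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

def bandpass(x, fc, bw, fs):
    # 4th-order Butterworth band-pass around centre frequency fc
    b, a = butter(4, [(fc - bw / 2) / (fs / 2), (fc + bw / 2) / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def toy_dynamic_bayes(x, fs, centres, bw=500.0, n_iter=5):
    prior = np.full(len(centres), 1.0 / len(centres))       # flat prior over candidate bands
    for _ in range(n_iter):
        # artificial observation: envelope kurtosis of each candidate band (a transient indicator)
        obs = np.array([kurtosis(np.abs(hilbert(bandpass(x, fc, bw, fs)))) for fc in centres])
        likelihood = np.exp(obs - obs.max())                 # assumed likelihood form
        prior = prior * likelihood
        prior = prior / prior.sum()                          # posterior becomes the next prior
    return centres[np.argmax(prior)], prior

# usage (synthetic signal; in practice x would be a vibration record):
# fs = 20000.0; x = np.random.randn(20000)
# best_fc, posterior = toy_dynamic_bayes(x, fs, centres=np.arange(1000.0, 9000.0, 500.0))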
Impact of aluminum doping on the thermo-physical properties of refractory medium-entropy alloys
NASA Astrophysics Data System (ADS)
Tian, Fuyang; Wang, Yang; Vitos, Levente
2017-01-01
We investigate the elastic moduli, ideal tensile strength, and thermodynamic properties of TiVNb and AlTiVNb refractory medium-entropy alloys (MEAs) by using ab initio alloy theories: the coherent potential approximation (CPA), the special quasi-random structure (SQS) approach, and a 432-atom supercell (SC). We find that with an increasing number of alloy components, the SQS elastic constants become sensitive to the supercell size. The predicted elastic moduli are consistent with the available experiments. Aluminum doping decreases the stability of the body-centered cubic phase. The ideal tensile strength calculation indicates that adding equiatomic Al to the TiVNb random solid solution increases the ideal strain (from 9.6% to 11.8%) but decreases the intrinsic strength (from 9.6 to 5.7 GPa). Based on the equations of state calculated by the CPA and SC methods, the thermodynamic properties obtained by the two ab initio methods are assessed. The L21 AlTiVNb (Ti-Al-V-Nb) alloy is predicted to be thermodynamically and dynamically stable with respect to the solid solution.
Isentropic Analysis of Convective Motions
NASA Technical Reports Server (NTRS)
Pauluis, Olivier M.; Mrowiec, Agnieszka A.
2013-01-01
This paper analyzes the convective mass transport by sorting air parcels in terms of their equivalent potential temperature to determine an isentropic streamfunction. By averaging the vertical mass flux at a constant value of the equivalent potential temperature, one can compute an isentropic mass transport that filters out reversible oscillatory motions such as gravity waves. This novel approach emphasizes the fact that the vertical energy and entropy transports by convection are due to the combination of ascending air parcels with high energy and entropy and subsiding air parcels with lower energy and entropy. Such conditional averaging can be extended to other dynamic and thermodynamic variables such as vertical velocity, temperature, or relative humidity to obtain a comprehensive description of convective motions. It is also shown how this approach can be used to determine the mean diabatic tendencies from the three-dimensional dynamic and thermodynamic fields. A two-stream approximation that partitions the isentropic circulation into a mean updraft and a mean downdraft is also introduced. This offers a straightforward way to identify the mean properties of rising and subsiding air parcels. The results from the two-stream approximation are compared with two other definitions of the cloud mass flux. It is argued that the isentropic analysis offers a robust definition of the convective mass transport that is not tainted by the need to arbitrarily distinguish between convection and its environment, and that separates the irreversible convective overturning from oscillations associated with gravity waves.
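The sorting operation described above can be illustrated with a short NumPy sketch: the vertical mass flux is conditionally averaged on equivalent potential temperature at each height, and a cumulative sum over the theta_e bins gives an isentropic streamfunction. Array shapes, bin choices, and variable names are illustrative assumptions, not the authors' code.

import numpy as np

def isentropic_streamfunction(rho, w, theta_e, theta_bins):
    """rho, w, theta_e: (nz, ny, nx) fields; theta_bins: 1-D bin edges for equivalent potential temperature."""
    nz = rho.shape[0]
    nbins = len(theta_bins) - 1
    flux = np.zeros((nz, nbins))
    for k in range(nz):
        idx = np.digitize(theta_e[k].ravel(), theta_bins) - 1      # theta_e bin of every column point
        mf = (rho[k] * w[k]).ravel()                               # vertical mass flux at every point
        for b in range(nbins):
            flux[k, b] = mf[idx == b].sum() / mf.size              # horizontal-mean flux in each bin
    return np.cumsum(flux, axis=1)   # cumulative flux of parcels with theta_e below each bin edge

# usage (synthetic fields):
# nz, ny, nx = 20, 32, 32
# rho = np.ones((nz, ny, nx)); w = np.random.randn(nz, ny, nx)
# theta_e = 300.0 + 10.0 * np.random.rand(nz, ny, nx)
# psi = isentropic_streamfunction(rho, w, theta_e, np.linspace(295.0, 315.0, 41))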
Thermodynamics of nuclear track chemical etching
NASA Astrophysics Data System (ADS)
Rana, Mukhtar Ahmed
2018-05-01
This is a brief paper with new and useful scientific information on nuclear track chemical etching. Nuclear track etching is described here by using basic concepts of thermodynamics. Enthalpy, entropy and free energy parameters are considered for the nuclear track etching. The free energy of etching is determined using etching experiments of fission fragment tracks in CR-39. Relationship between the free energy and the etching temperature is explored and is found to be approximately linear. The above relationship is discussed. A simple enthalpy-entropy model of chemical etching is presented. Experimental and computational results presented here are of fundamental interest in nuclear track detection methodology.
Entropy-enthalpy compensation at the single protein level: pH sensing in the bacterial channel OmpF.
Alcaraz, Antonio; Queralt-Martín, María; Verdiá-Báguena, Carmina; Aguilella, Vicente M; Mafé, Salvador
2014-12-21
The pH sensing mechanism of the OmpF channel operates via ligand modification: increasing acidity induces the replacement of cations with protons in critical binding sites decreasing the channel conductance. Aside from the change in enthalpy associated with the binding, there is also a change in the microscopic arrangements of ligands, receptors and the surrounding solvent. We show that the pH-modulation of the single channel conduction involves small free energy changes because large enthalpic and entropic contributions change in opposite ways, demonstrating an approximate enthalpy-entropy compensation for different salts and concentrations.
Fine-grained state counting for black holes in loop quantum gravity.
Ghosh, A; Mitra, P
2009-04-10
A state of a black hole in loop quantum gravity is given by a distribution of spins on punctures on the horizon. The distribution is of the Boltzmann type, with the area playing the role of the energy. In investigations where the total area was kept approximately constant, there was a kind of thermal equilibrium between the spins which have the same analogue temperature and the entropy was proportional to the area. If the area is precisely fixed, however, multiple constraints appear, different spins have different analogue temperatures and the entropy is not strictly linear in the area, but is bounded by a linear rise.
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
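Part (3) of the architecture, the mask-weighted average pooling, can be illustrated with a few lines of NumPy: a learned binary (or soft) mask re-weights the convolutional feature map so that the pooled descriptor is dominated by foreground locations. The shapes and names below are illustrative, not the paper's implementation.

import numpy as np

def mask_weighted_average_pool(features, mask, eps=1e-8):
    """features: (H, W, C) convolutional feature map; mask: (H, W) weights in [0, 1]."""
    weighted = features * mask[..., None]                      # emphasize foreground locations
    return weighted.sum(axis=(0, 1)) / (mask.sum() + eps)      # (C,) pooled descriptor

# usage: feats = np.random.rand(14, 14, 512); m = (np.random.rand(14, 14) > 0.5).astype(float)
# descriptor = mask_weighted_average_pool(feats, m)            # would feed the hash-code layer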
Lim, Jongil; Kwon, Ji Young; Song, Juhee; Choi, Hosoon; Shin, Jong Chul; Park, In Yang
2014-02-01
The interpretation of the fetal heart rate (FHR) signal considering labor progression may help reduce perinatal morbidity and mortality. However, few studies have evaluated the fetus quantitatively in each stage of labor. The aim was to evaluate whether the entropy indices of FHR differ according to labor progression. A retrospective comparative study of FHR recordings was performed in three groups: 280 recordings in the second stage of labor before vaginal delivery, 31 recordings in the first stage of labor before emergency cesarean delivery, and 23 recordings in the pre-labor period before elective cesarean delivery. The data were stored FHR recordings of external cardiotocography during labor, and the outcome measures were approximate entropy (ApEn) and sample entropy (SampEn) for the final 2000 RR intervals. The median ApEn and SampEn for the 2000 RR intervals showed the lowest values in the second stage of labor, followed by the emergency cesarean group and the elective cesarean group, for all time segments (all P<0.001). Also, in the second stage of labor, the final 5 min of the 2000 RR intervals had a significantly lower median ApEn (0.49 vs. 0.44, P=0.001) and lower median SampEn (0.34 vs. 0.29, P<0.001) than the initial 5 min. Entropy indices of FHR were significantly different according to labor progression. This result supports the necessity of considering labor progression when developing intrapartum fetal monitoring using the entropy indices of FHR. Copyright © 2013 Elsevier Ltd. All rights reserved.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
Small-window parametric imaging based on information entropy for ultrasound tissue characterization
Tsui, Po-Hsiang; Chen, Chin-Kuo; Kuo, Wen-Hung; Chang, King-Jen; Fang, Jui; Ma, Hsiang-Yang; Chou, Dean
2017-01-01
Constructing ultrasound statistical parametric images by using a sliding window is a widely adopted strategy for characterizing tissues. Deficiency in spatial resolution, the appearance of boundary artifacts, and the prerequisite data distribution limit the practicability of statistical parametric imaging. In this study, small-window entropy parametric imaging was proposed to overcome the above problems. Simulations and measurements of phantoms were executed to acquire backscattered radiofrequency (RF) signals, which were processed to explore the feasibility of small-window entropy imaging in detecting scatterer properties. To validate the ability of entropy imaging in tissue characterization, measurements of benign and malignant breast tumors were conducted (n = 63) to compare performances of conventional statistical parametric (based on Nakagami distribution) and entropy imaging by the receiver operating characteristic (ROC) curve analysis. The simulation and phantom results revealed that entropy images constructed using a small sliding window (side length = 1 pulse length) adequately describe changes in scatterer properties. The area under the ROC for using small-window entropy imaging to classify tumors was 0.89, which was higher than 0.79 obtained using statistical parametric imaging. In particular, boundary artifacts were largely suppressed in the proposed imaging technique. Entropy enables using a small window for implementing ultrasound parametric imaging. PMID:28106118
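A toy version of the small-window entropy imaging idea is sketched below: a Shannon entropy value is computed from the amplitude histogram inside a small sliding window and assigned to the window centre. The window size (the paper uses one pulse length), bin count, and normalization are illustrative assumptions.

import numpy as np

def entropy_map(envelope, win=5, bins=32):
    """Sliding small-window Shannon entropy map of an envelope image of shape (H, W)."""
    H, W = envelope.shape
    half = win // 2
    out = np.zeros_like(envelope, dtype=float)
    lo, hi = envelope.min(), envelope.max()
    for i in range(half, H - half):
        for j in range(half, W - half):
            patch = envelope[i - half:i + half + 1, j - half:j + half + 1]
            counts, _ = np.histogram(patch, bins=bins, range=(lo, hi))
            p = counts / counts.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()        # local Shannon entropy (bits)
    return out

# usage: img = np.abs(np.random.randn(64, 64)); emap = entropy_map(img)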
Entropy-Based Financial Asset Pricing
Ormos, Mihály; Zibriczky, Dávid
2014-01-01
We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases in the function of the number of securities involved in a portfolio in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return – entropy system. For empirical investigation we use daily returns of 150 randomly selected securities for a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore we show the time varying behavior of the beta along with entropy. PMID:25545668
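As a rough illustration of entropy as a risk measure for daily returns, the sketch below estimates the differential (continuous) entropy of a return series from a histogram and compares it with the Gaussian reference value; the bin count and the histogram estimator are assumptions, not necessarily the authors' estimator.

import numpy as np

def differential_entropy(returns, bins=50):
    """Histogram estimate of the differential entropy (in nats) of a return series."""
    counts, edges = np.histogram(returns, bins=bins)
    p = counts / counts.sum()
    width = edges[1] - edges[0]
    p = p[p > 0]
    return -(p * np.log(p)).sum() + np.log(width)      # -sum p ln p + ln(bin width)

# usage (synthetic daily returns standing in for ~27 years of data):
# r = np.random.normal(0.0005, 0.01, 252 * 27)
# h_hat = differential_entropy(r)
# h_gauss = 0.5 * np.log(2 * np.pi * np.e * r.var())   # entropy of a Gaussian with the same variance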
Entropy in self-similar shock profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.
In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.
Statistical Mechanical Proof of the Second Law of Thermodynamics based on Volume Entropy
NASA Astrophysics Data System (ADS)
Campisi, Michele
2007-10-01
As pointed out in [M. Campisi. Stud. Hist. Phil. M. P. 36 (2005) 275-290] the volume entropy (that is, the logarithm of the volume of phase space enclosed by the constant energy hyper-surface) provides a good mechanical analogue of thermodynamic entropy because it satisfies the heat theorem and it is an adiabatic invariant. This property explains the ``equal'' sign in Clausius' principle (Sf >= Si) in a purely mechanical way and suggests that the volume entropy might explain the ``larger than'' sign (i.e., the Law of Entropy Increase) if non-adiabatic transformations were considered. Based on the principles of quantum mechanics, here we prove that, provided the initial equilibrium satisfies the natural condition of decreasing ordering of probabilities, the expectation value of the volume entropy cannot decrease for arbitrary transformations performed by external sources of work on an insulated system. This can be regarded as a rigorous quantum mechanical proof of the Second Law.
Approximate entropy: a new evaluation approach of mental workload under multitask conditions
NASA Astrophysics Data System (ADS)
Yao, Lei; Li, Xiaoling; Wang, Wei; Dong, Yuanzhe; Jiang, Ying
2014-04-01
There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. In order to establish a simplified method to evaluate cognitive workload under multitask conditions, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed with approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion of cognitive workload and has good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
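For reference, a compact NumPy sketch of approximate entropy in the standard Pincus formulation is given below; the embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation are conventional illustrative choices, not necessarily the settings used in this study.

import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series; self-matches are included, as in the original definition."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])                       # embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)      # Chebyshev distances
        c = (dist <= r).mean(axis=1)                                          # match fractions
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

# usage: eeg = np.random.randn(1000); apen = approximate_entropy(eeg)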
Moore, Christopher; Marchant, Thomas
2017-07-12
Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the telltale signs of abnormality and its delineation for treatment demand that experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zinenko, V. I., E-mail: zvi@iph.krasn.ru; Pavlovskii, M. S.
We have analyzed the low-temperature thermodynamic properties of spin ice in the staggered and direct (acting along the [111] axis) fields for rare-earth oxides with the chalcolamprite structure and general formula Re₂³⁺Me₂⁴⁺O₇²⁻. Calculations have been performed in the cluster approximation. The results have been compared with experimental temperature dependences of heat capacity and entropy for the Dy₂Ti₂O₇ compound for different values of the external field in the [111] direction. The experimental data and calculated results have also been compared for the Pr₂Ru₂O₇ compound with the antiferromagnetic ordering of magnetic moments of ruthenium ions, which gives rise to the staggered field acting on the system of rare-earth ions. The calculated temperature dependences of heat capacity and entropy are in good agreement with experimental data.
Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.
Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole
2015-05-12
Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict the electron correlation energy in post-Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at the density functional theory level of accuracy.
Smith, Beth A.; Teulier, Caroline; Sansom, Jennifer; Stergiou, Nicholas; Ulrich, Beverly D.
2012-01-01
Purpose One obstacle to providing early intervention to infants with myelomeningocele (MMC) is the challenge of quantifying impaired neuromotor control of movements early in life. Methods We used the nonlinear analysis tool Approximate Entropy (ApEn) to analyze periodicity and complexity of supine spontaneous lower extremity movements of infants with MMC and typical development (TD) at 1, 3, 6 and 9 months of age. Results Movements of infants with MMC were more regular and repeatable (lower ApEn values) than movements of infants with TD indicating less adaptive and flexible movement patterns. For both groups ApEn values decreased with age, and the movements of infants with MMC were less complex than movements of infants with TD. Further, for infants with MMC, lesion level and age of walking onset correlated negatively with ApEn values. Conclusions Our study begins to demonstrate the feasibility of ApEn to identify impaired neuromotor control in infants with MMC. PMID:21829116
Yan, Jian-Jun; Wang, Yi-Qin; Guo, Rui; Zhou, Jin-Zhuan; Yan, Hai-Xia; Xia, Chun-Ming; Shen, Yong
2012-01-01
Auscultation signals are nonstationary in nature. Wavelet packet transform (WPT) has currently become a very useful tool in analyzing nonstationary signals. Sample entropy (SampEn) has recently been proposed to act as a measurement for quantifying regularity and complexity of time series data. WPT and SampEn were combined in this paper to analyze auscultation signals in traditional Chinese medicine (TCM). SampEns for WPT coefficients were computed to quantify the signals from qi- and yin-deficient, as well as healthy, subjects. The complexity of the signal can be evaluated with this scheme in different time-frequency resolutions. First, the voice signals were decomposed into approximated and detailed WPT coefficients. Then, SampEn values for approximated and detailed coefficients were calculated. Finally, SampEn values with significant differences in the three kinds of samples were chosen as the feature parameters for the support vector machine to identify the three types of auscultation signals. The recognition accuracy rates were higher than 90%. PMID:22690242
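A minimal NumPy sketch of sample entropy as commonly defined (the negative log of the conditional match probability, with self-matches excluded) is given below; it could be applied to each wavelet-packet coefficient sequence. The defaults m = 2 and r = 0.2 times the standard deviation are conventional illustrative choices, not necessarily the study's settings.

import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B and A count length-m and length-(m+1) template matches."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def matched_pairs(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - m)])              # same template count for m and m+1
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((dist <= r).sum() - len(emb)) / 2.0                           # matched pairs, self-matches excluded
    B = matched_pairs(m)
    A = matched_pairs(m + 1)
    return -np.log(A / B)   # undefined if no length-(m+1) matches are found (A == 0)

# usage: band = np.random.randn(1000)   # e.g. one wavelet-packet coefficient sub-band
# sampen = sample_entropy(band)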
Estimation of absolute solvent and solvation shell entropies via permutation reduction
NASA Astrophysics Data System (ADS)
Reinhard, Friedemann; Grubmüller, Helmut
2007-01-01
Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N³) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained.
Zhang, Wen; Liu, Peiqing; Guo, Hao; Wang, Jinjun
2017-11-01
The permutation entropy and the statistical complexity are employed to study the boundary-layer transition induced by the surface roughness. The velocity signals measured in the transition process are analyzed with these symbolic quantifiers, as well as the complexity-entropy causality plane, and the chaotic nature of the instability fluctuations is identified. The frequency of the dominant fluctuations has been found according to the time scales corresponding to the extreme values of the symbolic quantifiers. The laminar-turbulent transition process is accompanied by the evolution in the degree of organization of the complex eddy motions, which is also characterized with the growing smaller and flatter circles in the complexity-entropy causality plane. With the help of the permutation entropy and the statistical complexity, the differences between the chaotic fluctuations detected in the experiments and the classical Tollmien-Schlichting wave are shown and discussed. It is also found that the chaotic features of the instability fluctuations can be approximated with a number of regular sine waves superimposed on the fluctuations of the undisturbed laminar boundary layer. This result is related to the physical mechanism in the generation of the instability fluctuations, which is the noise-induced chaos.
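A compact NumPy sketch of ordinal-pattern (permutation) entropy in the usual Bandt-Pompe form is given below; the order d = 4 and unit delay are illustrative choices, and the complexity-entropy causality plane used in the study requires an additional statistical complexity computation not reproduced here.

import numpy as np
from math import factorial

def permutation_entropy(x, order=4, delay=1, normalize=True):
    """Shannon entropy of the ordinal patterns of a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -(p * np.log(p)).sum()
    return h / np.log(factorial(order)) if normalize else h

# usage: u = np.random.randn(5000)     # stand-in for a hot-wire velocity signal
# pe = permutation_entropy(u)          # close to 1 for white noise, smaller for more ordered signals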
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED in that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
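As the abstract notes, the fractile-constrained maximum entropy distribution is flat over each fractile interval; a small sketch of that piecewise-uniform density is given below, with made-up bounds and fractile values used purely as an example.

import numpy as np

def fmed_density(fractiles, cum_probs):
    """Piecewise-uniform maximum entropy density from fractiles and cumulative probabilities.
    fractiles: increasing values including the lower and upper bounds;
    cum_probs: matching cumulative probabilities, starting at 0 and ending at 1."""
    x = np.asarray(fractiles, dtype=float)
    p = np.asarray(cum_probs, dtype=float)
    heights = np.diff(p) / np.diff(x)     # constant density on each fractile interval
    def pdf(v):
        v = np.asarray(v, dtype=float)
        idx = np.clip(np.searchsorted(x, v, side="right") - 1, 0, len(heights) - 1)
        return np.where((v >= x[0]) & (v <= x[-1]), heights[idx], 0.0)
    return pdf

# usage: an elicited 10-50-90 assessment on [0, 100] (illustrative numbers)
# pdf = fmed_density([0, 20, 45, 80, 100], [0.0, 0.10, 0.50, 0.90, 1.0])
# pdf(np.array([10.0, 30.0, 95.0]))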
Dimensionality and entropy of spontaneous and evoked rate activity
NASA Astrophysics Data System (ADS)
Engelken, Rainer; Wolf, Fred
Cortical circuits exhibit complex activity patterns both spontaneously and evoked by external stimuli. Finding low-dimensional structure in population activity is a challenge. What is the diversity of the collective neural activity and how is it affected by an external stimulus? Using concepts from ergodic theory, we calculate the attractor dimensionality and dynamical entropy production of these networks. We obtain these two canonical measures of the collective network dynamics from the full set of Lyapunov exponents. We consider a randomly-wired firing-rate network that exhibits chaotic rate fluctuations for sufficiently strong synaptic weights. We show that dynamical entropy scales logarithmically with synaptic coupling strength, while the attractor dimensionality saturates. Thus, despite the increasing uncertainty, the diversity of collective activity saturates for strong coupling. We find that a time-varying external stimulus drastically reduces both entropy and dimensionality. Finally, we analytically approximate the full Lyapunov spectrum in several limiting cases by random matrix theory. Our study opens a novel avenue to characterize the complex dynamics of rate networks and the geometric structure of the corresponding high-dimensional chaotic attractor. This work received funding from the Evangelisches Studienwerk Villigst, the DFG through CRC 889, and the Volkswagen Foundation.
Analysis of acoustic and entropy disturbances in a hypersonic wind tunnel
NASA Astrophysics Data System (ADS)
Schilden, Thomas; Schröder, Wolfgang; Ali, Syed Raza Christopher; Schreyer, Anne-Marie; Wu, Jie; Radespiel, Rolf
2016-05-01
The tunnel noise in a Mach 5.9 Ludwieg tube is determined by two methods, a newly developed cone-probe-DNS method and a reliable hot-wire-Pitot-probe method. The new method combines pressure and heat flux measurements using a cone probe and direct numerical simulation (DNS). The modal analysis is based on transfer functions obtained by the DNS to link the measured quantities to the tunnel noise. The measurements are performed for several unit Reynolds numbers in the range of 5 × 10^6 ≤ Re/m ≤ 16 × 10^6 and probe positions to identify the sensitivities of tunnel noise. The DNS solutions show similar response mechanisms of the cone probe to incident acoustic and entropy waves which leads to high condition numbers of the transfer matrix such that a unique relationship between response and source mechanism can be only determined by neglecting the contribution of the non-acoustic modes to the pressure and heat flux fluctuations. The results of the cone-probe-DNS method are compared to a modal analysis based on the hot-wire-Pitot-probe method which provides reliable results in the frequency range less than 50 kHz. In this low frequency range the findings of the two different mode analyses agree well. At higher frequencies, the newly developed cone-probe-DNS method is still valid. The tunnel noise is dominated by the acoustic mode, since the entropy mode is lower by one order of magnitude and the vorticity mode can be neglected. The acoustic mode is approximately 0.5% at 30 kHz and the cone-probe-DNS data illustrate the acoustic mode to decrease and to asymptotically approach 0.2%.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means for providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low SNR microseismic signals often cannot effectively be detected by routine methods. To solve this problem, this paper presents permutation entropy and a support vector machine to detect low SNR microseismic events. First, an extraction method of signal features based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, the detection model of low SNR microseismic events based on the least squares support vector machine is built by performing a multi-scale permutation entropy calculation for the collected vibration signals, constructing a feature vector set of signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment proves that the different characteristics of the two can be fully expressed by using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which has the features of high classification accuracy and fast real-time algorithms, can meet the requirements of online, real-time extractions of microseismic events.
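A self-contained sketch of multi-scale permutation entropy feature extraction of the kind described above is given below: the signal is coarse-grained at several scales and the ordinal-pattern entropy of each coarse-grained series becomes one feature. The scales, the pattern order, and the downstream classifier (a plain SVM is shown instead of the least squares SVM) are illustrative assumptions.

import numpy as np
from math import factorial

def perm_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(order))

def mpe_features(x, scales=range(1, 6), order=3):
    """Coarse-grain by non-overlapping averaging, then compute permutation entropy per scale."""
    x = np.asarray(x, dtype=float)
    feats = []
    for s in scales:
        cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)      # coarse-grained series at scale s
        feats.append(perm_entropy(cg, order=order))
    return np.array(feats)

# usage with a generic classifier (illustrative, not the paper's least squares SVM):
# from sklearn.svm import SVC
# X = np.vstack([mpe_features(sig) for sig in signal_list])       # signal_list, labels: hypothetical data
# clf = SVC().fit(X, labels)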
NASA Astrophysics Data System (ADS)
Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng
2013-11-01
A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. This EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
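For background, the Shannon (differential) entropy of a log-normal spread-rate distribution with location parameter mu and scale parameter sigma has the closed form

h = mu + (1/2) ln(2 pi e sigma^2),

so that, as described above, maximizing the rate of entropy production singles out the width parameter. This is the standard log-normal entropy formula, stated here only as supporting detail; the constraints actually used in the paper are as summarized in the abstract.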
The Conditional Entropy Power Inequality for Bosonic Quantum Systems
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario
2018-06-01
We prove the conditional Entropy Power Inequality for Gaussian quantum systems. This fundamental inequality determines the minimum quantum conditional von Neumann entropy of the output of the beam-splitter or of the squeezing among all the input states where the two inputs are conditionally independent given the memory and have given quantum conditional entropies. We also prove that, for any couple of values of the quantum conditional entropies of the two inputs, the minimum of the quantum conditional entropy of the output given by the conditional Entropy Power Inequality is asymptotically achieved by a suitable sequence of quantum Gaussian input states. Our proof of the conditional Entropy Power Inequality is based on a new Stam inequality for the quantum conditional Fisher information and on the determination of the universal asymptotic behaviour of the quantum conditional entropy under the heat semigroup evolution. The beam-splitter and the squeezing are the central elements of quantum optics, and can model the attenuation, the amplification and the noise of electromagnetic signals. This conditional Entropy Power Inequality will have a strong impact in quantum information and quantum cryptography. Among its many possible applications there is the proof of a new uncertainty relation for the conditional Wehrl entropy.
Entropy and generalized least square methods in assessment of the regional value of streamgages
Markus, M.; Vernon, Knapp H.; Tasker, Gary D.
2003-01-01
The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least square regression (GLS) method, developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.
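One common entropy-based building block for such network assessments is the transinformation (mutual information) between two gaging records, estimated from discretized flows; the sketch below shows that estimate. The bin count is an illustrative assumption, and the paper's total net information is a derived quantity not reproduced here.

import numpy as np

def transinformation(q1, q2, bins=10):
    """Mutual information (nats) between two equally long, discretized streamflow records."""
    joint, _, _ = np.histogram2d(q1, q2, bins=bins)
    p12 = joint / joint.sum()
    p1 = p12.sum(axis=1)
    p2 = p12.sum(axis=0)
    h1 = -(p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
    h2 = -(p2[p2 > 0] * np.log(p2[p2 > 0])).sum()
    h12 = -(p12[p12 > 0] * np.log(p12[p12 > 0])).sum()
    return h1 + h2 - h12                     # T(X;Y) = H(X) + H(Y) - H(X,Y)

# usage: qa, qb = np.random.lognormal(size=(2, 3650))    # two synthetic 10-year daily flow records
# t = transinformation(qa, qb)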
An entropy model to measure heterogeneity of pedestrian crowds using self-propelled agents
NASA Astrophysics Data System (ADS)
Rangel-Huerta, A.; Ballinas-Hernández, A. L.; Muñoz-Meléndez, A.
2017-05-01
An entropy model to characterize the heterogeneity of a pedestrian crowd in a counter-flow corridor is presented. Pedestrians are modeled as self-propelled autonomous agents that are able to perform maneuvers to avoid collisions based on a set of simple rules of perception and action. An observer can determine a probability distribution function of the displayed behavior of pedestrians based only on external information. Three types of pedestrian are modeled, relaxed, standard and hurried pedestrians depending on their preferences of turn and non-turn when walking. Thus, using these types of pedestrians two crowds can be simulated: homogeneous and heterogeneous crowds. Heterogeneity is measured in this research based on the entropy in function of time. For that, the entropy of a homogeneous crowd comprising standard pedestrians is used as reference. A number of simulations to measure entropy of pedestrian crowds were conducted by varying different combinations of types of pedestrians, initial simulation conditions of macroscopic flow, as well as density of the crowd. Results from these simulations show that our entropy model is sensitive enough to capture the effect of both the initial simulation conditions about the spatial distribution of pedestrians in a corridor, and the composition of a crowd. Also, a relevant finding is that entropy in function of density presents a phase transition in the critical region.
Long memory and volatility clustering: Is the empirical evidence consistent across stock markets?
NASA Astrophysics Data System (ADS)
Bentes, Sónia R.; Menezes, Rui; Mendes, Diana A.
2008-06-01
Long memory and volatility clustering are two stylized facts frequently related to financial markets. Traditionally, these phenomena have been studied based on conditionally heteroscedastic models like ARCH, GARCH, IGARCH and FIGARCH, inter alia. One advantage of these models is their ability to capture nonlinear dynamics. Another interesting manner to study the volatility phenomenon is by using measures based on the concept of entropy. In this paper we investigate the long memory and volatility clustering for the S&P 500, NASDAQ 100 and Stoxx 50 indexes in order to compare the US and European Markets. Additionally, we compare the results from conditionally heteroscedastic models with those from the entropy measures. In the latter, we examine Shannon entropy, Rényi entropy and Tsallis entropy. The results corroborate the previous evidence of nonlinear dynamics in the time series considered.
Multifractals embedded in short time series: An unbiased estimation of probability moment
NASA Astrophysics Data System (ADS)
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the basis for several essential concepts, such as the multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. By using short time series with lengths of several hundred points, a comparison with well-established tools displays significant advantages in its performance over the other methods. The factorial-moment-based estimation can correctly evaluate the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data
2014-01-01
Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.
Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried
2014-01-01
Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy that does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed as diameter-sensitive flow entropy, is proposed. This new methodology is then assessed by using other reliability methods, including Monte Carlo Simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of the two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. To ensure reliability, a comparative analysis between the flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could be potentially integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy has no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
NASA Astrophysics Data System (ADS)
Alagumariappan, Paramasivam; Krishnamurthy, Kamalanand; Kandiah, Sundravadivelu; Ponnuswamy, Mannar Jawahar
2017-06-01
Electrogastrograms (EGG) are electrical signals originating from the digestive system, which are closely correlated with its mechanical activity. Electrogastrography is an efficient non-invasive method for examining the physiological and pathological states of the human digestive system. There are several factors, such as fat conductivity, abdominal thickness, and changes in electrode surface area, which affect the quality of the recorded EGG signals. In this work, the effect of variations in the contact area of surface electrodes on the information content of the measured electrogastrograms is analyzed using Rényi entropy and Teager-Kaiser Energy (TKE). Two different circular cutaneous electrodes with approximate contact areas of 201.14 mm² and 283.64 mm² were adopted, and EGG signals were acquired using the standard three-electrode protocol. Further, the information content of the measured EGG signals was analyzed using the computed values of entropy and energy. Results demonstrate that the information content of the measured EGG signals increases by 6.72% for an increase in the contact area of the surface electrode by 29.09%. Further, it was observed that the average energy increases with increasing contact surface area. This work appears to be of high clinical significance, since accurate measurement of EGG signals without loss of information content is highly useful for the design of diagnostic assistance tools for automated diagnosis and mass screening of digestive disorders.
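Both quantities used in this study have compact discrete-time definitions, sketched below. The histogram binning and the Rényi order α = 2 are illustrative assumptions; the abstract does not state which order or estimator was used.

```python
import numpy as np

def renyi_entropy(signal, alpha=2, bins=64):
    """Rényi entropy of order alpha from a histogram estimate of the amplitude distribution."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if alpha == 1:                                  # Shannon limit
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def teager_kaiser_energy(signal):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(signal, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Example on a synthetic slow-wave-like signal (~3 cycles per minute, illustrative)
t = np.linspace(0, 60, 6000)
egg = np.sin(2 * np.pi * 0.05 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(renyi_entropy(egg), teager_kaiser_energy(egg).mean())
```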
An entropy-based analysis of lane changing behavior: An interactive approach.
Kosun, Caglar; Ozdemir, Serhan
2017-05-19
As a novelty, this article proposes the nonadditive entropy framework for the description of driver behaviors during lane changing. The authors also state that this entropy framework governs lane changing behavior in traffic flow in accordance with long-range vehicular interactions and traffic safety. The nonadditive entropy framework is the generalized theory of thermostatistical mechanics, and vehicular interactions during lane changing are considered within it. The interactive approach to drivers' lane changing behavior is developed through the traffic flow scenarios presented in the article. According to these scenarios, 4 categories of traffic flow and driver behaviors are obtained. Through the scenarios, comparative analyses of the nonadditive and additive entropy domains are also provided. Two quadrants of the categories belong to the nonadditive entropy domain; the rest lie in the additive entropy domain. Driving behaviors are extracted, and the scenarios show that nonadditivity matches safe driving well, whereas additivity corresponds to unsafe driving. Furthermore, the cooperative traffic system is associated with nonadditivity, where long-range interactions are present, while the uncooperative traffic system falls into the additivity domain. The analyses also indicate that transitions of traffic flow among the quadrants are possible. This article shows that lane changing behavior could be generalized as nonadditive, with additivity as a special case, depending on the given traffic conditions. The nearest and close neighbor models are well within the conventional additive entropy framework. In this article, both long-range vehicular interactions and safe driving behavior in traffic are handled in the nonadditive entropy domain. It is also inferred that the Tsallis entropy region would correspond to mandatory lane changing behavior, whereas the additive entropy region, whether extensive or nonextensive, would match discretionary lane changing behavior. The article argues that driver behaviors should lie in the nonadditive entropy domain to provide a safe traffic stream, with vehicle accident prevention in mind.
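The nonadditive framework referred to here is the Tsallis entropy, which recovers the ordinary additive (Boltzmann-Gibbs-Shannon) entropy in the limit q → 1. A short sketch is given below; the four-state distribution is an illustrative assumption, not data from the article.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); Shannon entropy in the q -> 1 limit."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))   # additive (Boltzmann-Gibbs-Shannon) limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Illustrative distribution over four lane-change "states"
p = [0.4, 0.3, 0.2, 0.1]
for q in (0.5, 1.0, 1.5, 2.0):
    print(q, tsallis_entropy(p, q))
```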
Quantum thermodynamics and quantum entanglement entropies in an expanding universe
NASA Astrophysics Data System (ADS)
Farahmand, Mehrnoosh; Mohammadzadeh, Hosein; Mehri-Dehnavi, Hossein
2017-05-01
We investigate an asymptotically spatially flat Robertson-Walker space-time from two different perspectives. First, using von Neumann entropy, we evaluate the entanglement generation due to the encoded information in space-time. Then, we work out the entropy of particle creation based on the quantum thermodynamics of the scalar field on the underlying space-time. We show that the general behavior of both entropies is the same. Therefore, the entanglement can be applied to the customary quantum thermodynamics of the universe. Also, using these entropies, we can recover some information about the parameters of space-time.
Measuring Gaussian quantum information and correlations using the Rényi entropy of order 2.
Adesso, Gerardo; Girolami, Davide; Serafini, Alessio
2012-11-09
We demonstrate that the Rényi-2 entropy provides a natural measure of information for any multimode Gaussian state of quantum harmonic systems, operationally linked to the phase-space Shannon sampling entropy of the Wigner distribution of the state. We prove that, in the Gaussian scenario, such an entropy satisfies the strong subadditivity inequality, a key requirement for quantum information theory. This allows us to define and analyze measures of Gaussian entanglement and more general quantum correlations based on such an entropy, which are shown to satisfy relevant properties such as monogamy.
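For a Gaussian state the Rényi-2 entropy reduces to a closed form in the covariance matrix σ of the quadrature operators, S_2 = (1/2) ln det σ, in the convention where the vacuum covariance matrix is the identity (so the vacuum has S_2 = 0). The short sketch below illustrates this formula numerically; the convention and the single-mode thermal-state example are assumptions made for illustration.

```python
import numpy as np

def renyi2_entropy_gaussian(cov):
    """Rényi-2 entropy S2 = 0.5 * ln(det sigma) of a Gaussian state.

    `cov` is the 2N x 2N quadrature covariance matrix in the convention
    where the vacuum state has cov = identity (so S2(vacuum) = 0).
    """
    sign, logdet = np.linalg.slogdet(np.asarray(cov, dtype=float))
    if sign <= 0:
        raise ValueError("covariance matrix must be positive definite")
    return 0.5 * logdet

# Single-mode thermal state with mean photon number n_bar = 1 (illustrative):
n_bar = 1.0
cov_thermal = (2 * n_bar + 1) * np.eye(2)
print(renyi2_entropy_gaussian(np.eye(2)))       # 0.0 for the vacuum
print(renyi2_entropy_gaussian(cov_thermal))     # ln(2*n_bar + 1) = ln 3
```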
Gyrokinetic simulations of turbulent transport in a ring dipole plasma.
Kobayashi, Sumire; Rogers, Barrett N; Dorland, William
2009-07-31
Gyrokinetic flux-tube simulations of turbulent transport due to small-scale entropy modes are presented in a ring-dipole magnetic geometry relevant to the Columbia-MIT levitated dipole experiment (LDX) [J. Kesner, Plasma Phys. J. 23, 742 (1997)]. Far from the current ring, the dipolar magnetic field leads to strong parallel variations, while close to the ring the system becomes nearly uniform along circular magnetic field lines. The transport in these two limits is found to be quantitatively similar given an appropriate normalization based on the local outboard parameters. The transport increases strongly with the density gradient, and for small η = L_n/L_T < 1, T_i ≈ T_e, and typical LDX parameters, can reach large levels. Consistent with linear theory, temperature gradients are stabilizing, and for T_i ≈ T_e can completely cut off the transport when η ≳ 0.6.
Surname complex network for Brazil and Portugal
NASA Astrophysics Data System (ADS)
Ferreira, G. D.; Viswanathan, G. M.; da Silva, L. R.; Herrmann, H. J.
2018-06-01
We present a study of social networks based on the analysis of Brazilian and Portuguese family names (surnames). We construct networks whose nodes are names of families and whose edges represent parental relations between two families. From these networks we extract the connectivity distribution, clustering coefficient, shortest path and centrality. We find that the connectivity distribution follows an approximate power law. We associate the number of hubs, centrality and entropy with the degree of miscegenation in the societies of both countries. Our results show that Portuguese society has a higher miscegenation degree than Brazilian society. All networks analyzed lead to approximate inverse-square power laws in the degree distribution. We conclude that the thermodynamic limit is reached for small networks (3 or 4 thousand nodes). The assortative mixing of all networks is negative, showing that the more connected vertices are connected to vertices with lower connectivity. Finally, the network of surnames presents some small-world characteristics.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Qu, Weilu; Qiu, Weiting
2018-03-01
In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; then the Choquet integral is introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mineral projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method; when the compatibility matrix analysis meets the consistency requirements, and differences remain between the subjective and objective weights, both proportions are adjusted moderately. On this basis, the fuzzy evaluation matrix is then constructed for performance evaluation. The simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.
Marsh, J. N.; Wallace, K. D.; McCarthy, J. E.; Wickerhauser, M. V.; Maurizi, B. N.; Lanza, G. M.; Wickline, S. A.; Hughes, M. S.
2011-01-01
Previously, we reported new methods for ultrasound signal characterization using entropy, H_f; a generalized entropy, the Rényi entropy, I_f(r); and a limiting form of the Rényi entropy suitable for real-time calculation, I_f,∞. All of these quantities demonstrated significantly more sensitivity to subtle changes in scattering architecture than energy-based methods in certain settings. In this study, the real-time calculable limit of the Rényi entropy, I_f,∞, is applied for the imaging of angiogenic murine neovasculature in a breast cancer xenograft using a targeted contrast agent. It is shown that this approach may be used to detect reliably the accumulation of targeted nanoparticles at five minutes post-injection in this in vivo model. PMID:20679020
Relating quantum coherence and correlations with entropy-based measures.
Wang, Xiao-Li; Yue, Qiu-Ling; Yu, Chao-Hua; Gao, Fei; Qin, Su-Juan
2017-09-21
Quantum coherence and quantum correlations are important quantum resources for quantum computation and quantum information. In this paper, using entropy-based measures, we investigate the relationships between quantum correlated coherence, which is the coherence between subsystems, and two main kinds of quantum correlations as defined by quantum discord as well as quantum entanglement. In particular, we show that quantum discord and quantum entanglement can be well characterized by quantum correlated coherence. Moreover, we prove that the entanglement measure formulated by quantum correlated coherence is lower and upper bounded by the relative entropy of entanglement and the entanglement of formation, respectively, and equal to the relative entropy of entanglement for all the maximally correlated states.
Generalized permutation entropy analysis based on the two-index entropic form S_{q,δ}
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian
2015-05-01
Permutation entropy (PE) is a novel measure to quantify the complexity of nonlinear time series. In this paper, we propose a generalized permutation entropy (PE_{q,δ}) based on the recently postulated entropic form S_{q,δ}, which was proposed as a unification of the well-known S_q of nonextensive statistical mechanics and S_δ, a possibly appropriate candidate for the black-hole entropy. We find that PE_{q,δ} with appropriate parameters can amplify minor changes and trends in complexity in comparison to PE. Experiments with this generalized permutation entropy method are performed on both synthetic and stock data, showing its power. Results show that PE_{q,δ} is an exponential function of q and that the power k(δ) is a constant once δ is determined. Some discussion of k(δ) is provided. Besides, we also find some interesting results about the power-law behavior.
Generalized sample entropy analysis for traffic signals based on similarity measure
NASA Astrophysics Data System (ADS)
Shang, Du; Xu, Mengjia; Shang, Pengjian
2017-05-01
Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper, a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, which is based on a similarity distance, matches signal patterns in a different way and reveals distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study and show the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
Role of information theoretic uncertainty relations in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz; ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin; Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk
2015-04-15
Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and the Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and the Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
Fozouni, Niloufar; Chopp, Michael; Nejad-Davarani, Siamak P.; Zhang, Zheng Gang; Lehman, Norman L.; Gu, Steven; Ueno, Yuji; Lu, Mei; Ding, Guangliang; Li, Lian; Hu, Jiani; Bagher-Ebadian, Hassan; Hearshen, David; Jiang, Quan
2013-01-01
Background To overcome the limitations of conventional diffusion tensor magnetic resonance imaging resulting from the assumption of a Gaussian diffusion model for characterizing voxels containing multiple axonal orientations, Shannon's entropy was employed to evaluate white matter structure in human brain and in brain remodeling after traumatic brain injury (TBI) in a rat. Methods Thirteen healthy subjects were investigated using a Q-ball based DTI data sampling scheme. FA and entropy values were measured in white matter bundles, white matter fiber crossing areas, different gray matter (GM) regions and cerebrospinal fluid (CSF). Axonal densities from the same regions of interest (ROIs) were evaluated in Bielschowsky and Luxol fast blue stained autopsy (n = 30) brain sections by light microscopy. As a case demonstration, a Wistar rat subjected to TBI and treated with bone marrow stromal cells (MSC) 1 week after TBI was employed to illustrate the superior ability of entropy over FA in detecting reorganized crossing axonal bundles as confirmed by histological analysis with Bielschowsky and Luxol fast blue staining. Results Unlike FA, entropy was less affected by axonal orientation and more affected by axonal density. A significant agreement (r = 0.91) was detected between entropy values from in vivo human brain and histologically measured axonal density from post mortem tissue from the same brain structures. The MSC treated TBI rat demonstrated that the entropy approach is superior to FA in detecting axonal remodeling after injury. Compared with FA, entropy detected new axonal remodeling regions with crossing axons, confirmed with immunohistological staining. Conclusions Entropy measurement is more effective in distinguishing axonal remodeling after injury, when compared with FA. Entropy is also more sensitive to axonal density than axonal orientation, and thus may provide a more accurate reflection of axonal changes that occur in neurological injury and disease. PMID:24143186
Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra
NASA Astrophysics Data System (ADS)
Riechers, Paul M.; Crutchfield, James P.
2018-03-01
The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.
Nonlinear aerodynamic effects on bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Pittman, J. L.; Siclari, M. J.
1984-01-01
The supersonic flow about generic bodies was analyzed to identify the elements of the nonlinear flow and to determine the influence of geometry and flow conditions on the magnitude of these nonlinearities. The nonlinear effects were attributed to separated-flow nonlinearities and attached-flow nonlinearities. The nonlinear attached-flow contribution was further broken down into large-disturbance effects and entropy effects. Conical, attached-flow boundaries were developed to illustrate the flow regimes where the nonlinear effects are significant, and the use of these boundaries for angle of attack and three-dimensional geometries was indicated. Normal-force and pressure comparisons showed that the large-disturbance and separated-flow effects were the dominant nonlinear effects at low supersonic Mach numbers and that the entropy effects were dominant for high supersonic Mach number flow. The magnitude of all the nonlinear effects increased with increasing angle of attack. A full-potential method, NCOREL, which includes an approximate entropy correction, was shown to provide accurate attached-flow pressure estimates from Mach 1.6 through 4.6.
Derivation of Markov processes that violate detailed balance
NASA Astrophysics Data System (ADS)
Lee, Julian
2018-03-01
Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
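The widely accepted expression referred to above is, for a continuous-time Markov chain with stationary distribution π and rates k, the Schnakenberg entropy production rate σ = (1/2) Σ_{i≠j} (π_i k_ij − π_j k_ji) ln[(π_i k_ij)/(π_j k_ji)], which vanishes exactly when detailed balance holds. The sketch below evaluates it for a driven three-state cycle; the rate values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stationary_distribution(k):
    """Stationary distribution of a continuous-time Markov chain.

    k[i, j] is the transition rate from state i to state j (i != j).
    """
    n = k.shape[0]
    Q = k.astype(float).copy()
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))          # generator: rows sum to zero
    # Solve pi Q = 0 with sum(pi) = 1 (replace one balance equation by normalization).
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def entropy_production_rate(k):
    """Schnakenberg entropy production rate (units of k_B per unit time)."""
    pi = stationary_distribution(k)
    sigma = 0.0
    n = k.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if k[i, j] > 0 and k[j, i] > 0:
                flux = pi[i] * k[i, j] - pi[j] * k[j, i]
                affinity = np.log((pi[i] * k[i, j]) / (pi[j] * k[j, i]))
                sigma += flux * affinity
    return sigma

# Driven three-state cycle: clockwise rates 2, counter-clockwise rates 1 (illustrative)
k = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
print(entropy_production_rate(k))   # > 0: detailed balance is violated
```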
Thermodynamics of Terrestrial Evolution
Kirkaldy, J. S.
1965-01-01
The causal element of biological evolution and development can be understood in terms of a potential function which is generalized from the variational principles of irreversible thermodynamics. This potential function is approximated by the rate of entropy production in a configuration space which admits of macroscopic excursions by fluctuation and regression as well as microscopic ones. Analogously to Onsager's dissipation function, the potential takes the form of a saddle surface in this configuration space. The path of evolution following from an initial high dissipation state within the fixed constraint provided by the invariant energy flux from the sun tends toward the stable saddle point by a series of spontaneous regressions which lower the entropy production rate and by an alternating series of spontaneous fluctuations which introduce new internal constraints and lead to a higher entropy production rate. The potential thus rationalizes the system's observed tendency toward “chemical imperialism” (high dissipation) while simultaneously accommodating the development of “dynamic efficiency” and complication (low dissipation). PMID:5884019
Larson-Miller Constant of Heat-Resistant Steel
NASA Astrophysics Data System (ADS)
Tamura, Manabu; Abe, Fujio; Shiba, Kiyoyuki; Sakasegawa, Hideo; Tanigawa, Hiroyasu
2013-06-01
Long-term rupture data for 79 types of heat-resistant steels including carbon steel, low-alloy steel, high-alloy steel, austenitic stainless steel, and superalloy were analyzed, and a constant for the Larson-Miller (LM) parameter was obtained in the current study for each material. The calculated LM constant, C, is approximately 20 for heat-resistant steels and alloys except for high-alloy martensitic steels with high creep resistance, for which C ≈ 30 . The apparent activation energy was also calculated, and the LM constant was found to be proportional to the apparent activation energy with a high correlation coefficient, which suggests that the LM constant is a material constant possessing intrinsic physical meaning. The contribution of the entropy change to the LM constant is not small, especially for several martensitic steels with large values of C. Deformation of such martensitic steels should accompany a large entropy change of 10 times the gas constant at least, besides the entropy change due to self-diffusion.
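The Larson-Miller parameter referred to here combines absolute temperature T and rupture time t_r (in hours) as P_LM = T (C + log10 t_r), with C the material constant discussed in the abstract (about 20 for most heat-resistant steels, about 30 for highly creep-resistant martensitic steels). A small sketch with illustrative inputs:

```python
import math

def larson_miller(temperature_k, rupture_time_h, c=20.0):
    """Larson-Miller parameter P = T * (C + log10(t_r))."""
    return temperature_k * (c + math.log10(rupture_time_h))

# Illustrative: a specimen at 600 C (873 K) rupturing after 10,000 h
print(larson_miller(873.0, 1.0e4, c=20.0))   # typical C for heat-resistant steels
print(larson_miller(873.0, 1.0e4, c=30.0))   # high-creep-resistance martensitic steels
```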
Discontinuous Galerkin Methods for NonLinear Differential Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy; Mansour, Nagi (Technical Monitor)
2001-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.
Shape dependence of two-cylinder Rényi entropies for free bosons on a lattice
NASA Astrophysics Data System (ADS)
Chojnacki, Leilee; Cook, Caleb Q.; Dalidovich, Denis; Hayward Sierens, Lauren E.; Lantagne-Hurtubise, Étienne; Melko, Roger G.; Vlaar, Tiffany J.
2016-10-01
Universal scaling terms occurring in Rényi entanglement entropies have the potential to bring new understanding to quantum critical points in free and interacting systems. Quantitative comparisons between analytical continuum theories and numerical calculations on lattice models play a crucial role in advancing such studies. In this paper, we exactly calculate the universal two-cylinder shape dependence of entanglement entropies for free bosons on finite-size square lattices, and compare to approximate functions derived in the continuum using several different Ansätze. Although none of these Ansätze are exact in the thermodynamic limit, we find that numerical fits are in good agreement with continuum functions derived using the anti-de Sitter/conformal field theory correspondence, an extensive mutual information model, and a quantum Lifshitz model. We use fits of our lattice data to these functions to calculate universal scalars defined in the thin-cylinder limit, and compare to values previously obtained for the free boson field theory in the continuum.
Beyond the Shannon–Khinchin formulation: The composability axiom and the universal-group entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es
2016-02-15
The notion of entropy is ubiquitous both in natural and social sciences. In the last two decades, a considerable effort has been devoted to the study of new entropic forms, which generalize the standard Boltzmann–Gibbs (BG) entropy and could be applicable in thermodynamics, quantum mechanics and information theory. In Khinchin (1957), by extending previous ideas of Shannon (1948) and Shannon and Weaver (1949), Khinchin proposed a characterization of the BG entropy, based on four requirements, nowadays known as the Shannon–Khinchin (SK) axioms. The purpose of this paper is twofold. First, we show that there exists an intrinsic group-theoretical structure behind the notion of entropy. It comes from the requirement of composability of an entropy with respect to the union of two statistically independent systems, that we propose in an axiomatic formulation. Second, we show that there exists a simple universal family of trace-form entropies. This class contains many well known examples of entropies and infinitely many new ones, a priori multi-parametric. Due to its specific relation with Lazard's universal formal group of algebraic topology, the new general entropy introduced in this work will be called the universal-group entropy. A new example of multi-parametric entropy is explicitly constructed.
Principle of maximum entanglement entropy and local physics of strongly correlated materials.
Lanatà, Nicola; Strand, Hugo U R; Yao, Yongxin; Kotliar, Gabriel
2014-07-18
We argue that, because of quantum entanglement, the local physics of strongly correlated materials at zero temperature is described to a very good approximation by a simple generalized Gibbs distribution, which depends on a relatively small number of local quantum thermodynamical potentials. We demonstrate that our statement is exact in certain limits, and we present numerical calculations of the iron compounds FeSe and FeTe and of elemental cerium, employing the Gutzwiller approximation, that strongly support our theory in general.
Rogue waves and entropy consumption
NASA Astrophysics Data System (ADS)
Hadjihoseini, Ali; Lind, Pedro G.; Mori, Nobuhito; Hoffmann, Norbert P.; Peinke, Joachim
2017-11-01
Based on data from the Sea of Japan and the North Sea, the occurrence of rogue waves is analyzed by a scale-dependent stochastic approach, which interlinks fluctuations of waves for different spacings. With this approach we are able to determine a stochastic cascade process, which provides information on the general multipoint statistics. Furthermore, the evolution of single trajectories in scale, which characterize wave height fluctuations in the surroundings of a chosen location, can be determined. Explicit knowledge of the stochastic process makes it possible to assign entropy values to all wave events. We show that for these entropies the integral fluctuation theorem, a basic law of non-equilibrium thermodynamics, is valid. This implies that positive and negative entropy events must occur. Extreme events like rogue waves are characterized as negative entropy events. The statistics of these entropy fluctuations changes with the wave state; for the Sea of Japan the statistics of the entropies has a more pronounced tail for negative entropy values, indicating a higher probability of rogue waves.
Fundamental limits on quantum dynamics based on entropy change
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Khatri, Sumeet; Siopsis, George; Wilde, Mark M.
2018-01-01
It is well known in the realm of quantum mechanics and information theory that the entropy is non-decreasing for the class of unital physical processes. However, in general, the entropy does not exhibit monotonic behavior. This has restricted the use of entropy change in characterizing evolution processes. Recently, a lower bound on the entropy change was provided in the work of Buscemi, Das, and Wilde [Phys. Rev. A 93(6), 062314 (2016)]. We explore the limit that this bound places on the physical evolution of a quantum system and discuss how these limits can be used as witnesses to characterize quantum dynamics. In particular, we derive a lower limit on the rate of entropy change for memoryless quantum dynamics, and we argue that it provides a witness of non-unitality. This limit on the rate of entropy change leads to definitions of several witnesses for testing memory effects in quantum dynamics. Furthermore, from the aforementioned lower bound on entropy change, we obtain a measure of non-unitarity for unital evolutions.
Filter-based multiscale entropy analysis of complex physiological time series.
Xu, Yuesheng; Zhao, Liang
2013-08-01
Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
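For context, classical MSE coarse-grains the series at each scale by non-overlapping averaging (the piecewise-constant filter the authors refer to) and then computes the sample entropy of each coarse-grained series. A minimal sketch of that baseline is shown below; it reuses the `sample_entropy` function sketched earlier in this document, and the scale range and r factor are illustrative assumptions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averaging: the piecewise-constant filter of classical MSE."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def multiscale_entropy(x, max_scale=10, m=2, r_factor=0.15):
    """Sample entropy of the coarse-grained series at scales 1..max_scale.

    Assumes the `sample_entropy` sketch defined earlier is available.
    """
    return [sample_entropy(coarse_grain(x, s), m=m, r_factor=r_factor)
            for s in range(1, max_scale + 1)]
```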
Irreversible entropy model for damage diagnosis in resistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuadras, Angel, E-mail: angel.cuadras@upc.edu; Crisóstomo, Javier; Ovejas, Victoria J.
2015-10-28
We propose a method to characterize electrical resistor damage based on entropy measurements. Irreversible entropy and the rate at which it is generated are more convenient parameters than resistance for describing damage because they are essentially positive in virtue of the second law of thermodynamics, whereas resistance may increase or decrease depending on the degradation mechanism. Commercial resistors were tested in order to characterize the damage induced by power surges. Resistors were biased with constant and pulsed voltage signals, leading to power dissipation in the range of 4–8 W, which is well above the 0.25 W nominal power to initiate failure. Entropy was inferred from the added power and temperature evolution. A model is proposed to understand the relationship among resistance, entropy, and damage. The power surge dissipates into heat (Joule effect) and damages the resistor. The results show a correlation between entropy generation rate and resistor failure. We conclude that damage can be conveniently assessed from irreversible entropy generation. Our results for resistors can be easily extrapolated to other systems or machines that can be modeled based on their resistance.
Entropy Measurement for Biometric Verification Systems.
Lim, Meng-Hui; Yuen, Pong C
2016-05-01
Biometric verification systems are designed to accept multiple similar biometric measurements per user due to inherent intrauser variations in the biometric data. This is important to preserve reasonable acceptance rate of genuine queries and the overall feasibility of the recognition system. However, such acceptance of multiple similar measurements decreases the imposter's difficulty of obtaining a system-acceptable measurement, thus resulting in a degraded security level. This deteriorated security needs to be measurable to provide truthful security assurance to the users. Entropy is a standard measure of security. However, the entropy formula is applicable only when there is a single acceptable possibility. In this paper, we develop an entropy-measuring model for biometric systems that accepts multiple similar measurements per user. Based on the idea of guessing entropy, the proposed model quantifies biometric system security in terms of adversarial guessing effort for two practical attacks. Excellent agreement between analytic and experimental simulation-based measurement results on a synthetic and a benchmark face dataset justify the correctness of our model and thus the feasibility of the proposed entropy-measuring approach.
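Guessing entropy, on which the proposed model builds, is the expected number of sequential guesses an adversary needs when candidates are tried in order of decreasing probability: G = Σ_i i·p_(i), with the probabilities sorted in descending order. A minimal sketch follows; the example distributions are illustrative assumptions, not data from the paper.

```python
import numpy as np

def guessing_entropy(p):
    """Expected number of guesses when candidates are tried in order of decreasing probability."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    ranks = np.arange(1, len(p) + 1)
    return float(np.sum(ranks * p))

# A skewed distribution is easier to guess than a uniform one (illustrative)
print(guessing_entropy([0.7, 0.2, 0.05, 0.05]))   # 1.45 guesses on average
print(guessing_entropy([0.25] * 4))               # 2.5 guesses on average
```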
Dispersion entropy for the analysis of resting-state MEG regularity in Alzheimer's disease.
Azami, Hamed; Rostaghi, Mostafa; Fernandez, Alberto; Escudero, Javier
2016-08-01
Alzheimer's disease (AD) is a progressive degenerative brain disorder affecting memory, thinking, behaviour and emotion. It is the most common form of dementia and a big social problem in western societies. The analysis of brain activity may help to diagnose this disease. Changes in entropy methods have been reported useful in research studies to characterize AD. We have recently proposed dispersion entropy (DisEn) as a very fast and powerful tool to quantify the irregularity of time series. The aim of this paper is to evaluate the ability of DisEn, in comparison with fuzzy entropy (FuzEn), sample entropy (SampEn), and permutation entropy (PerEn), to discriminate 36 AD patients from 26 elderly control subjects using resting-state magnetoencephalogram (MEG) signals. The results obtained by DisEn, FuzEn, and SampEn, unlike PerEn, show that the AD patients' signals are more regular than controls' time series. The p-values obtained by the DisEn-, FuzEn-, SampEn-, and PerEn-based methods demonstrate the superiority of DisEn over FuzEn, SampEn, and PerEn. Moreover, the computation time for the newly proposed DisEn-based method is noticeably less than for the FuzEn, SampEn, and PerEn based approaches.
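Dispersion entropy maps the signal to a small number of classes through the normal cumulative distribution function, forms embedding ("dispersion") patterns, and takes the Shannon entropy of the pattern frequencies. A minimal sketch follows; the defaults (c = 6 classes, embedding dimension m = 3, unit delay) are common illustrative choices and not necessarily the values used in this study.

```python
import numpy as np
from math import erf, sqrt, log

def dispersion_entropy(x, num_classes=6, m=3, delay=1):
    """Dispersion entropy of a 1-D signal, normalized to [0, 1] by log(c**m)."""
    x = np.asarray(x, dtype=float)
    # 1) Map samples to classes 1..c through the normal CDF of the signal.
    mu, sigma = x.mean(), x.std()
    y = np.array([0.5 * (1 + erf((v - mu) / (sigma * sqrt(2)))) for v in x])
    z = np.clip(np.round(num_classes * y + 0.5).astype(int), 1, num_classes)
    # 2) Count dispersion patterns of length m.
    n_patterns = len(z) - (m - 1) * delay
    counts = {}
    for i in range(n_patterns):
        pattern = tuple(z[i + j * delay] for j in range(m))
        counts[pattern] = counts.get(pattern, 0) + 1
    # 3) Shannon entropy of the pattern distribution, normalized by log(c^m).
    p = np.array(list(counts.values()), dtype=float) / n_patterns
    return float(-np.sum(p * np.log(p)) / log(num_classes ** m))

rng = np.random.default_rng(2)
print(dispersion_entropy(rng.standard_normal(2000)))   # close to 1 for white noise
```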
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khvorostukhin, A. S.; Joint Institute for Nuclear Research, 141980 Dubna; Institute of Applied Physics, Moldova Academy of Science, MD-2028 Kishineu
Shear η and bulk ζ viscosities are calculated in a quasiparticle model within a relaxation-time approximation for pure gluon matter. Below T_c, the confined sector is described within a quasiparticle glueball model. The constructed equation of state reproduces the first-order phase transition for the glue matter. It is shown that with this equation of state, it is possible to describe the temperature dependence of the shear viscosity to entropy ratio η/s and the bulk viscosity to entropy ratio ζ/s in reasonable agreement with available lattice data, but absolute values of the ζ/s ratio underestimate the upper limits of this ratio in the lattice measurements typically by an order of magnitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobotka, L.G.; Department of Physics, Washington University, St. Louis, Missouri 63130; Charity, R.J.
2006-01-15
The caloric curve for mononuclear configurations is studied with a model that allows for both increased surface diffuseness and self-similar expansion. The evolution of the effective mass with density and excitation is included in a schematic fashion. The entropies, extracted in a local-density approximation, confirm that nuclei possess a soft mode that is predominantly a surface expansion. We also find that the mononuclear caloric curve (temperature versus excitation energy) exhibits a plateau. Thus a plateau should be the expectation with or without a multifragmentation-like phase transition. This conclusion is relevant only for reactions that populate the mononuclear region of phase space.
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and then three-dimensional perceptual maps of the results are provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to have similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
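The final embedding step in both proposed methods is metric multidimensional scaling applied to a precomputed dissimilarity matrix (here built from a cross-sample-entropy measure rather than Chebyshev distance). A minimal sketch of that step with scikit-learn is shown below; the random symmetric matrix merely stands in for the entropy-based dissimilarities and is purely illustrative.

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative symmetric dissimilarity matrix (in the paper this would come
# from a cross-sample-entropy measure between pairs of stock-index series).
rng = np.random.default_rng(3)
d = rng.random((18, 18))
dissimilarity = (d + d.T) / 2.0
np.fill_diagonal(dissimilarity, 0.0)

# Metric MDS on the precomputed matrix, embedded into 3 dimensions
# (the "three-dimensional perceptual maps" mentioned above).
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords.shape)   # (18, 3): one point per index
```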
Increased resting-state brain entropy in Alzheimer's disease.
Xue, Shao-Wei; Guo, Yonghu
2018-03-07
Entropy analysis of resting-state functional MRI (R-fMRI) is a novel approach to characterize brain temporal dynamics and facilitates the identification of abnormal brain activity caused by several disease conditions. However, Alzheimer's disease (AD)-related brain entropy mapping based on R-fMRI has not been assessed. Here, we measured the sample entropy and voxel-wise connectivity of the network degree centrality (DC) of the intrinsic brain activity acquired by R-fMRI in 26 patients with AD and 26 healthy controls. Compared with the controls, AD patients showed increased entropy in the middle temporal gyrus and the precentral gyrus and also showed decreased DC in the precuneus. Moreover, the magnitude of the negative correlation between local brain activity (entropy) and network connectivity (DC) was increased in AD patients in comparison with healthy controls. These findings provide new evidence on AD-related brain entropy alterations.
Microscopic insights into the NMR relaxation based protein conformational entropy meter
Kasinath, Vignesh; Sharp, Kim A.; Wand, A. Joshua
2013-01-01
Conformational entropy is a potentially important thermodynamic parameter contributing to protein function. Quantitative measures of conformational entropy are necessary for an understanding of its role but have been difficult to obtain. An empirical method that utilizes changes in conformational dynamics as a proxy for changes in conformational entropy has recently been introduced. Here we probe the microscopic origins of the link between conformational dynamics and conformational entropy using molecular dynamics simulations. Simulation of seven proteins gave an excellent correlation with measures of side-chain motion derived from NMR relaxation. The simulations show that the motion of methyl-bearing side chains is sufficiently coupled to that of other side chains to serve as an excellent reporter of the overall side-chain conformational entropy. These results tend to validate the use of experimentally accessible measures of methyl motion - the NMR-derived generalized order parameters - as a proxy from which to derive changes in protein conformational entropy. PMID:24007504
Geometric entropy and edge modes of the electromagnetic field
NASA Astrophysics Data System (ADS)
Donnelly, William; Wall, Aron C.
2016-11-01
We calculate the vacuum entanglement entropy of Maxwell theory in a class of curved spacetimes by Kaluza-Klein reduction of the theory onto a two-dimensional base manifold. Using two-dimensional duality, we express the geometric entropy of the electromagnetic field as the entropy of a tower of scalar fields, constant electric and magnetic fluxes, and a contact term, whose leading-order divergence was discovered by Kabat. The complete contact term takes the form of one negative scalar degree of freedom confined to the entangling surface. We show that the geometric entropy agrees with a statistical definition of entanglement entropy that includes edge modes: classical solutions determined by their boundary values on the entangling surface. This resolves a long-standing puzzle about the statistical interpretation of the contact term in the entanglement entropy. We discuss the implications of this negative term for black hole thermodynamics and the renormalization of Newton's constant.
NASA Astrophysics Data System (ADS)
Surblys, Donatas; Leroy, Frédéric; Yamaguchi, Yasutaka; Müller-Plathe, Florian
2018-04-01
We investigated the solid-liquid work of adhesion of water on a model silica surface by molecular dynamics simulations, where a methodology previously developed to determine the work of adhesion through thermodynamic integration was extended to a system with long-range electrostatic interactions between solid and liquid. In agreement with previous studies, the work of adhesion increased when the magnitude of the surface polarity was increased. On the other hand, we found that when comparing two systems with and without solid-liquid electrostatic interactions, set to have approximately the same total solid-liquid interfacial energy, the former had a significantly smaller work of adhesion and a broader distribution of interfacial energies, which has not been previously reported in detail. This was explained by the entropy contribution to the adhesion free energy; i.e., the former, with a broader energy distribution, had a larger interfacial entropy than the latter. While the entropy contribution to the work of adhesion is already known, as the work of adhesion itself is a free energy, these results indicate that, contrary to common belief, wetting behavior such as the contact angle is not only governed by the interfacial energy but is also significantly affected by the interfacial entropy. Finally, a new interpretation of interfacial entropy in the context of the variance of the solid-liquid energy was offered, from which a fast way to qualitatively estimate the work of adhesion was also presented.
Cycling-Induced Changes in the Entropy Profiles of Lithium Cobalt Oxide Electrodes
Hudak, N. S.; Davis, L. E.; Nagasubramanian, G.
2014-12-09
Entropy profiles of lithium cobalt oxide (LiCoO2) electrodes were measured at various stages in the cycle life to examine performance degradation and cycling-induced changes, or lack thereof, in thermodynamics. LiCoO2 electrodes were cycled at C/2 rate in half-cells (vs. lithium anodes) up to 20 cycles or at C/5 rate in full cells (vs. MCMB anodes) up to 500 cycles. The electrodes were then subjected to entropy measurements (∂E/∂T, where E is open-circuit potential and T is temperature) in half-cells at regular intervals over the approximate range 0.5 ≤ x ≤ 1 in LixCoO2. Despite significant losses in capacity upon cycling, neither cycling rate resulted in any change to the overall shape of the entropy profile relative to an uncycled electrode, indicating retention of the basic LiCoO2 structure, lithium insertion mechanism, and thermodynamics. This confirms that cycling-induced performance degradation in LiCoO2 electrodes is primarily caused by kinetic barriers that increase with cycling. In the case of electrodes cycled at C/5, there was a subtle, quantitative, and gradual change in the entropy profile in the narrow potential range of the hexagonal-to-monoclinic phase transition. The observed change is indicative of a decrease in the intralayer lithium ordering that occurs at these potentials, and it demonstrates that a cycling-induced structural disorder accompanies the kinetic degradation mechanisms.
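The reported entropy profiles follow from the standard electrochemical relation ΔS = n F (∂E_ocv/∂T), where n is the number of electrons transferred (n = 1 per lithium here) and F is the Faraday constant. A small sketch of this conversion, with an illustrative measured slope, is:

```python
F = 96485.33  # Faraday constant, C/mol

def reaction_entropy(dEdT_volts_per_kelvin, n_electrons=1):
    """Entropy change of the cell reaction: dS = n * F * (dE_ocv/dT), in J/(mol K)."""
    return n_electrons * F * dEdT_volts_per_kelvin

# Illustrative: an open-circuit-potential slope of -0.2 mV/K
print(reaction_entropy(-0.2e-3))   # about -19.3 J/(mol K)
```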
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Fisher, Travis C.; Nielsen, Eric J.; Frankel, Steven H.
2013-01-01
Nonlinear entropy stability and a summation-by-parts framework are used to derive provably stable, polynomial-based spectral collocation methods of arbitrary order. The new methods are closely related to discontinuous Galerkin spectral collocation methods commonly known as DGFEM, but exhibit a more general entropy stability property. Although the new schemes are applicable to a broad class of linear and nonlinear conservation laws, emphasis herein is placed on the entropy stability of the compressible Navier-Stokes equations.
Entropy of measurement and erasure: Szilard's membrane model revisited
NASA Astrophysics Data System (ADS)
Leff, Harvey S.; Rex, Andrew F.
1994-11-01
It is widely believed that measurement is accompanied by irreversible entropy increase. This conventional wisdom is based in part on Szilard's 1929 study of entropy decrease in a thermodynamic system by intelligent intervention (i.e., a Maxwell's demon) and Brillouin's association of entropy with information. Bennett subsequently argued that information acquisition is not necessarily irreversible, but information erasure must be dissipative (Landauer's principle). Inspired by the ensuing debate, we revisit the membrane model introduced by Szilard and find that it can illustrate and clarify (1) reversible measurement, (2) information storage, (3) decoupling of the memory from the system being measured, and (4) entropy increase associated with memory erasure and resetting.
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are considered one of the most important devices in power engineering and have been widely used in power generation, airplanes, and naval ships, and also on oil drilling platforms. However, in most cases they are monitored without an operator on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. The experiments on some real-world data show the effectiveness of the proposed algorithms.
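The uniformity measure described here can be illustrated by treating normalized exhaust temperatures from the individual thermocouples as a probability-like distribution and taking its Shannon entropy: a uniform gas path gives the maximum entropy, while a hot or cold spot lowers it. A minimal sketch with illustrative readings follows; the exact normalization and kernelization used in the paper are not reproduced.

```python
import numpy as np

def exhaust_temperature_entropy(temps_kelvin):
    """Shannon entropy of normalized exhaust temperatures (uniformity measure)."""
    t = np.asarray(temps_kelvin, dtype=float)
    p = t / t.sum()
    return float(-np.sum(p * np.log(p)))

healthy = [820, 818, 822, 819, 821, 820]   # uniform gas path (illustrative readings, K)
faulty = [820, 818, 760, 819, 821, 880]    # hot/cold spot in one sector
print(exhaust_temperature_entropy(healthy), exhaust_temperature_entropy(faulty))
```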
Epoch-based Entropy for Early Screening of Alzheimer's Disease.
Houmani, N; Dreyfus, G; Vialatte, F B
2015-12-01
In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals both at the time level and the spatial level, using local density estimation by a Hidden Markov Model on inter-channel stationary epochs. The investigation is conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset, and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable to hyperparameter variations, and less sensitive to noise and sampling frequency disturbances than the other two complexity measures.
NASA Astrophysics Data System (ADS)
Chakrabarti, R.; Yogesh, V.
2016-04-01
We study the evolution of the hybrid entangled states in a bipartite (ultra) strongly coupled qubit-oscillator system. Using the generalized rotating wave approximation the reduced density matrices of the qubit and the oscillator are obtained. The reduced density matrix of the oscillator yields the phase space quasi probability distributions such as the diagonal P-representation, the Wigner W-distribution and the Husimi Q-function. In the strong coupling regime the Q-function evolves to uniformly separated macroscopically distinct Gaussian peaks representing ‘kitten’ states at certain specified times that depend on multiple time scales present in the interacting system. The ultrastrong coupling strength of the interaction triggers the appearance of a large number of modes that quickly develop a randomization of their phase relationships. A stochastic averaging of the dynamical quantities sets in, and leads to the decoherence of the system. The delocalization in the phase space of the oscillator is studied by using the Wehrl entropy. The negativity of the W-distribution reflects the departure of the oscillator from the classical states, and allows us to study the underlying differences between various information-theoretic measures such as the Wehrl entropy and the Wigner entropy. Other features of nonclassicality such as the existence of the squeezed states and the appearance of negative values of the Mandel parameter are realized during the course of evolution of the bipartite system. In the parametric regime studied here these properties do not survive in the time-averaged limit.
Sway Area and Velocity Correlated With MobileMat Balance Error Scoring System (BESS) Scores.
Caccese, Jaclyn B; Buckley, Thomas A; Kaminski, Thomas W
2016-08-01
The Balance Error Scoring System (BESS) is often used for sport-related concussion balance assessment. However, moderate intratester and intertester reliability may cause low initial sensitivity, suggesting that a more objective balance assessment method is needed. The MobileMat BESS was designed for objective BESS scoring, but the outcome measures must be validated with reliable balance measures. Thus, the purpose of this investigation was to compare MobileMat BESS scores to linear and nonlinear measures of balance. Eighty-eight healthy collegiate student-athletes (age: 20.0 ± 1.4 y, height: 177.7 ± 10.7 cm, mass: 74.8 ± 13.7 kg) completed the MobileMat BESS. MobileMat BESS scores were compared with 95% area, sway velocity, approximate entropy, and sample entropy. MobileMat BESS scores were significantly correlated with 95% area for single-leg (r = .332) and tandem firm (r = .474), and double-leg foam (r = .660); and with sway velocity for single-leg (r = .406) and tandem firm (r = .601), and double-leg (r = .575) and single-leg foam (r = .434). MobileMat BESS scores were not correlated with approximate or sample entropy. MobileMat BESS scores were low to moderately correlated with linear measures, suggesting the ability to identify changes in the center of mass-center of pressure relationship, but not higher-order processing associated with nonlinear measures. These results suggest that the MobileMat BESS may be a clinically-useful tool that provides objective linear balance measures.
Li, Jing Xin; Yang, Li; Yang, Lei; Zhang, Chao; Huo, Zhao Min; Chen, Min Hao; Luan, Xiao Feng
2018-03-01
Quantitative evaluation of ecosystem services is a primary premise for rational resource exploitation and sustainable development. Examining ecosystem services flow provides a scientific method to quantify ecosystem services. We built an assessment indicator system based on land cover/land use under the framework of four types of ecosystem services, and the types of ecosystem services flow were reclassified. Using entropy theory, the degree of disorder and the development trend of the indicators and of the urban ecosystem were quantitatively assessed. Beijing was chosen as the study area, and twenty-four indicators were selected for evaluation. The results showed that the entropy value of the Beijing urban ecosystem from 2004 to 2015 was 0.794 and the entropy flow was -0.024, indicating a high degree of disorder, with the system close to an unhealthy state. The system reached maximum values three times, and the mean annual variation of the system entropy value increased gradually over three periods, indicating that human activities had negative effects on the urban ecosystem. The entropy flow reached its minimum value in 2007, implying that environmental quality was best in that year. The coefficient of determination for the fit between the total permanent population of Beijing and the urban ecosystem entropy flow was 0.921, indicating that urban ecosystem health was highly correlated with the total permanent population.
Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images
Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina
2017-01-01
We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes theory to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of contrast, standard deviation, and sharpness quality attributes. A quality index for a quality attribute is the posterior probability of an entropy region given any corresponding region in a feature image where the quality attribute is observed. Prior belief in each entropy region is determined from the normalized total clique potential (TCP) energy of the slice. For TCP below the predefined threshold, the prior probability for a region is determined by the deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volumes provided by the Alzheimer’s Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP–noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and of the global image. Experimental results demonstrate good correlation with the subjective opinions of radiologists for different types and levels of quality distortions. PMID:28630885
Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source of China's power grid. In response to this phenomenon, the power quality issues of the system need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of the wavelet transform, singular value decomposition and information entropy theory, combining the unique advantages of the three in signal processing: the wavelet transform provides time-frequency localization, singular value decomposition extracts the basic modal characteristics of the data, and information entropy quantifies the resulting features. Based on singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. Then the statistical properties of information entropy are used to analyze the uncertainty of the singular value set, so as to give a definite measure of the complexity of the original signal. It can be said that wavelet singular entropy has good application prospects in fault detection, classification and protection. A MATLAB simulation shows that wavelet singular entropy is effective for harmonic analysis of the locomotive and traction power system.
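The wavelet singular entropy pipeline described above (wavelet transform, singular value decomposition, Shannon entropy of the normalized singular values) can be sketched as follows. The wavelet family, decomposition level, and the synthetic harmonic-rich waveform are assumptions for illustration, not the paper's settings.

import numpy as np
import pywt   # PyWavelets

def wavelet_singular_entropy(signal, wavelet="db4", level=4):
    """Wavelet singular entropy: stationary wavelet transform -> coefficient matrix -> SVD -> Shannon entropy."""
    coeffs = pywt.swt(signal, wavelet, level=level)           # list of (approx, detail) pairs
    matrix = np.vstack([d for _, d in coeffs])                # level x N detail-coefficient matrix
    s = np.linalg.svd(matrix, compute_uv=False)               # singular values
    p = s / s.sum()                                           # normalized singular-value distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# hypothetical traction-current waveform: 50 Hz fundamental plus 3rd and 5th harmonics
t = np.arange(1024) / 6400.0
i_load = np.sin(2*np.pi*50*t) + 0.25*np.sin(2*np.pi*150*t) + 0.1*np.sin(2*np.pi*250*t)
print(wavelet_singular_entropy(i_load))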
Optimization of a Circular Microchannel With Entropy Generation Minimization Method
NASA Astrophysics Data System (ADS)
Jafari, Arash; Ghazali, Normah Mohd
2010-06-01
New advances at the micro and nano scales are being realized, and micro- and nano-scale heat dissipation devices are of high importance in this technology development. Past studies showed that microchannel design depends on thermal resistance and pressure drop. However, entropy generation minimization (EGM), as an optimization theory, states that the rate of entropy generation should also be optimized. The application of EGM to microchannel heat sink design is reviewed and discussed in this paper. The latest principles for deriving the entropy generation relations are discussed to show how this approach can be carried out. An optimization procedure using the EGM method with the entropy generation rate is derived for a circular microchannel heat sink based upon thermal resistance and pressure drop. The equations are solved using MATLAB and the results are compared to similar past studies. The effects of channel diameter, number of channels, heat flux, and pumping power on the entropy generation rate and Reynolds number are investigated. Analytical correlations are utilized for the heat transfer and friction coefficients. A minimum entropy generation is observed for N = 40 and a channel diameter of 90 μm. It is concluded that for N = 40 and a channel hydraulic diameter of 90 μm, the circular microchannel heat sink is at its optimum operating point based on the second law of thermodynamics.
Quantitative characterization of brazing performance for Sn-plated silver alloy fillers
NASA Astrophysics Data System (ADS)
Wang, Xingxing; Peng, Jin; Cui, Datian
2017-12-01
Two types of AgCuZnSn fillers were prepared based on BAg50CuZn and BAg34CuZnSn alloys through a combined process of electroplating and thermal diffusion. Models of the wetting entropy and the joint strength entropy of AgCuZnSn filler metals were established. The wetting entropy of the Sn-plated silver brazing alloys is lower than that of the traditional fillers, while their joint strength entropy is slightly higher. For both the Sn-plated brazing alloys and the traditional filler metal, the wetting entropy follows a trend similar to that of the wetting area. The trend of the joint strength entropy for these fillers is consistent with the tensile strength of the stainless steel joints as the Sn content increases.
Marsh, Jon N; Wallace, Kirk D; McCarthy, John E; Wickerhauser, Mladen V; Maurizi, Brian N; Lanza, Gregory M; Wickline, Samuel A; Hughes, Michael S
2010-08-01
Previously, we reported new methods for ultrasound signal characterization using entropy, H_f; a generalized entropy, the Renyi entropy, I_f(r); and a limiting form of the Renyi entropy suitable for real-time calculation, I_f,∞. All of these quantities demonstrated significantly more sensitivity to subtle changes in scattering architecture than energy-based methods in certain settings. In this study, the real-time calculable limit of the Renyi entropy, I_f,∞, is applied for the imaging of angiogenic murine neovasculature in a breast cancer xenograft using a targeted contrast agent. It is shown that this approach may be used to reliably detect the accumulation of targeted nanoparticles at five minutes post-injection in this in vivo model.
General monogamy of Tsallis q -entropy entanglement in multiqubit systems
NASA Astrophysics Data System (ADS)
Luo, Yu; Tian, Tian; Shao, Lian-He; Li, Yongming
2016-06-01
In this paper, we study the monogamy inequality of Tsallis q-entropy entanglement. We first provide an analytic formula for Tsallis q-entropy entanglement in two-qubit systems for (5-√13)/2 ≤ q ≤ (5+√13)/2. The analytic formula for Tsallis q-entropy entanglement in 2⊗d systems is also obtained, and we show that Tsallis q-entropy entanglement satisfies a set of hierarchical monogamy equalities. Furthermore, we prove that the squared Tsallis q-entropy entanglement obeys a general inequality in qubit systems. Based on the monogamy relations, a set of multipartite entanglement indicators is constructed, which can detect all genuine multiqubit entangled states even when the N-tangle vanishes. Moreover, we study some examples in multipartite higher-dimensional systems for the monogamy inequalities.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for each region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
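The local-entropy segmentation step can be illustrated with a first-order block entropy map, as in the sketch below; the remapping, arithmetic coding, and rule-base stages are omitted, and the synthetic tile stands in for real satellite imagery.

import numpy as np

def block_entropy(img, block=16):
    """First-order entropy (bits/pixel) of each non-overlapping block of an 8-bit image."""
    h, w = img.shape
    ent = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            patch = img[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            counts = np.bincount(patch.ravel(), minlength=256).astype(float)
            p = counts[counts > 0] / counts.sum()
            ent[bi, bj] = -np.sum(p * np.log2(p))
    return ent

# hypothetical 8-bit tile: a smooth gradient (lower entropy) next to pure noise (higher entropy)
rng = np.random.default_rng(1)
tile = np.hstack([np.tile(np.arange(64, dtype=np.uint8), (128, 1)),
                  rng.integers(0, 256, size=(128, 64), dtype=np.uint8)])
print(block_entropy(tile).round(2))

Blocks with similar entropy values could then be grouped and each group assigned whichever coder compresses it best, which is the adaptive selection idea described in the abstract.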
Shearlet-based measures of entropy and complexity for two-dimensional patterns
NASA Astrophysics Data System (ADS)
Brazhe, Alexey
2018-06-01
New spatial entropy and complexity measures for two-dimensional patterns are proposed. The approach is based on the notion of disequilibrium and is built on statistics of directional multiscale coefficients of the fast finite shearlet transform. Shannon entropy and Jensen-Shannon divergence measures are employed. Both local and global spatial complexity and entropy estimates can be obtained, thus allowing for spatial mapping of complexity in inhomogeneous patterns. The algorithm is validated in numerical experiments with a gradually decaying periodic pattern and Ising surfaces near critical state. It is concluded that the proposed algorithm can be instrumental in describing a wide range of two-dimensional imaging data, textures, or surfaces, where an understanding of the level of order or randomness is desired.
Consistent Application of the Boltzmann Distribution to Residual Entropy in Crystals
ERIC Educational Resources Information Center
Kozliak, Evguenii I.
2007-01-01
Four different approaches to residual entropy (the entropy remaining in crystals comprised of nonsymmetric molecules like CO, N[subscript 2]O, FClO[subscript 3], and H[subscript 2]O as temperatures approach 0 K) are analyzed and a new method of its calculation is developed based on application of the Boltzmann distribution. The inherent connection…
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1998-07-01
The reordering kinetics of a diffusion lattice-gas system of adsorbates with nearest- and next-nearest-neighbor interactions on a square lattice is studied within a dynamic Monte Carlo simulation, as it evolves towards equilibrium from a given initial configuration, at a constant temperature. The diffusion kinetics proceeds through adsorbate hoppings to empty nearest-neighbor sites (Kawasaki dynamics). The Monte Carlo procedure allows a "real" time definition from the local transition rates, and the configurational entropy and internal energy can be obtained from the lattice configuration at any instant t by counting the local clusters and using the C2 approximation of the cluster variation method. These state functions are then used in their nonequilibrium form as a direct measure of reordering over time. Different reordering processes are analyzed within this approach, presenting a rich variety of behaviors. It can also be shown that the time derivative of entropy (times temperature) is always equal to or lower than the time derivative of energy, and that the reordering path is always strongly dependent on the initial order, presenting in some cases an "invariance" of the entropy function to the magnitude of the interactions as long as the final order is unaltered.
Entropy Filtered Density Function for Large Eddy Simulation of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Safari, Mehdi
Analysis of local entropy generation is an effective means to optimize the performance of energy and combustion systems by minimizing the irreversibilities in transport processes. Large eddy simulation (LES) is employed to describe entropy transport and generation in turbulent reacting flows. The entropy transport equation in LES contains several unclosed terms. These are the subgrid scale (SGS) entropy flux and the entropy generation caused by irreversible processes: heat conduction, mass diffusion, chemical reaction and viscous dissipation. The SGS effects are taken into account using a novel methodology based on the filtered density function (FDF). This methodology, entitled entropy FDF (En-FDF), is developed and utilized in the form of the joint entropy-velocity-scalar-turbulent frequency FDF and the marginal scalar-entropy FDF, both of which contain the chemical reaction effects in a closed form. The former constitutes the most comprehensive form of the En-FDF and provides closure for all the unclosed filtered moments. This methodology is applied to LES of a turbulent shear layer involving transport of passive scalars. Predictions show favorable agreement with the data generated by direct numerical simulation (DNS) of the same layer. The marginal En-FDF accounts for entropy generation effects as well as scalar and entropy statistics. This methodology is applied to a turbulent nonpremixed jet flame (Sandia Flame D) and predictions are validated against experimental data. In both flows, sources of irreversibility are predicted and analyzed.
Papadelis, Christos; Chen, Zhe; Kourtidou-Papadeli, Chrysoula; Bamidis, Panagiotis D; Chouvarda, Ioanna; Bekiaris, Evangelos; Maglaveras, Nikos
2007-09-01
The objective of this study is the development and evaluation of efficient neurophysiological signal statistics, which may assess the driver's alertness level and serve as potential indicators of sleepiness in the design of an on-board countermeasure system. Multichannel EEG, EOG, EMG, and ECG were recorded from sleep-deprived subjects exposed to real field driving conditions. A number of severe driving errors occurred during the experiments. The analysis was performed in two main dimensions: the macroscopic analysis that estimates the on-going temporal evolution of physiological measurements during the driving task, and the microscopic event analysis that focuses on the physiological measurements' alterations just before, during, and after the driving errors. Two independent neurophysiologists visually interpreted the measurements. The EEG data were analyzed by using both linear and non-linear analysis tools. We observed the occurrence of brief paroxysmal bursts of alpha activity and an increased synchrony among EEG channels before the driving errors. The alpha relative band ratio (RBR) significantly increased, and the Cross Approximate Entropy that quantifies the synchrony among channels also significantly decreased before the driving errors. Quantitative EEG analysis revealed significant variations of RBR by driving time in the frequency bands of delta, alpha, beta, and gamma. Most of the estimated EEG statistics, such as the Shannon Entropy, Kullback-Leibler Entropy, Coherence, and Cross-Approximate Entropy, were significantly affected by driving time. We also observed an alteration of eyes blinking duration by increased driving time and a significant increase of eye blinks' number and duration before driving errors. EEG and EOG are promising neurophysiological indicators of driver sleepiness and have the potential of monitoring sleepiness in occupational settings incorporated in a sleepiness countermeasure device. The occurrence of brief paroxysmal bursts of alpha activity before severe driving errors is described in detail for the first time. Clear evidence is presented that eye-blinking statistics are sensitive to the driver's sleepiness and should be considered in the design of an efficient and driver-friendly sleepiness detection countermeasure device.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix for each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
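The entropy of the steady-state distribution of such a Markov chain model can be computed as in the following toy sketch; the three-site transition matrix is hypothetical and far smaller than the anatomical site network used in the study.

import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a row-stochastic transition matrix P (left eigenvector for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = np.abs(pi)
    return pi / pi.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# hypothetical 3-site toy model: primary tumor, lung, liver
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.8, 0.1],
              [0.1, 0.3, 0.6]])
pi = stationary_distribution(P)
print(pi, shannon_entropy(pi))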
Entropy, complexity, and Markov diagrams for random walk cancer models
NASA Astrophysics Data System (ADS)
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix for each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
Lagrangian particle method for compressible fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface / multi-phase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
First-principles modeling of hafnia-based nanotubes.
Evarestov, Robert A; Bandura, Andrei V; Porsev, Vitaly V; Kovalenko, Alexey V
2017-09-15
Hybrid density functional theory calculations were performed for the first time on structure, stability, phonon frequencies, and thermodynamic functions of hafnia-based single-wall nanotubes. The nanotubes were rolled up from the thin free layers of cubic and tetragonal phases of HfO2. It was shown that the most stable HfO2 single-wall nanotubes can be obtained from the hexagonal (111) layer of the cubic phase. Phonon frequencies have been calculated for different HfO2 nanolayers and nanotubes to prove the local stability and to find the thermal contributions to their thermodynamic functions. The role of phonons in stability of nanotubes seems to be negligible for the internal energy and noticeable for the Helmholtz free energy. The zone folding approach has been applied to estimate the connection between phonon modes of the layer and nanotubes and to approximate the nanotube thermodynamic properties. It is found that the zone-folding approximation is sufficiently accurate for heat capacity, but less accurate for entropy. The comparison has been done between the properties of TiO2, ZrO2, and HfO2. © 2017 Wiley Periodicals, Inc.
Lagrangian particle method for compressible fluid dynamics
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-02-09
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface / multi-phase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
Exp(10^76) Shades of Black: Aspects of Black Hole Microstates
NASA Astrophysics Data System (ADS)
Vasilakis, Orestis
In this thesis we examine smooth supergravity solutions known as "microstate geometries". These solutions have neither a horizon nor a singularity, yet they have the same asymptotic structure and conserved charges as black holes. Specifically we study supersymmetric and extremal non-supersymmetric solutions. The goal of this program is to construct enough microstates to account for the correct scaling behavior of the black hole entropy with respect to the charges within the supergravity approximation. For supersymmetric systems that are ⅛-BPS, microstate geometries account so far only for Q^(5/4) of the total entropy S ~ Q^(3/2), while for non-supersymmetric systems the known microstate geometries are sporadic. For the supersymmetric case we construct solutions with three and four charges. Five-dimensional systems with three and four charges are ⅛-BPS; thus they admit macroscopic horizons, making the supergravity approximation valid. For the three-charge case we present some steps towards the construction of the superstratum, a microstate geometry depending on arbitrary functions of two variables, which is expected to provide the necessary entropy for this class of solutions. Specifically we construct multiple concentric solutions with three electric and two dipole magnetic charges which depend on arbitrary functions of two variables and examine their properties. These solutions have no KKM charge and thus are singular. For the four-charge case we construct microstate geometries by extending results available in the literature for three charges. We find smooth solutions in terms of bubbled geometries with ambipolar Gibbons-Hawking base space and by constructing the relevant supertubes. In the non-supersymmetric case we work with a three-charge system of extremal black holes known as almost-BPS, which provides a controlled way of breaking supersymmetry. By using supertubes we construct the first systematic example of a family of almost-BPS microstate geometries and examine the moduli space of solutions. Furthermore, by using brane probe analysis we show that, despite the breaking of supersymmetry, almost-BPS solutions receive no quantum corrections and thus must be subject to some kind of non-renormalization theorem.
Jiang, Xiaoying; Wei, Rong; Zhang, Tongliang; Gu, Quan
2008-01-01
The function of a protein is closely correlated with its subcellular location. Prediction of the subcellular location of apoptosis proteins is an important research area in the post-genomic era because knowledge of apoptosis proteins is useful for understanding the mechanism of programmed cell death. Compared with the conventional amino acid composition (AAC), the pseudo amino acid composition (PseAA), as originally introduced by Chou, can incorporate much more information from a protein sequence and thus remarkably enhance the power of a discrete model to predict various attributes of a protein. In this study, a novel approach is presented to predict the subcellular location of apoptosis proteins solely from sequence, based on the concept of Chou's PseAA composition. The concept of approximate entropy (ApEn), a parameter characterizing the complexity of a time series, is used to construct the PseAA composition as additional features. A fuzzy K-nearest neighbor (FKNN) classifier is selected as the prediction engine. A particle swarm optimization (PSO) algorithm is adopted to optimize the weight factors, which are important in the PseAA composition. Two datasets are used to validate the performance of the proposed approach, incorporating six and four subcellular locations, respectively. The results obtained by the jackknife test are quite encouraging, indicating that the ApEn of a protein sequence can effectively represent information about the subcellular locations of apoptosis proteins. The approach can at least play a complementary role to many existing methods, and might become a potentially useful tool for protein function prediction. The Matlab software is available freely by contacting the corresponding author.
NASA Astrophysics Data System (ADS)
Suparmi, A.; Cari, C.; Nur Pratiwi, Beta; Arya Nugraha, Dewanta
2017-01-01
The D-dimensional Schrodinger equation for the mixed Manning-Rosen potential was investigated using supersymmetric quantum mechanics. We obtained the energy eigenvalues from the solution of the radial part, and the wavefunctions from the solutions of the radial and angular parts. From the lowest radial wavefunctions, we evaluated the Shannon information entropy using Matlab software. Based on the entropy densities shown graphically, we find that the position information entropy density shifts to the right when the potential parameter q increases, while it shifts to the left as the parameter α increases. The momentum information entropy densities are also presented in graphs; their amplitude increases with increasing parameters q and α.
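The position-space Shannon information entropy evaluated from the lowest radial wavefunction corresponds to the integral S_x = -∫ ρ(x) ln ρ(x) dx. A minimal numerical sketch is given below; it uses a Gaussian test density rather than the Manning-Rosen solution, and Python in place of the authors' Matlab code.

import numpy as np

def position_shannon_entropy(psi, x):
    """Position-space Shannon information entropy S_x = -∫ ρ(x) ln ρ(x) dx
    for a real wavefunction sampled on a uniform grid (simple Riemann sum)."""
    dx = x[1] - x[0]
    rho = psi**2
    rho = rho / (rho.sum() * dx)               # enforce normalization
    nz = rho > 0
    return -np.sum(rho[nz] * np.log(rho[nz])) * dx

# hypothetical Gaussian ground-state wavefunction (not the Manning-Rosen solution)
x = np.linspace(-10.0, 10.0, 2001)
psi = np.pi**-0.25 * np.exp(-x**2 / 2)
print(position_shannon_entropy(psi, x))        # analytic value 0.5*(1 + ln(pi)) ≈ 1.0724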
Investigation of FeNiCrWMn - a new high entropy alloy
NASA Astrophysics Data System (ADS)
Buluc, G.; Florea, I.; Bălţătescu, O.; Florea, R. M.; Carcea, I.
2015-11-01
The term high entropy alloys originates from the analysis of multicomponent alloys, which have been produced experimentally since 1995 as part of a new concept in the development of metallic materials. Recent developments in the field of high-entropy alloys have revealed that they have versatile properties such as ductility, toughness, hardness and corrosion resistance [1]. It has been demonstrated that these alloys are feasible to synthesize, process and analyze, contrary to misunderstandings based on traditional experience. Moreover, there are many opportunities in this field for academic studies and industrial applications [1, 2]. As the combinations of composition and process for producing high entropy alloys are numerous, and each high entropy alloy has its own microstructure and properties to be identified and understood, the research work is truly limitless. The novelty of these alloys consists in their chemical composition. These alloys have been named high entropy alloys because their atomic-scale mixing entropies are higher than those of traditional alloys. In this paper, I present the microscopy observations and the mechanical properties of the high entropy alloy FeNiCrWMn.
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
Transfer entropy as a log-likelihood ratio.
Barnett, Lionel; Bossomaier, Terry
2012-09-28
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
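In the Gaussian case, the equivalence with Granger causality noted above makes estimation especially simple: the transfer entropy equals half the log ratio of the restricted and full regression residual variances, and, per the abstract, the suitably scaled estimate is asymptotically χ2 under the null of zero transfer. The sketch below assumes jointly Gaussian, first-order autoregressive data with illustrative coefficients; it is not the general estimator discussed in the paper.

import numpy as np

def gaussian_transfer_entropy(x, y, lag=1):
    """Transfer entropy y -> x for jointly Gaussian series, via the Granger equivalence
    TE = 0.5 * ln( var(restricted residuals) / var(full residuals) )."""
    xp, xl, yl = x[lag:], x[:-lag], y[:-lag]
    # restricted model: x_t regressed on x_{t-1}
    A = np.column_stack([np.ones_like(xl), xl])
    res_r = xp - A @ np.linalg.lstsq(A, xp, rcond=None)[0]
    # full model: x_t regressed on x_{t-1} and y_{t-1}
    B = np.column_stack([np.ones_like(xl), xl, yl])
    res_f = xp - B @ np.linalg.lstsq(B, xp, rcond=None)[0]
    return 0.5 * np.log(np.var(res_r) / np.var(res_f))

rng = np.random.default_rng(2)
y = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):                       # x is driven by the past of y
    x[t] = 0.4 * x[t-1] + 0.5 * y[t-1] + rng.normal()
print(gaussian_transfer_entropy(x, y))         # clearly positive (y drives x)
print(gaussian_transfer_entropy(y, x))         # near zero (x does not drive y)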
Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si
NASA Technical Reports Server (NTRS)
Wang, Z. Q.; Stroud, D.
1990-01-01
Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.
Infrared image segmentation method based on spatial coherence histogram and maximum entropy
NASA Astrophysics Data System (ADS)
Liu, Songtao; Shen, Tongsheng; Dai, Yao
2014-11-01
In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of pixels with the same gray level according to their positions, obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, a 1D maximum entropy method is used to segment the image. The novel method not only produces better segmentation results, but also has a faster computation time than traditional 2D histogram-based segmentation methods.
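The 1D maximum entropy segmentation step, without the spatial coherence weighting proposed in the paper, chooses the threshold that maximizes the summed entropies of the foreground and background gray-level distributions. A sketch on a synthetic frame follows; the image and its statistics are hypothetical.

import numpy as np

def max_entropy_threshold(img):
    """Kapur-style 1-D maximum entropy threshold for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t], p[t:]
        w0, w1 = p0.sum(), p1.sum()
        if w0 == 0 or w1 == 0:
            continue
        q0, q1 = p0[p0 > 0] / w0, p1[p1 > 0] / w1
        h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))   # background + foreground entropy
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# hypothetical infrared frame: dim background with a small bright target patch
rng = np.random.default_rng(3)
frame = rng.normal(60, 10, size=(64, 64))
frame[24:40, 24:40] = rng.normal(180, 10, size=(16, 16))
frame = np.clip(frame, 0, 255).astype(np.uint8)
print(max_entropy_threshold(frame))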
Rényi entropies and observables.
Lesche, Bernhard
2004-01-01
Evidence is given that Rényi entropies of macroscopic thermodynamic systems defined on the bases of probabilities of microstates cannot be related to observables. The notion of observable is clarified.
Delchini, Marc O.; Ragusa, Jean C.; Ferguson, Jim
2017-02-17
A viscous regularization technique, based on the local entropy residual, was proposed by Delchini et al. (2015) to stabilize the nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations using an artificial viscosity technique. This viscous regularization is modulated by the local entropy production and is consistent with the entropy minimum principle. However, Delchini et al. (2015) only based their work on the hyperbolic parts of the Grey Radiation-Hydrodynamic equations and thus omitted the relaxation and diffusion terms present in the material energy and radiation energy equations. In this paper, we extend the theoretical grounds for the method and derive an entropy minimum principle for the full set of nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations. This further strengthens the applicability of the entropy viscosity method as a stabilization technique for radiation-hydrodynamic shock simulations. Radiative shock calculations using constant and temperature-dependent opacities are compared against semi-analytical reference solutions, and we present a procedure to perform spatial convergence studies of such simulations.
Li, Z J; Zell, M T; Munson, E J; Grant, D J
1999-03-01
The identification of the racemic species, as a racemic compound, a racemic conglomerate, or a racemic solid solution (pseudoracemate), is crucial for rationalizing the potential for resolution of racemates by crystallization. The melting points and enthalpies of fusion of a number of chiral drugs and their salts were measured by differential scanning calorimetry. Based on a thermodynamic cycle involving the solid and liquid phases of the enantiomers and racemic species, the enthalpy, entropy and Gibbs free energy of the racemic species were derived from the thermal data. The Gibbs free energy of formation is always negative for a racemic compound, if it can exist, and the contribution of the entropy of mixing in the liquid state to the free energy of formation is the driving force for the process. For a racemic conglomerate, the entropy of mixing in the liquid state is close to the ideal value of R ln 2 (1.38 cal mol-1 K-1). Pseudoracemates behave differently from the other two types of racemic species. When the melting point of the racemic species is about 30 K below that of the homochiral species, the free energy of formation is approximately zero, indicating that the racemic compound and racemic conglomerate possess similar relative stabilities. The powder X-ray diffraction patterns and 13C solid-state nuclear magnetic resonance spectra are valuable for revealing structural differences between a racemic compound and a racemic conglomerate. Thermodynamic prediction, thermal analysis, and structural study are in excellent agreement for identifying the nature of the racemic species.
Gradient Dynamics and Entropy Production Maximization
NASA Astrophysics Data System (ADS)
Janečka, Adam; Pavelka, Michal
2018-01-01
We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies the Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of the free energy (or another thermodynamic potential) and the entropy production. It also leads to the linear Onsager reciprocal relations and has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium; we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. In addition, a commonly used but not often mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.
Entropy of hydrological systems under small samples: Uncertainty and variability
NASA Astrophysics Data System (ADS)
Liu, Dengfeng; Wang, Dong; Wang, Yuankun; Wu, Jichun; Singh, Vijay P.; Zeng, Xiankui; Wang, Lachun; Chen, Yuanfang; Chen, Xi; Zhang, Liyuan; Gu, Shenghua
2016-01-01
Entropy theory has been increasingly applied in hydrology in both descriptive and inferential ways. However, little attention has been given to the small-sample condition widespread in hydrological practice, where hydrological measurements are limited or even nonexistent. Accordingly, entropy estimated under this condition may incur considerable bias. In this study, the small-sample condition is considered and two innovative entropy estimators, the Chao-Shen (CS) estimator and the James-Stein-type shrinkage (JSS) estimator, are introduced. Simulation tests conducted with distributions common in hydrology identify the JSS estimator as the best performing. Then, multi-scale moving entropy-based hydrological analyses (MM-EHA) are applied to indicate the changing patterns of uncertainty of streamflow data collected from the Yangtze River and the Yellow River, China. For further investigation into the intrinsic properties of entropy applied in hydrological uncertainty analyses, correlations of entropy and other statistics at different time scales are also calculated, showing connections between the concepts of uncertainty and variability.
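Of the two small-sample estimators named above, the Chao-Shen estimator admits a compact implementation: the maximum-likelihood cell probabilities are coverage-adjusted using the number of singletons and then combined with a Horvitz-Thompson correction. The sketch below uses hypothetical binned streamflow counts; the James-Stein shrinkage estimator, which the study found to perform best, is not reproduced here.

import numpy as np

def chao_shen_entropy(counts):
    """Chao-Shen coverage-adjusted entropy estimate (nats) from category counts.
    Assumes not every observation is a singleton (sample coverage > 0)."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    n = counts.sum()
    f1 = np.sum(counts == 1)                 # number of singleton categories
    coverage = 1.0 - f1 / n                  # Good-Turing estimate of sample coverage
    p = coverage * counts / n                # coverage-adjusted probabilities
    # Horvitz-Thompson correction for categories that may be unobserved
    return -np.sum(p * np.log(p) / (1.0 - (1.0 - p) ** n))

# hypothetical small sample: 30 observations of binned streamflow in 8 bins
print(chao_shen_entropy([9, 6, 5, 4, 3, 1, 1, 1]))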
NASA Astrophysics Data System (ADS)
Ghikas, Demetris P. K.; Oikonomou, Fotios D.
2018-04-01
Using the generalized entropies which depend on two parameters we propose a set of quantitative characteristics derived from the Information Geometry based on these entropies. Our aim, at this stage, is to construct first some fundamental geometric objects which will be used in the development of our geometrical framework. We first establish the existence of a two-parameter family of probability distributions. Then using this family we derive the associated metric and we state a generalized Cramer-Rao Inequality. This gives a first two-parameter classification of complex systems. Finally computing the scalar curvature of the information manifold we obtain a further discrimination of the corresponding classes. Our analysis is based on the two-parameter family of generalized entropies of Hanel and Thurner (2011).
Hydration of an apolar solute in a two-dimensional waterlike lattice fluid
NASA Astrophysics Data System (ADS)
Buzano, C.; de Stefanis, E.; Pretti, M.
2005-05-01
In a previous work, we investigated a two-dimensional lattice-fluid model, displaying some waterlike thermodynamic anomalies. The model, defined on a triangular lattice, is now extended to aqueous solutions with apolar species. Water molecules are of the “Mercedes Benz” type, i.e., they possess a D3 (equilateral triangle) symmetry, with three equivalent bonding arms. Bond formation depends both on orientation and local density. The insertion of inert molecules displays typical signatures of hydrophobic hydration: large positive transfer free energy, large negative transfer entropy (at low temperature), strong temperature dependence of the transfer enthalpy and entropy, i.e., large (positive) transfer heat capacity. Model properties are derived by a generalized first order approximation on a triangle cluster.
Hydration of an apolar solute in a two-dimensional waterlike lattice fluid.
Buzano, C; De Stefanis, E; Pretti, M
2005-05-01
In a previous work, we investigated a two-dimensional lattice-fluid model, displaying some waterlike thermodynamic anomalies. The model, defined on a triangular lattice, is now extended to aqueous solutions with apolar species. Water molecules are of the "Mercedes Benz" type, i.e., they possess a D3 (equilateral triangle) symmetry, with three equivalent bonding arms. Bond formation depends both on orientation and local density. The insertion of inert molecules displays typical signatures of hydrophobic hydration: large positive transfer free energy, large negative transfer entropy (at low temperature), strong temperature dependence of the transfer enthalpy and entropy, i.e., large (positive) transfer heat capacity. Model properties are derived by a generalized first order approximation on a triangle cluster.
A Review on the Nonlinear Dynamical System Analysis of Electrocardiogram Signal
Mohapatra, Biswajit
2018-01-01
Electrocardiogram (ECG) signal analysis has received special attention of the researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the autonomic nervous system activity in a noninvasive manner. Analysis of the ECG signals has been explored using both linear and nonlinear methods. However, the nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis. PMID:29854361
A Review on the Nonlinear Dynamical System Analysis of Electrocardiogram Signal.
Nayak, Suraj K; Bit, Arindam; Dey, Anilesh; Mohapatra, Biswajit; Pal, Kunal
2018-01-01
Electrocardiogram (ECG) signal analysis has received special attention of the researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the autonomic nervous system activity in a noninvasive manner. Analysis of the ECG signals has been explored using both linear and nonlinear methods. However, the nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis.
Hu, Dehua; Liu, Qing; Tisdale, Jeremy; ...
2015-04-15
This paper reports Seebeck effects driven by both surface polarization difference and entropy difference using intramolecular charge-transfer states in n-type and p-type conjugated polymers, namely IIDT and IIDDT, based on vertical conductor/polymer/conductor thin-film devices. Large Seebeck coefficients of -898 V/K and 1300 V/K are observed from n-type IIDT and p-type IIDDT, respectively, when the charge-transfer states are generated by white light illumination of 100 mW/cm2. Simultaneously, the electrical conductivities are increased from almost insulating states in the dark to conducting states under photoexcitation in both n-type IIDT and p-type IIDDT devices. We find that the intramolecular charge-transfer states can largely enhance Seebeck effects in the n-type IIDT and p-type IIDDT devices driven by both surface polarization difference and entropy difference. Furthermore, the Seebeck effects can be shifted between polarization and entropy regimes when the electrical conductivities are changed. This reveals a new concept for developing Seebeck effects by controlling polarization and entropy regimes based on charge-transfer states in vertical conductor/polymer/conductor thin-film devices.
NASA Astrophysics Data System (ADS)
Sjögreen, Björn; Yee, H. C.
2018-07-01
The Sjogreen and Yee [31] high order entropy conservative numerical method for compressible gas dynamics is extended to include discontinuities and is also extended to the equations of ideal magnetohydrodynamics (MHD). The basic idea is based on Tadmor's [40] original work for inviscid perfect gas flows. For the MHD, four formulations are considered: (a) the conservative MHD, (b) the Godunov [14] non-conservative form, (c) the Janhunen [19] MHD with magnetic field source terms, and (d) an MHD form with source terms by Brackbill and Barnes [5]. Three forms of the high order entropy numerical fluxes for the MHD in the finite difference framework are constructed. They are based on the extension of the low order form of Chandrashekar and Klingenberg [9], and two forms with modifications of the Winters and Gassner [49] numerical fluxes. For flows containing discontinuities and multiscale turbulence fluctuations, the high order entropy conservative numerical fluxes are used as the new base scheme within the Yee and Sjogreen [31] and Kotov et al. [21,22] high order nonlinear filter approach. The added nonlinear filter step on the high order centered entropy conservative spatial base scheme is only utilized in isolated computational regions, while maintaining high accuracy almost everywhere for long time integration of unsteady flows and for DNS and LES of turbulence computations. Representative test cases for both smooth flows and problems containing discontinuities for gas dynamics and ideal MHD are included. The results illustrate the improved stability obtained by using the high order entropy conservative numerical flux as the base scheme instead of a pure high order central scheme.
Shear viscosity of an ultrarelativistic Boltzmann gas with isotropic inelastic scattering processes
NASA Astrophysics Data System (ADS)
El, A.; Lauciello, F.; Wesp, C.; Bouras, I.; Xu, Z.; Greiner, C.
2014-05-01
We derive an analytic expression for the shear viscosity of an ultrarelativistic gas in the presence of both elastic 2→2 and inelastic 2↔3 processes with isotropic differential cross sections. The derivation is based on the entropy principle and Grad's approximation for the off-equilibrium distribution function. The obtained formula relates the shear viscosity coefficient η to the total cross sections σ22 and σ23 of the elastic and inelastic processes, respectively. The values of shear viscosity extracted using the Green-Kubo formula from kinetic transport calculations are shown to be in excellent agreement with the analytic results, which demonstrates the validity of the derived formula.
Inverting Monotonic Nonlinearities by Entropy Maximization
López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e., showing small variability in the results. PMID:27780261
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e., showing small variability in the results.
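The Gaussianization idea that MaxEnt generalizes can be sketched as a rank-based transform that forces the observed mixture back toward a Gaussian shape, thereby undoing a monotonic distortion up to a monotone (approximately affine) ambiguity. The cubic distortion and the uniform sources below are illustrative assumptions; this is not the authors' MaxEnt algorithm.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
# a sum of several independent sources is close to Gaussian (the premise behind Gaussianization)
s = rng.uniform(-1.0, 1.0, size=(5000, 6)).sum(axis=1)
y = s + 0.3 * s**3                      # unknown monotonic distortion applied to the mixture

# rank-based Gaussianization: map empirical ranks through the inverse Gaussian CDF
ranks = np.argsort(np.argsort(y))
x_hat = norm.ppf((ranks + 1) / (len(y) + 1.0))

# the compensated signal is almost perfectly linearly related to the undistorted mixture
print(np.corrcoef(x_hat, s)[0, 1])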
Combining Mixture Components for Clustering
Baudry, Jean-Patrick; Raftery, Adrian E.; Celeux, Gilles; Lo, Kenneth; Gottardo, Raphaël
2010-01-01
Model-based clustering consists of fitting a mixture model to data and identifying each cluster with one of its components. Multivariate normal distributions are typically used. The number of clusters is usually determined from the data, often using BIC. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian distributions. If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density, rather than the number of clusters as such. We propose first selecting the total number of Gaussian mixture components, K, using BIC and then combining them hierarchically according to an entropy criterion. This yields a unique soft clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also describe an automatic way of selecting the number of clusters via a piecewise linear regression fit to the rescaled entropy plot. We illustrate the method with simulated data and a flow cytometry dataset. Supplemental Materials are available on the journal Web site and described at the end of the paper. PMID:20953302
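A rough sketch of the two-stage idea (BIC to pick the number of Gaussian components, then greedy merging of the pair of components whose combination most reduces the soft-assignment entropy) is given below using scikit-learn. The two-dimensional toy data, the candidate range of K, and the stopping point are illustrative assumptions, and the piecewise linear fit used in the paper to choose the final number of clusters is omitted.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# toy data: one curved (non-Gaussian) cluster plus one Gaussian blob
t = rng.uniform(0.0, np.pi, 300)
curved = np.column_stack([3 * np.cos(t), 3 * np.sin(t)]) + rng.normal(0, 0.25, (300, 2))
blob = rng.normal([4.0, -2.0], 0.4, size=(200, 2))
X = np.vstack([curved, blob])

# stage 1: select the number of mixture components by BIC
models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(X))
tau = best.predict_proba(X)                      # soft assignments, shape (n, K)

def soft_entropy(t):
    t = np.clip(t, 1e-12, 1.0)
    return -np.sum(t * np.log(t))

# stage 2: greedily merge the pair of components whose union lowers the assignment entropy most
while tau.shape[1] > 2:
    K = tau.shape[1]
    candidates = []
    for i in range(K):
        for j in range(i + 1, K):
            merged = np.delete(tau, j, axis=1)
            merged[:, i] = tau[:, i] + tau[:, j]
            candidates.append((soft_entropy(merged), merged))
    tau = min(candidates, key=lambda c: c[0])[1]

print("components kept by BIC:", best.n_components, "-> clusters after merging:", tau.shape[1])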
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise to date.
Entropy production during hadronization of a quark-gluon plasma
NASA Astrophysics Data System (ADS)
Biró, Tamás S.; Schram, Zsolt; Jenkovszky, László
2018-02-01
We revisit some physical pictures for the hadronization of a quark-gluon plasma, concentrating on the problem of entropy production during processes where the number of degrees of freedom is seemingly reduced due to color confinement. Based on observations about Regge trajectories, we propose a scenario without an infinite tower of hadronic resonances. We discuss possible far-from-equilibrium entropy production mechanisms in terms of stochastic dynamics.
Entanglement entropy and entanglement spectrum of the Kitaev model.
Yao, Hong; Qi, Xiao-Liang
2010-08-20
In this letter, we obtain an exact formula for the entanglement entropy of the ground state and all excited states of the Kitaev model. Remarkably, the entanglement entropy can be expressed in a simple separable form S = SG + SF, with SF the entanglement entropy of a free Majorana fermion system and SG that of a Z2 gauge field. The Z2 gauge field part contributes to the universal "topological entanglement entropy" of the ground state, while the fermion part is responsible for the nonlocal entanglement carried by the Z2 vortices (visons) in the non-Abelian phase. Our result also enables the calculation of the entire entanglement spectrum and the more general Renyi entropy of the Kitaev model. Based on our results we propose a new quantity to characterize topologically ordered states: the capacity of entanglement, which can distinguish states with and without a topologically protected gapless entanglement spectrum.
Spin-phase-space-entropy production
NASA Astrophysics Data System (ADS)
Santos, Jader P.; Céleri, Lucas C.; Brito, Frederico; Landi, Gabriel T.; Paternostro, Mauro
2018-05-01
Quantifying the degree of irreversibility of an open system dynamics represents a problem of both fundamental and applied relevance. Even though a well-known framework exists for thermal baths, it gives diverging results in the limit of zero temperature and is not readily extended to nonequilibrium reservoirs, such as dephasing baths. Aimed at filling this gap, in this paper we introduce a phase-space-entropy production framework for quantifying the irreversibility of spin systems undergoing Lindblad dynamics. The theory is based on the spin Husimi-Q function and its corresponding phase-space entropy, known as the Wehrl entropy. Unlike the von Neumann entropy production rate, we show that in our framework the Wehrl entropy production rate remains valid at any temperature and is also readily extended to arbitrary nonequilibrium baths. As an application, we discuss the irreversibility associated with the interaction of a two-level system with a single-photon pulse, a problem which cannot be treated using the conventional approach.
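For orientation, the phase-space entropy at the core of this framework is the Wehrl entropy of the spin Husimi-Q function. The sketch below evaluates it numerically for a single spin-1/2 state; the example states and the integration grid are arbitrary choices and are not taken from the paper.

```python
# Sketch: Wehrl entropy of a spin-1/2 state from its Husimi-Q function,
# S_Q = -(2j+1)/(4*pi) * integral of Q(theta, phi) * ln Q(theta, phi) dOmega.
# The density matrices below are arbitrary examples for illustration.
import numpy as np

def spin_coherent_state(theta, phi):
    """Spin-1/2 coherent state |theta, phi> in the {|up>, |down>} basis."""
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

def wehrl_entropy(rho, n_theta=200, n_phi=200):
    # Midpoint grid over the sphere.
    thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phis = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    dtheta, dphi = np.pi / n_theta, 2.0 * np.pi / n_phi
    s = 0.0
    for theta in thetas:
        for phi in phis:
            psi = spin_coherent_state(theta, phi)
            q = np.real(np.conj(psi) @ rho @ psi)          # Husimi-Q value
            if q > 0:
                s += -q * np.log(q) * np.sin(theta) * dtheta * dphi
    return (2.0 / (4.0 * np.pi)) * s                        # (2j+1)/(4*pi), j = 1/2

if __name__ == "__main__":
    pure_up = np.array([[1.0, 0.0], [0.0, 0.0]])            # a spin coherent state
    maximally_mixed = 0.5 * np.eye(2)
    print(wehrl_entropy(pure_up), wehrl_entropy(maximally_mixed))
```

For the pure coherent state the value is close to 1/2 and for the maximally mixed state close to ln 2, so the quantity remains finite even where the von Neumann entropy vanishes.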
Entropy of black holes with multiple horizons
NASA Astrophysics Data System (ADS)
He, Yun; Ma, Meng-Sen; Zhao, Ren
2018-05-01
We examine the entropy of black holes in de Sitter space and black holes surrounded by quintessence. These black holes have multiple horizons, including at least the black hole event horizon and a horizon outside it (the cosmological horizon for de Sitter black holes and the "quintessence horizon" for black holes surrounded by quintessence). Based on the consideration that the two horizons are not independent of each other, we conjecture that the total entropy of these black holes should not simply be the sum of the entropies of the two horizons, but should have an extra term coming from the correlations between the two horizons. Unlike in our previous works, in this paper we treat the cosmological constant as a variable and employ an effective method to derive the explicit form of the entropy. We also discuss the thermodynamic stabilities of these black holes according to the entropy and the effective temperature.
Compression based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than the randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
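A minimal sketch of the coarse-graining-plus-compression idea follows; zlib's DEFLATE (LZ77-based) compressor stands in for the specific Lempel-Ziv coder used in the study, and the synthetic RR series, quantization depth, and scales are illustrative choices only.

```python
# Sketch: multiscale compression "entropy" of an RR-interval series:
# coarse-grain the series at each scale, quantize it to byte symbols,
# compress it losslessly, and report compressed length / original length.
import zlib
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def compression_ratio(x, n_levels=64):
    """Quantize to n_levels symbols and return compressed/original byte ratio."""
    lo, hi = x.min(), x.max()
    symbols = np.floor((x - lo) / (hi - lo + 1e-12) * (n_levels - 1)).astype(np.uint8)
    raw = symbols.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rr = 800 + np.cumsum(rng.normal(0, 5, 5000))      # synthetic RR series (ms)
    rr_shuffled = rng.permutation(rr)                  # randomized surrogate
    for scale in (1, 2, 4, 8):
        print(scale,
              round(compression_ratio(coarse_grain(rr, scale)), 3),
              round(compression_ratio(coarse_grain(rr_shuffled, scale)), 3))
```

On such data the original (correlated) series typically compresses better than its shuffled surrogate, mirroring the comparison made in the study.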
Entropy factor for randomness quantification in neuronal data.
Rajdl, K; Lansky, P; Kostal, L
2017-11-01
A novel measure of neural spike train randomness, an entropy factor, is proposed. It is based on the Shannon entropy of the number of spikes in a time window and can be seen as an analogy to the Fano factor. Theoretical properties of the new measure are studied for equilibrium renewal processes and further illustrated on gamma and inverse Gaussian probability distributions of interspike intervals. Finally, the entropy factor is evaluated from experimental records of spontaneous activity in macaque primary visual cortex and compared to its theoretical behavior deduced for the renewal process models. Both theoretical and experimental results show substantial differences between the Fano and entropy factors. Rather paradoxically, an increase in the variability of the spike count is often accompanied by an increase in its predictability, as evidenced by the entropy factor. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
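As an illustration of characterizing spike-count randomness by Shannon entropy, the sketch below computes the Fano factor and an entropy-based counterpart for a simulated Poisson spike train. The normalization against a Poisson reference is an assumption made for this example and need not coincide with the entropy factor as defined in the paper.

```python
# Sketch: Fano factor vs. a Shannon-entropy-based measure of the spike count
# in a time window. NOTE: the ratio-to-Poisson normalization below is an
# illustrative assumption, not the paper's definition of the entropy factor.
import numpy as np
from scipy.stats import poisson

def spike_counts(spike_times, window, t_max):
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def fano_factor(counts):
    return counts.var() / counts.mean()

def count_entropy(counts):
    """Shannon entropy (nats) of the empirical spike-count distribution."""
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log(p))

def entropy_factor_like(counts):
    """Entropy of the observed counts relative to a Poisson reference."""
    lam = counts.mean()
    ks = np.arange(0, counts.max() + 20)
    p = poisson.pmf(ks, lam)
    poisson_entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return count_entropy(counts) / poisson_entropy

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t_max, rate = 1000.0, 5.0                      # seconds, spikes/s
    isi = rng.exponential(1.0 / rate, int(rate * t_max * 2))
    spikes = np.cumsum(isi)
    spikes = spikes[spikes < t_max]                # Poisson spike train
    counts = spike_counts(spikes, window=1.0, t_max=t_max)
    print("Fano factor:", round(fano_factor(counts), 3))            # close to 1
    print("Entropy factor:", round(entropy_factor_like(counts), 3)) # close to 1
```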
Relative entropy of steering: on its definition and properties
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2017-11-01
In Gallego and Aolita (2015 Phys. Rev. X 5 041008), the authors proposed a definition for the relative entropy of steering and showed that the resulting quantity is a convex steering monotone. Here we advocate for a different definition for relative entropy of steering, based on well grounded concerns coming from quantum Shannon theory. We prove that this modified relative entropy of steering is a convex steering monotone. Furthermore, we establish that it is uniformly continuous and faithful, in both cases giving quantitative bounds that should be useful in applications. We also consider a restricted relative entropy of steering which is relevant for the case in which the free operations in the resource theory of steering have a more restricted form (the restricted operations could be more relevant in practical scenarios). The restricted relative entropy of steering is convex, monotone with respect to these restricted operations, uniformly continuous, and faithful.
Entropy of adsorption of mixed surfactants from solutions onto the air/water interface
Chen, L.-W.; Chen, J.-H.; Zhou, N.-F.
1995-01-01
The partial molar entropy change for mixed surfactant molecules adsorbed from solution at the air/water interface has been investigated by surface thermodynamics based upon the experimental surface tension isotherms at various temperatures. Results for different surfactant mixtures of sodium dodecyl sulfate and sodium tetradecyl sulfate, decylpyridinium chloride and sodium alkylsulfonates have shown that the partial molar entropy changes for adsorption of the mixed surfactants were generally negative and decreased with increasing adsorption to a minimum near the maximum adsorption, then increased abruptly. The entropy decrease can be explained by the adsorption-orientation of surfactant molecules in the adsorbed monolayer, and the abrupt entropy increase at the maximum adsorption is possibly due to the strong repulsion between the adsorbed molecules.
Entropy, complexity, and Markov diagrams for random walk cancer models
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-01-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model in which nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
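The quantities involved (a stationary distribution of a Markov transition matrix, its Shannon entropy, and a Kullback-Leibler divergence to an aggregate distribution) can be computed as in the sketch below; the 4-site transition matrix and the reference distribution are made-up numbers for illustration, not values from the autopsy data.

```python
# Sketch: entropy and Kullback-Leibler divergence for a toy Markov-chain
# progression model. The transition matrix over four hypothetical sites and
# the "all cancers" reference distribution are made-up illustrative numbers.
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    v = np.abs(v)
    return v / v.sum()

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

if __name__ == "__main__":
    # Rows/columns: hypothetical metastatic sites (lung, liver, bone, brain).
    P = np.array([[0.50, 0.20, 0.20, 0.10],
                  [0.30, 0.40, 0.20, 0.10],
                  [0.25, 0.25, 0.40, 0.10],
                  [0.20, 0.20, 0.20, 0.40]])
    pi = stationary_distribution(P)               # compare with autopsy distribution
    q_all = np.array([0.40, 0.30, 0.20, 0.10])    # aggregate over all cancer types
    print("steady state:", np.round(pi, 3))
    print("entropy:", round(shannon_entropy(pi), 3))
    print("KL(pi || aggregate):", round(kl_divergence(pi, q_all), 3))
```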
On the Consequences of Clausius-Duhem Inequality for Electrolyte Solutions
NASA Astrophysics Data System (ADS)
Reis, Martina; Bassi, Adalberto Bono Maurizio Sacchi
2014-03-01
Based on the fundamentals of thermo-statics, non-equilibrium thermodynamics theories frequently employ an entropy inequality in which the entropy flux is collinear to the heat flux and the entropy supply is proportional to the energy supply. Although this assumption is suitable for many material bodies, e.g. heat-conducting viscous fluids, there is a class of materials for which these assumptions are not valid. By assuming that the entropy flux and the entropy supply are constitutive quantities, in this work it is demonstrated that the entropy flux for a reacting ionic mixture of non-volatile solutes presents a non-collinear term due to the diffusive fluxes. The consequences of the collinearity between the entropy flux and the heat flux, as well as of the proportionality between the entropy supply and the energy supply, for the stability of chemical systems are also investigated. Furthermore, by considering an electrolyte solution of non-volatile solutes in phase equilibrium with water vapor, and the constitutive nature of the entropy flux, the stability of a vapor-electrolyte solution interface is studied. Although this work deals only with electrolyte solutions, the results presented can be easily extended to more complex chemically reacting systems. The first author acknowledges financial support from CNPq (National Counsel of Technological and Scientific Development).
Distance-Based Configurational Entropy of Proteins from Molecular Dynamics Simulations
Fogolari, Federico; Corazza, Alessandra; Fortuna, Sara; Soler, Miguel Angel; VanSchouwen, Bryan; Brancolini, Giorgia; Corni, Stefano; Melacini, Giuseppe; Esposito, Gennaro
2015-01-01
Estimation of configurational entropy from molecular dynamics trajectories is a difficult task which is often performed using quasi-harmonic or histogram analysis. An entirely different approach, proposed recently, estimates the local density distribution around each conformational sample by measuring the distance from its nearest neighbors. In this work we show that this theoretically well-grounded method can be easily applied to estimate the entropy from conformational sampling. We consider a set of systems that are representative of important biomolecular processes. In particular: reference entropies for amino acids in unfolded proteins are obtained from a database of residues not participating in secondary structure elements; the conformational entropy of folding of β2-microglobulin is computed from molecular dynamics simulations using reference entropies for the unfolded state; backbone conformational entropy is computed from molecular dynamics simulations of four different states of the EPAC protein and compared with order parameters (often used as a measure of entropy); the conformational and rototranslational entropy of binding is computed from simulations of 20 tripeptides bound to the peptide binding protein OppA and of β2-microglobulin bound to a citrate-coated gold surface. This work shows the potential of the method for the most representative biological processes involving proteins, and provides a valuable alternative, particularly in cases such as those shown here, where other approaches are problematic. PMID:26177039
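Nearest-neighbor entropy estimation of this kind is commonly formulated along the lines of the Kozachenko-Leonenko estimator; the sketch below implements that generic estimator (not the authors' exact formulation) and checks it against the analytic entropy of a multivariate Gaussian.

```python
# Sketch: a Kozachenko-Leonenko-style nearest-neighbor entropy estimator,
# the general family that distance-based configurational entropy belongs to.
# The Gaussian test data only serve to compare against the analytic value.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def nn_entropy(samples, k=1):
    """Differential entropy (nats) from k-th nearest-neighbor distances."""
    n, d = samples.shape
    tree = cKDTree(samples)
    # k+1 because the closest point to each sample is the sample itself.
    dist, _ = tree.query(samples, k=k + 1)
    r = dist[:, -1]
    log_vd = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)  # unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d, n = 3, 20000
    x = rng.normal(0.0, 1.0, (n, d))             # standard normal in d dimensions
    analytic = 0.5 * d * np.log(2.0 * np.pi * np.e)
    print("estimate:", round(nn_entropy(x, k=4), 3), "analytic:", round(analytic, 3))
```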
Dudowicz, Jacek; Freed, Karl F; Douglas, Jack F
2015-06-07
We develop a statistical mechanical lattice theory for polymer solvation by a pair of relatively low molar mass solvents that compete for binding to the polymer backbone. A theory for the equilibrium mixture of solvated polymer clusters {AiBCj} and free unassociated molecules A, B, and C is formulated in the spirit of Flory-Huggins mean-field approximation. This theoretical framework enables us to derive expressions for the boundaries for phase stability (spinodals) and other basic properties of these polymer solutions: the internal energy U, entropy S, specific heat CV, extent of solvation Φsolv, average degree of solvation 〈Nsolv〉, and second osmotic virial coefficient B2 as functions of temperature and the composition of the mixture. Our theory predicts many new phenomena, but the current paper applies the theory to describe the entropy-enthalpy compensation in the free energy of polymer solvation, a phenomenon observed for many years without theoretical explanation and with significant relevance to liquid chromatography and other polymer separation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noronha, Jorge; Denicol, Gabriel S.
In this paper we obtain an analytical solution of the relativistic Boltzmann equation under the relaxation time approximation that describes the out-of-equilibrium dynamics of a radially expanding massless gas. This solution is found by mapping this expanding system in flat spacetime to a static flow in the curved spacetime AdS2 ⊗ S2. We further derive explicit analytic expressions for the momentum dependence of the single-particle distribution function as well as for the spatial dependence of its moments. We find that this dissipative system has the ability to flow as a perfect fluid even though its entropy density does not match the equilibrium form. The nonequilibrium contribution to the entropy density is shown to be due to higher-order scalar moments (which possess no hydrodynamical interpretation) of the Boltzmann equation that can remain out of equilibrium but do not couple to the energy-momentum tensor of the system. Furthermore, in this system the slowly moving hydrodynamic degrees of freedom can exhibit true perfect fluidity while being totally decoupled from the fast moving, nonhydrodynamical microscopic degrees of freedom that lead to entropy production.
High-order computer-assisted estimates of topological entropy
NASA Astrophysics Data System (ADS)
Grote, Johannes
The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
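The rigorous lower bound extracted from such a construction is the topological entropy of the subshift of finite type, i.e., the logarithm of the spectral radius of its 0/1 transition matrix. The following sketch computes that quantity for two textbook transition matrices; it only illustrates this final bookkeeping step, not the verified Taylor Model machinery, and the matrices are not the Henon-map result.

```python
# Sketch: topological entropy of a subshift of finite type = log of the
# spectral radius of its 0/1 transition matrix; a lower bound for the
# entropy of any map having the subshift as a topological factor.
import numpy as np

def subshift_entropy(A):
    """Topological entropy of the subshift of finite type with transition matrix A."""
    return float(np.log(np.max(np.abs(np.linalg.eigvals(A)))))

if __name__ == "__main__":
    # Sanity checks: full shift on two symbols (entropy log 2) and the
    # golden-mean shift (entropy log of the golden ratio).
    full_shift = np.array([[1, 1], [1, 1]])
    golden_mean = np.array([[1, 1], [1, 0]])
    print(subshift_entropy(full_shift), np.log(2.0))
    print(subshift_entropy(golden_mean), np.log((1 + np.sqrt(5)) / 2))
```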
NASA Astrophysics Data System (ADS)
Vyhnalek, Brian; Zurcher, Ulrich; O'Dwyer, Rebecca; Kaufman, Miron
2009-10-01
A wide range of heart rate irregularities have been reported in small studies of patients with temporal lobe epilepsy [TLE]. We hypothesize that patients with TLE display cardiac dysautonomia in either a subclinical or clinical manner. In a small study, we have retrospectively identified (2003-8) two groups of patients from the epilepsy monitoring unit [EMU] at the Cleveland Clinic. No patients were diagnosed with cardiovascular morbidities. The control group consisted of patients with confirmed pseudoseizures and the experimental group had confirmed right temporal lobe epilepsy through a seizure-free outcome after temporal lobectomy. We quantified the heart rate variability using the approximate entropy [ApEn]. We found similar values of the ApEn in all three states of consciousness (awake, sleep, and preceding seizure onset). In the TLE group, there is some evidence for greater variability in the awake state than in either the sleep or preceding-seizure-onset state. Here we present results for mathematically generated time series: the heart rate fluctuations ξ follow gamma statistics, i.e., p(ξ) = Γ⁻¹(k) ξ^k exp(−ξ). This probability function has well-known properties and its Shannon entropy can be expressed in terms of the Γ function. The parameter k allows us to generate a family of heart rate time series with different statistics. The ApEn calculated for the generated time series for different values of k mimics the properties found for the TLE and pseudoseizure groups. Our results suggest that the ApEn is an effective tool to probe differences in the statistics of heart rate fluctuations.
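A sketch of the construction described above follows: gamma-distributed surrogate series are generated for several shape parameters k and a standard approximate-entropy implementation is applied. The embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults, not necessarily the values used in this study.

```python
# Sketch: gamma-distributed surrogate "heart rate fluctuation" series and a
# standard approximate-entropy (ApEn) implementation. m = 2 and r = 0.2*SD
# are common defaults; they are not necessarily the study's parameters.
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) with r = r_frac * std(x)."""
    x = np.asarray(x, dtype=float)
    n, r = len(x), r_frac * x.std()

    def phi(mm):
        # All length-mm templates, compared with the Chebyshev distance;
        # self-matches are included, as in the standard ApEn definition.
        templ = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    for k in (1.0, 2.0, 5.0, 10.0):
        series = rng.gamma(shape=k, scale=1.0, size=1000)
        print("k =", k, "ApEn =", round(apen(series), 3))
```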
On the apparent power law in CDM halo pseudo-phase space density profiles
NASA Astrophysics Data System (ADS)
Nadler, Ethan O.; Oh, S. Peng; Ji, Suoqing
2017-09-01
We investigate the apparent power-law scaling of the pseudo-phase space density (PPSD) in cold dark matter (CDM) haloes. We study fluid collapse, using the close analogy between the gas entropy and the PPSD in the fluid approximation. Our hydrodynamic calculations allow for a precise evaluation of logarithmic derivatives. For scale-free initial conditions, entropy is a power law in Lagrangian (mass) coordinates, but not in Eulerian (radial) coordinates. The deviation from a radial power law arises from incomplete hydrostatic equilibrium (HSE), linked to bulk inflow and mass accretion, and the convergence to the asymptotic central power-law slope is very slow. For more realistic collapse, entropy is not a power law with either radius or mass due to deviations from HSE and scale-dependent initial conditions. Instead, it is a slowly rolling power law that appears approximately linear on a log-log plot. Our fluid calculations recover PPSD power-law slopes and residual amplitudes similar to N-body simulations, indicating that deviations from a power law are not numerical artefacts. In addition, we find that realistic collapse is not self-similar; scalelengths such as the shock radius and the turnaround radius are not power-law functions of time. We therefore argue that the apparent power-law PPSD cannot be used to make detailed dynamical inferences or extrapolate halo profiles inwards, and that it does not indicate any hidden integrals of motion. We also suggest that the apparent agreement between the PPSD and the asymptotic Bertschinger slope is purely coincidental.
Bubble Entropy: An Entropy Almost Free of Parameters.
Manis, George; Aktaruzzaman, Md; Sassi, Roberto
2017-11-01
Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count the number of swaps performed for each vector instead. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
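Following the description above, a minimal illustration of the swap-counting idea is sketched below; the exact entropy variant and normalization of the published Bubble Entropy definition may differ, so this should be read as an illustration rather than a reference implementation.

```python
# Sketch of the core idea: embed the series, count the bubble-sort swaps
# needed to order each embedded vector, and compare the entropy of the
# swap-count distribution at dimensions m and m+1. The entropy variant and
# normalization of the published definition may differ from this sketch.
import numpy as np

def bubble_swaps(v):
    """Number of swaps bubble sort performs to order v (its inversion count)."""
    v = list(v)
    swaps = 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def swap_count_entropy(x, m):
    """Shannon entropy (nats) of the swap counts over all embedded vectors."""
    counts = [bubble_swaps(x[i:i + m]) for i in range(len(x) - m + 1)]
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log(p))

def bubble_entropy_like(x, m=10):
    """Difference of swap-count entropies at m+1 and m, with a simple normalization."""
    return (swap_count_entropy(x, m + 1) - swap_count_entropy(x, m)) / np.log((m + 1) / (m - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    white_noise = rng.normal(size=2000)
    sine = np.sin(np.linspace(0, 40 * np.pi, 2000))
    print("white noise:", round(bubble_entropy_like(white_noise), 3))
    print("sine wave:  ", round(bubble_entropy_like(sine), 3))
```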
Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund
2012-01-01
Background: The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel intensity based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover.
Methodology/Principal Findings: We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected, and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdf's via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. Additionally, we predict species richness, or diversity, based on the Shannon entropy of pixel intensity. To test our approach, we specifically use the green band of Landsat images for a water conservation area in the Florida Everglades. We validate our predictions against data of species occurrences for a twenty-eight-year-long period for both wet and dry seasons. Our method correctly predicts 73% of species richness. For species turnover, the newly proposed KL divergence prediction performance is near 100% accurate. This represents a significant improvement over the more conventional Shannon entropy difference, which provides 85% accuracy. Furthermore, we find that changes in soil and water patterns, as measured by fluctuations of the Shannon entropy for the red and blue bands respectively, are positively correlated with changes in vegetation. The fluctuations are smaller in the wet season when compared to the dry season.
Conclusions/Significance: Texture-based statistical multiresolution image analysis is a promising method for quantifying interseasonal differences and, consequently, the degree to which vegetation, soil, and water patterns vary. The proposed automated method for quantifying species richness and turnover can also provide analysis at higher spatial and temporal resolution than is currently obtainable from expensive monitoring campaigns, thus enabling more prompt, more cost-effective inference and decision making support regarding anomalous variations in biodiversity.
Additionally, a matrix-based visualization of the statistical multiresolution analysis is presented to facilitate both insight and quick recognition of anomalous data. PMID:23115629
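A much-simplified sketch of two of the ingredients follows: the Shannon entropy of a band's pixel-intensity histogram (the richness proxy) and a histogram-based KL divergence between two scenes, i.e., the non-parametric alternative mentioned in the abstract. The wavelet multiresolution modeling with generalized Gaussian subband pdfs is omitted, and random arrays stand in for Landsat bands.

```python
# Sketch: pixel-intensity Shannon entropy of an image band and a histogram-
# based KL divergence between two scenes. Real Landsat bands would be loaded
# with a raster library; synthetic arrays stand in for them here.
import numpy as np

def intensity_entropy(band, bins=256):
    """Shannon entropy (nats) of the pixel-intensity histogram."""
    hist, _ = np.histogram(band, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def histogram_kl(band_a, band_b, bins=256, eps=1e-12):
    """KL divergence between the intensity histograms of two images."""
    pa, _ = np.histogram(band_a, bins=bins, range=(0.0, 1.0))
    pb, _ = np.histogram(band_b, bins=bins, range=(0.0, 1.0))
    pa = pa / pa.sum() + eps
    pb = pb / pb.sum() + eps
    return float(np.sum(pa * np.log(pa / pb)))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    wet_season = rng.beta(2, 5, size=(512, 512))   # stand-ins for green-band scenes
    dry_season = rng.beta(5, 2, size=(512, 512))
    print("entropy (wet):", round(intensity_entropy(wet_season), 3))
    print("entropy (dry):", round(intensity_entropy(dry_season), 3))
    print("KL(wet || dry):", round(histogram_kl(wet_season, dry_season), 3))
```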
Towards operational interpretations of generalized entropies
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2010-12-01
The driving force behind our study has been to overcome the difficulties you encounter when you try to extend the clear and convincing operational interpretations of classical Boltzmann-Gibbs-Shannon entropy to other notions, especially to generalized entropies as proposed by Tsallis. Our approach is philosophical, based on speculations regarding the interplay between truth, belief and knowledge. The main result demonstrates that, accepting philosophically motivated assumptions, the only possible measures of entropy are those suggested by Tsallis - which, as we know, include classical entropy. This result constitutes, so it seems, a more transparent interpretation of entropy than previously available. However, further research to clarify the assumptions is still needed. Our study points to the thesis that one should never consider the notion of entropy in isolation - in order to enable a rich and technically smooth study, further concepts, such as divergence, score functions and descriptors or controls should be included in the discussion. This will clarify the distinction between Nature and Observer and facilitate a game theoretical discussion. The usefulness of this distinction and the subsequent exploitation of game theoretical results - such as those connected with the notion of Nash equilibrium - is demonstrated by a discussion of the Maximum Entropy Principle.
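For reference, the Tsallis family referred to above has the standard form, which recovers the Boltzmann-Gibbs-Shannon entropy in the limit q → 1:

```latex
% Tsallis entropy of order q and its Shannon limit
S_q(p) = \frac{1}{q-1}\left(1 - \sum_{i} p_i^{\,q}\right),
\qquad
\lim_{q \to 1} S_q(p) = -\sum_{i} p_i \ln p_i .
```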
Coarse-graining of proteins based on elastic network models
NASA Astrophysics Data System (ADS)
Sinitskiy, Anton V.; Voth, Gregory A.
2013-08-01
To simulate molecular processes on biologically relevant length- and timescales, coarse-grained (CG) models of biomolecular systems with tens to even hundreds of residues per CG site are required. One possible way to build such models is explored in this article: an elastic network model (ENM) is employed to define the CG variables. Free energy surfaces are approximated by Taylor series, with the coefficients found by force-matching. CG potentials are shown to undergo renormalization due to roughness of the energy landscape and smoothing of it under coarse-graining. In the case study of hen egg-white lysozyme, the entropy factor is shown to be of critical importance for maintaining the native structure, and a relationship between the proposed ENM-mode-based CG models and traditional CG-bead-based models is discussed. The proposed approach uncovers the renormalizable character of CG models and offers new opportunities for automated and computationally efficient studies of complex free energy surfaces.
A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring
Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro
2016-01-01
Objective: Multiscale permutation entropy (MSPE) is becoming an interesting tool to explore neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on the real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Methods: Six MSPE algorithms—derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis—were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, the clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the ability of the six measures was compared in terms of tracking the dynamical changes in EEG data and the performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among MSPE measures. Results: CG-based MSPEs failed in on-line DoA monitoring at multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of EEG recordings and statistically distinguish the awake state, unconsciousness and recovery of consciousness (RoC) state significantly. Compared to single-scale SPE and RPE, MSPEs had better anti-noise ability and MA-RPE at scale 5 performed best in this aspect. MA-TPE outperformed other measures with a faster tracking speed of the loss of unconsciousness. Conclusions: MA-based multiscale permutation entropies have the potential for on-line anesthesia EEG analysis with their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation entropies may fail to describe the characteristics of EEG at high decomposition scales. PMID:27723803
A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring.
Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro
2016-01-01
Multiscale permutation entropy (MSPE) is becoming an interesting tool to explore neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on the real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Six MSPE algorithms, derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis, were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, the clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the ability of the six measures was compared in terms of tracking the dynamical changes in EEG data and the performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among MSPE measures. CG-based MSPEs failed in on-line DoA monitoring at multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of EEG recordings and statistically distinguish the awake state, unconsciousness and recovery of consciousness (RoC) state significantly. Compared to single-scale SPE and RPE, MSPEs had better anti-noise ability and MA-RPE at scale 5 performed best in this aspect. MA-TPE outperformed other measures with a faster tracking speed of the loss of unconsciousness. MA-based multiscale permutation entropies have the potential for on-line anesthesia EEG analysis with their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation entropies may fail to describe the characteristics of EEG at high decomposition scales.
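To illustrate the two decomposition procedures compared above, the sketch below combines a Shannon permutation entropy with coarse-graining (CG) and moving-average (MA) preprocessing; the embedding dimension, scales, and synthetic signal are illustrative choices unrelated to the clinical EEG data.

```python
# Sketch: Shannon permutation entropy combined with the two decomposition
# procedures named above -- coarse-graining (CG) and moving average (MA).
from itertools import permutations
from math import factorial
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized Shannon permutation entropy of order m and lag tau."""
    patterns = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]
        patterns[tuple(np.argsort(window).tolist())] += 1
    counts = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def coarse_grain(x, scale):
    """CG: non-overlapping averages of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def moving_average(x, scale):
    """MA: overlapping moving average of length `scale`."""
    return np.convolve(x, np.ones(scale) / scale, mode="valid")

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    eeg_like = np.cumsum(rng.normal(size=4000))     # synthetic stand-in for EEG
    for scale in (1, 2, 3, 4, 5):
        print(scale,
              "CG:", round(permutation_entropy(coarse_grain(eeg_like, scale)), 3),
              "MA:", round(permutation_entropy(moving_average(eeg_like, scale)), 3))
```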