NASA Technical Reports Server (NTRS)
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in that increasing the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvolved using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
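The forward model described above lends itself to a compact numerical illustration. Below is a minimal NumPy/SciPy sketch (not the authors' code) of entropy-regularized deconvolution of two overlapped peaks: the chromatogram y is a peak-shape matrix S applied to a concentration vector c, and c is recovered by minimizing chi-square minus a scaled entropy term. The Gaussian peak shape, the regularization weight alpha, and all other parameter values are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import minimize

    n = 200                                   # retention-time samples
    t = np.arange(n)
    sigma_peak = 4.0                          # assumed band-spreading width

    # Peak-shape matrix: column j is a unit-area Gaussian centered at sample j.
    S = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma_peak) ** 2)
    S /= S.sum(axis=0, keepdims=True)

    # Synthetic "true" concentrations: two barely separated peaks, plus noise.
    c_true = np.zeros(n)
    c_true[[90, 100]] = [1.0, 0.7]
    rng = np.random.default_rng(0)
    y = S @ c_true + rng.normal(0.0, 1e-3, n)

    alpha = 1e-4                              # entropy regularization weight

    def objective(c):
        r = S @ c - y
        # J(c) = 0.5*chi^2 - alpha*H(c), with H(c) = sum(c - c*log c)
        return 0.5 * r @ r - alpha * np.sum(c - c * np.log(c))

    def gradient(c):
        return S.T @ (S @ c - y) + alpha * np.log(c)

    result = minimize(objective, np.full(n, 1e-3), jac=gradient,
                      method="L-BFGS-B", bounds=[(1e-9, None)] * n)
    c_hat = result.x                          # deconvolved concentration profile

Raising the noise level in this toy model reproduces the trade-off noted above: the two spikes merge into a single broad feature in c_hat.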
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criterion or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as the standard deviation and mean of the residuals. Shannon entropy, log-likelihood, and chi-square of the residual autocorrelation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and for comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
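DeConv_Tool itself is written in IDL; as a language-neutral reference point, the following NumPy sketch shows the maximum likelihood (Richardson-Lucy) iteration, one of the algorithms listed above, with per-iteration residual statistics of the kind the package monitors. The iteration count and the flat starting estimate are assumptions.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
        # Maximum likelihood deconvolution under Poisson noise.
        psf = psf / psf.sum()                 # PSF must integrate to one
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full(image.shape, image.mean())
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, eps)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            residual = image - blurred
            # monitoring in the spirit of DeConv_Tool: mean/std of residuals
            # print(residual.mean(), residual.std())
        return estimate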
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulsive component of the signal. The filter coefficients, which greatly influence the performance of minimum entropy deconvolution, are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients using the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, of faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
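For reference, the classical iterative procedure referred to above (the Wiggins-style MED update that the particle swarm variant replaces) can be sketched in a few lines of NumPy; the filter length, iteration count, and delta-spike initialization are illustrative assumptions.

    import numpy as np

    def med_filter(x, L=30, n_iter=30):
        # Delay matrix: y = X.T @ f is the valid part of the filtered signal.
        N = len(x)
        X = np.array([x[L - 1 - l : N - l] for l in range(L)])   # (L, N-L+1)
        R = X @ X.T                           # autocorrelation matrix
        f = np.zeros(L)
        f[L // 2] = 1.0                       # delta-spike initialization
        for _ in range(n_iter):
            y = X.T @ f
            # fixed-point update that increases the kurtosis-like objective
            f = np.linalg.solve(R, X @ y**3) * (np.sum(y**2) / np.sum(y**4))
            f /= np.linalg.norm(f)            # the objective is scale invariant
        return f, X.T @ f

A swarm-based variant replaces this fixed-point loop with a global search over f, which is what the paper above does (with a spherical coordinate transformation); the particulars of that search are not reproduced here.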
NASA Astrophysics Data System (ADS)
McDonald, Geoff L.; Zhao, Qing
2017-01-01
Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it meets the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem should target an impulse train as the output goal and should solve directly for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, for which the optimal filter can be solved directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed as a function of the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
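The non-iterative character of MOMEDA comes from the fact that, once the target is a fixed impulse train, the optimal filter is the solution of a single linear system. A minimal NumPy sketch under that reading (the filter length and the equally spaced, equal-weight impulse train at an integer sample period are assumptions):

    import numpy as np

    def momeda_filter(x, period, L=50):
        N = len(x)
        X = np.array([x[L - 1 - l : N - l] for l in range(L)])   # delay matrix
        t = np.zeros(N - L + 1)
        t[::period] = 1.0                     # target impulse train
        f = np.linalg.solve(X @ X.T, X @ t)   # direct, non-iterative solution
        return f, X.T @ f                     # filter and deconvolved output

Scanning `period` over a range and plotting a norm of the output against period gives the kind of deconvolution spectrum used above to detect the fault period.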
Comparison of image deconvolution algorithms on simulated and laboratory infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, D.
1994-11-15
We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
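As a concrete instance of the cost-function family in question, the scale-invariant normalized p-th moment used in MED (p = 4 gives the kurtosis-like varimax objective familiar from blind equalization) is a one-liner:

    import numpy as np

    def scale_invariant_cost(y, p=4):
        # Normalized p-th moment: invariant to scaling of y, as required by
        # the MED/blind-equalization analyses referenced above.
        return np.sum(np.abs(y) ** p) / np.sum(y ** 2) ** (p / 2)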
NASA Technical Reports Server (NTRS)
Schade, David J.; Elson, Rebecca A. W.
1993-01-01
We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.
Maximum entropy deconvolution of the optical jet of 3C 273
NASA Technical Reports Server (NTRS)
Evans, I. N.; Ford, H. C.; Hui, X.
1989-01-01
The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
Parsimonious Charge Deconvolution for Native Mass Spectrometry
2018-01-01
Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies.
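The charge-state mapping that underlies any such algorithm is simple to state in code. The toy sketch below only illustrates the m/z-to-mass voting step and the origin of the half- and third-mass artifacts; it is emphatically not the parsimonious algorithm described above, and the proton mass constant, charge range, and binning are assumptions.

    import numpy as np

    PROTON = 1.00728  # Da

    def naive_charge_deconvolution(mz_peaks, intensities, z_min=10, z_max=40,
                                   max_mass=2.0e5, bin_width=1.0):
        # Each m/z peak votes, at every candidate charge z, for the neutral
        # mass M = z * (m/z - m_proton).  A true charge-state series piles
        # votes onto one mass bin; chance agreements at M/2 or M/3 are the
        # artifacts that a parsimonious scoring suppresses.
        edges = np.arange(0.0, max_mass + bin_width, bin_width)
        hist = np.zeros(len(edges) - 1)
        for mz, inten in zip(mz_peaks, intensities):
            for z in range(z_min, z_max + 1):
                mass = z * (mz - PROTON)
                if 0.0 <= mass < max_mass:
                    hist[int(mass // bin_width)] += inten
        return hist, edges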
NASA Astrophysics Data System (ADS)
Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats
2000-05-01
Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which allows the image to be 'cooled down' with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
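A minimal sketch of the idea, assuming Richardson-Lucy as the deconvolution engine with a soft-thresholding wavelet step between iterations (the wavelet, decomposition level, and threshold rule are assumptions, and this is not the authors' implementation):

    import numpy as np
    import pywt
    from scipy.signal import fftconvolve

    def rl_wavelet(image, psf, n_iter=30, thresh=0.01, wavelet="db2", level=2):
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        est = np.full(image.shape, image.mean())
        for _ in range(n_iter):
            blurred = fftconvolve(est, psf, mode="same")
            est *= fftconvolve(image / np.maximum(blurred, 1e-12),
                               psf_mirror, mode="same")
            # wavelet denoising as regularization: soft-threshold the details
            coeffs = pywt.wavedec2(est, wavelet, level=level)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(d, thresh * np.abs(d).max(), mode="soft")
                      for d in detail)
                for detail in coeffs[1:]
            ]
            est = np.maximum(pywt.waverec2(coeffs, wavelet), 0.0)
            est = est[:image.shape[0], :image.shape[1]]  # waverec2 may pad
        return est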
Application of digital image processing techniques to astronomical imagery, 1979
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1979-01-01
Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
Application of a multiscale maximum entropy image restoration algorithm to HXMT observations
NASA Astrophysics Data System (ADS)
Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi
2016-08-01
This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), a collimated scanning X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is the use of wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is suited to the deconvolution task of HXMT for diffuse source detection, and that the improved method suppresses noise and improves the correlation and signal-to-noise ratio, proving itself a better algorithm for image restoration. In a single all-sky survey, HXMT could detect a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014).
Multi-GPU maximum entropy image synthesis for radio astronomy
NASA Astrophysics Data System (ADS)
Cárcamo, M.; Román, P. E.; Casassus, S.; Moral, V.; Rannou, F. R.
2018-01-01
The maximum entropy method (MEM) is a well-known deconvolution technique in radio interferometry. This method solves a non-linear optimization problem with an entropy regularization term. Other heuristics, such as CLEAN, are faster but highly user dependent. Nevertheless, MEM has the following advantages: it is unsupervised, it has a statistical basis, and it achieves better resolution and image quality under certain conditions. This work presents a high-performance GPU version of non-gridding MEM, which is tested using real and simulated data. We propose a single-GPU and a multi-GPU implementation for single and multi-spectral data, respectively. We also make use of the Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to be exploited transparently and efficiently. Several ALMA data sets are used to demonstrate the effectiveness in imaging and to evaluate GPU performance. The results show that a speedup of 1000 to 5000 times over a sequential version can be achieved, depending on data and image size. This allows the HD142527 CO(6-5) short-baseline data set to be reconstructed in 2.1 min, instead of the 2.5 days taken by a sequential version on a CPU.
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has proven to be an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges remain when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or a choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies subsequent frequency spectrum and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its range of application. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
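The period-estimation step described above (autocorrelation of the envelope) is compact enough to sketch directly; the lag search window is an assumption, and real code would guard against picking a harmonic of the true period.

    import numpy as np
    from scipy.signal import hilbert

    def estimate_fault_period(x, min_lag=10):
        # IMCKD-style period estimate: the lag of the dominant peak in the
        # autocorrelation of the (mean-removed) Hilbert envelope.
        env = np.abs(hilbert(x))
        env -= env.mean()
        ac = np.correlate(env, env, mode="full")[len(env) - 1:]
        ac /= ac[0]
        return min_lag + int(np.argmax(ac[min_lag:]))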
NASA Technical Reports Server (NTRS)
Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.
1986-01-01
Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve on the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution, that is, inference of power at spatial frequencies that exceed D/λ. This potential arises from the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.
An X-ray image of the violent interstellar medium in 30 Doradus
NASA Technical Reports Server (NTRS)
Wang, Q.; Helfand, D. J.
1991-01-01
A detailed analysis of the X-ray emission from the largest H II region complex in the Local Group, 30 Dor, is presented. Application of a new maximum entropy deconvolution algorithm to the Einstein Observatory data reveals striking correlations among the X-ray, radio, and optical morphologies of the region, with X-ray-emitting bubbles filling cavities surrounded by H-alpha shells, and coextensive diffuse X-ray and radio continuum emission from throughout the region. The total X-ray luminosity in the 0.16-3.5 keV band from an area within 160 pc of the central cluster R136 is about 2 × 10³⁷ erg/s.
Isotopic orientational order in acetyl salicylic acid
NASA Astrophysics Data System (ADS)
Schiebel, P.; Prandl, W.; Papoular, R.; Paulus, W.; Detken, A.; Haeberlen, U.; Zimmermann, H.
2000-03-01
Isotopically mixed methyl groups CDₓH₃₋ₓ with zero averaged deuteron/hydrogen scattering length, ⟨a⟩ = x·a_D + (3−x)·a_H = 0, are expected to be invisible in a neutron diffraction experiment. We find, indeed, in the scattering-length density of aspirin-CDₓH₃₋ₓ, reconstructed by maximum-entropy methods, only three very weak minima at room temperature. At 10 K, however, one positive and two negative extrema are visible: unique evidence for orientational isotopic order. From a combination of 1D Fourier and algebraic methods we deconvolve ⟨a⟩ and derive the orientational distribution function f(φ), which has three equivalent maxima/minima at 300 K and loses this 3φ periodicity at 10 K. f(φ) is the basis for the determination of the hindrance potential with cos(φ) as the leading term.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
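For orientation, the asymptotic quantity referred to above, the relative entropy of coherence, is directly computable for any density matrix; a small NumPy sketch (base-2 logarithms assumed):

    import numpy as np

    def von_neumann_entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return -np.sum(evals * np.log2(evals))

    def relative_entropy_of_coherence(rho):
        # C_r(rho) = S(diag(rho)) - S(rho); per the abstract, the maximum
        # relative entropy of coherence reduces to this in the asymptotic limit.
        dephased = np.diag(np.diag(rho))
        return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

    # The maximally coherent qubit state |+><+| carries one bit of coherence:
    plus = np.array([[0.5, 0.5], [0.5, 0.5]])
    print(relative_entropy_of_coherence(plus))   # -> 1.0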
Hargiss, Leonard O; Zipp, G Greg; Jessop, Theodore C; Sun, Xuejun; Keyes, Philip; Rawlins, David B; Liang, Zhi; Park, Kum Joo; Gu, Huizhong
2014-11-15
An ultra high-pressure liquid chromatography/mass spectrometry (UHPLC/MS) separation and analysis method has been devised for open access analysis of synthetic reactions used in the production of DNA-encoded chemical libraries. The aqueous mobile phase is 100 mM hexafluoroisopropanol and 8.6 mM triethylamine; the organic mobile phase is methanol. The UHPLC separation uses a C18 OST column (50 mm × 2.1 mm × 1.7 μm) at 60 °C, with a flow rate of 0.6 mL/min. The gradient runs from 10 to 40% B in 1.0 min, increasing to 95% B at 1.2 min. Cycle time is about 5 min. This method provides a mass spectrometric detection limit for a 20-mer oligonucleotide of better than 1 pmol on-column. Linear UV response for the 20-mer extends from 2 to 200 pmol/μL in concentration, same-day relative average deviations are less than 5%, and bias (observed minus expected) is less than 10%. Deconvoluted mass spectra are generated for components in the predicted mass range using a maximum entropy algorithm. Mass accuracy of deconvoluted spectra is typically 20 ppm or better for isotopomers of oligonucleotides up to 7000 Da.
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β⁻ detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model of a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources such as ⁴⁰K, ²²⁶Ra and ¹³¹I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints
NASA Astrophysics Data System (ADS)
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2017-04-01
Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under the constraint on generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to generalized Pareto family.
Random versus maximum entropy models of neural population activity
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry
2017-04-01
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
In Vivo potassium-39 NMR spectra by the Burg maximum-entropy method
NASA Astrophysics Data System (ADS)
Uchiyama, Takanori; Minamitani, Haruyuki
The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
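The Burg method fits an autoregressive model by minimizing forward and backward prediction errors, and the maximum-entropy spectrum follows from the model coefficients. A compact NumPy sketch (the model order and FFT length are assumptions; this is not the authors' code):

    import numpy as np

    def burg_ar(x, order):
        # Burg recursion: returns AR polynomial a (leading 1) and noise power E.
        x = np.asarray(x, dtype=float)
        f = x.copy()                          # forward prediction error
        b = x.copy()                          # backward prediction error
        a = np.array([1.0])
        E = np.dot(x, x) / len(x)
        for _ in range(order):
            fp, bp = f[1:], b[:-1]
            k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
            f, b = fp + k * bp, bp + k * fp
            a = np.concatenate([a, [0.0]])
            a = a + k * a[::-1]               # Levinson-type coefficient update
            E *= 1.0 - k * k
        return a, E

    def burg_psd(a, E, nfft=1024):
        # Maximum-entropy spectrum: P(f) = E / |A(f)|^2
        return E / np.abs(np.fft.rfft(a, nfft)) ** 2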
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
NASA Technical Reports Server (NTRS)
Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee
1987-01-01
A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and the resulting linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the deconvolved albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal-series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were effective in reducing the imaging noise. With an increasing number of iterations, the axial resolution improved and most of the structural information was preserved. Additional iterations improved the axial resolution by at most a factor of 4 to 6, depending on the particular dataset, and to 8 nm at best, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles.
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy ( q-entropy) and Kaniadakis entropy ( κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
NASA Astrophysics Data System (ADS)
Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.
2014-06-01
Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow for a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA-suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable much wider usability by the imaging community outside of radio astronomy and would thus be directly relevant to portal screening security systems in the microwave and millimetre wave bands.
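Of the algorithms named above, Hogbom CLEAN is the simplest to sketch; the textbook loop below (gain, iteration cap, and the circular-shift beam subtraction are simplifying assumptions) deconvolves a dirty image given a dirty beam of the same shape:

    import numpy as np

    def hogbom_clean(dirty, dirty_beam, gain=0.1, n_iter=500, threshold=0.0):
        residual = dirty.copy()
        components = np.zeros_like(dirty)
        center = tuple(s // 2 for s in dirty_beam.shape)
        for _ in range(n_iter):
            peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            if np.abs(residual[peak]) <= threshold:
                break
            val = gain * residual[peak]
            components[peak] += val
            # subtract the dirty beam shifted to the peak (np.roll wraps
            # around; production code would pad instead)
            shift = tuple(p - c for p, c in zip(peak, center))
            residual -= val * np.roll(dirty_beam, shift,
                                      axis=tuple(range(dirty.ndim)))
        return components, residual

The clean components are finally convolved with an idealized clean beam and the residual added back, as the imaging tasks in CASA do.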
Navarro, Jorge; Ring, Terry A.; Nigg, David W.
2015-03-01
A deconvolution method for a 1"x1" LaBr₃ detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to simulated 1"x1" LaBr₃ data, and evaluating the effects that deconvolution has on nondestructively determining ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as from experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance single-isotope source-simulated and fuel-simulated spectra. The final evaluation consisted of measuring the performance of the fuel burnup calibration curve for the convolved and deconvolved cases. The methodology was developed to help design a reliable, high-resolution, rugged, and robust detection system for the ATR fuel canal, capable of collecting high-performance data for model validation, and a system that can calculate burnup using experimental scintillator detector data.
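The MLEM update named above has a standard form once the detector response is written as a matrix; a brief NumPy sketch (the uniform initialization and iteration count are assumptions):

    import numpy as np

    def mlem_unfold(measured, R, n_iter=100, eps=1e-12):
        # R[i, j]: probability that a photon emitted in true-energy bin j is
        # recorded in measured bin i (e.g., from MCNPX simulations).
        x = np.full(R.shape[1], measured.sum() / R.shape[1])
        sensitivity = R.sum(axis=0)
        for _ in range(n_iter):
            projected = R @ x
            x *= (R.T @ (measured / np.maximum(projected, eps))) \
                 / np.maximum(sensitivity, eps)
        return x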
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate: H(p) = −∑ᵢ pᵢ log pᵢ. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes; for sample-space-reducing (SSR) processes, which are simple history-dependent processes associated with power-law statistics; and finally for multinomial mixture processes.
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to lie below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
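For the unconstrained continuous case, the closed form alluded to is (to our knowledge, and in our notation rather than the report's) the generalized Gaussian family; a sketch of the result:

    f(x) = \frac{p}{2\alpha\,\Gamma(1/p)} \exp\!\left(-\left(\frac{|x|}{\alpha}\right)^{p}\right),
    \qquad \|X\|_p = \alpha\, p^{-1/p},

    h_{\max} = \ln \|X\|_p + \frac{1}{p}\ln p + \frac{1}{p} + \ln\!\left(2\,\Gamma(1 + 1/p)\right),

so the maximum differential entropy is indeed a straight-line function (slope one) of the logarithm of the L_p norm.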
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
NASA Astrophysics Data System (ADS)
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
Nonadditive entropy maximization is inconsistent with Bayesian updating
NASA Astrophysics Data System (ADS)
Pressé, Steve
2014-11-01
The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.
DEM interpolation weight calculation modulus based on maximum entropy
NASA Astrophysics Data System (ADS)
Chen, Tian-wei; Yang, Xia
2015-12-01
Traditional gridded DEM interpolation can produce negative weights. In this article, the principle of maximum entropy is used to analyze the model system that depends on the spatial weight modulus. The negative-weight problem of DEM interpolation is studied by building a maximum entropy model; by adding non-negativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm in a MATLAB program. The method is compared with the Yang Chizhong interpolation method and quadratic programming. The comparison shows that the volume and scaling of the maximum entropy weights fit the spatial relations, and that its accuracy is superior to the latter two methods.
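A hypothetical sketch of the optimization in question, assuming the entropy of the weights is maximized subject to unit-sum and first-moment (linear reproduction) constraints; second-order moment constraints would be appended the same way with prescribed values. The bounds make non-negativity explicit.

    import numpy as np
    from scipy.optimize import minimize

    def maxent_weights(pts, target):
        dx = np.asarray(pts, float) - np.asarray(target, float)   # (n, dim)
        n = len(dx)
        constraints = [
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ dx},   # first-moment constraints
        ]
        result = minimize(
            lambda w: np.sum(w * np.log(np.maximum(w, 1e-12))),  # -entropy
            np.full(n, 1.0 / n),
            bounds=[(0.0, 1.0)] * n,
            constraints=constraints,
            method="SLSQP",
        )
        return result.x

    # Four grid neighbours around an interior point: all weights come out >= 0.
    w = maxent_weights([(0, 0), (1, 0), (0, 1), (1, 1)], (0.3, 0.6))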
NASA Astrophysics Data System (ADS)
Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu
2018-06-01
Parallel detection, which exploits the additional information of a pinhole-plane image taken at every excitation scan position, can be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment combined with Richardson-Lucy deconvolution or with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all of the different conditions, and is therefore expected to be of use for future routine biomedical research.
Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher
2018-05-03
This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min), and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/m²), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and were additionally significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023). Vascular endothelial growth factor receptor-3 expression (a receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intratumoral BVs.
Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian
2014-11-01
Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically a Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity over the unit sphere S². However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S². Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are, however, computationally intensive and susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy, and DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes an SD framework, called Non-Negative SD (NNSD), to overcome all of the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses an SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection over the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm, called CAMERA (Convex Accelerated Maximum Entropy Reconstruction Algorithm), is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.
Entropy-based goodness-of-fit test: Application to the Pareto distribution
NASA Astrophysics Data System (ADS)
Lequesne, Justine
2013-08-01
Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
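The workhorse of such tests is a spacing-based entropy estimator; a small NumPy sketch of Vasicek's estimator (the window choice is an assumption), with the test then comparing the estimate against the maximum entropy attainable in the hypothesized family:

    import numpy as np

    def vasicek_entropy(x, m=None):
        # H = (1/n) * sum log( n/(2m) * (x_(i+m) - x_(i-m)) ), order statistics
        # clamped at the sample ends; assumes continuous (tie-free) data.
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        if m is None:
            m = max(1, int(np.sqrt(n)))
        upper = x[np.minimum(np.arange(n) + m, n - 1)]
        lower = x[np.maximum(np.arange(n) - m, 0)]
        return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

A large shortfall of this estimate below the family's maximum entropy value (equivalently, a large estimated Kullback-Leibler information) leads to rejection.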
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
Maximum entropy method applied to deblurring images on a MasPar MP-1 computer
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Dorband, John; Busse, Tim
1991-01-01
A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2015-01-01
The main objective of this tutorial article is first to review the main inference tools based on the Bayesian approach, entropy, information theory and their corresponding geometries. This review focuses mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing such as entropy in source separation, Fisher information in model order selection, different maximum entropy based methods in time series spectral estimation and, finally, general linear inverse problems.
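As a concrete companion to the quantities listed above, the following snippet numerically evaluates the Shannon entropy, the Kullback-Leibler divergence, and the Fisher information of a Gaussian mean; these are standard textbook computations, not code from the tutorial.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.5, 0.25, 0.25])
q = np.array([1/3, 1/3, 1/3])
print(entropy(p), kl(p, q))      # entropy(q) = log 3 is the maximum for 3 outcomes

# Fisher information of N(theta, sigma^2) w.r.t. theta is 1/sigma^2;
# Monte Carlo check of I = E[(d/dtheta log p)^2] = E[((x - theta)/sigma^2)^2]
rng = np.random.default_rng(0)
sigma, theta = 2.0, 0.0
x = rng.normal(theta, sigma, size=100_000)
print(np.mean(((x - theta) / sigma**2) ** 2), 1 / sigma**2)
```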
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
de Beer, Alex G F; Samson, Jean-Sébastien; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics
Consistent maximum entropy representations of pipe flow networks
NASA Astrophysics Data System (ADS)
Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael
2017-06-01
The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one such principle and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Roelfsema, Ferdinand; Pereira, Alberto M; Adriaanse, Ria; Endert, Erik; Fliers, Eric; Romijn, Johannes A; Veldhuis, Johannes D
2010-02-01
Twenty-four-hour TSH secretion profiles in primary hypothyroidism have been analyzed with methods no longer in use. The insights afforded by earlier methods are limited. We studied TSH secretion in patients with primary hypothyroidism (eight patients with severe and eight patients with mild hypothyroidism) with up-to-date analytical tools and compared the results with outcomes in 38 healthy controls. Patients and controls underwent a 24-h study with 10-min blood sampling. TSH data were analyzed with a newly developed automated deconvolution program, approximate entropy, spikiness assessment, and cosinor regression. Both basal and pulsatile TSH secretion rates were increased in hypothyroid patients, the latter by increased burst mass with unchanged frequency. Secretory regularity (approximate entropy) was diminished, and spikiness was increased only in patients with severe hypothyroidism. A diurnal TSH rhythm was present in all but two patients, although with an earlier acrophase in severe hypothyroidism. The estimated slow component of the TSH half-life was shortened in all patients. Increased TSH concentrations in hypothyroidism are mediated by amplification of basal secretion and burst size. Secretory abnormalities quantitated by approximate entropy and spikiness were only present in patients with severe disease and thus are possibly related to the increased thyrotrope cell mass.
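Approximate entropy, one of the regularity measures used in this study, can be computed directly from Pincus' definition. The sketch below is a generic implementation with illustrative parameter choices (m = 2, r = 0.2 SD), not the authors' analysis software.

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus' definition)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                 # common heuristic: r = 0.2 * SD
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])        # template vectors
        # Chebyshev distance between all pairs of m-length templates
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (d <= r).mean(axis=1)         # fraction of templates within r (self included)
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))
irregular = rng.normal(size=400)
print(apen(regular), apen(irregular))     # the irregular series yields higher ApEn
```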
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
Joint deconvolution and classification with applications to passive acoustic underwater multipath.
Anderson, Hyrum S; Gupta, Maya R
2008-11-01
This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods across a range of signal-to-noise ratios.
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu
2015-01-01
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory: angular super-resolution is achieved by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm considers the noise to be composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm achieves higher precision for angular super-resolution than conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
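Among the baselines listed, the Richardson-Lucy algorithm is easy to state compactly. The following 1-D sketch implements that baseline (not the authors' Laplace-prior MAP method); the PSF width, signal, and iteration count are illustrative.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Multiplicative Richardson-Lucy iteration x <- x * H^T(y / Hx)."""
    x = np.full_like(y, y.mean())             # flat non-negative initialization
    psf_flip = psf[::-1]                       # correlation = convolution with flipped PSF
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, eps)
        x *= np.convolve(ratio, psf_flip, mode="same")
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(200); x_true[[60, 90, 140]] = [5.0, 3.0, 4.0]
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); psf /= psf.sum()
y = rng.poisson(np.convolve(x_true, psf, mode="same") + 0.1).astype(float)
x_hat = richardson_lucy(y, psf)
print("recovered peak locations:", np.sort(np.argsort(x_hat)[-3:]))
```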
NASA Astrophysics Data System (ADS)
Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.
2017-11-01
Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 km s^-1 and +3.0 km s^-1, respectively, at an average geometrical height of roughly 50 km. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.
Maximum and minimum entropy states yielding local continuity bounds
NASA Astrophysics Data System (ADS)
Hanson, Eric P.; Datta, Nilanjana
2018-04-01
Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
Advanced Source Deconvolution Methods for Compton Telescopes
NASA Astrophysics Data System (ADS)
Zoglauer, Andreas
The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled data space. Unfortunately, an optimum way to analyze the data from the next generation of Compton telescopes has not yet been found: one that retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but not both at once. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a list-mode approach to get the best angular resolution, thereby achieving both at the same time. The second open question concerns the best deconvolution algorithm. For example, several algorithms have been investigated for the famous COMPTEL 26Al map, and they resulted in significantly different images. There is no clear answer as to which approach provides the most accurate result, largely because detailed simulations to test and verify the approaches and their limitations were not possible at that time. This has changed, and therefore we propose to evaluate several deconvolution algorithms (e.g., Richardson-Lucy, Maximum-Entropy, MREM, and stochastic origin ensembles) with simulations of typical observations to find the best algorithm for each application and for each stage of the hybrid reconstruction approach. We will adapt, implement, and fully evaluate the hybrid source reconstruction approach as well as the various deconvolution algorithms with simulations of synthetic benchmarks and simulations of key science objectives such as diffuse nuclear line science and continuum science of point sources, as well as with calibrations/observations of the COSI balloon telescope.
This proposal for "development of new data analysis methods for future satellite missions" will significantly improve the source deconvolution techniques for modern Compton telescopes and will allow unlocking the full potential of envisioned satellite missions using Compton-scatter technology in astrophysics, heliophysics and planetary sciences, and ultimately help them to "discover how the universe works" and to better "understand the sun". Ultimately it will also benefit ground based applications such as nuclear medicine and environmental monitoring as all developed algorithms will be made publicly available within the open-source Compton telescope analysis framework MEGAlib.
From Maximum Entropy Models to Non-Stationarity and Irreversibility
NASA Astrophysics Data System (ADS)
Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar
The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transform of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is determinant for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC advanced Grant "Bridges", BC: KEOPS ANR-CONICYT, Renvision and CM: CONICYT-FONDECYT No. 3140572.
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Crowd macro state detection using entropy model
NASA Astrophysics Data System (ADS)
Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao
2015-08-01
In the crowd security research area a primary concern is to identify the macro state of crowd behaviors to prevent disasters and to supervise crowd behaviors. In physics, entropy is used to describe the macro state of a self-organizing system, and a change in entropy indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction). An entropy model is then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd macro state leads to a change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing entropy mutations, it is possible to detect the crowd behavior macro state automatically using cameras. The results provide data support for crowd emergency prevention and manual emergency intervention.
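A minimal sketch of the entropy construction described above: bin individual velocity directions and magnitudes into microstates, form the empirical distribution, and compare its Shannon entropy with the theoretical maximum log K. The bin counts and the synthetic ordered/disordered crowds are illustrative assumptions.

```python
import numpy as np

def crowd_entropy(vx, vy, n_dir=12, n_mag=4):
    """Shannon entropy of the velocity microstate distribution and its maximum."""
    ang = np.arctan2(vy, vx)                  # direction in (-pi, pi]
    mag = np.hypot(vx, vy)
    h, _, _ = np.histogram2d(ang, mag, bins=[n_dir, n_mag],
                             range=[[-np.pi, np.pi], [0, mag.max() + 1e-9]])
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)), np.log(n_dir * n_mag)

rng = np.random.default_rng(0)
# Ordered crowd: everyone moves roughly east; disordered: random directions
theta_ordered = rng.normal(0.0, 0.1, 500)
theta_disordered = rng.uniform(-np.pi, np.pi, 500)
for theta in (theta_ordered, theta_disordered):
    s, s_max = crowd_entropy(np.cos(theta), np.sin(theta))
    print(f"entropy {s:.2f} of theoretical maximum {s_max:.2f}")
```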
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution found is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
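The comparison can be reproduced in miniature by exact enumeration instead of an annealer. In the hedged toy below, maximum likelihood decoding picks the ground state of a small random Ising model in a field, while maximum entropy decoding signs the finite-temperature marginals; the coupling construction, temperature, and noise level are all illustrative assumptions, not the paper's setup.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma = 10, 1.0, 1.2
s_true = rng.choice([-1, 1], size=n)
J = 0.3 * np.triu(np.outer(s_true, s_true), k=1)   # couplings favoring s_true (toy choice)
h = s_true + sigma * rng.normal(size=n)            # noisy channel output as local fields

states = np.array(list(itertools.product([-1, 1], repeat=n)))
energy = -np.einsum('si,ij,sj->s', states, J, states) - states @ h
ml = states[np.argmin(energy)]                     # maximum likelihood: ground state
w = np.exp(-beta * (energy - energy.min()))        # Boltzmann weights
marginals = (w[:, None] * states).sum(0) / w.sum() # thermal marginals <s_i>
me = np.sign(marginals)                            # maximum entropy decode: sign of marginals
print("ML bit errors:    ", int(np.sum(ml != s_true)))
print("MaxEnt bit errors:", int(np.sum(me != s_true)))
```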
Block entropy and quantum phase transition in the anisotropic Kondo necklace model
NASA Astrophysics Data System (ADS)
Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.
2010-06-01
We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins using the density matrix renormalization group method. It was found that the block entropy presents a maximum for each η considered, and, comparing it with the results of the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for giving information about where a quantum phase transition occurs in this model. We observe that the block entropy also presents a maximum at the quantum critical points that are obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; when Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.
Entropy of adsorption of mixed surfactants from solutions onto the air/water interface
Chen, L.-W.; Chen, J.-H.; Zhou, N.-F.
1995-01-01
The partial molar entropy change for mixed surfactant molecules adsorbed from solution at the air/water interface has been investigated by surface thermodynamics based upon experimental surface tension isotherms at various temperatures. Results for different surfactant mixtures of sodium dodecyl sulfate and sodium tetradecyl sulfate, and of decylpyridinium chloride and sodium alkylsulfonates, have shown that the partial molar entropy changes for adsorption of the mixed surfactants were generally negative and decreased with increasing adsorption to a minimum near the maximum adsorption, and then increased abruptly. The entropy decrease can be explained by the adsorption-orientation of surfactant molecules in the adsorbed monolayer, and the abrupt entropy increase at the maximum adsorption is possibly due to the strong repulsion between the adsorbed molecules.
Tsallis Entropy and the Transition to Scaling in Fragmentation
NASA Astrophysics Data System (ADS)
Sotolongo-Costa, Oscar; Rodriguez, Arezky H.; Rodgers, G. J.
2000-12-01
By using the maximum entropy principle with Tsallis entropy we obtain a fragment size distribution function which undergoes a transition to scaling. This distribution function reduces to those obtained by other authors using Shannon entropy. The treatment is easily generalisable to any process of fractioning with suitable constraints.
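For reference, the Tsallis entropy and the q-exponential form that maximizes it under a mean constraint (standard expressions; q → 1 recovers the Shannon entropy and the exponential distribution):

```latex
S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
p(x) \propto \bigl[\, 1 - (1-q)\,\beta x \,\bigr]_{+}^{1/(1-q)}
\;\xrightarrow{\;q \to 1\;}\; e^{-\beta x}.
```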
A pairwise maximum entropy model accurately describes resting-state human brain networks
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
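A pairwise maximum entropy model of this kind can be fitted by gradient ascent on the log-likelihood, matching the model's first and second moments to the data. The sketch below uses exact enumeration, which is feasible only for a handful of regions; the synthetic "activity" data and learning rate are illustrative, and real fMRI analyses require approximate methods.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def model_moments(h, J):
    """Exact first and second moments of p(s) ~ exp(h.s + 0.5 s.J.s)."""
    logp = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(logp - logp.max()); p /= p.sum()
    return p @ states, np.einsum('s,si,sj->ij', p, states, states)

# Synthetic "recordings": binary activity correlated via a shared latent input
z = rng.normal(size=(4000, 1))
data = (rng.normal(size=(4000, n)) + z > 0.3).astype(float)
m_data, c_data = data.mean(0), data.T @ data / len(data)

h, J = np.zeros(n), np.zeros((n, n))
for _ in range(2000):                      # gradient ascent on the log-likelihood
    m, c = model_moments(h, J)
    h += 0.1 * (m_data - m)
    g = c_data - c
    np.fill_diagonal(g, 0.0)               # diagonal is handled by the biases h
    J += 0.1 * g                           # stays symmetric since both c's are
m_fit, _ = model_moments(h, J)
print("data means :", np.round(m_data, 3))
print("model means:", np.round(m_fit, 3))
```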
Infrared image segmentation method based on spatial coherence histogram and maximum entropy
NASA Astrophysics Data System (ADS)
Liu, Songtao; Shen, Tongsheng; Dai, Yao
2014-11-01
In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of the different positions of pixels with the same gray level, obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, a 1D maximum entropy method is used to segment the image. The novel method not only produces better segmentation results but also requires less computation time than traditional 2D histogram-based segmentation methods.
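The second stage, 1D maximum entropy thresholding, is commonly implemented with Kapur's criterion: choose the threshold that maximizes the summed entropies of the foreground and background gray-level distributions. The sketch below implements that generic criterion only; the spatial coherence reweighting of the histogram is not reproduced, and the synthetic image is illustrative.

```python
import numpy as np

def kapur_threshold(img, n_bins=256):
    """Return the gray level maximizing the sum of class entropies (Kapur)."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, n_bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = -(np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) +
              np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(0)
img = rng.normal(80, 10, (64, 64))         # dim background
img[24:40, 24:40] = rng.normal(180, 10, (16, 16))   # bright target patch
print("entropy-optimal threshold:", kapur_threshold(img.clip(0, 255)))
```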
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
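In the unconstrained special case on a graph, the maximum path entropy walk has a closed form built from the Perron eigenvector ψ of the adjacency matrix, P_ij = A_ij ψ_j / (λ ψ_i), with stationary distribution π_i ∝ ψ_i². The snippet below illustrates how this stationary law differs from the degree-based (Boltzmann-like) one, echoing the abstract's main finding; the graph is an arbitrary example.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],        # a small undirected graph
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], float)
lam, vecs = np.linalg.eigh(A)
psi = np.abs(vecs[:, -1])          # Perron eigenvector (largest eigenvalue)
P = A * psi[None, :] / (lam[-1] * psi[:, None])   # maximal-entropy transition matrix
print("rows sum to one:", np.allclose(P.sum(1), 1))
pi = psi**2 / np.sum(psi**2)       # stationary distribution of the MaxEnt walk
print("MaxEnt stationary distribution:", np.round(pi, 3))
print("degree-walk stationary:        ", np.round(A.sum(1) / A.sum(), 3))
```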
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linearity. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of data in parameter estimation. With these two methods, we can estimate parameters from both hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the MODFLOW groundwater model and develop Extended Kalman Filtering and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering framework that also considers the uncertainty of the data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, using virtual observation wells to sample the simulated groundwater system periodically. The results showed that, by accounting for the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
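Of the filters mentioned, the ensemble Kalman filter admits a very compact sketch. The toy below assimilates observations of a single parameter through a stand-in forward model; the BME extension that also ingests uncertain (soft) data is not reproduced, and the model, noise levels, and ensemble size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda k: 3.0 * np.sqrt(np.maximum(k, 0.0))   # stand-in forward model (not MODFLOW)
k_true, r = 4.0, 0.2**2                           # true parameter, observation variance

ens = rng.normal(6.0, 2.0, size=200)              # prior parameter ensemble
for _ in range(10):                               # assimilate 10 noisy observations
    y = g(k_true) + np.sqrt(r) * rng.normal()
    hx = g(ens)
    cov_kh = np.cov(ens, hx)[0, 1]                # parameter-observation covariance
    gain = cov_kh / (np.var(hx) + r)              # scalar Kalman gain
    # perturbed-observation EnKF update
    ens = ens + gain * (y + np.sqrt(r) * rng.normal(size=ens.size) - hx)
print(f"posterior mean {ens.mean():.2f} (truth {k_true})")
```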
Entropy generation in biophysical systems
NASA Astrophysics Data System (ADS)
Lucia, U.; Maino, G.
2013-03-01
Recently, in theoretical biology and in biophysical engineering the entropy production has been verified to approach asymptotically its maximum rate, by using the probability of individual elementary modes distributed in accordance with the Boltzmann distribution. The basis of this approach is the hypothesis that the entropy production rate is maximum at the stationary state. In the present work, this hypothesis is explained and motivated, starting from the entropy generation analysis. This latter quantity is obtained from the entropy balance for open systems considering the lifetime of the natural real process. The Lagrangian formalism is introduced in order to develop an analytical approach to the thermodynamic analysis of the open irreversible systems. The stationary conditions of the open systems are thus obtained in relation to the entropy generation and the least action principle. Consequently, the considered hypothesis is analytically proved and it represents an original basic approach in theoretical and mathematical biology and also in biophysical engineering. It is worth remarking that the present results show that entropy generation not only increases but increases as fast as possible.
Determining Dynamical Path Distributions using Maximum Relative Entropy
2015-05-31
... entropy to a one-dimensional continuum labeled by a parameter η. The resulting η-entropies are equivalent to those proposed by Rényi [12] or by Tsallis [13].
NASA Astrophysics Data System (ADS)
Benfenati, Francesco; Beretta, Gian Paolo
2018-04-01
We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).
Elements of the cognitive universe
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2017-06-01
"The least biased inference, taking available information into account, is the one with maximum entropy". So we are taught by Jaynes. The many followers from a broad spectrum of the natural and social sciences point to the wisdom of this principle, the maximum entropy principle, MaxEnt. But "entropy" need not be tied only to classical entropy and thus to probabilistic thinking. In fact, the arguments found in Jaynes' writings and elsewhere can, as we shall attempt to demonstrate, profitably be revisited, elaborated and transformed to apply in a much more general abstract setting. The approach is based on game theoretical thinking. Philosophical considerations dealing with notions of cognition - basically truth and belief - lie behind. Quantitative elements are introduced via a concept of description effort. An interpretation of Tsallis Entropy is indicated.
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
1985-02-01
...Energy Analysis, a branch of dynamic modal analysis developed for analyzing acoustic vibration problems; its present stage of development embodies a... Maximum Entropy Stochastic Modelling and Reduced-Order Design Synthesis is a rigorous new approach to this class of problems. Inspired by Statistical...
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
Chapman Enskog-maximum entropy method on time-dependent neutron transport equation
NASA Astrophysics Data System (ADS)
Abdou, M. A.
2006-09-01
The time-dependent neutron transport equation in semi-infinite and infinite media with linear anisotropic and Rayleigh scattering is studied. The problem is solved by means of the flux-limited Chapman-Enskog maximum entropy approach to obtain the solution of the time-dependent neutron transport equation. The solution gives the neutron distribution density function, which is used to compute numerically the radiant energy density E(x,t), the net flux F(x,t) and the reflectivity R_f. The behaviour of the approximate flux-limited maximum entropy neutron density function is compared with that found by other theories. Numerical calculations of the radiant energy, net flux and reflectivity of the proposed medium are carried out at different times and positions.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
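The Ornstein-Uhlenbeck member of this class, often used for home-range behavior, can be simulated exactly with its AR(1) transition; the parameters below are illustrative choices.

```python
import numpy as np

def simulate_ou(n, dt, tau, sigma, rng):
    """Exact discrete-time OU simulation: relaxation to 0 with timescale tau."""
    x = np.zeros(n)
    a = np.exp(-dt / tau)                       # exact AR(1) coefficient
    s = sigma * np.sqrt(1 - a**2)               # preserves the stationary variance
    for i in range(1, n):
        x[i] = a * x[i - 1] + s * rng.normal()
    return x

rng = np.random.default_rng(0)
track = simulate_ou(5000, dt=1.0, tau=20.0, sigma=3.0, rng=rng)
print("empirical SD", round(track.std(), 2), "vs stationary SD", 3.0)
```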
Exact computation of the maximum-entropy potential of spiking neural-network models.
Cofré, R; Cessac, B
2014-05-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
NASA Astrophysics Data System (ADS)
Li, Jimeng; Li, Ming; Zhang, Jinfeng
2017-08-01
Rolling bearings are key components in modern machinery, and tough operating environments often make them prone to failure. However, due to the influence of the transmission path and background noise, the useful feature information relevant to the bearing fault contained in the vibration signals is weak, which makes it difficult to identify the fault symptoms of rolling bearings in time. Therefore, this paper proposes a novel weak signal detection method based on a time-delayed feedback monostable stochastic resonance (TFMSR) system and adaptive minimum entropy deconvolution (MED) to realize the fault diagnosis of rolling bearings. The MED method is employed to preprocess the vibration signals, which can deconvolve the effect of the transmission path and clarify the defect-induced impulses. A modified power spectrum kurtosis (MPSK) index is constructed to realize the adaptive selection of the filter length in the MED algorithm. By introducing a time-delayed feedback term into an over-damped monostable system, the TFMSR method can effectively utilize the historical information of the input signal to enhance the periodicity of the SR output, which is beneficial to the detection of periodic signals. Furthermore, the influence of time delay and feedback intensity on the SR phenomenon is analyzed, and by selecting appropriate time delay, feedback intensity, and re-scaling ratio with a genetic algorithm, SR can be produced to realize the resonance detection of weak signals. The combination of the adaptive MED (AMED) method and the TFMSR method is conducive to extracting the feature information from strong background noise and realizing the fault diagnosis of rolling bearings. Finally, experiments and an engineering application are performed to evaluate the effectiveness of the proposed AMED-TFMSR method in comparison with a traditional bistable SR method.
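The classical MED step that the method builds on can be sketched as a Wiggins-style iteration: repeatedly solve the normal equations that maximize a kurtosis-like norm of the filter output. The code below is such a generic sketch for a 1-D vibration signal; the adaptive filter-length selection via the MPSK index and the TFMSR stage are not reproduced, and the synthetic bearing signal is illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def med_filter(x, flen=30, n_iter=30):
    """Wiggins-style MED: iterate f = R^-1 b with b built from y^3."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + flen]
    R = toeplitz(r)                               # input autocorrelation matrix
    f = np.zeros(flen); f[flen // 2] = 1.0        # delayed-spike initialization
    for _ in range(n_iter):
        y = np.convolve(x, f, mode="full")[:len(x)]
        g = y**3                                   # from d/df of sum(y^4)
        b = np.array([np.dot(g, np.r_[np.zeros(l), x[:len(x) - l]])
                      for l in range(flen)])
        f = np.linalg.solve(R, b) * (np.sum(y**2) / np.sum(y**4))
        f /= np.linalg.norm(f)
    return f

rng = np.random.default_rng(0)
impulses = np.zeros(2000); impulses[::100] = 1.0   # periodic bearing impacts
h = np.exp(-np.arange(50) / 8.0) * np.sin(np.arange(50))  # transmission path
x = np.convolve(impulses, h, mode="full")[:2000] + 0.2 * rng.normal(size=2000)
y = np.convolve(x, med_filter(x), mode="full")[:2000]
kurt = lambda s: (s**4).mean() / (s**2).mean() ** 2
print("kurtosis before/after MED:", round(kurt(x), 1), round(kurt(y), 1))
```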
Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions
NASA Astrophysics Data System (ADS)
Xie, Jigang; Song, Wenyun
The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of mergers and acquisitions by Chinese listed firms is provided to verify the feasibility and practicality of the method.
Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.
2009-01-01
In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
Holographic equipartition and the maximization of entropy
NASA Astrophysics Data System (ADS)
Krishna, P. B.; Mathew, Titus K.
2017-09-01
The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf/bulk is the number of degrees of freedom on the horizon/in the bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
NASA Astrophysics Data System (ADS)
Caticha, Ariel
2011-03-01
In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops (the Maximum Entropy and the Bayesian methods) into a single general inference scheme.
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.
2012-01-01
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694
A unified approach to computational drug discovery.
Tseng, Chih-Yuan; Tuszynski, Jack
2015-11-01
It has been reported that a slowdown in the development of new medical therapies is affecting clinical outcomes. The FDA has thus initiated the Critical Path Initiative project investigating better approaches. We review the current strategies in drug discovery and focus on the advantages of the maximum entropy method being introduced in this area. The maximum entropy principle is derived from statistical thermodynamics and has been demonstrated to be an inductive inference tool. We propose a unified method to drug discovery that hinges on robust information processing using entropic inductive inference. Increasingly, applications of maximum entropy in drug discovery employ this unified approach and demonstrate the usefulness of the concept in the area of pharmaceutical sciences. Copyright © 2015. Published by Elsevier Ltd.
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to prevent the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
Economics and Maximum Entropy Production
NASA Astrophysics Data System (ADS)
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
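The method of moments step can be written as a small convex optimization: with p(x) ∝ exp(Σ_k λ_k x^k), the multipliers minimize the dual log Z(λ) − λ·μ, whose gradient is the moment mismatch. The grid-based sketch below makes illustrative choices of grid, moment order, and target density; it is not the Bayesian extension the abstract describes.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]
powers = np.vstack([x**k for k in range(1, 5)])    # constrain moments 1..4

# Target: a bimodal density whose first four moments we match
target = 0.5 * (np.exp(-(x - 2)**2 / 2) + np.exp(-(x + 2)**2 / 2))
target /= target.sum() * dx
mu = powers @ target * dx

def dual(lam):
    """Convex dual log Z(lam) - lam . mu; its minimizer matches the moments."""
    logq = lam @ powers
    m = logq.max()                                 # numerical stabilization
    z = np.sum(np.exp(logq - m)) * dx
    return m + np.log(z) - lam @ mu

res = minimize(dual, np.zeros(4), method="BFGS")
logp = res.x @ powers; logp -= logp.max()
p = np.exp(logp); p /= p.sum() * dx                # MaxEnt density on the grid
print("moment mismatch:", np.round(powers @ p * dx - mu, 4))
```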
Time dependence of Hawking radiation entropy
NASA Astrophysics Data System (ADS)
Page, Don N.
2013-09-01
If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM_0^2, or about 7.509M_0^2 ≈ 6.268 × 10^76 (M_0/M_solar)^2, using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018M_0^2 ≈ 1.254 × 10^77 (M_0/M_solar)^2, and then decreases back down to 4πM_0^2 = 1.049 × 10^77 (M_0/M_solar)^2.
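The quoted percentages can be checked with a back-of-envelope model (assumptions mine: S_BH ∝ M^2, massless-emission mass loss M(t) = M_0(1 - t/τ)^{1/3}, and the abstract's 1.4847 radiation-to-loss entropy ratio):

```python
# A back-of-envelope check (not Page's full computation) of the quoted
# percentages, assuming S_BH ∝ M^2, M(t) = M0*(1 - t/tau)**(1/3), and
# radiation entropy = 1.4847 × the BH entropy already lost.
f = 1.0 / (1.0 + 1.4847)              # entropy-lost fraction when S_rad = S_BH
t_over_tau = 1.0 - (1.0 - f) ** 1.5   # invert S_BH/S0 = (1 - t/tau)**(2/3)

print(f"BH entropy lost:   {100*f:.2f}%")           # ≈ 40.25%
print(f"radiation entropy: {100*1.4847*f:.2f}%")    # ≈ 59.75% of original S_BH
print(f"time elapsed:      {100*t_over_tau:.2f}%")  # ≈ 53.81% of evaporation
```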
Using maximum entropy modeling to identify and prioritize red spruce forest habitat in West Virginia
Nathan R. Beane; James S. Rentch; Thomas M. Schuler
2013-01-01
Red spruce forests in West Virginia are found in island-like distributions at high elevations and provide essential habitat for the endangered Cheat Mountain salamander and the recently delisted Virginia northern flying squirrel. Therefore, it is important to identify restoration priorities of red spruce forests. Maximum entropy modeling was used to identify areas of...
Maximum entropy PDF projection: A review
NASA Astrophysics Data System (ADS)
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
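In formula form, the projection underlying the review can be sketched as follows (notation g, T, p_0 assumed here, not necessarily the paper's):

```latex
% PDF projection: given a feature z = T(x) with known density g(z) and a
% reference density p_0(x) inducing a feature density p_0(z), the projected
% high-dimensional density
\[
  \hat{p}(x) \;=\; \frac{g\big(T(x)\big)}{p_0\big(T(x)\big)}\, p_0(x)
\]
% is consistent with g(z); choosing a maximum-entropy reference p_0 gives
% the MaxEnt projection discussed in the abstract.
```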
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruixing; Yang, LV; Xu, Kele
Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, because of the Gaussian-like shape of the point spread function (PSF), image components with coherent high frequencies are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained with a shape-modulated PSF. Methods: We use two types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape, relative to the Gaussian shape, that yields similar deconvolution results. Calculations of the tight-focusing process using a radially polarized beam show that a donut of this size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the ratio of the full widths at half maximum (FWHM) of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the donut size favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF, and a donut smaller than our critical value is obtained. Conclusion: Donut-shaped PSFs are shown to be useful and achievable in imaging and deconvolution processing, with potential practical applications in high-resolution imaging of biological samples.
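A hedged sketch of the comparison (illustrative PSF shapes and sizes, with the standard Richardson-Lucy routine standing in for the authors' deconvolution with given priors):

```python
# Blur a test image with a Gaussian PSF and with a donut-shaped PSF, then
# deconvolve both with Richardson-Lucy. PSF models are illustrative only.
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, restoration

def radius_sq(size):
    g = np.arange(size) - size // 2
    xx, yy = np.meshgrid(g, g)
    return xx**2 + yy**2

def gaussian_psf(size, sigma):
    psf = np.exp(-radius_sq(size) / (2 * sigma**2))
    return psf / psf.sum()

def donut_psf(size, sigma):
    r2 = radius_sq(size)
    psf = r2 * np.exp(-r2 / (2 * sigma**2))   # zero on axis: donut shape
    return psf / psf.sum()

image = data.camera() / 255.0
for name, psf in [("gaussian", gaussian_psf(25, 3.0)),
                  ("donut", donut_psf(25, 3.0))]:
    blurred = fftconvolve(image, psf, mode="same")
    restored = restoration.richardson_lucy(blurred, psf, 30)
    print(f"{name}: mean squared restoration error = "
          f"{np.mean((restored - image) ** 2):.5f}")
```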
Deconvolution Method on OSL Curves from ZrO2 Irradiated by Beta and UV Radiations
NASA Astrophysics Data System (ADS)
Rivera, T.; Kitis, G.; Azorín, J.; Furetta, C.
This paper reports the optically stimulated luminescent (OSL) response of ZrO2 to beta and ultraviolet radiations in order to investigate the potential use of this material as a radiation dosimeter. The experimentally obtained OSL decay curves were analyzed using the computerized curve de-convolution (CCD) method. It was found that the OSL curve structure, for the short (practical) illumination time used, consists of three first order components. The individual OSL dose response behavior of each component was found. The values of the time at the OSL peak maximum and the decay constant of each component were also estimated.
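A minimal sketch of the curve-deconvolution idea, fitting an OSL decay as the sum of three first-order components; the decay constants and intensities below are synthetic placeholders, not the paper's values:

```python
# Fit I(t) = sum_i n_i * b_i * exp(-b_i * t) with three components.
import numpy as np
from scipy.optimize import curve_fit

def osl(t, n1, b1, n2, b2, n3, b3):
    return (n1 * b1 * np.exp(-b1 * t) + n2 * b2 * np.exp(-b2 * t)
            + n3 * b3 * np.exp(-b3 * t))

t = np.linspace(0.0, 60.0, 600)                        # illumination time (s)
true = (1e3, 1.5, 5e2, 0.3, 2e2, 0.05)                 # synthetic components
y = osl(t, *true) + np.random.normal(0, 2.0, t.size)   # add detector noise

popt, _ = curve_fit(osl, t, y, p0=(8e2, 1.0, 4e2, 0.2, 1e2, 0.03))
for i in range(3):
    print(f"component {i+1}: n = {popt[2*i]:.1f}, decay = {popt[2*i+1]:.3f}/s")
```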
NASA Technical Reports Server (NTRS)
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.
Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc
2015-09-01
This paper presents two new concepts for discriminating between signals of different complexity. The first addresses the problem of setting entropy descriptors by varying the pattern size instead of the tolerance, leading to a search for the optimal pattern size that maximizes the similarity entropy. The second paradigm is based on the n-order similarity entropy, which encompasses the 1-order similarity entropy. To improve statistical stability, n-order fuzzy similarity entropy is proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases it was possible to discriminate time series of different complexity, such as fractional Brownian motion and fetal heart rate signals. The best performance in terms of sensitivity (90%) and specificity (90%) was obtained with the n-order fuzzy similarity entropy. However, the optimal pattern size and the maximum similarity measurement were shown to be related to intrinsic features of the time series.
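For illustration, a plain sample-entropy estimator scanned over the pattern size m, as the first paradigm suggests; this is a simplified stand-in for the paper's n-order fuzzy similarity entropy, with an invented tolerance and test series:

```python
# Sample entropy SampEn(m, r), evaluated for several pattern sizes m.
import numpy as np

def sample_entropy(x, m, r):
    """-log of the conditional probability that sequences matching for
    m points (within tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(d <= r) - len(templ)   # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(400))   # rough Brownian-like series
r = 0.2 * signal.std()
for m in (1, 2, 3, 4):                         # scan the pattern size
    print(f"m = {m}: SampEn = {sample_entropy(signal, m, r):.3f}")
```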
2017-08-21
distributions, and we discuss some applications for engineered and biological information transmission systems. Keywords: information theory; minimum...of its interpretation as a measure of the amount of information communicable by a neural system to groups of downstream neurons. Previous authors...of the maximum entropy approach. Our results also have relevance for engineered information transmission systems. We show that empirically measured
Interatomic potentials in condensed matter via the maximum-entropy principle
NASA Astrophysics Data System (ADS)
Carlsson, A. E.
1987-09-01
A general method is described for the calculation of interatomic potentials in condensed-matter systems by use of a maximum-entropy Ansatz for the interatomic correlation functions. The interatomic potentials are given explicitly in terms of statistical correlation functions involving the potential energy and the structure factor of a "reference medium." Illustrations are given for Al-Cu alloys and a model transition metal.
NASA Astrophysics Data System (ADS)
Li, X.; Sang, Y. F.
2017-12-01
Mountain torrents, urban floods and other disasters caused by extreme precipitation bring great losses to the ecological environment, socio-economic development, and people's lives and property. The study of the spatial distribution of extreme precipitation is therefore of great significance for flood prevention and control. Based on annual maximum rainfall data for 60 min, 6 h and 24 h durations, this paper generates long sequences following the Pearson-III distribution and then uses an information entropy index to study the spatial distribution and differences across durations. The results show that the information entropy of annual maximum rainfall in the southern region is greater than that in the northern region, indicating more pronounced stochastic characteristics of annual maximum rainfall in the south. However, the spatial distribution of these stochastic characteristics differs with duration. For example, the stochastic characteristics of the 60 min annual maximum rainfall in eastern Tibet are weaker than in the surrounding area, whereas those of the 6 h and 24 h annual maximum rainfall are stronger. In the Haihe and Huaihe River Basins, the stochastic characteristics of the 60 min annual maximum rainfall do not differ significantly from the surrounding area, while those of the 6 h and 24 h rainfall are weaker. We conclude that the spatial distribution of the information entropy of annual maximum rainfall at different durations can reflect the spatial distribution of its stochastic characteristics; the results can thus provide an important scientific basis for flood prevention and control, agriculture, socio-economic development and urban waterlogging control.
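An illustrative sketch of the procedure (invented Pearson-III parameters, with a histogram-based Shannon entropy standing in for the paper's index):

```python
# Generate long Pearson-III annual-maximum rainfall sequences and compare
# their information entropy. Parameters and bins are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
edges = np.linspace(0.0, 400.0, 101)       # common bins so entropies compare

def rainfall_entropy(skew, loc, scale, n=100_000):
    sample = stats.pearson3.rvs(skew, loc=loc, scale=scale,
                                size=n, random_state=rng)
    counts, _ = np.histogram(sample, bins=edges)
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))          # Shannon entropy of the histogram

# hypothetical "south" vs "north" stations: same shape, different variability
print("south:", rainfall_entropy(skew=1.2, loc=80.0, scale=30.0))
print("north:", rainfall_entropy(skew=1.2, loc=80.0, scale=15.0))
```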
Voigt deconvolution method and its applications to pure oxygen absorption spectrum at 1270 nm band.
Al-Jalali, Muhammad A; Aljghami, Issam F; Mahzia, Yahia M
2016-03-15
Experimental spectral lines of pure oxygen in the 1270 nm band were analyzed by the Voigt deconvolution method. The method gave a total Voigt profile arising from two overlapping bands. Deconvolution of the total Voigt profile yields two Voigt profiles, the first from the O2 dimol envelope at the 1264 nm band, and the second from the O2 monomer envelope at the 1268 nm band. In addition, the Voigt profile itself is the convolution of Lorentzian and Gaussian distributions. Competition between thermal and collisional effects was clearly observed through the competition between the Gaussian and Lorentzian widths of each band envelope. The Voigt full width at half maximum (Voigt FWHM) of each line and the ratio of the Lorentzian to the Gaussian width (Γ_L/Γ_G) were investigated. Applied pressures were 1, 2, 3, 4, 5, and 8 bar, and temperatures spanned the range 298 K, 323 K, 348 K, and 373 K.
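The profile model can be sketched with scipy's Voigt implementation (parameter values illustrative, not the measured oxygen widths):

```python
# The Voigt line shape is the convolution of a Gaussian (thermal, std sigma)
# and a Lorentzian (collisional, HWHM gamma); compare the widths that enter
# the ratio Gamma_L / Gamma_G discussed above.
import numpy as np
from scipy.special import voigt_profile

x = np.linspace(-2.0, 2.0, 4001)      # detuning from line center (arb. units)
sigma, gamma = 0.10, 0.15             # Gaussian std dev, Lorentzian HWHM
v = voigt_profile(x, sigma, gamma)

fwhm_G = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
fwhm_L = 2.0 * gamma
half = v.max() / 2.0
fwhm_V = x[v >= half][-1] - x[v >= half][0]   # numerical Voigt FWHM

print(f"Gamma_L/Gamma_G = {fwhm_L / fwhm_G:.3f}, Voigt FWHM ≈ {fwhm_V:.3f}")
```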
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
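For reference, the standard definitions involved, which recover exp(-βx) and the Boltzmann-Gibbs-Shannon entropy in the limit q → 1:

```latex
% The q-exponential family and the Tsallis entropy referred to above:
\[
  p(x) \;\propto\; \big[\,1 - (1-q)\,\beta x\,\big]_{+}^{1/(1-q)},
  \qquad
  S_q[p] \;=\; \frac{1 - \sum_i p_i^{\,q}}{q - 1}.
\]
```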
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative agreement with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124
Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.
Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz
2017-10-01
Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, that experimentally would be equivalent to 90% of the neuron population active within time-windows of few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
Cong-Cong, Xia; Cheng-Fang, Lu; Si, Li; Tie-Jun, Zhang; Sui-Heng, Lin; Yi, Hu; Ying, Liu; Zhi-Jie, Zhang
2016-12-02
To explore the maximum entropy modeling technique for extracting Oncomelania hupensis snail habitats in the Poyang Lake zone, information on snail habitats and related environmental factors collected in the zone was integrated to build a maximum entropy based species distribution model and generate a snail habitat distribution map. Two Landsat 7 ETM+ remote sensing images of the Poyang Lake zone, from the wet and dry seasons, were obtained, and the modified normalized difference water index (MNDWI) and normalized difference vegetation index (NDVI) were applied to extract snail habitats. ROC curves, sensitivities and specificities were used to assess the results, and the importance of the variables for snail habitats was analyzed using the Jackknife approach. The evaluation showed that the area under the receiver operating characteristic curve (AUC) for the testing data of the remote sensing-based method was only 0.56, with sensitivity and specificity of 0.23 and 0.89, respectively, whereas the corresponding values for the maximum entropy model were 0.876, 0.89 and 0.74. The main concentrations of snail habitats in the Poyang Lake zone covered the northeast of Yongxiu County, the northwest of Yugan County, the southwest of Poyang County and the middle of Xinjian County. Elevation was the most important environmental variable affecting the distribution of snails, followed by land surface temperature (LST). The maximum entropy model is more reliable and accurate than the remote sensing-based method for extracting snail habitats, which provides guidance for relevant departments in targeting prevention and control measures at high-risk snail habitats.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Cosmic equilibration: A holographic no-hair theorem from the generalized second law
NASA Astrophysics Data System (ADS)
Carroll, Sean M.; Chatwin-Davies, Aidan
2018-02-01
In a wide class of cosmological models, a positive cosmological constant drives cosmological evolution toward an asymptotically de Sitter phase. Here we connect this behavior to the increase of entropy over time, based on the idea that de Sitter spacetime is a maximum-entropy state. We prove a cosmic no-hair theorem for Robertson-Walker and Bianchi I spacetimes that admit a Q-screen ("quantum" holographic screen) with certain entropic properties: If generalized entropy, in the sense of the cosmological version of the generalized second law conjectured by Bousso and Engelhardt, increases up to a finite maximum value along the screen, then the spacetime is asymptotically de Sitter in the future. Moreover, the limiting value of generalized entropy coincides with the de Sitter horizon entropy. We do not use the Einstein field equations in our proof, nor do we assume the existence of a positive cosmological constant. As such, asymptotic relaxation to a de Sitter phase can, in a precise sense, be thought of as cosmological equilibration.
Pareto versus lognormal: A maximum entropy test
NASA Astrophysics Data System (ADS)
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
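The constrained maximization can be sketched as follows (notation c_i for product complexity and λ for the multiplier is assumed here, not necessarily the paper's):

```latex
% Maximize the entropy of the export shares s_i subject to normalization and
% a fixed expected product complexity \bar{c}:
\[
  \max_{s}\; -\sum_i s_i \ln s_i
  \quad\text{s.t.}\quad \sum_i s_i = 1,\;\; \sum_i s_i c_i = \bar{c}
  \;\;\Longrightarrow\;\;
  s_i = \frac{e^{-\lambda c_i}}{\sum_j e^{-\lambda c_j}},
\]
% a Boltzmann-like form in which the single Lagrange multiplier \lambda is
% the one tuning parameter mentioned in the abstract.
```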
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Propane spectral resolution enhancement by the maximum entropy method
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
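A compact sketch of the Burg recursion, demonstrated on a synthetic two-tone signal rather than interferometer data:

```python
# Burg's maximum entropy spectral estimator: fit an order-p AR model by
# minimizing the combined forward/backward prediction error.
import numpy as np

def burg(x, order):
    """Return AR coefficients a (a[0] = 1) and prediction error variance."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()      # aligned forward/backward errors
    a = np.array([1.0])
    e = np.mean(x**2)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([a, [0.0]])[::-1]
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
        e *= (1.0 - k * k)
    return a, e

n = np.arange(256)                          # two tones plus noise
x = (np.sin(2*np.pi*0.20*n) + np.sin(2*np.pi*0.22*n)
     + 0.1*np.random.default_rng(1).standard_normal(n.size))
a, e = burg(x, order=20)

freqs = np.linspace(0, 0.5, 1000)           # MEM power spectral density
denom = np.abs(np.polyval(a[::-1], np.exp(-2j*np.pi*freqs)))**2
print("peak near f =", freqs[np.argmax(e / denom)])
```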
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
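A simplified numpy sketch of the idea (synthetic phantom, random frame displacements, and a plain MLEM/Richardson-Lucy update standing in for the authors' implementation):

```python
# Model motion blur as the average of shifted frames, derive the equivalent
# PSF from the known displacements, and restore with MLEM iterations.
import numpy as np
from scipy.ndimage import shift
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
image = np.zeros((64, 64)); image[20:40, 25:45] = 1.0   # toy "phantom"

moves = rng.normal(0, 1.5, size=(20, 2))   # 20 random frame displacements
blurred = np.mean([shift(image, m, order=1) for m in moves], axis=0)

psf = np.zeros((21, 21)); psf[10, 10] = 1.0   # same motion applied to impulse
psf = np.mean([shift(psf, m, order=1) for m in moves], axis=0)
psf /= psf.sum()

est = np.ones_like(blurred)                # multiplicative MLEM updates
for _ in range(50):
    conv = fftconvolve(est, psf, mode="same")
    ratio = blurred / np.maximum(conv, 1e-12)
    est *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")

print("error before:", np.abs(blurred - image).mean(),
      "after:", np.abs(est - image).mean())
```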
Thermodynamic resource theories, non-commutativity and maximum entropy principles
NASA Astrophysics Data System (ADS)
Lostaglio, Matteo; Jennings, David; Rudolph, Terry
2017-04-01
We discuss some features of thermodynamics in the presence of multiple conserved quantities. We prove a generalisation of the Landauer principle, illustrating tradeoffs between the erasure costs paid in different ‘currencies’. We then show how the maximum entropy and complete passivity approaches give different answers in the presence of multiple observables. We discuss how this seems to prevent current resource theories from fully capturing thermodynamic aspects of non-commutativity.
2015-08-20
evapotranspiration (ET) over oceans may be significantly lower than previously thought. The MEP model parameterized turbulent transfer coefficients...fluxes, ocean freshwater fluxes, regional crop yield among others. An on-going study suggests that the global annual evapotranspiration (ET) over...Bras, Jingfeng Wang. A model of evapotranspiration based on the theory of maximum entropy production, Water Resources Research, (03 2011): 0. doi
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barletti, Luigi, E-mail: luigi.barletti@unifi.it
2014-08-15
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
NASA Technical Reports Server (NTRS)
Hyland, D. C.; Bernstein, D. S.
1987-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.
Quantifying and minimizing entropy generation in AMTEC cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, T.J.; Huang, C.
1997-12-31
Entropy generation in an AMTEC cell represents inherent power loss to the AMTEC cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is on-going at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum power cell designs. In many cases, various sources of entropy generation are interrelated such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.
Moisture sorption isotherms and thermodynamic properties of bovine leather
NASA Astrophysics Data System (ADS)
Fakhfakh, Rihab; Mihoubi, Daoued; Kechaou, Nabil
2018-04-01
This study was aimed at the determination of bovine leather moisture sorption characteristics using a static gravimetric method at 30, 40, 50, 60 and 70 °C. The curves exhibit type II behaviour according to the BET classification. Fitting the sorption isotherms with seven equations shows that the GAB model is able to reproduce the evolution of equilibrium moisture content with water activity over a moisture range of 0.02 to 0.83 kg/kg d.b. (0.9898 < R² < 0.999). The sorption isotherms exhibit a hysteresis effect. Additionally, the sorption isotherm data were used to determine thermodynamic properties such as the isosteric heat of sorption, sorption entropy, spreading pressure, net integral enthalpy and entropy. Net isosteric heat of sorption and differential entropy were evaluated directly from the moisture isotherms by applying the Clausius-Clapeyron equation and used to investigate the enthalpy-entropy compensation theory. Both sorption enthalpy and entropy for desorption increase to a maximum with increasing moisture content, and then decrease sharply with rising moisture content. Adsorption enthalpy decreases with increasing moisture content, whereas adsorption entropy increases smoothly with increasing moisture content to a maximum of 6.29 J/(K·mol). Spreading pressure increases with rising water activity. The net integral enthalpy seemed to decrease and then increase to become asymptotic. The net integral entropy decreased with increasing moisture content.
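The Clausius-Clapeyron step can be sketched as follows (made-up water activities at one fixed moisture content; not the leather data):

```python
# At fixed equilibrium moisture content, the slope of ln(a_w) against 1/T
# gives -q_st/R, the net isosteric heat of sorption.
import numpy as np

R = 8.314                                           # J/(mol K)
T = np.array([303.0, 313.0, 323.0, 333.0, 343.0])   # 30-70 °C, in K
# hypothetical water activities at one fixed moisture content (d.b.)
aw = np.array([0.35, 0.40, 0.45, 0.50, 0.55])

slope, _ = np.polyfit(1.0 / T, np.log(aw), 1)
q_st = -R * slope                                   # net isosteric heat, J/mol
print(f"net isosteric heat ≈ {q_st/1000:.1f} kJ/mol")
```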
Entropy jump across an inviscid shock wave
NASA Technical Reports Server (NTRS)
Salas, Manuel D.; Iollo, Angelo
1995-01-01
The shock jump conditions for the Euler equations in their primitive form are derived by using generalized functions. The shock profiles for specific volume, speed, and pressure are shown to be the same; density, however, has a different shock profile. Careful study of the equations that govern the entropy shows that the inviscid entropy profile has a local maximum within the shock layer. We demonstrate that, because of this phenomenon, the entropy propagation equation cannot be used as a conservation law.
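For context, the end-state entropy jump follows from the Rankine-Hugoniot relations (γ = 1.4 ideal gas assumed here); the abstract's point is that the inviscid profile inside the shock layer overshoots this end-state value:

```python
# Normal-shock entropy jump (s2 - s1)/R for an ideal gas, gamma = 1.4.
import numpy as np

gamma = 1.4

def entropy_jump(M):
    """Rankine-Hugoniot entropy jump across a normal shock at Mach M."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)
    return (np.log(p_ratio) - gamma * np.log(rho_ratio)) / (gamma - 1.0)

for M in (1.1, 1.5, 2.0, 3.0):
    print(f"M = {M}: (s2 - s1)/R = {entropy_jump(M):.4f}")
```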
NASA Astrophysics Data System (ADS)
Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang
2018-05-01
To address the problem of medical image segmentation, a wavelet neural network segmentation algorithm based on a combined maximum entropy criterion is proposed. First, a bee colony algorithm is used to optimize the parameters of the wavelet neural network, yielding the network structure, initial weights and thresholds; training then converges quickly to high precision and avoids falling into local extrema. The optimal number of iterations is then obtained by computing the maximum entropy of the segmented image, achieving automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and that its segmentation is more accurate and effective than a traditional BP neural network (back-propagation neural network: a multilayer feed-forward network trained with the error back-propagation algorithm).
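Not the authors' wavelet-network algorithm, but a classical illustration of the maximum entropy criterion in segmentation is Kapur's threshold selection, sketched here on synthetic data:

```python
# Kapur's method: pick the threshold that maximizes the summed entropies of
# the foreground and background intensity distributions.
import numpy as np

def kapur_threshold(values, bins=256):
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -sum(q[q > 0] @ np.log(q[q > 0]) for q in (q0, q1))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

rng = np.random.default_rng(7)   # synthetic bimodal "tissue" intensities
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(160, 20, 5000)])
print("maximum entropy threshold ≈", round(kapur_threshold(img), 1))
```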
Investigation of the hydrogen fluxes in the plasma edge of W7-AS during H-mode discharges
NASA Astrophysics Data System (ADS)
Langer, U.; Taglauer, E.; Fischer, R.; W7-AS Team
2001-03-01
In the stellarator W7-AS the H-mode is characterized by an edge transport barrier which is localized within a few centimeters inside the separatrix. The corresponding L-H transition shows well-known features such as the steepening of the temperature and density profiles in the region of the separatrix. With a so-called sniffer probe the temporal development of the hydrogen and deuterium fluxes has been studied in the plasma edge during different H-mode discharges with deuterium gas puffing. Prior to the transition a significant reduction of the deuterium and also the hydrogen fluxes can be observed. This fact confirms the assumption that the steepening of the density profiles starts at the outermost edge of the plasma. Moreover, sniffer probe measurements in the plasma edge could therefore identify a precursor for the L-H transition. The analysis of the hydrogen neutral gases shows a distinct change of the hydrogen isotope ratio during the transition. This observation is in agreement with the change in the particle fluxes onto the targets and can also be seen in the reduced Hα signals from the limiters. It is further demonstrated that significant improvement in the time resolution of the measured data can be obtained by deconvolution of the data with the apparatus function using Bayesian probability theory and the Maximum Entropy method with adaptive kernels.
Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states
NASA Technical Reports Server (NTRS)
Aragone, C.; Mundarain, D.
1993-01-01
We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima for the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.
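A standard related check (not taken from the paper): each mode of a two-mode squeezed vacuum is thermal with mean photon number sinh²r, so its von Neumann entropy grows monotonically with the squeezing parameter r:

```python
# Von Neumann entropy of the reduced single-mode state of a two-mode
# squeezed vacuum: S(n) = (n+1)ln(n+1) - n ln(n), with n = sinh(r)^2.
import numpy as np

def reduced_entropy(r):
    n = np.sinh(r) ** 2            # mean photon number of the reduced state
    return (n + 1) * np.log(n + 1) - n * np.log(n) if n > 0 else 0.0

for r in (0.0, 0.5, 1.0, 2.0):
    print(f"r = {r}: S = {reduced_entropy(r):.4f}")
```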
Entropy and climate. I - ERBE observations of the entropy production of the earth
NASA Technical Reports Server (NTRS)
Stephens, G. L.; O'Brien, D. M.
1993-01-01
An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
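The order of magnitude of the quoted production can be checked with a two-temperature sketch (all parameter values below are textbook assumptions, not ERBE data):

```python
# Treat Earth as absorbing solar radiation at T_sun and emitting at T_earth,
# with the 4/3 factor for blackbody radiation entropy flux; compare with the
# quoted 0.68 x 10^15 W/K.
import numpy as np

S0, albedo, R_earth = 1361.0, 0.30, 6.371e6      # W/m^2, -, m
T_sun, T_earth = 5772.0, 255.0                   # emission temperatures, K

P = (1 - albedo) * S0 / 4 * 4 * np.pi * R_earth**2   # absorbed solar power, W
entropy_prod = (4.0 / 3.0) * P * (1.0 / T_earth - 1.0 / T_sun)
print(f"entropy production ≈ {entropy_prod:.2e} W/K")   # ~0.6 x 10^15 W/K
```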
Mixed memory, (non) Hurst effect, and maximum entropy of rainfall in the tropical Andes
NASA Astrophysics Data System (ADS)
Poveda, Germán
2011-02-01
Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time and the kind of temporal memory are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min, and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ~ T^β with ⟨β⟩ = 0.51, up to a timescale T_MaxEnt (70-202 h) at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables, and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, such that S_q(T) ~ T^{β(q)}, with β(q) ≤ 0 for q ≤ 0, and β(q) ≈ 0.5 for q ≥ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.
NASA Astrophysics Data System (ADS)
Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.
2013-10-01
The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
Competition between Homophily and Information Entropy Maximization in Social Networks
Zhao, Jichang; Liang, Xiao; Xu, Ke
2015-01-01
In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which can be stated as homophily breeding new connections. Meanwhile, the hypothesis of maximum information entropy has recently been presented as a possible origin of effective navigation in small-world networks. We find that there exists a competition between information entropy maximization and homophily in local structure, through both theoretical and experimental analysis. This competition suggests that a newly built relationship between two individuals with more common friends would lead to less information entropy gain for them. We demonstrate that both assumptions coexist in the evolution of the social network: the rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally, giving individuals strong and trusted ties. A toy model is also presented to demonstrate the competition and to evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994
Kleidon, Axel
2009-06-01
The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.
Beyond maximum entropy: Fractal pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, R. C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.
Applications of the principle of maximum entropy: from physics to ecology.
Banavar, Jayanth R; Maritan, Amos; Volkov, Igor
2010-02-17
There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
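In formula form, the procedure advocated here can be sketched as follows (notation q for the constraint-free distribution, f for the constrained observable, and λ for the multiplier is assumed):

```latex
% Maximize the relative entropy with respect to the distribution q that the
% system exhibits in the absence of constraints:
\[
  \max_{p}\; -\sum_i p_i \ln\frac{p_i}{q_i}
  \quad\text{s.t.}\quad \sum_i p_i = 1,\;\; \sum_i p_i f_i = \bar{f}
  \;\;\Longrightarrow\;\;
  p_i \;\propto\; q_i\, e^{-\lambda f_i},
\]
% so the result inherits the unconstrained behavior through q, e.g. the
% distinguishable vs indistinguishable counting mentioned in the abstract.
```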
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and we work through full examples to illustrate the approach.
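A minimal numeric sketch (not Abbas's graphical method) of why the FMED is flat over each fractile interval: maximum entropy spreads each interval's probability mass uniformly, so the density is piecewise constant and jumps at the assessed points. Fractiles below are illustrative:

```python
# Piecewise-constant maximum entropy density between assessed fractiles.
import numpy as np

xs = np.array([0.0, 2.0, 5.0, 6.0, 10.0])    # assessed points: P(X <= x_k)
Fs = np.array([0.0, 0.25, 0.50, 0.75, 1.00])

def fmed_density(x):
    """Density is (delta F)/(delta x) on each fractile interval."""
    k = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(xs) - 2)
    return (Fs[k + 1] - Fs[k]) / (xs[k + 1] - xs[k])

for x in (1.0, 3.0, 5.5, 8.0):
    print(f"f({x}) = {fmed_density(x):.4f}")
```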
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
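A 1-D toy version of the extraction (invented sizes and PSF width, with scipy's damped LSQR standing in for the authors' GPU-parallelized solver):

```python
# Build a sparse convolution matrix from a Gaussian profile and solve the
# damped least squares problem with LSQR.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

n, half = 500, 6
offsets = np.arange(-half, half + 1)
profile = np.exp(-offsets**2 / (2 * 2.0**2))
profile /= profile.sum()
A = sparse.diags(profile, offsets, shape=(n, n), format="csr")

rng = np.random.default_rng(0)            # synthetic "spectrum": sharp lines
x_true = np.zeros(n)
x_true[rng.integers(20, n - 20, size=12)] = rng.uniform(0.5, 2.0, size=12)
b = A @ x_true + rng.normal(0, 0.01, n)   # blurred plus noise

x_hat = lsqr(A, b, damp=0.05)[0]          # regularized LSQR solution
print("max abs error:", np.abs(x_hat - x_true).max())
```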
Clauser-Horne-Shimony-Holt violation and the entropy-concurrence plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derkacz, Lukasz; Jakobczyk, Lech
2005-10-15
We characterize violation of Clauser-Horne-Shimony-Holt (CHSH) inequalities for mixed two-qubit states by their mixedness and entanglement. The class of states that have maximum degree of CHSH violation for a given linear entropy is also constructed.
Reduced nocturnal ACTH-driven cortisol secretion during critical illness
Boonen, Eva; Meersseman, Philippe; Vervenne, Hilke; Meyfroidt, Geert; Guïza, Fabian; Wouters, Pieter J.; Veldhuis, Johannes D.
2014-01-01
Recently, during critical illness, cortisol metabolism was found to be reduced. We hypothesize that such reduced cortisol breakdown may suppress pulsatile ACTH and cortisol secretion via feedback inhibition. To test this hypothesis, nocturnal ACTH and cortisol secretory profiles were constructed by deconvolution analysis from plasma concentration time series in 40 matched critically ill patients and eight healthy controls, excluding diseases or drugs that affect the hypothalamic-pituitary-adrenal axis. Blood was sampled every 10 min between 2100 and 0600 to quantify plasma concentrations of ACTH and (free) cortisol. Approximate entropy, an estimation of process irregularity, cross-approximate entropy, a measure of ACTH-cortisol asynchrony, and ACTH-cortisol dose-response relationships were calculated. Total and free plasma cortisol concentrations were higher at all times in patients than in controls (all P < 0.04). Pulsatile cortisol secretion was 54% lower in patients than in controls (P = 0.005), explained by reduced cortisol burst mass (P = 0.03), whereas cortisol pulse frequency (P = 0.35) and nonpulsatile cortisol secretion (P = 0.80) were unaltered. Pulsatile ACTH secretion was 31% lower in patients than in controls (P = 0.03), again explained by a lower ACTH burst mass (P = 0.02), whereas ACTH pulse frequency (P = 0.50) and nonpulsatile ACTH secretion (P = 0.80) were unchanged. ACTH-cortisol dose response estimates were similar in patients and controls. ACTH and cortisol approximate entropy were higher in patients (P ≤ 0.03), as was ACTH-cortisol cross-approximate entropy (P ≤ 0.001). We conclude that hypercortisolism during critical illness coincided with suppressed pulsatile ACTH and cortisol secretion and a normal ACTH-cortisol dose response. Increased irregularity and asynchrony of the ACTH and cortisol time series supported non-ACTH-dependent mechanisms driving hypercortisolism during critical illness. PMID:24569590
Maximum entropy, fluctuations and priors
NASA Astrophysics Data System (ADS)
Caticha, A.
2001-05-01
The method of maximum entropy (ME) is extended to address the following problem: once one accepts that the ME distribution is to be preferred over all others, to what extent are distributions with lower entropy ruled out? Two applications are given. The first is to the theory of thermodynamic fluctuations. The formulation is exact, covariant under changes of coordinates, and allows fluctuations of both the extensive and the conjugate intensive variables. The second application is to the construction of an objective prior for Bayesian inference. The prior obtained by following the ME method to its inevitable conclusion turns out to be a special case (α=1) of what are currently known as entropic priors.
Remarks on the maximum entropy method applied to finite temperature lattice QCD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
UMEDA, T.; MATSUFURU, H.
2005-07-25
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests that one should perform to ensure the reliability of the results, and we also apply them using mock and lattice QCD data.
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics (AO) image restoration. Using JMAP as the basic principle, we establish the joint log-likelihood function of multi-frame AO images based on Gaussian noise models. We first develop a predicted PSF model for the wavefront phase effect by combining the observing conditions and the AO system characteristics; we then derive iterative solution formulas for the AO image and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) and Richardson-Lucy IBD algorithms, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. These results have practical value for actual AO image restoration.
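As a point of comparison, the Richardson-Lucy scheme mentioned above is short enough to sketch in full; this is the textbook non-blind update (known PSF), not the authors' JMAP method, and the array names are illustrative:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        """Textbook Richardson-Lucy deconvolution with a known PSF."""
        blurred = np.asarray(blurred, dtype=float)
        psf = np.asarray(psf, dtype=float)
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(blurred, 0.5)
        for _ in range(n_iter):
            # Multiplicative update: compare the re-blurred estimate with the data
            # and redistribute the ratio through the mirrored PSF.
            reblurred = fftconvolve(estimate, psf, mode='same')
            ratio = blurred / (reblurred + 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate

The blind (IBD) variants alternate updates of this kind between the image and the PSF, which is what the Wiener-IBD and Richardson-Lucy IBD baselines above do.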
It is not the entropy you produce, rather, how you produce it
Volk, Tyler; Pauluis, Olivier
2010-01-01
The principle of maximum entropy production (MEP) seeks to better understand a large variety of the Earth's environmental and ecological systems by postulating that processes far from thermodynamic equilibrium will ‘adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate’. Our aim in this ‘outside view’, invited by Axel Kleidon, is to focus on what we think is an outstanding challenge for MEP and for irreversible thermodynamics in general: making specific predictions about the relative contribution of individual processes to entropy production. Using studies that compared entropy production in the atmosphere of a dry versus humid Earth, we show that two systems might have the same entropy production rate but very different internal dynamics of dissipation. Using the results of several of the papers in this special issue and a thought experiment, we show that components of life-containing systems can evolve to either lower or raise the entropy production rate. Our analysis makes explicit fundamental questions for MEP that should be brought into focus: can MEP predict not just the overall state of entropy production of a system but also the details of the sub-systems of dissipaters within the system? Which fluxes of the system are those most likely to be maximized? How is it possible for MEP theory to be so domain-neutral that it can claim to apply equally to both purely physical–chemical systems and to systems governed by the ‘laws’ of biological evolution? We conclude that the principle of MEP needs to take on the issue of exactly how entropy is produced. PMID:20368249
NASA Astrophysics Data System (ADS)
DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.
2013-08-01
We analyzed single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data.
Perspective: Maximum caliber is a general variational principle for dynamical systems
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.
2018-01-01
We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
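In symbols (a generic form in my notation, not the paper's equations): maximizing the path entropy \( \mathcal{C}[P] = -\sum_\Gamma P(\Gamma)\ln P(\Gamma) \) over trajectories Γ, subject to normalization and dynamical constraints \( \sum_\Gamma P(\Gamma)\, s_i(\Gamma) = \bar{s}_i \), gives exponential-family path weights

\[
P(\Gamma) = \frac{1}{Z}\exp\Big(\sum_i \lambda_i\, s_i(\Gamma)\Big),
\]

where the Lagrange multipliers λ_i are fixed by the constraint averages and Z normalizes. The abstract's point is that the near-equilibrium relations it lists follow from this distribution in the appropriate limits, while the distribution itself remains valid far from equilibrium.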
Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method
NASA Astrophysics Data System (ADS)
Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung
2015-04-01
In environmental and other scientific applications, some understanding of the geological lithological composition is required. Because of practical restrictions, only a limited amount of data can be acquired, so many spatial statistical methods have been used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete and categorical; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, using the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting
NASA Astrophysics Data System (ADS)
Caticha, Ariel
2007-11-01
What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
Development and application of the maximum entropy method and other spectral estimation techniques
NASA Astrophysics Data System (ADS)
King, W. R.
1980-09-01
This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, and, in its final section, a description of two new, stable, high-resolution spectral estimation techniques. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as maximum entropy spectral analysis (MESA), and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new techniques, named the Wiener-King and Fourier spectral estimation techniques, share a derivation based on the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.
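For concreteness, the classic Burg recursion that underlies MEM/MESA fits an autoregressive model by minimizing forward and backward prediction errors; the maximum entropy spectrum then follows from the AR coefficients. A minimal NumPy sketch of the standard algorithm (my illustration, not code from the report):

    import numpy as np

    def burg_ar(x, order):
        """Burg's method: returns AR coefficients a (a[0] = 1) and noise power E."""
        x = np.asarray(x, dtype=float)
        f, b = x[1:].copy(), x[:-1].copy()   # forward/backward prediction errors
        a = np.array([1.0])
        E = np.dot(x, x) / x.size
        for _ in range(order):
            # Reflection coefficient minimizing the summed error powers.
            k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
            a = np.append(a, 0.0) + k * np.append(a, 0.0)[::-1]  # Levinson update
            E *= 1.0 - k * k
            f, b = (f + k * b)[1:], (b + k * f)[:-1]
        return a, E

    # Maximum entropy (MESA) power spectrum: P(f) = E / |A(e^{2*pi*i*f})|^2,
    # up to a sampling-interval scale factor.
    x = np.sin(2 * np.pi * 0.2 * np.arange(256)) + 0.1 * np.random.randn(256)
    a, E = burg_ar(x, order=8)
    psd = E / np.abs(np.fft.rfft(a, 512)) ** 2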
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arima, Takashi, E-mail: tks@stat.nitech.ac.jp; Mentrelli, Andrea, E-mail: andrea.mentrelli@unibo.it; Ruggeri, Tommaso, E-mail: tommaso.ruggeri@unibo.it
Molecular extended thermodynamics of rarefied polyatomic gases is characterized by two hierarchies of equations for moments of a suitable distribution function in which the internal degrees of freedom of a molecule are taken into account. On the basis of physical relevance, the truncation orders of the two hierarchies are proven to be not independent of each other, and the closure procedures based on the maximum entropy principle (MEP) and on the entropy principle (EP) are proven to be equivalent. The characteristic velocities of the emerging hyperbolic system of differential equations are compared to those obtained for monatomic gases, and the lower bound estimate for the maximum equilibrium characteristic velocity established for monatomic gases (characterized by only one hierarchy for moments with truncation order N) by Boillat and Ruggeri (1997), \(\lambda^{E,\max}_{(N)}/c_0 \geq \sqrt{\tfrac{6}{5}\left(N - \tfrac{1}{2}\right)}\) with \(c_0 = \sqrt{\tfrac{5}{3}\tfrac{k}{m}T}\), is proven to hold also for rarefied polyatomic gases independently of the degrees of freedom of a molecule. Highlights: molecular extended thermodynamics of rarefied polyatomic gases is studied; the relation between the two hierarchies of moment equations is derived; the equivalence of the maximum entropy principle and the entropy principle is proven; the characteristic velocities are compared to those of monatomic gases; the lower bound of the maximum characteristic velocity is estimated.
Quantum Rényi relative entropies affirm universality of thermodynamics.
Misra, Avijit; Singh, Uttam; Bera, Manabendra Nath; Rajagopal, A K
2015-10-01
We formulate a complete theory of quantum thermodynamics in the Rényi entropic formalism exploiting the Rényi relative entropies, starting from the maximum entropy principle. In establishing the first and second laws of quantum thermodynamics, we have correctly identified accessible work and heat exchange in both equilibrium and nonequilibrium cases. The free energy (internal energy minus temperature times entropy) remains unaltered when all the entities entering this relation are suitably defined. Exploiting Rényi relative entropies we have shown that this "form invariance" holds even beyond equilibrium and has profound operational significance in isothermal processes. These results reduce to the Gibbs-von Neumann results when the Rényi entropic parameter α approaches 1. Moreover, it is shown that the universality of the Carnot statement of the second law is the consequence of the form invariance of the free energy, which is in turn a consequence of the maximum entropy principle. Further, the Clausius inequality, which is the precursor to the Carnot statement, is also shown to hold based on the data processing inequalities for the traditional and sandwiched Rényi relative entropies. Thus, we find that the thermodynamics of the nonequilibrium state and its deviation from equilibrium together determine the thermodynamic laws. This is another important manifestation of the concepts of information theory in thermodynamics when they are extended to the quantum realm. Our work is a substantial step towards formulating a complete theory of quantum thermodynamics and a corresponding resource theory.
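For reference, the two Rényi relative entropies exploited above are standard objects; in conventional notation (my addition, not from the abstract), for states ρ, σ and parameter α, the traditional (Petz) version is

\[
D_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\,\ln \mathrm{Tr}\!\left(\rho^{\alpha}\sigma^{1-\alpha}\right),
\]

and the sandwiched version is

\[
\widetilde{D}_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\,\ln \mathrm{Tr}\!\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\!\alpha}\right];
\]

both reduce to the von Neumann relative entropy \( \mathrm{Tr}\,\rho(\ln\rho - \ln\sigma) \) as \( \alpha \to 1 \), which is the Gibbs-von Neumann limit cited in the abstract.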
Low Streamflow Forecasting using Minimum Relative Entropy
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation function in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecast. Different priors, such as uniform, exponential, and Gaussian assumptions, are used to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecast using the proposed method. Minimum relative entropy determines the spectrum of a low streamflow series with higher resolution than conventional methods. The forecast streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
Energy conservation and maximal entropy production in enzyme reactions.
Dobovišek, Andrej; Vitas, Marko; Brumen, Milan; Fajmut, Aleš
2017-08-01
A procedure for maximization of the density of entropy production in a single stationary two-step enzyme reaction is developed. Under the constraints of mass conservation, a fixed equilibrium constant of the reaction, and fixed products of the forward and backward enzyme rate constants, the existence of a maximum in the density of entropy production is demonstrated. In the state with maximal density of entropy production, the optimal enzyme rate constants, the stationary concentrations of the substrate and the product, the stationary product yield, and the stationary reaction flux are calculated. Whether these calculated values of the reaction parameters are consistent with their corresponding measured values is tested for the enzyme glucose isomerase. The calculated and measured rate constants agree within an order of magnitude, whereas the calculated reaction flux and product yield differ from their corresponding measured values by less than 20% and 5%, respectively. This indicates that the enzyme glucose isomerase, considered in a nonequilibrium stationary state, as found in experiments using continuous stirred tank reactors, possibly operates close to the state with the maximum in the density of entropy production.
Shock heating of the solar wind plasma
NASA Technical Reports Server (NTRS)
Whang, Y. C.; Liu, Shaoliang; Burlaga, L. F.
1990-01-01
The role played by shocks in heating solar-wind plasma is investigated using data on 413 shocks which were identified from the plasma and magnetic-field data collected between 1973 and 1982 by Pioneer and Voyager spacecraft. It is found that the average shock strength increased with the heliocentric distance outside 1 AU, reaching a maximum near 5 AU, after which the shock strength decreased with the distance; the entropy of the solar wind protons also reached a maximum at 5 AU. An MHD simulation model in which shock heating is the only heating mechanism available was used to calculate the entropy changes for the November 1977 event. The calculated entropy agreed well with the value calculated from observational data, suggesting that shocks are chiefly responsible for heating solar wind plasma between 1 and 15 AU.
NASA Astrophysics Data System (ADS)
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2015-11-01
In economics and the social sciences, inequality measures such as the Gini index and the Pietra index are commonly used to measure statistical dispersion. There is a generalization of the Gini index that includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income; the most widely known of these are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to US family total money income in 2009, 2011, and 2013, and their performance relative to the generalized beta of the second kind family is compared.
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
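A minimal sketch of the idea in its simplest, single-Gaussian form: the Gaussian is the maximum-entropy density consistent with the ensemble's first two moments, so moment-matching followed by resampling already generalizes naive bootstrap resampling. The names and the single-Gaussian restriction are my assumptions; the paper uses mixtures of Gaussians:

    import numpy as np

    def maxent_gaussian_resample(particles, weights, n_out, rng=np.random.default_rng(0)):
        """Resample by drawing from the maximum-entropy (Gaussian) fit to the
        weighted ensemble's mean and covariance."""
        mu = np.average(particles, weights=weights, axis=0)
        cov = np.cov(particles.T, aweights=weights)
        return rng.multivariate_normal(mu, np.atleast_2d(cov), size=n_out)

    # Example: re-inflate a weight-degenerate 100-particle ensemble to 1000 samples.
    particles = np.random.randn(100, 3)
    weights = np.full(100, 1.0 / 100)
    new_ensemble = maxent_gaussian_resample(particles, weights, 1000)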
Maximum entropy and equations of state for random cellular structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivier, N.
Random, space-filling cellular structures (biological tissues, metallurgical grain aggregates, foams, etc.) are investigated. Maximum entropy inference under a few constraints yields structural equations of state relating the size of cells to their topological shape. These relations are known empirically as Lewis's law in botany and Desch's relation in metallurgy. Here, the functional form of the constraints is not known a priori, and one takes advantage of this arbitrariness to increase the entropy further. The resulting structural equations of state are independent of priors; they are measurable experimentally and therefore constitute a direct test for the applicability of MaxEnt inference (given that the structure is in statistical equilibrium, a fact which can be tested by another simple relation, Aboav's law). 23 refs., 2 figs., 1 tab.
Use of mutual information to decrease entropy: Implications for the second law of thermodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S.
1989-05-15
Several theorems on the mechanics of gathering information are proved, and the possibility of violating the second law of thermodynamics by obtaining information is discussed in light of these theorems. Maxwell's demon can lower the entropy of his surroundings by an amount equal to the difference between the maximum entropy of his recording device and its initial entropy, without generating a compensating entropy increase. A demon with human-scale recording devices can reduce the entropy of a gas by a negligible amount only, but the proof of the demon's impracticability leaves open the possibility that systems highly correlated with their environment can reduce the environment's entropy by a substantial amount without increasing entropy elsewhere. In the event that a boundary condition for the universe requires it to be in a state of low entropy when small, the correlations induced between different particle modes during the expansion phase allow the modes to behave like Maxwell's demons during the contracting phase, reducing the entropy of the universe to a low value.
NASA Astrophysics Data System (ADS)
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-01
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
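The static (maximum entropy) side of the equivalence has a standard operational form worth recording; in conventional notation (my paraphrase, not the paper's equations), R replicas are simulated under the restrained potential

\[
V(\mathbf{x}_1,\dots,\mathbf{x}_R) = \sum_{r=1}^{R} V_0(\mathbf{x}_r) + \frac{k}{2}\sum_{j} \Big( \frac{1}{R}\sum_{r=1}^{R} o_j(\mathbf{x}_r) - o_j^{\mathrm{exp}} \Big)^{\!2},
\]

where V_0 is the force field, the o_j are the restrained observables, and the o_j^exp are their equilibrium experimental values. The paper's result is that letting o_j^exp depend on time turns this maximum entropy construction into a maximum caliber one.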
Exact Maximum-Entropy Estimation with Feynman Diagrams
NASA Astrophysics Data System (ADS)
Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.
2018-02-01
A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
Nonequilibrium Entropy in a Shock
Margolin, Len G.
2017-07-19
In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I show that both monotonically increase in the shock. As a result, I propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image that has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
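The objective described above has the generic entropic-MAP form; in schematic notation (my summary, with j the source image, b the prior bias, L the lead field, d the MEG data, and α the regularization weight):

\[
\hat{\jmath} = \arg\max_{j \ge 0}\;\Big[\, \alpha\, S(j) - \tfrac{1}{2}\,\chi^2(j) \Big],
\qquad
S(j) = -\sum_i j_i \ln\frac{j_i}{b_i},
\qquad
\chi^2(j) = \sum_k \frac{\big(d_k - (Lj)_k\big)^2}{\sigma_k^2},
\]

where the χ² term is the likelihood (noise rejection) contribution and the fMRI-derived bias b enters through the entropy prior.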
Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li
2016-01-01
Purpose We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F FDG PET texture parameters. Materials and Methods A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211
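As an illustration of the most robust feature found here, first-order SUV entropy is just the Shannon entropy of the intensity histogram inside the tumor mask; a minimal sketch (the bin count and log base are common conventions, not taken from the paper):

    import numpy as np

    def suv_entropy(suv_voxels, n_bins=64):
        """First-order SUV entropy: Shannon entropy (bits) of the SUV histogram
        over the segmented tumor voxels."""
        hist, _ = np.histogram(suv_voxels, bins=n_bins)
        p = hist[hist > 0] / hist.sum()
        return float(-np.sum(p * np.log2(p)))

    # Example on synthetic "tumor" voxels:
    voxels = np.random.gamma(shape=3.0, scale=1.5, size=5000)
    print(suv_entropy(voxels))

Because the histogram is normalized, the feature is insensitive to global intensity scaling, which is consistent with its reported robustness to the attenuation correction method.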
NASA Astrophysics Data System (ADS)
Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.
2014-12-01
The maximum entropy production (MEP) principle holds that nonequilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the rate of potential energy destruction. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and to orchestrate their metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase the acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.
Giant onsite electronic entropy enhances the performance of ceria for water splitting.
Naghavi, S Shahab; Emery, Antoine A; Hansen, Heine A; Zhou, Fei; Ozolins, Vidvuds; Wolverton, Chris
2017-08-18
Previous studies have shown that a large solid-state entropy of reduction increases the thermodynamic efficiency of metal oxides, such as ceria, for two-step thermochemical water splitting cycles. In this context, the configurational entropy arising from oxygen off-stoichiometry in the oxide has been the focus of most previous work. Here we report a different source of entropy, the onsite electronic configurational entropy, arising from coupling between orbital and spin angular momenta in lanthanide f orbitals. We find that onsite electronic configurational entropy is sizable in all lanthanides, and reaches a maximum value of ≈4.7 kB per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions.
The limit behavior of the evolution of the Tsallis entropy in self-gravitating systems
NASA Astrophysics Data System (ADS)
Zheng, Yahui; Du, Jiulin; Liang, Faku
2017-06-01
In this letter, we study the limit behavior of the evolution of the Tsallis entropy in self-gravitating systems. The study is carried out under two different situations, leading to the same conclusion. Whether in the energy transfer process or in the mass transfer process inside the system, when the nonextensive parameter q is greater than unity the total entropy is bounded; on the contrary, when this parameter is less than unity the total entropy is unbounded. Both theory and observation indicate that q is always greater than unity, so the Tsallis entropy in self-gravitating systems generally exhibits a bounded property. This indicates the existence of a global maximum of the Tsallis entropy, and it is therefore possible for self-gravitating systems to evolve to thermodynamically stable states.
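For reference (the standard definition, not taken from the abstract), the Tsallis entropy of a distribution {p_i} with nonextensive parameter q is

\[
S_q = k\,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
\]

which recovers the Boltzmann-Gibbs entropy \( S = -k\sum_i p_i \ln p_i \) in the limit \( q \to 1 \); the boundedness result above concerns the q > 1 branch.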
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Diffusion Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
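In symbols (my paraphrase of the model described above), the TDF P(D) is a probability density on the manifold of symmetric positive definite 3x3 tensors, and the HARDI signal along gradient direction g with diffusion weighting b is modeled as a continuous Gaussian mixture:

\[
\frac{S(\mathbf{g}, b)}{S_0} = \int_{\mathcal{D}} P(\mathbf{D})\, \exp\!\big(-b\, \mathbf{g}^{\mathsf T} \mathbf{D}\, \mathbf{g}\big)\, d\mathbf{D},
\]

with P estimated by calculus of variations to best fit the measurements; the ODF and the principal fiber directions then follow from P by analytic integration.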
Novel sonar signal processing tool using Shannon entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quazi, A.H.
1996-06-01
Traditionally, conventional signal processing extracts information from sonar signals using amplitude, signal energy, or frequency domain quantities obtained using spectral analysis techniques. The objective here is to investigate an alternate approach, entirely different from traditional signal processing: to utilize the Shannon entropy as a tool for the processing of sonar signals, with emphasis on detection, classification, and localization leading to superior sonar system performance. Traditionally, sonar signals are processed coherently, semi-coherently, or incoherently, depending upon the a priori knowledge of the signals and noise. Here, the detection, classification, and localization technique is based on the concept of the entropy of the random process. Under a constant energy constraint, the entropy of a received process bearing a finite number of sample points is maximum when hypothesis H₀ (that the received process consists of noise alone) is true, and decreases when a correlated signal is present (H₁). Therefore, the strategy used for detection is: (I) calculate the entropy of the received data; (II) compare the entropy with the maximum value; and (III) make a decision: H₁ is assumed if the difference is large compared to a pre-assigned threshold, and H₀ is assumed otherwise. The test statistic is the difference between the entropies under H₀ and H₁. Simulated results are shown for detecting stationary and non-stationary signals in noise, together with results on the detection of defects in a Plexiglas bar using an ultrasonic experiment conducted by Hughes.
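A toy version of the three-step detection strategy, assuming the entropy is estimated from an amplitude histogram and the H₀ reference entropy from a noise-only calibration segment; the names, bin count, and threshold are illustrative assumptions:

    import numpy as np

    def histogram_entropy(x, n_bins=32):
        """Shannon entropy (bits) of the normalized sample-amplitude histogram."""
        x = (x - x.mean()) / x.std()            # constant-energy normalization
        hist, _ = np.histogram(x, bins=n_bins, range=(-4.0, 4.0))
        p = hist[hist > 0] / hist.sum()
        return float(-np.sum(p * np.log2(p)))

    def entropy_detector(window, h0_entropy, threshold=0.2):
        """Declare H1 (signal present) when the entropy drops well below the
        noise-only reference, per steps (I)-(III) above."""
        return (h0_entropy - histogram_entropy(window)) > threshold

    rng = np.random.default_rng(1)
    noise = rng.normal(size=4096)
    h0 = histogram_entropy(noise)               # step (II) reference value
    sig = 2.0 * np.sin(2 * np.pi * 0.05 * np.arange(4096)) + rng.normal(size=4096)
    print(entropy_detector(noise, h0), entropy_detector(sig, h0))  # typically: False True

The Gaussian maximizes entropy at fixed energy, so a correlated (e.g. sinusoidal) component pulls the estimated entropy below the noise-only maximum, which is the detection cue.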
Sgarlata, Carmelo; Mugridge, Jeffrey S; Pluth, Michael D; Tiedemann, Bryan E F; Zito, Valeria; Arena, Giuseppe; Raymond, Kenneth N
2010-01-27
NMR, UV-vis, and isothermal titration calorimetry (ITC) measurements probe different aspects of competing host-guest equilibria as simple alkylammonium guest molecules interact with both the exterior (ion-association) and interior (encapsulation) of the [Ga(4)L(6)](12-) supramolecular assembly in water. Data obtained by each independent technique measure different components of the host-guest equilibria and only when analyzed together does a complete picture of the solution thermodynamics emerge. Striking differences between the internal and external guest binding are found. External binding is enthalpy driven and mainly due to attractive interactions between the guests and the exterior surface of the assembly while encapsulation is entropy driven as a result of desolvation and release of solvent molecules from the host cavity.
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs calculated by a conventional calibration of the SEC retention time axis with peak retention data obtained from the mass spectrometer. The comparison showed that, for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening; however, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer-stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration-sensitive detectors to polymer liquid chromatography.
Predicting protein β-sheet contacts using a maximum entropy-based correlated mutation measure.
Burkoff, Nikolas S; Várnai, Csilla; Wild, David L
2013-03-01
The problem of ab initio protein folding is one of the most difficult in modern computational biology. The prediction of residue contacts within a protein provides a more tractable immediate step. Recently introduced maximum entropy-based correlated mutation measures (CMMs), such as direct information, have been successful in predicting residue contacts. However, most correlated mutation studies focus on proteins that have large good-quality multiple sequence alignments (MSA) because the power of correlated mutation analysis falls as the size of the MSA decreases. However, even with small autogenerated MSAs, maximum entropy-based CMMs contain information. To make use of this information, in this article, we focus not on general residue contacts but contacts between residues in β-sheets. The strong constraints and prior knowledge associated with β-contacts are ideally suited for prediction using a method that incorporates an often noisy CMM. Using contrastive divergence, a statistical machine learning technique, we have calculated a maximum entropy-based CMM. We have integrated this measure with a new probabilistic model for β-contact prediction, which is used to predict both residue- and strand-level contacts. Using our model on a standard non-redundant dataset, we significantly outperform a 2D recurrent neural network architecture, achieving a 5% improvement in true positives at the 5% false-positive rate at the residue level. At the strand level, our approach is competitive with the state-of-the-art single methods achieving precision of 61.0% and recall of 55.4%, while not requiring residue solvent accessibility as an input. http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/
Nonequilibrium Thermodynamics in Biological Systems
NASA Astrophysics Data System (ADS)
Aoki, I.
2005-12-01
1. Respiration: Oxygen uptake by respiration in organisms decomposes macromolecules such as carbohydrates, proteins, and lipids, and liberates high-quality chemical energy, which is then used to drive chemical reactions and the motion of matter in organisms, supporting the lively order in their structure and function. Finally, this chemical energy becomes low-quality heat energy and is discarded to the outside (dissipation function). Accompanying this heat, the entropy production that inevitably arises from irreversibility is also discarded to the outside. Dissipation function and entropy production are estimated from respiration data.

2. Human body: From observed respiration (oxygen absorption) data, the entropy production in the human body can be estimated. Entropy production has been obtained for humans from 0 to 75 years old, and extrapolated to the fertilized egg (the beginning of human life) and to 120 years old (the maximum human life span). Entropy production shows characteristic behavior over the human life span: a rapid early increase during the short growing phase and a slow later decrease during the long aging phase.

3. Ecological communities: From respiration data of eighteen aquatic communities, specific (i.e., per biomass) entropy productions are obtained. They show a two-phase character with respect to trophic diversity: an early increase and a later decrease as trophic diversity increases. Trophic diversity in these aquatic ecosystems is shown to be positively correlated with the degree of eutrophication, and the degree of eutrophication is an "arrow of time" in the hierarchy of aquatic ecosystems. Hence specific entropy production has two phases: an early increase and a later decrease with time.

4. Entropy principle for living systems: The Second Law of Thermodynamics has been expressed as follows. 1) In isolated systems, entropy increases with time and approaches a maximum value; this is the well-known classical Clausius principle. 2) In open systems near equilibrium, entropy production always decreases with time, approaching a minimum stationary level; this is the minimum entropy production principle of Prigogine. These two principles are well established. However, living systems are neither isolated nor near equilibrium, so neither principle can be applied to them. What, then, is the entropy principle for living systems? Answer: entropy production in living systems consists of multiple stages in time: an early increasing stage, a later decreasing stage, and/or intermediate stages. This tendency is supported by various living systems.
Human vision is determined based on information theory.
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-11-03
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
Quantifying Extrinsic Noise in Gene Expression Using the Maximum Entropy Framework
Dixit, Purushottam D.
2013-01-01
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. PMID:23790383
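The "wider-than-Poisson" observation has a compact mathematical expression; in generic notation (my illustration, not the paper's derivation), if the transcription rate λ fluctuates across cells with a maximum entropy distribution Q, the observed copy number distribution is the mixture

\[
P(n) = \int_0^{\infty} \frac{\lambda^{n} e^{-\lambda}}{n!}\, Q(\lambda)\, d\lambda,
\]

whose variance \( \mathrm{Var}(n) = \langle \lambda \rangle + \mathrm{Var}(\lambda) \) always exceeds the Poisson value \( \langle \lambda \rangle \) whenever extrinsic factors make \( \mathrm{Var}(\lambda) > 0 \); for a gamma-distributed λ this mixture is the familiar negative binomial.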
Human vision is determined based on information theory
Delgado-Bonal, Alfonso; Martín-Torres, Javier
2016-01-01
It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S
2017-06-08
Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we propose assessing the performance of the extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sound using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were inputted into the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features.
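A sketch of the feature-extraction stage described above, assuming PyWavelets (`pywt`) and a 1-D breath-sound frame; the wavelet name, decomposition level, and the small epsilon guard are assumptions, not the paper's settings.

```python
# Wavelet packet energy and Shannon-entropy features for a 1-D signal.
import numpy as np
import pywt

def wavelet_packet_features(x, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    energies, entropies = [], []
    for node in wp.get_level(level, order="natural"):
        c = np.asarray(node.data)
        e = np.sum(c ** 2)                   # sub-band energy
        p = c ** 2 / (e + 1e-12)             # normalized coefficient energies
        h = -np.sum(p * np.log2(p + 1e-12))  # Shannon entropy of the sub-band
        energies.append(e)
        entropies.append(h)
    return np.array(energies), np.array(entropies)

x = np.random.randn(1024)  # stand-in for a breath-sound frame
energy_feats, entropy_feats = wavelet_packet_features(x)
print(energy_feats.shape, entropy_feats.shape)  # (8,) (8,) at level 3
```

These per-band vectors would then be screened by ANOVA and fed to the ELM classifier.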
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R
2016-11-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator's temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector's single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal.
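A minimal 1-D Richardson-Lucy sketch of the final deconvolution step: removing an assumed single-photoelectron response from a digitized waveform to sharpen its rising edge. The kernel shape and iteration count are illustrative assumptions, not the paper's settings.

```python
# Richardson-Lucy deconvolution of a 1-D waveform (nonnegative data assumed).
import numpy as np

def richardson_lucy(measured, kernel, iterations=50):
    kernel = kernel / kernel.sum()
    estimate = np.full_like(measured, measured.mean())
    kernel_flipped = kernel[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = measured / (blurred + 1e-12)   # guard against division by zero
        estimate = estimate * np.convolve(ratio, kernel_flipped, mode="same")
    return estimate

t = np.arange(200)
spe = np.exp(-np.arange(40) / 8.0)            # assumed single-photoelectron response
truth = np.zeros(200); truth[[50, 60]] = [1.0, 0.6]
waveform = np.convolve(truth, spe, mode="same")
sharp = richardson_lucy(waveform, spe)        # deconvolved waveform, faster edge
```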
Jia, Feng; Lei, Yaguo; Shan, Hongkai; Lin, Jing
2015-01-01
The early fault characteristics of rolling element bearings carried by vibration signals are quite weak because the signals are generally masked by heavy background noise. To extract the weak fault characteristics of bearings from the signals, an improved spectral kurtosis (SK) method is proposed based on maximum correlated kurtosis deconvolution (MCKD). The proposed method combines the ability of MCKD in indicating the periodic fault transients and the ability of SK in locating these transients in the frequency domain. A simulation signal overwhelmed by heavy noise is used to demonstrate the effectiveness of the proposed method. The results show that MCKD is beneficial to clarify the periodic impulse components of the bearing signals, and the method is able to detect the resonant frequency band of the signal and extract its fault characteristic frequency. Through analyzing actual vibration signals collected from wind turbines and hot strip rolling mills, we confirm that by using the proposed method, it is possible to extract fault characteristics and diagnose early faults of rolling element bearings. Based on the comparisons with the SK method, it is verified that the proposed method is more suitable to diagnose early faults of rolling element bearings. PMID:26610501
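A simplified stand-in for the band-selection idea behind SK (not the paper's full MCKD+SK method): scan candidate frequency bands, band-pass filter the vibration signal, and pick the band whose envelope is most impulsive (maximum kurtosis). The band grid and filter order are assumptions.

```python
# Kurtosis-based resonant band selection for impulsive bearing faults.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

def best_band(x, fs, bands):
    scores = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, x)))  # envelope of the filtered signal
        scores.append(kurtosis(env))
    return bands[int(np.argmax(scores))]

fs = 12_000
x = np.random.randn(fs)                            # stand-in for a bearing signal
bands = [(500, 1500), (1500, 3000), (3000, 5000)]
print("most impulsive band:", best_band(x, fs, bands))
```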
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach helps to estimate model parameters using suspension data of sediment concentration, which shows the advantage of using entropy theory. Finally, the model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
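A minimal sketch of the maximization step assumed in such entropy-based derivations; the paper's actual constraints (formulated for the inverse of the void ratio) differ in detail.

```latex
% Generic MaxEnt step: maximize Shannon entropy of a PDF f subject to
% normalization and one mean constraint; the Lagrange multipliers
% lambda_0, lambda_1 follow from the constraints.
\max_{f}\; H[f] = -\int f(c)\,\ln f(c)\,\mathrm{d}c
\quad \text{s.t.} \quad
\int f(c)\,\mathrm{d}c = 1, \qquad \int c\, f(c)\,\mathrm{d}c = \bar{c},
\qquad \Rightarrow \qquad
f(c) = e^{-\lambda_0 - \lambda_1 c}.
```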
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
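For orientation, the standard definitions the first generalization builds on:

```latex
% von Neumann entropy and relative quantum entropy; non-negativity follows
% from Klein's inequality. The negative relative entropy serves as the
% convex penalty functional described above.
S(\rho) = -\mathrm{Tr}\,\rho \ln \rho, \qquad
S(\rho \,\|\, \sigma) = \mathrm{Tr}\,\rho \left( \ln \rho - \ln \sigma \right) \;\ge\; 0.
```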
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b.
Lorenz, Ralph D.
2010-01-01
The ‘two-box model’ of planetary climate is discussed. This model has been used to demonstrate consistency of the equator–pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b. PMID:20368253
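A toy numerical sketch of the two-box MEP idea: choose the inter-box heat transport F that maximizes entropy production while each box stays in radiative balance. The absorbed fluxes and gray-body emission law are illustrative assumptions, not the paper's parameter values.

```python
# Two-box maximum entropy production (MEP) toy model.
import numpy as np
from scipy.optimize import minimize_scalar

SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
I_EQ, I_POLE = 300.0, 170.0     # assumed absorbed solar fluxes, W m^-2

def entropy_production(F):
    t_eq = ((I_EQ - F) / SIGMA) ** 0.25      # radiative balance, warm box
    t_pole = ((I_POLE + F) / SIGMA) ** 0.25  # radiative balance, cold box
    return F * (1.0 / t_pole - 1.0 / t_eq)   # entropy production, W m^-2 K^-1

res = minimize_scalar(lambda F: -entropy_production(F),
                      bounds=(0.0, I_EQ - 1.0), method="bounded")
print(f"MEP transport: {res.x:.1f} W m^-2")
```

The maximum is interior: zero transport produces no entropy, while transport strong enough to invert the temperature gradient makes the production negative.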
Optimal behavior of viscoelastic flow at resonant frequencies.
Lambert, A A; Ibáñez, G; Cuevas, S; del Río, J A
2004-11-01
The global entropy generation rate in the zero-mean oscillatory flow of a Maxwell fluid in a pipe is analyzed with the aim of determining its behavior at resonant flow conditions. This quantity is calculated explicitly using the analytic expression for the velocity field and assuming isothermal conditions. The global entropy generation rate shows well-defined peaks at the resonant frequencies where the flow displays maximum velocities. It was found that resonant frequencies can be considered optimal in the sense that they maximize the power transmitted to the pulsating flow at the expense of maximum dissipation.
Twenty-five years of maximum-entropy principle
NASA Astrophysics Data System (ADS)
Kapur, J. N.
1983-04-01
The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
Statistical mechanical theory for steady state systems. VI. Variational principles
NASA Astrophysics Data System (ADS)
Attard, Phil
2006-12-01
Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretation and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.
NASA Technical Reports Server (NTRS)
Sohn, Byung-Ju; Smith, Eric A.
1993-01-01
The maximum entropy production principle suggested by Paltridge (1975) is applied to separating the satellite-determined required total transports into atmospheric and oceanic components. Instead of using the excessively restrictive equal energy dissipation hypothesis as a deterministic tool for separating transports between the atmosphere and ocean fluids, the satellite-inferred required 2D energy transports are imposed on Paltridge's energy balance model, which is then solved as a variational problem using the equal energy dissipation hypothesis only to provide an initial guess field. It is suggested that Southern Ocean transports are weaker than previously reported. It is argued that a maximum entropy production principle can serve as a governing rule on macroscale global climate, and, in conjunction with conventional satellite measurements of the net radiation balance, provides a means to decompose atmosphere and ocean transports from the total transport field.
NASA Astrophysics Data System (ADS)
Basso, Vittorio; Russo, Florence; Gerard, Jean-François; Pruvost, Sébastien
2013-11-01
We investigated the entropy change in poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films in the temperature range between -5 °C and 60 °C by direct heat flux calorimetry using Peltier cell heat flux sensors. At the electric field E = 50 MV m⁻¹ the isothermal entropy change attains a maximum of |Δs| = 4.2 J kg⁻¹ K⁻¹ at 31 °C with an adiabatic temperature change ΔT_ad = 1.1 K. At temperatures below the maximum, in the range from 25 °C to -5 °C, the entropy change |Δs| rapidly decreases and the unipolar P vs E relationship becomes hysteretic. This phenomenon is interpreted as indicating that the fluctuations of the polar segments of the polymer chain, responsible for the electrocaloric effect (ECE) in the polymer, become progressively frozen below the relaxor transition.
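For orientation, the standard indirect route to the electrocaloric entropy change; this paper instead measures the heat flux directly, so the relation is given only for comparison.

```latex
% Maxwell-relation estimate of the isothermal entropy change per unit mass
% (rho is the mass density, P the polarization):
\Delta s(T,\, 0 \to E) = \frac{1}{\rho}\int_{0}^{E}
\left(\frac{\partial P}{\partial T}\right)_{E'} \mathrm{d}E' .
```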
Maximum Renyi entropy principle for systems with power-law Hamiltonians.
Bashkirov, A G
2004-09-24
The Renyi distribution ensuring the maximum of Renyi entropy is investigated for a particular case of a power-law Hamiltonian. Both Lagrange parameters alpha and beta can be eliminated. It is found that beta does not depend on a Renyi parameter q and can be expressed in terms of an exponent kappa of the power-law Hamiltonian and an average energy U. The Renyi entropy for the resulting Renyi distribution reaches its maximal value at q=1/(1+kappa) that can be considered as the most probable value of q when we have no additional information on the behavior of the stochastic process. The Renyi distribution for such q becomes a power-law distribution with the exponent -(kappa+1). When q=1/(1+kappa)+epsilon (0
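For reference, the Rényi entropy and the q-exponential function in terms of which such maximizing distributions are usually written (schematic forms; the abstract's specific exponents follow from the power-law Hamiltonian):

```latex
% Renyi entropy, recovering Shannon entropy as q -> 1, and the
% q-exponential; maximization under an energy constraint yields
% p(eps) ~ e_q(-beta * eps), a power law for q < 1.
S_q^{R} = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q}
\;\xrightarrow{\,q \to 1\,}\; -\sum_i p_i \ln p_i ,
\qquad
e_q(x) \equiv \bigl[\,1 + (1-q)\,x\,\bigr]^{\frac{1}{1-q}} .
```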
Maximum one-shot dissipated work from Rényi divergences
NASA Astrophysics Data System (ADS)
Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko
2018-05-01
Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
Maximum one-shot dissipated work from Rényi divergences.
Yunger Halpern, Nicole; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko
2018-05-01
Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
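For orientation, the classical form of the order-infinity Rényi divergence that appears in these bounds (the quantum case replaces distributions by states):

```latex
% Order-infinity Renyi divergence and the proportionality stated above.
D_\infty(P \,\|\, Q) = \ln \max_i \frac{p_i}{q_i},
\qquad
W_{\mathrm{diss}}^{\max} \;\propto\; D_\infty .
```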
A graphic approach to include dissipative-like effects in reversible thermal cycles
NASA Astrophysics Data System (ADS)
Gonzalez-Ayala, Julian; Arias-Hernandez, Luis Antonio; Angulo-Brown, Fernando
2017-05-01
Since the 1980s, a connection between a family of maximum-work reversible thermal cycles and maximum-power finite-time endoreversible cycles has been established. The endoreversible cycles produce entropy at their couplings with the external heat baths. Thus, this kind of cycle can be optimized under criteria of merit that involve entropy production terms. While the relation between the concepts of work and power is quite direct, the finite-time objective functions involving entropy production apparently have no reversible counterparts. In the present paper we show that it is also possible to establish a connection between irreversible cycle models and reversible ones by means of the concept of "geometric dissipation", which concerns the equivalent role played by a deficit of areas between some reversible cycles and the Carnot cycle and by actual dissipative terms in a Curzon-Ahlborn engine.
Steepest entropy ascent for two-state systems with slowly varying Hamiltonians
NASA Astrophysics Data System (ADS)
Militello, Benedetto
2018-05-01
The steepest entropy ascent approach is considered and applied to two-state systems. When the Hamiltonian of the system is time-dependent, the principle of maximum entropy production can still be exploited; arguments to support this fact are given. In the limit of slowly varying Hamiltonians, which allows for the adiabatic approximation for the unitary part of the dynamics, the system exhibits significant robustness to the thermalization process. Specific examples such as a spin in a rotating field and a generic two-state system undergoing an avoided crossing are considered.
Entropy Inequality Violations from Ultraspinning Black Holes.
Hennigar, Robie A; Mann, Robert B; Kubizňák, David
2015-07-17
We construct a new class of rotating anti-de Sitter (AdS) black hole solutions with noncompact event horizons of finite area in any dimension and study their thermodynamics. In four dimensions these black holes are solutions to gauged supergravity. We find that their entropy exceeds the maximum implied from the conjectured reverse isoperimetric inequality, which states that for a given thermodynamic volume, the black hole entropy is maximized for Schwarzschild-AdS space. We use this result to suggest more stringent conditions under which this conjecture may hold.
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
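A sketch of the approximate entropy (ApEn) statistic referenced above, for a hormone concentration-time series; the parameter choices m = 2 and r = 0.2 times the standard deviation follow common conventions and are assumptions here.

```python
# Approximate entropy: low for regular series, higher for irregular ones.
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x, float)
    n = len(x)
    r = r_frac * x.std()

    def phi(m):
        # all length-m templates
        templ = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)   # fraction of templates matching each one
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
irregular = rng.standard_normal(500)
print(approximate_entropy(regular), approximate_entropy(irregular))  # low vs high
```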
NASA Astrophysics Data System (ADS)
Lineweaver, C. H.
2005-12-01
The principle of Maximum Entropy Production (MEP) is being usefully applied to a wide range of non-equilibrium processes including flows in planetary atmospheres and the bioenergetics of photosynthesis. Our goal of applying the principle of maximum entropy production to an even wider range of Far From Equilibrium Dissipative Systems (FFEDS) depends on the reproducibility of the evolution of the system from macro-state A to macro-state B. In an attempt to apply the principle of MEP to astronomical and cosmological structures, we investigate the problematic relationship between gravity and entropy. In the context of open and non-equilibrium systems, we use a generalization of the Gibbs free energy to include the sources of free energy extracted by non-living FFEDS such as hurricanes and convection cells. Redox potential gradients and thermal and pressure gradients provide the free energy for a broad range of FFEDS, both living and non-living. However, these gradients have to be within certain ranges. If the gradients are too weak, FFEDS do not appear. If the gradients are too strong FFEDS disappear. Living and non-living FFEDS often have different source gradients (redox potential gradients vs thermal and pressure gradients) and when they share the same gradient, they exploit different ranges of the gradient. In a preliminary attempt to distinguish living from non-living FFEDS, we investigate the parameter space of: type of gradient and steepness of gradient.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
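A sketch of the entropy-ranked decomposition at the core of KECA, on which OKECA builds: eigenpairs of a Gaussian kernel matrix are sorted by their contribution to a Parzen estimate of Rényi quadratic entropy instead of by eigenvalue. The length-scale and data are illustrative; OKECA's extra ICA-style rotation and its gradient-ascent optimization are not reproduced here.

```python
# Kernel entropy component analysis (KECA)-style projection.
import numpy as np
from scipy.spatial.distance import cdist

def keca(X, sigma=1.0, n_components=2):
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
    lam, E = np.linalg.eigh(K)                 # eigenvalues in ascending order
    ones = np.ones(len(X))
    # entropy contribution of each eigenpair to V = (1/N^2) 1^T K 1
    contrib = lam * (E.T @ ones) ** 2
    idx = np.argsort(contrib)[::-1][:n_components]
    lam_sel = np.clip(lam[idx], 0.0, None)     # guard against numerical negatives
    return E[:, idx] * np.sqrt(lam_sel)        # entropy-ranked projections

X = np.random.default_rng(2).normal(size=(100, 3))
Z = keca(X)
print(Z.shape)  # (100, 2)
```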
Constrained maximum consistency multi-path mitigation
NASA Astrophysics Data System (ADS)
Smith, George B.
2003-10-01
Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense [Smith, J. Acoust. Soc. Am. 107 (2000)]. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]
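A sketch of the cross-relation mentioned above: for two channels x1 = s * h1 and x2 = s * h2, we have x1 * h2 = x2 * h1, so the stacked least-squares system has the channel filters as its null vector (up to a scale factor). The filter length L is an assumption.

```python
# Two-channel cross-relation blind channel estimation.
import numpy as np
from scipy.linalg import toeplitz

def cross_relation_estimate(x1, x2, L):
    def conv_matrix(x, L):
        # (N+L-1) x L matrix T such that T @ h == np.convolve(x, h)
        col = np.concatenate([x, np.zeros(L - 1)])
        row = np.zeros(L); row[0] = x[0]
        return toeplitz(col, row)
    A = np.hstack([conv_matrix(x1, L), -conv_matrix(x2, L)])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v = Vt[-1]                      # null vector: [h2, h1], scale ambiguous
    return v[L:], v[:L]             # estimates of h1, h2

rng = np.random.default_rng(3)
s = rng.standard_normal(400)                      # unknown source
h1, h2 = rng.standard_normal(8), rng.standard_normal(8)
x1, x2 = np.convolve(s, h1), np.convolve(s, h2)   # two observed channels
h1_est, h2_est = cross_relation_estimate(x1, x2, L=8)
```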
Statistical theory on the analytical form of cloud particle size distributions
NASA Astrophysics Data System (ADS)
Wu, Wei; McFarquhar, Greg
2017-11-01
Several analytical forms of cloud particle size distributions (PSDs) have been used in numerical modeling and remote sensing retrieval studies of clouds and precipitation, including exponential, gamma, lognormal, and Weibull distributions. However, there is no satisfying physical explanation as to why certain distribution forms preferentially occur instead of others. Theoretically, the analytical form of a PSD can be derived by directly solving the general dynamic equation, but no analytical solutions have been found yet. Instead of using a process-level approach, the use of the principle of maximum entropy (MaxEnt) for determining the analytical form of PSDs from a system perspective is examined here. The issue of variability under coordinate transformations that arises when using the Gibbs/Shannon definition of entropy is identified, and the use of the concept of relative entropy to avoid these problems is discussed. Focusing on cloud physics, the four-parameter generalized gamma distribution is proposed as the analytical form of a PSD using the principle of maximum (relative) entropy with assumptions on power-law relations between state variables, scale invariance, and a further constraint on the expectation of one state variable (e.g., bulk water mass).
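A schematic of the constraint structure behind such a result: maximizing relative entropy subject to normalization and a single power-moment constraint yields a generalized gamma form (the prefactor exponent reflects the choice of reference measure under the assumed power-law state-variable relations).

```latex
% Generalized gamma size distribution for particle diameter D, with
% <D^mu> constrained (e.g., bulk water mass) and multiplier lambda:
N(D) \;\propto\; D^{\,\gamma}\,\exp\!\left(-\lambda\, D^{\mu}\right).
```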
Analysis of rapid eye movement periodicity in narcoleptics based on maximum entropy method.
Honma, H; Ohtomo, N; Kohsaka, M; Fukuda, N; Kobayashi, R; Sakakibara, S; Nakamura, F; Koyama, T
1999-04-01
We examined REM sleep periodicity in typical narcoleptics and patients who had shown signs of a narcoleptic tetrad without HLA-DRB1*1501/DQB1*0602 or DR2 antigens, using spectral analysis based on the maximum entropy method. The REM sleep period of typical narcoleptics showed two peaks, one at 70-90 min and one at 110-130 min at night, and a single peak at around 70-90 min during the daytime. The nocturnal REM sleep period of typical narcoleptics may be composed of several different periods, one of which corresponds to that of their daytime REM sleep.
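A sketch of maximum entropy (Burg) spectral estimation as used for periodicity analysis of this kind: fit an autoregressive model by the Burg recursion, then read dominant periods from the AR power spectrum. The model order and the toy series are assumptions.

```python
# Burg (maximum entropy method) spectral estimation.
import numpy as np

def burg_ar(x, order):
    x = np.asarray(x, float)
    a = np.array([1.0])
    f, b = x[1:].copy(), x[:-1].copy()   # forward/backward prediction errors
    power = np.dot(x, x) / len(x)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
        power *= 1.0 - k ** 2
    return a, power

def mem_spectrum(x, order=12, nfft=1024):
    a, power = burg_ar(x, order)
    return power / np.abs(np.fft.rfft(a, nfft)) ** 2   # MEM power spectrum

# toy series: a 90-minute REM-like cycle sampled once per minute, plus noise
t = np.arange(600)
x = np.sin(2 * np.pi * t / 90) + 0.5 * np.random.default_rng(4).standard_normal(600)
psd = mem_spectrum(x - x.mean())
freqs = np.fft.rfftfreq(1024, d=1.0)                   # cycles per minute
print(f"dominant period: {1 / freqs[np.argmax(psd[1:]) + 1]:.0f} min")
```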
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen
1990-01-01
One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.
Mauda, R.; Pinchas, M.
2014-01-01
Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low). PMID:24723813
Stimulus-dependent Maximum Entropy Models of Neural Population Codes
Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
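The form of the SDME model described above: single-cell drives depend on the stimulus through linear-nonlinear terms, while pairwise couplings are stimulus-independent.

```latex
% Stimulus-dependent maximum entropy model over binary spike words sigma,
% with stimulus-dependent fields h_i(s), couplings J_ij, and partition
% function Z(s):
P(\boldsymbol{\sigma} \mid s) =
\frac{1}{Z(s)} \exp\!\Bigl( \sum_i h_i(s)\,\sigma_i
+ \sum_{i<j} J_{ij}\,\sigma_i \sigma_j \Bigr).
```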
Entropy in self-similar shock profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.
In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.
Entropy in self-similar shock profiles
Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.
2017-07-16
In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.
NASA Astrophysics Data System (ADS)
Whitney, Robert S.
2015-03-01
We investigate the nonlinear scattering theory for quantum systems with strong Seebeck and Peltier effects, and consider their use as heat engines and refrigerators with finite power outputs. This paper gives detailed derivations of the results summarized in a previous paper [R. S. Whitney, Phys. Rev. Lett. 112, 130601 (2014), 10.1103/PhysRevLett.112.130601]. It shows how to use the scattering theory to find (i) the quantum thermoelectric with maximum possible power output, and (ii) the quantum thermoelectric with maximum efficiency at given power output. The latter corresponds to a minimal entropy production at that power output. These quantities are of quantum origin since they depend on system size over electronic wavelength, and so have no analog in classical thermodynamics. The maximal efficiency coincides with Carnot efficiency at zero power output, but decreases with increasing power output. This gives a fundamental lower bound on entropy production, which means that reversibility (in the thermodynamic sense) is impossible for finite power output. The suppression of efficiency by (nonlinear) phonon and photon effects is addressed in detail; when these effects are strong, maximum efficiency coincides with maximum power. Finally, we show in particular limits (typically without magnetic fields) that relaxation within the quantum system does not allow the system to exceed the bounds derived for relaxation-free systems, however, a general proof of this remains elusive.
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori approach (MAP). To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
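For orientation, the role the AIF plays in the kinetics: in the standard Tofts formulation commonly used for DCE-MRI (an assumption here; the paper's exact model may differ), the tissue concentration is a convolution of the arterial input function with an exponential kernel set by the kinetic parameters.

```latex
% Tofts model: tissue concentration C_t from the AIF C_p, transfer
% constant K^trans and efflux rate k_ep.
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
e^{-k_{\mathrm{ep}}\,(t - \tau)}\,\mathrm{d}\tau .
```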
Ecosystem functioning and maximum entropy production: a quantitative test of hypotheses.
Meysman, Filip J R; Bruers, Stijn
2010-05-12
The idea that entropy production puts a constraint on ecosystem functioning is quite popular in ecological thermodynamics. Yet, until now, such claims have received little quantitative verification. Here, we examine three 'entropy production' hypotheses that have been forwarded in the past. The first states that increased entropy production serves as a fingerprint of living systems. The other two hypotheses invoke stronger constraints. The state selection hypothesis states that when a system can attain multiple steady states, the stable state will show the highest entropy production rate. The gradient response principle requires that when the thermodynamic gradient increases, the system's new stable state should always be accompanied by a higher entropy production rate. We test these three hypotheses by applying them to a set of conventional food web models. Each time, we calculate the entropy production rate associated with the stable state of the ecosystem. This analysis shows that the first hypothesis holds for all the food webs tested: the living state shows always an increased entropy production over the abiotic state. In contrast, the state selection and gradient response hypotheses break down when the food web incorporates more than one trophic level, indicating that they are not generally valid.
Towards operational interpretations of generalized entropies
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2010-12-01
The driving force behind our study has been to overcome the difficulties you encounter when you try to extend the clear and convincing operational interpretations of classical Boltzmann-Gibbs-Shannon entropy to other notions, especially to generalized entropies as proposed by Tsallis. Our approach is philosophical, based on speculations regarding the interplay between truth, belief and knowledge. The main result demonstrates that, accepting philosophically motivated assumptions, the only possible measures of entropy are those suggested by Tsallis - which, as we know, include classical entropy. This result constitutes, so it seems, a more transparent interpretation of entropy than previously available. However, further research to clarify the assumptions is still needed. Our study points to the thesis that one should never consider the notion of entropy in isolation - in order to enable a rich and technically smooth study, further concepts, such as divergence, score functions and descriptors or controls should be included in the discussion. This will clarify the distinction between Nature and Observer and facilitate a game theoretical discussion. The usefulness of this distinction and the subsequent exploitation of game theoretical results - such as those connected with the notion of Nash equilibrium - is demonstrated by a discussion of the Maximum Entropy Principle.
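For reference, the Tsallis family referred to above, which recovers the classical Boltzmann-Gibbs-Shannon entropy in the limit q → 1:

```latex
% Tsallis entropy of order q and its Shannon limit.
S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1}
\;\xrightarrow{\,q \to 1\,}\; -\sum_i p_i \ln p_i .
```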
Entropy of international trades
NASA Astrophysics Data System (ADS)
Oh, Chang-Young; Lee, D.-S.
2017-05-01
The organization of international trade is highly complex, shaped by the collective pursuit of economic profit by participating countries with inhomogeneous resources for production. Considering the trade flux as the probability of exporting a product from a country to another, we evaluate the entropy of world trade in the period 1950-2000. The trade entropy has increased with time, and we show that this is mainly due to the extension of trade partnership. For a given number of trade partners, the mean trade entropy is about 60% of the maximum possible entropy, independent of time, which can be regarded as a characteristic of the heterogeneity of trade fluxes and is shown to derive from the scaling and functional behaviors of the universal trade-flux distribution. The correlation and time evolution of individual countries' gross domestic products and numbers of trade partners show that most countries achieved their economic growth partly by extending their trade relationships.
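A sketch of the entropy measure described above: treat normalized trade fluxes as probabilities and compare their Shannon entropy with the maximum possible for the same number of trade links. The flux matrix here is synthetic stand-in data.

```python
# Entropy of a trade-flux matrix relative to its maximum.
import numpy as np

rng = np.random.default_rng(5)
flux = rng.pareto(1.5, size=(50, 50))   # heavy-tailed stand-in for trade fluxes
np.fill_diagonal(flux, 0.0)             # no self-trade

p = flux / flux.sum()                   # p_ij: probability of an export i -> j
nz = p[p > 0]
H = -np.sum(nz * np.log(nz))            # trade entropy
H_max = np.log(nz.size)                 # uniform fluxes over the same links
print(f"H / H_max = {H / H_max:.2f}")   # flux heterogeneity gives a ratio < 1
```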
NASA Astrophysics Data System (ADS)
Luo, L.; Fan, M.; Shen, M. Z.
2007-07-01
Atmospheric turbulence greatly limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image can be modeled as the convolution of the object function with the point spread function. The statistical relationship between the measured image data, the estimated object, and the point spread function follows the Bayes conditional probability distribution, from which a maximum-likelihood formulation is obtained. A blind deconvolution approach based on maximum-likelihood estimation with a real optical band-limit constraint is presented for removing the effect of atmospheric turbulence on this class of images, through minimization of the convolution error function by a conjugate gradient optimization algorithm. As a result, the object function and the point spread function can be estimated simultaneously from a few recorded images by the blind deconvolution algorithm. According to the principles of Fourier optics, the relationship between the telescope optical system parameters and the image band constraint in the frequency domain is formulated for the transformations between the spatial and frequency domains during image processing. Convergence of the algorithm is improved by constraining the estimated functions (the object function and the point spread function) to be nonnegative and the point spread function to be band-limited. To avoid losing Fourier components beyond the cutoff frequency during these transformations, when the sampled image data, the image spatial domain, and the frequency domain have the same size, the detector element (e.g., a pixel in the CCD) should be smaller than a quarter of the diffraction speckle diameter of the telescope for images acquired on the focal plane. The proposed method can easily be applied to the restoration of wide field-of-view turbulence-degraded images because no object support constraint is used in the algorithm. The validity of the method is examined by computer simulation and by restoration of real Alpha Psc astronomical image data. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and the spatial resolution of the object image can reach or exceed the diffraction-limited level.
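A schematic of the maximum-likelihood blind step such methods minimize; the Poisson noise model shown here is a common assumption for photon-limited imaging and may differ in detail from the paper's error function.

```latex
% Negative log-likelihood for recorded image d, object o and PSF p under
% Poisson noise, minimized alternately in o and p subject to nonnegativity
% and the optical band limit (frequencies beyond the cutoff excluded):
-\ln L(o, p) = \sum_{\mathbf{x}} \Bigl[ (o * p)(\mathbf{x})
- d(\mathbf{x}) \ln (o * p)(\mathbf{x}) \Bigr],
\qquad o \ge 0,\quad p \ge 0 .
```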
Jarzynski equality in the context of maximum path entropy
NASA Astrophysics Data System (ADS)
González, Diego; Davis, Sergio
2017-06-01
In the global framework of finding an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as the maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamic settings such as social, financial, and ecological systems.
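The equality in question, with the dissipated-work corollary that follows from Jensen's inequality:

```latex
% Jarzynski equality: path-ensemble average of the nonequilibrium work W
% versus the equilibrium free-energy difference Delta F (beta = 1/k_B T).
\bigl\langle e^{-\beta W} \bigr\rangle = e^{-\beta \Delta F}
\quad\Longrightarrow\quad
\langle W \rangle - \Delta F \;\ge\; 0 .
```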
NASA Astrophysics Data System (ADS)
Khodasevich, I. A.; Voitikov, S. V.; Orlovich, V. A.; Kosmyna, M. B.; Shekhovtsov, A. N.
2016-09-01
Unpolarized spontaneous Raman spectra of crystalline double calcium orthovanadates Ca10M(VO4)7 (M = Li, K, Na) in the range 150-1600 cm⁻¹ were measured. Two vibrational bands with full-width at half-maximum (FWHM) of 37-50 cm⁻¹ were found in the regions 150-500 and 700-1000 cm⁻¹. The band shapes were approximated well by deconvolution into Voigt profiles. The band at 700-1000 cm⁻¹ was stronger and deconvoluted into eight Voigt profiles. The frequencies of two strong lines were ~848 and ~862 cm⁻¹ for Ca10Li(VO4)7; ~850 and ~866 cm⁻¹ for Ca10Na(VO4)7; and ~844 and ~866 cm⁻¹ for Ca10K(VO4)7. The Lorentzian width parameters of these lines in the Voigt profiles were ~5 times greater than those of the Gaussian width parameters. The FWHM of the Voigt profiles were ~18-42 cm⁻¹. The two strongest lines had widths of 21-25 cm⁻¹. The vibrational band at 300-500 cm⁻¹ was ~5-6 times weaker than that at 700-1000 cm⁻¹ and was deconvoluted into four lines with widths of 25-40 cm⁻¹. The large FWHM of the Raman lines indicated that the crystal structures were disordered. These crystals could be of interest for Raman conversion of pico- and femtosecond laser pulses because of the intense vibrations with large FWHM in the Raman spectra.
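A sketch of the band-shape step with scipy, where `scipy.special.voigt_profile` gives the Gaussian-Lorentzian convolution; the two-line synthetic band, start values, and line count here are assumptions (the paper deconvolves eight components in the 700-1000 cm⁻¹ band).

```python
# Fitting a sum of Voigt profiles to a spectral band.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def two_voigts(x, a1, c1, s1, g1, a2, c2, s2, g2):
    return (a1 * voigt_profile(x - c1, s1, g1)
            + a2 * voigt_profile(x - c2, s2, g2))

x = np.linspace(800, 900, 500)                 # Raman shift, cm^-1
y = two_voigts(x, 10, 848, 3, 8, 8, 862, 3, 8) # synthetic band
y += 0.02 * np.random.default_rng(6).standard_normal(x.size)

p0 = [8, 845, 2, 5, 8, 865, 2, 5]              # rough starting guesses
popt, _ = curve_fit(two_voigts, x, y, p0=p0)
print("fitted centres:", popt[1], popt[5])
```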
High resolution schemes and the entropy condition
NASA Technical Reports Server (NTRS)
Osher, S.; Chakravarthy, S.
1983-01-01
A systematic procedure for constructing semidiscrete, second order accurate, variation diminishing, five point band width, approximations to scalar conservation laws, is presented. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, convergence is proven to be the unique physically correct solution. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first order accurate scheme, and show (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.
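For orientation, the single discrete entropy inequality such semidiscrete schemes are built to satisfy, for an entropy pair (U, F) with U convex and a consistent numerical entropy flux G:

```latex
% Semidiscrete cell entropy inequality at grid point j:
\frac{\mathrm{d}}{\mathrm{d}t}\, U(u_j)
+ \frac{1}{\Delta x}\,\bigl( G_{j+1/2} - G_{j-1/2} \bigr) \;\le\; 0 .
```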
Kleidon, A.
2010-01-01
The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion. PMID:20368248
Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions
NASA Astrophysics Data System (ADS)
Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus
2017-10-01
We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.
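For a single (diagonal) channel, the standard MEM objective balances the data misfit against the Shannon-Jaynes entropy relative to a default model D; the paper's contribution is extending the entropy term to possibly negative, matrix-valued spectral functions.

```latex
% Standard MEM functional with hyperparameter alpha and default model D:
Q[A] = \alpha\, S[A] - \tfrac{1}{2}\,\chi^2[A], \qquad
S[A] = \int \mathrm{d}\omega \,\Bigl[ A(\omega) - D(\omega)
- A(\omega) \ln \frac{A(\omega)}{D(\omega)} \Bigr].
```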
Kleidon, A
2010-05-12
The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology for determining the bathymetry of river cross-sections during floods from sampled surface flow velocity and existing low-flow hydraulic data is developed. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to a velocity data set from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, i.e., surface flow velocity and flow depth, could potentially be sensed by new sensors operating aboard an aircraft or satellite.
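The entropic mean-maximum velocity relation referred to above is, in Chiu's formulation, a one-parameter link between the cross-sectional mean and maximum velocities:

```latex
% Chiu's entropic relation, with M the entropy parameter of the channel:
\frac{\bar{u}}{u_{\max}} = \frac{e^{M}}{e^{M} - 1} - \frac{1}{M}\,.
```

Combined with the entropy-derived flow area, this converts surface-velocity observations into a discharge estimate.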
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
Bayesian approach to spectral function reconstruction for Euclidean quantum field theories.
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map
NASA Astrophysics Data System (ADS)
Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad
2016-06-01
In recent years, there has been increasing interest in the security of digital images. This study focuses on gray-scale image encryption using dynamic harmony search (DHS). First, a chaotic map is used to create cipher images, and then the maximum entropy and minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, the diffusion of a plain image is performed using DHS with entropy maximization as the fitness function. In the second step, a horizontal and vertical permutation is applied to the best cipher image obtained in the previous step, with DHS now minimizing the correlation coefficient as the fitness function. The simulation results show that the proposed method attains a maximum entropy of approximately 7.9998 and a minimum correlation coefficient of approximately 0.0001.
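The entropy fitness in the diffusion step is the Shannon entropy of the gray-level histogram, which is bounded by 8 bits/pixel for 256 gray levels; this is why the reported optimum approaches 7.9998. A minimal sketch, with a uniformly random array standing in for an actual cipher image:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image's gray-level histogram."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Stand-in for a cipher image; a good cipher should look statistically like this.
cipher = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
print(image_entropy(cipher))   # close to the 8-bit maximum
```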
A maximum entropy thermodynamics of small systems.
Dixit, Purushottam D
2013-05-14
We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.
Statistical mechanics of letters in words
Stephens, Greg J.; Bialek, William
2013-01-01
We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ~92% of the multi-information in four-letter words and even “discovering” words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ~68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon. PMID:20866490
Weak scale from the maximum entropy principle
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
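A quick numerical check of the quoted scaling; the choice T_BBN ~ 1 MeV and the use of the full (rather than reduced) Planck mass are my assumptions, and the observed vev enters only to fix y_e:

```python
# Back-of-the-envelope check of v_h ~ T_BBN^2 / (M_pl * y_e^5), all in GeV.
m_e   = 0.511e-3                 # electron mass
v_obs = 246.0                    # observed Higgs vev (used only to fix y_e)
y_e   = 2**0.5 * m_e / v_obs     # electron Yukawa coupling, ~2.9e-6
T_BBN = 1e-3                     # BBN temperature ~ 1 MeV (assumed)
M_pl  = 1.22e19                  # Planck mass

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")    # a few hundred GeV, i.e. O(300 GeV)
```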
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, William; Zucker, Jeremy; Baxter, Douglas
We report the application of a recently proposed approach for modeling biological systems using a maximum entropy production rate principle in lieu of having in vivo rate constants. The method is applied in four steps: (1) a new ODE-based optimization approach based on Marcelin's 1910 mass action equation is used to obtain the maximum entropy distribution, (2) the predicted metabolite concentrations are compared to those generally expected from experiment using a loss function from which post-translational regulation of enzymes is inferred, (3) the system is re-optimized with the inferred regulation, from which rate constants are determined from the metabolite concentrations and reaction fluxes, and finally (4) a full ODE-based, mass action simulation with rate parameters and allosteric regulation is obtained. From the last step, the power characteristics and resistance of each reaction can be determined. The method is applied to the central metabolism of Neurospora crassa, and the flow of material through the three competing pathways of upper glycolysis, the non-oxidative pentose phosphate pathway, and the oxidative pentose phosphate pathway is evaluated as a function of the NADP/NADPH ratio. It is predicted that regulation of phosphofructokinase (PFK) and flow through the pentose phosphate pathway are essential for preventing an extreme level of fructose 1,6-bisphosphate accumulation. Such an extreme level of fructose 1,6-bisphosphate would otherwise result in a glassy cytoplasm with limited diffusion, dramatically decreasing the entropy and energy production rate and, consequently, biological competitiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gubler, Philipp; Yamamoto, Naoki
2015-05-15
Making use of the operator product expansion, we derive a general class of sum rules for the imaginary part of the single-particle self-energy of the unitary Fermi gas. The sum rules are analyzed numerically with the help of the maximum entropy method, which allows us to extract the single-particle spectral density as a function of both energy and momentum. These spectral densities contain basic information on the properties of the unitary Fermi gas, such as the dispersion relation and the superfluid pairing gap, for which we obtain reasonable agreement with the available results based on quantum Monte-Carlo simulations.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density given some averages as constraints and the appropriate ensemble is elucidated.
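A minimal sketch of the kind of maximum entropy assignment described, assuming discrete outcomes and a single mean-energy constraint (the energy levels and the constrained mean are illustrative); the Lagrange multiplier is found by root-finding, and the result is the familiar Gibbs form:

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # outcome "energies" (illustrative)
E_mean = 1.2                          # constraint <E> (illustrative)

def mean_energy(beta):
    w = np.exp(-beta * E)
    return (E * w).sum() / w.sum()

# Solve <E>(beta) = E_mean; the maxent distribution is p_i ∝ exp(-beta*E_i).
beta = brentq(lambda b: mean_energy(b) - E_mean, -50.0, 50.0)
p = np.exp(-beta * E)
p /= p.sum()
print(beta, p, (p * E).sum())   # the last value reproduces the constraint
```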
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Monasson, Rémi
2015-09-01
The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons for the success of the MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measurements of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.
Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.
2002-01-01
We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656
del Jesus, Manuel; Foti, Romano; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio
2012-01-01
The spatial organization of functional vegetation types in river basins is a major determinant of their runoff production, biodiversity, and ecosystem services. The optimization of different objective functions has been suggested to control the adaptive behavior of plants and ecosystems, often without a compelling justification. Maximum entropy production (MEP), rooted in thermodynamics principles, provides a tool to justify the choice of the objective function controlling vegetation organization. The application of MEP at the ecosystem scale results in maximum productivity (i.e., maximum canopy photosynthesis) as the thermodynamic limit toward which the organization of vegetation appears to evolve. Maximum productivity, which incorporates complex hydrologic feedbacks, allows us to reproduce the spatial macroscopic organization of functional types of vegetation in a thoroughly monitored river basin, without the need for a reductionist description of the underlying microscopic dynamics. The methodology incorporates the stochastic characteristics of precipitation and the associated soil moisture on a spatially disaggregated framework. Our results suggest that the spatial organization of functional vegetation types in river basins naturally evolves toward configurations corresponding to dynamically accessible local maxima of the maximum productivity of the ecosystem. PMID:23213227
NASA Astrophysics Data System (ADS)
Calixto, M.; Romera, E.
2015-02-01
We propose a new method to identify transitions from a topological insulator to a band insulator in silicene (the silicon equivalent of graphene) in the presence of perpendicular magnetic and electric fields, by using the Rényi-Wehrl entropy of the quantum state in phase space. Electron-hole entropies display an inversion/crossing behavior at the charge neutrality point for any Landau level, and the combined entropy of particles plus holes turns out to be maximum at this critical point. The result is interpreted in terms of delocalization of the quantum state in phase space. The entropic description presented in this work will be valid in general 2D gapped Dirac materials, with a strong intrinsic spin-orbit interaction, isostructural with silicene.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization was proposed. First, single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold selection formulae were computed recursively, eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map, markedly accelerating the search for the two optimal thresholds. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation, and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
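A minimal sketch of the tent-map chaotic sequence used in place of uniform random numbers during the local-search perturbation; the map parameters, step size, and threshold values are illustrative assumptions, since the abstract does not give them:

```python
import numpy as np

def tent_map_sequence(n, x0=0.37, mu=1.99):
    """Chaotic sequence from the tent map x_{k+1} = mu * min(x, 1 - x), values in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * min(x, 1.0 - x)
        xs[k] = x
    return xs

# Perturb a candidate threshold pair with chaotic rather than random numbers.
seq = tent_map_sequence(2)
t1, t2 = 80, 160                       # current best dual thresholds (illustrative)
step = 10
t1_new = int(t1 + step * (2 * seq[0] - 1))
t2_new = int(t2 + step * (2 * seq[1] - 1))
print(t1_new, t2_new)
```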
On the morphological instability of a bubble during inertia-controlled growth
NASA Astrophysics Data System (ADS)
Martyushev, L. M.; Birzina, A. I.; Soboleva, A. S.
2018-06-01
The morphological stability of a spherical bubble growing under inertia control is analyzed. By comparing the entropy production of distorted and undistorted surfaces and using the maximum entropy production principle, the bubble is shown to be morphologically unstable to distortions of arbitrary amplitude. This result explains a number of experiments in which surface roughness of bubbles was observed during their explosive growth.
Entropy Production in Collisionless Systems. II. Arbitrary Phase-space Occupation Numbers
NASA Astrophysics Data System (ADS)
Barnes, Eric I.; Williams, Liliya L. R.
2012-04-01
We present an analysis of two thermodynamic techniques for determining equilibria of self-gravitating systems. One is the Lynden-Bell (LB) entropy maximization analysis that introduced violent relaxation. Since we do not use the Stirling approximation, which is invalid at small occupation numbers, our systems have finite mass, unlike LB's isothermal spheres. (Instead of Stirling, we utilize a very accurate smooth approximation for ln x!.) The second analysis extends entropy production extremization to self-gravitating systems, also without the use of the Stirling approximation. In addition to the LB statistical family characterized by the exclusion principle in phase space, and designed to treat collisionless systems, we also apply the two approaches to the Maxwell-Boltzmann (MB) families, which have no exclusion principle and hence represent collisional systems. We implicitly assume that all of the phase space is equally accessible. We derive entropy production expressions for both families and give the extremum conditions for entropy production. Surprisingly, our analysis indicates that extremizing entropy production rate results in systems that have maximum entropy, in both LB and MB statistics. In other words, both thermodynamic approaches lead to the same equilibrium structures.
Trends in entropy production during ecosystem development in the Amazon Basin.
Holdaway, Robert J; Sparrow, Ashley D; Coomes, David A
2010-05-12
Understanding successional trends in energy and matter exchange across the ecosystem-atmosphere boundary layer is an essential focus in ecological research; however, a general theory describing the observed pattern remains elusive. This paper examines whether the principle of maximum entropy production could provide the solution. A general framework is developed for calculating entropy production using data from terrestrial eddy covariance and micrometeorological studies. We apply this framework to data from eight tropical forest and pasture flux sites in the Amazon Basin and show that forest sites had consistently higher entropy production rates than pasture sites (0.461 versus 0.422 W m^-2 K^-1, respectively). It is suggested that during development, changes in canopy structure minimize surface albedo, and development of deeper root systems optimizes access to soil water and thus potential transpiration, resulting in lower surface temperatures and increased entropy production. We discuss our results in the context of a theoretical model of entropy production versus ecosystem developmental stage. We conclude that, although further work is required, entropy production could potentially provide a much-needed theoretical basis for understanding the effects of deforestation and land-use change on the land-surface energy balance.
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and to the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs, and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
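A hedged sketch of the entropy-switched strategy selection; the entropy measure (per-dimension histogram entropy of the swarm positions) and the threshold values are assumptions for illustration, since the abstract does not define them:

```python
import numpy as np

def population_entropy(positions, bins=10):
    """Shannon entropy of the swarm's positions, averaged over dimensions."""
    H = 0.0
    for d in range(positions.shape[1]):
        counts, _ = np.histogram(positions[:, d], bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        H += -(p * np.log(p)).sum()
    return H / positions.shape[1]

H_MAX, H_MIN = 1.8, 0.6              # thresholds (illustrative, problem-dependent)
swarm = np.random.rand(30, 5)        # 30 particles, 5 dimensions
H = population_entropy(swarm)
if H > H_MAX:
    strategy = "convergence"         # pull particles toward the global best
elif H < H_MIN:
    strategy = "divergence"          # re-scatter particles to regain diversity
else:
    strategy = "self-adaptive"
print(H, strategy)
```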
On the pH Dependence of the Potential of Maximum Entropy of Ir(111) Electrodes.
Ganassin, Alberto; Sebastián, Paula; Climent, Víctor; Schuhmann, Wolfgang; Bandarenka, Aliaksandr S; Feliu, Juan
2017-04-28
Studies of the entropy of the components forming the electrode/electrolyte interface can give fundamental insights into the properties of electrified interphases. In particular, the potential where the entropy of formation of the double layer is maximal (potential of maximum entropy, PME) is an important parameter for the characterization of electrochemical systems. Indeed, this parameter determines the majority of electrode processes. In this work, we determine PMEs for Ir(111) electrodes. The latter currently play an important role in understanding electrocatalysis for energy provision; at the same time, iridium is one of the most stable metals against corrosion. For the experiments, we used a combination of the laser-induced potential transient to determine the PME and CO charge displacement to determine the potential of zero total charge (E_PZTC). Both PME and E_PZTC were assessed for perchlorate solutions in the pH range from 1 to 4. Surprisingly, we found that they are located in the potential regions where the adsorption of hydrogen and hydroxyl species takes place, respectively. The PMEs shifted by ~30 mV per pH unit (on the RHE scale). Connections between the PME and the electrocatalytic properties of the electrode surface are discussed.
NASA Astrophysics Data System (ADS)
Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing
2018-01-01
For fast and effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm called maximum entropy fuzzy c-means clustering joint probabilistic data association, which combines fuzzy c-means clustering and the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a measurement originates from a target. The membership value is obtained by optimizing the fuzzy c-means clustering objective function under the maximum entropy principle. To account for the effect of shared (public) measurements, we use a correction factor to adjust the association probability matrix when estimating the state of each target. Because the algorithm avoids confirmation matrix splitting, it overcomes the high computational load of the joint PDA algorithm. Simulations and analysis for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm realizes MTT quickly and efficiently. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm is efficient and has a low computational load, which ensures good performance when tracking multiple targets in a dense cluttered environment.
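The membership form obtained by adding a Shannon-entropy term to the c-means objective has a well-known closed form, sketched below; the squared-distance measure and the temperature-like parameter eta are illustrative assumptions, since the abstract does not spell them out:

```python
import numpy as np

def maxent_memberships(X, centers, eta=1.0):
    """Entropy-regularized c-means memberships:
    u_ij ∝ exp(-||x_i - c_j||^2 / eta), normalized over clusters j."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / eta)
    return w / w.sum(axis=1, keepdims=True)

# Illustrative: two measurements, two predicted target positions.
X = np.array([[0.1, 0.0], [1.9, 2.1]])
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
print(maxent_memberships(X, centers))   # rows sum to 1
```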
Maximum entropy methods for extracting the learned features of deep neural networks.
Finnegan, Alex; Song, Jun S
2017-10-01
New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
Formulating the shear stress distribution in circular open channels based on the Renyi entropy
NASA Astrophysics Data System (ADS)
Khozani, Zohreh Sheikh; Bonakdari, Hossein
2018-01-01
The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that the dimensionless shear stress is a random variable. The Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and of circular channels with flat beds of deposited sediment. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance improved with increasing flow depth (mean relative error (RE) of 0.0414) and deteriorated only slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depths. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise, predicting the shear stress distribution in a circular channel with an RE of 0.0480 and in a circular channel with a flat bed with an RE of 0.0488.
Wellskins and slug tests: where's the bias?
NASA Astrophysics Data System (ADS)
Rovey, C. W.; Niemann, W. L.
2001-03-01
Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities (K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Starting from the intractability of complex systems, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy. From the Shannon entropy and the maximum entropy principle, probability distribution functions of exponential form, of power-law form, and of the product form of a power function and an exponential function are then derived. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Because the power-law and product-form distributions cannot be derived from the equal-probability hypothesis but do follow from the maximum entropy principle, it is concluded that the maximum entropy principle is the more fundamental principle, embodying broader concepts and revealing the laws of motion of objects more fundamentally. The principle also reveals an intrinsic link between Nature and the diverse objects of human society.
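In outline, the derivation described is the standard maximum entropy calculation; written generically in my notation, not the paper's:

```latex
\max_{p}\; -\!\int p(x)\,\ln p(x)\,dx
\quad\text{s.t.}\quad
\int p(x)\,dx = 1,\qquad
\langle \ln x \rangle = c_1,\qquad
\langle x \rangle = c_2,
\qquad\Longrightarrow\qquad
p(x) \;\propto\; e^{-\lambda_1 \ln x \,-\, \lambda_2 x}
\;=\; x^{-\lambda_1}\, e^{-\lambda_2 x}.
```

Keeping only the constraint on ⟨x⟩ gives the exponential form, only ⟨ln x⟩ gives the power law, and both together give the product form named in the abstract.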
NASA Astrophysics Data System (ADS)
Neri, Izaak; Roldán, Édgar; Jülicher, Frank
2017-01-01
We study the statistics of infima, stopping times, and passage probabilities of entropy production in nonequilibrium steady states, and we show that they are universal. We consider two examples of stopping times: first-passage times of entropy production and waiting times of stochastic processes, which are the times when a system reaches a given state for the first time. Our main results are as follows: (i) The distribution of the global infimum of entropy production is exponential with mean equal to minus Boltzmann's constant; (ii) we find exact expressions for the passage probabilities of entropy production; (iii) we derive a fluctuation theorem for stopping-time distributions of entropy production. These results have interesting implications for stochastic processes that can be discussed in simple colloidal systems and in active molecular processes. In particular, we show that the timing and statistics of discrete chemical transitions of molecular processes, such as the steps of molecular motors, are governed by the statistics of entropy production. We also show that the extreme-value statistics of active molecular processes are governed by entropy production; for example, we derive a relation between the maximal excursion of a molecular motor against the direction of an external force and the infimum of the corresponding entropy-production fluctuations. Using this relation, we make predictions for the distribution of the maximum backtrack depth of RNA polymerases, which follow from our universal results for entropy-production infima.
Inverting Monotonic Nonlinearities by Entropy Maximization
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results. PMID:27780261
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results.
Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.
Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason
2017-07-01
Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (F_abs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting F_abs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis.
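For contrast with the stochastic approach, numerical (point-area) deconvolution can be sketched as solving the lower-triangular convolution system relating the input rate, the unit impulse response, and the plasma profile. The profiles below are synthetic and noise-free; with real, noisy data this inversion is ill-conditioned and would need regularization:

```python
import numpy as np

# Synthetic one-compartment example (illustrative values).
dt = 0.5
t = np.arange(0, 24, dt)
uir = np.exp(-0.2 * t)                  # assumed unit impulse response
rate_true = np.where(t < 8, 0.1, 0.0)   # true zero-order input rate
C = np.convolve(rate_true, uir)[:len(t)] * dt   # simulated plasma profile

# Point-area deconvolution: solve C = A @ rate with A[i, j] = uir[i - j] * dt, j <= i.
n = len(t)
A = np.zeros((n, n))
for i in range(n):
    A[i, :i + 1] = uir[i::-1] * dt
rate_est = np.linalg.solve(A, C)
print(np.abs(rate_est - rate_true).max())   # ~0 in this noise-free setting
```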
Use and validity of principles of extremum of entropy production in the study of complex systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitor Reis, A.
2014-07-15
It is shown how both principles of extremum of entropy production, which are often used in the study of complex systems, follow from the maximization of overall system conductivities under appropriate constraints. In this way, the maximum rate of entropy production (MEP) occurs when all the forces in the system are kept constant. On the other hand, the minimum rate of entropy production (mEP) occurs when all the currents that cross the system are kept constant. A brief discussion of the validity of the application of the mEP and MEP principles in several cases, and in particular to the Earth's climate, is also presented. Highlights: the principles of extremum of entropy production are not first principles; they result from the maximization of conductivities under appropriate constraints; the conditions of their validity are set explicitly; some long-standing controversies are discussed and clarified.
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes, guided by the global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in a scientifically rigorous way. The method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
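The eigenvector step can be sketched as the maximum entropy random walk construction, assuming a symmetric coupling matrix A built from the local diffusion field (the 4-node A below is purely illustrative, not the paper's data):

```python
import numpy as np

# Illustrative symmetric coupling matrix between 4 locations.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Dominant eigenpair of A (Perron eigenvector, taken with uniform sign).
lam_all, vecs = np.linalg.eigh(A)
lam, psi = lam_all[-1], np.abs(vecs[:, -1])

# Maximum-entropy transition probabilities: P_ij = A_ij * psi_j / (lam * psi_i).
P = A * psi[None, :] / (lam * psi[:, None])
print(P.sum(axis=1))   # each row sums to 1, so P is a valid transition matrix
```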
A model for Entropy Production, Entropy Decrease and Action Minimization in Self-Organization
NASA Astrophysics Data System (ADS)
Georgiev, Georgi; Chatterjee, Atanu; Vu, Thanh; Iannacchione, Germano
In self-organization, energy gradients across complex systems lead to changes in the structure of those systems, decreasing their internal entropy to ensure the most efficient energy transport and therefore maximum entropy production in the surroundings. This approach stems from fundamental variational principles in physics, such as the principle of least action, and is coupled to the total energy flowing through a system, which leads to an increase in action efficiency. We compare energy transport through a fluid cell in which the molecules move randomly with one in which convection cells can form. We examine the sign of the entropy change and the action needed for the motion inside those systems. The system in which convective motion occurs reduces the time for energy transmission compared to random motion. In more complex systems, such convection cells form networks of transport channels consistent with the equations of motion in this geometry. Such transport networks are an essential feature of complex systems in biology, ecology, economics and society.
NASA Astrophysics Data System (ADS)
Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng
2013-11-01
A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts on multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
Entropic criterion for model selection
NASA Astrophysics Data System (ADS)
Tseng, Chih-Yuan
2006-10-01
Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as the selection criterion. Yet this raises two questions: why use this criterion, and are there other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion on a physical problem, simple fluids, and the results are promising.
Steepest entropy ascent quantum thermodynamic model of electron and phonon transport
NASA Astrophysics Data System (ADS)
Li, Guanchen; von Spakovsky, Michael R.; Hin, Celine
2018-01-01
An advanced nonequilibrium thermodynamic model for electron and phonon transport is formulated based on the steepest-entropy-ascent quantum thermodynamics framework. This framework, based on the principle of steepest entropy ascent (or the equivalent maximum entropy production principle), inherently satisfies the laws of thermodynamics and mechanics and is applicable at all temporal and spatial scales even in the far-from-equilibrium realm. Specifically, the model is proven to recover the Boltzmann transport equations in the near-equilibrium limit and the two-temperature model of electron-phonon coupling when no dispersion is assumed. The heat and mass transport at a temperature discontinuity across a homogeneous interface where the dispersion and coupling of electron and phonon transport are both considered are then modeled. Local nonequilibrium system evolution and nonquasiequilibrium interactions are predicted and the results discussed.
Conserved actions, maximum entropy and dark matter haloes
NASA Astrophysics Data System (ADS)
Pontzen, Andrew; Governato, Fabio
2013-03-01
We use maximum entropy arguments to derive the phase-space distribution of a virialized dark matter halo. Our distribution function gives an improved representation of the end product of violent relaxation. This is achieved by incorporating physically motivated dynamical constraints (specifically on orbital actions) which prevent arbitrary redistribution of energy. We compare the predictions with three high-resolution dark matter simulations of widely varying mass. The numerical distribution function is accurately predicted by our argument, producing an excellent match for the vast majority of particles. The remaining particles constitute the central cusp of the halo (≲4 per cent of the dark matter). They can be accounted for within the presented framework once the short dynamical time-scales of the centre are taken into account.
Optimal protocol for maximum work extraction in a feedback process with a time-varying potential
NASA Astrophysics Data System (ADS)
Kwon, Chulan
2017-12-01
The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
Numerical optimization using flow equations
NASA Astrophysics Data System (ADS)
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
LIBOR troubles: Anomalous movements detection based on maximum entropy
NASA Astrophysics Data System (ADS)
Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria
2016-05-01
According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR's integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.
NASA Astrophysics Data System (ADS)
De Martino, Daniele
2017-12-01
In this work, maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control of the average growth (optimization) and of its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli, where it quantifies the metabolic activity of slow-growing phenotypes and provides a quantitative mapping to metabolic fluxes, opening the possibility of detecting coexistence from flux data. A preliminary analysis of data for E. coli cultures under standard conditions shows a degeneracy of the inferred parameters that extends into the coexistence region.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval from the Maximum Entropy Principle. Both the theoretical derivation and the data statistics indicate that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we point out the constraint conditions that produce this distribution form and discuss how the constraints affect the distribution function. We conjecture that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, a point that may have been overlooked in previous studies.
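With ⟨x⟩ and ⟨ln x⟩ as the constraints, the maximum entropy density p(x) ∝ x^(-α) e^(-x/x0) is a gamma density with shape 1 - α, which stays normalizable even when the fitted exponent α is below 1.0, consistent with the point above. A sketch with illustrative parameter values (not the Beijing estimates):

```python
import numpy as np
from scipy.stats import gamma

alpha, x0 = 0.4, 60.0                     # illustrative: exponent < 1, cutoff scale
rv = gamma(a=1.0 - alpha, scale=x0)       # pdf ∝ x^{-alpha} * exp(-x / x0)
samples = rv.rvs(size=100_000, random_state=0)
print(samples.mean(), np.log(samples).mean())   # the two constrained moments
```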
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.
Bergeron, Dominic; Tremblay, A-M S
2016-08-01
Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ^2 with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation
NASA Astrophysics Data System (ADS)
Bergeron, Dominic; Tremblay, A.-M. S.
2016-08-01
Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ^2 with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.
Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng
2017-02-01
Guided-wave echoes from weakly reflective pipe defects are usually obscured by coherent noise and difficult to interpret. In this paper, a deconvolution imaging method is proposed to reconstruct defect images from synthetically focused guided-wave signals with enhanced axial resolution. A compact transducer, scanning circumferentially around the pipe, is used to receive guided-wave echoes from discontinuities at a distance. This method achieves a higher circumferential sampling density than arrayed transducers: up to 72 sampling spots per lap for a pipe with a diameter of 180 mm. A noise suppression technique is used to enhance the signal-to-noise ratio. The enhancement in both signal-to-noise ratio and axial resolution is experimentally validated by the detection of two kinds of artificial defects: a pitting defect 5 mm in diameter and 0.9 mm in maximum depth, and iron pieces attached to the pipe surface. A reconstructed image of the pitting defect is obtained with a 5.87 dB signal-to-noise ratio. By comparing images reconstructed with different down-sampling ratios, it is shown that a high circumferential sampling density is important for inspection sensitivity. A modified full width at half maximum is used as the criterion to evaluate the circumferential extent of the region where iron pieces are attached, which is applicable to defects with inhomogeneous reflection intensity.
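The abstract does not specify the deconvolution filter, so as a generic stand-in the sketch below applies frequency-domain Wiener deconvolution to a synthetic echo train with an assumed Gaussian wavelet; the wavelet shape, reflector positions, and SNR parameter are all illustrative:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Frequency-domain Wiener deconvolution of echo train y with wavelet h."""
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.fft.irfft(G * Y, n)

# Two closely spaced reflectors blurred by a short wavelet (illustrative).
t = np.arange(256)
h = np.exp(-0.5 * ((t - 8) / 3.0) ** 2)             # assumed transducer wavelet
x = np.zeros(256); x[60] = 1.0; x[75] = 0.6         # true reflectivity
y = np.convolve(x, h)[:256] + 0.01 * np.random.default_rng(0).standard_normal(256)

x_hat = wiener_deconvolve(y, h)
print(np.argmax(x_hat))   # near 60: the reflectors are resharpened
```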
Praveen, Radhakrishnan; Prasad Verma, Priya Ranjan; Venkatesan, Jayachandran; Yoon, Dong-Han; Kim, Se-Kwon; Singh, Sandeep Kumar
2017-09-01
The objective of the present investigation was to develop a gastro-retentive controlled release system of carvedilol using the biological macromolecule chitosan. A 3^2 full factorial design was adopted for optimization of tripolyphosphate concentration (X_1) and curing time (X_2). Bead stability in 0.1 N HCl, buoyancy duration, density, drug loading, dissolution efficiency and cumulative percentage release at the 8th hour were evaluated as dependent variables. The levels of X_1 and X_2 for the optimized formulation with maximum desirability were found to be 2.0% w/v and 62.66 min, respectively. The in silico predicted responses and the observed responses were in good agreement (percent bias error: -13.295 to +13.269). SEM images showed numerous pores in the cross-sectional image that confer buoyancy. The AUC_0-∞ of the optimized formulation was 1.47 times higher than that of the suspension, corroborating an enhanced extent of absorption. T_max and mean residence time were significantly higher for the optimized formulation vis-à-vis the suspension. In silico study indicated maximum regional absorption from the duodenum (94.1%) followed by the jejunum (5.6%). The Wagner-Nelson and Loo-Riegelman methods were the preferred deconvolution approaches over numerical deconvolution for establishing the IVIVC. In conclusion, the study showed that a gastro-retentive controlled release system prepared using chitosan could be a potential drug carrier for carvedilol with improved bioavailability.
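The Wagner-Nelson construction used here follows directly from its defining relation F_abs(t) = (C(t) + k_e·AUC_0→t) / (k_e·AUC_0→∞). A minimal sketch on a synthetic one-compartment profile (the rate constants are illustrative, not the carvedilol estimates):

```python
import numpy as np

def wagner_nelson(t, C, ke):
    """Fraction absorbed: F_abs(t) = (C(t) + ke*AUC_0..t) / (ke*AUC_0..inf),
    with AUC_0..inf approximated by the trapezoid AUC plus C_last / ke."""
    auc_t = np.concatenate(([0.0], np.cumsum(np.diff(t) * (C[1:] + C[:-1]) / 2)))
    auc_inf = auc_t[-1] + C[-1] / ke
    return (C + ke * auc_t) / (ke * auc_inf)

# Synthetic one-compartment profile with first-order absorption (illustrative).
t = np.linspace(0, 24, 49)
ka, ke = 0.8, 0.2
C = ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

F = wagner_nelson(t, C, ke)
print(F[::12])   # climbs toward 1 as absorption completes (analytically 1 - e^{-ka*t})
```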
Convex foundations for generalized MaxEnt models
NASA Astrophysics Data System (ADS)
Frongillo, Rafael; Reid, Mark D.
2014-12-01
We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences corresponds to GEFs via two key observations: (1) like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o∼p_θ}[φ(o)] ∈ ∂C(θ); (2) generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_θ') = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.
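For readers less familiar with the objects in observation (2), the following is the standard definition of a Bregman divergence together with the KL special case the abstract refers to; the notation is generic, not taken from the paper.

```latex
% Bregman divergence generated by a differentiable convex F
D_F(x, y) = F(x) - F(y) - \langle \nabla F(y),\, x - y \rangle .

% For the negative Shannon entropy F(p) = -H(p) = \sum_i p_i \log p_i :
D_{-H}(p, q) = \sum_i p_i \log\frac{p_i}{q_i} - \sum_i p_i + \sum_i q_i
             = \mathrm{KL}(p \,\|\, q)
\quad \text{whenever } \textstyle\sum_i p_i = \sum_i q_i = 1 .
```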
Li, Jing Xin; Yang, Li; Yang, Lei; Zhang, Chao; Huo, Zhao Min; Chen, Min Hao; Luan, Xiao Feng
2018-03-01
Quantitative evaluation of ecosystem services is a primary premise for rational resource exploitation and sustainable development. Examining ecosystem services flow provides a scientific method to quantify ecosystem services. We built an assessment indicator system based on land cover/land use under the framework of four types of ecosystem services. The types of ecosystem services flow were reclassified. Using entropy theory, the degree of disorder and the development trend of the indicators and of the urban ecosystem were quantitatively assessed. Beijing was chosen as the study area, and twenty-four indicators were selected for evaluation. The results showed that the entropy value of the Beijing urban ecosystem from 2004 to 2015 was 0.794 and the entropy flow was -0.024, suggesting a high degree of disorder and a system near the verge of ill health. The system reached maximum values three times, while the mean annual variation of the system entropy value increased gradually over three periods, indicating that human activities had negative effects on the urban ecosystem. Entropy flow reached its minimum value in 2007, implying that environmental quality was best in 2007. The determination coefficient for the fitting function of the total permanent population of Beijing and the urban ecosystem entropy flow was 0.921, indicating that urban ecosystem health was highly correlated with total permanent population.
A Theoretical Basis for Entropy-Scaling Effects in Human Mobility Patterns.
Osgood, Nathaniel D; Paul, Tuhin; Stanley, Kevin G; Qian, Weicheng
2016-01-01
Characterizing how people move through space has been an important component of many disciplines. With the advent of automated data collection through GPS and other location sensing systems, researchers have the opportunity to examine human mobility at spatio-temporal resolution heretofore impossible. However, the copious and complex data collected through these logging systems can be difficult for humans to fully exploit, leading many researchers to propose novel metrics for encapsulating movement patterns in succinct and useful ways. A particularly salient proposed metric is the mobility entropy rate of the string representing the sequence of locations visited by an individual. However, mobility entropy rate is not scale invariant: entropy rate calculations based on measurements of the same trajectory at varying spatial or temporal granularity do not yield the same value, limiting the utility of mobility entropy rate as a metric by confounding inter-experimental comparisons. In this paper, we derive a scaling relationship for mobility entropy rate of non-repeating straight line paths from the definition of Lempel-Ziv compression. We show that the resulting formulation predicts the scaling behavior of simulated mobility traces, and provides an upper bound on mobility entropy rate under certain assumptions. We further show that this formulation has a maximum value for a particular sampling rate, implying that optimal sampling rates for particular movement patterns exist.
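The entropy-rate estimator underlying this kind of mobility analysis is commonly the Lempel-Ziv match-length estimator; a minimal sketch is given below. The exact estimator and scaling derivation in the paper may differ, and the O(n³) search here is for clarity, not speed.

```python
import math

def lz_entropy_rate(seq):
    """Lempel-Ziv (match-length) estimate of the entropy rate in bits/symbol:
    S ~ n*log2(n) / sum_i Lambda_i, where Lambda_i is the length of the
    shortest substring starting at i that never appears starting before i."""
    n = len(seq)
    total = 0
    for i in range(n):
        k = 1
        while i + k <= n and _seen_before(seq, seq[i:i + k], i):
            k += 1
        total += k
    return n * math.log2(n) / total

def _seen_before(seq, sub, i):
    m = len(sub)
    return any(seq[j:j + m] == sub for j in range(i))

# toy usage: a repetitive location string has a lower rate than an erratic one
print(lz_entropy_rate("homeworkhomeworkhomework"))
print(lz_entropy_rate("hwmoekorrkohwmeobwmrho"))
```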
NASA Astrophysics Data System (ADS)
Zhao, Hui; Qu, Weilu; Qiu, Weiting
2018-03-01
In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed; the importance of each attribute is calculated based on the maximum Shapley entropy principle; the Choquet integral is then introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
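The bottom-up aggregation step uses the discrete Choquet integral, which weights sorted criterion scores by a fuzzy measure over criterion subsets. The sketch below is the textbook definition; the measure values in the usage example are invented for illustration and are not the paper's Shapley-entropy-derived weights.

```python
def choquet(values, mu):
    """Discrete Choquet integral of criterion scores `values` (dict name -> score)
    w.r.t. a fuzzy measure `mu` (dict frozenset -> weight), with mu(empty) = 0,
    mu(all criteria) = 1, and mu monotone under set inclusion."""
    order = sorted(values, key=values.get)      # criteria by ascending score
    total, prev = 0.0, 0.0
    remaining = set(values)
    for name in order:
        total += (values[name] - prev) * mu[frozenset(remaining)]
        prev = values[name]
        remaining.remove(name)
    return total

# hypothetical three-criterion measure (illustrative values only)
mu = {frozenset(): 0.0,
      frozenset({"a"}): 0.3, frozenset({"b"}): 0.4, frozenset({"c"}): 0.2,
      frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.5,
      frozenset({"b", "c"}): 0.6, frozenset({"a", "b", "c"}): 1.0}
print(choquet({"a": 0.2, "b": 0.9, "c": 0.5}, mu))   # -> 0.54
```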
Measuring Questions: Relevance and its Relation to Entropy
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.
2004-01-01
The Boolean lattice of logical statements induces the free distributive lattice of questions. Inclusion on this lattice is based on whether one question answers another. Generalizing the zeta function of the question lattice leads to a valuation called relevance or bearing, which is a measure of the degree to which one question answers another. Richard Cox conjectured that this degree can be expressed as a generalized entropy. With the assistance of yet another important result from Janos Aczél, I show that this is indeed the case, and that the resulting inquiry calculus is a natural generalization of information theory. This approach provides a new perspective on the Principle of Maximum Entropy.
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
NASA Astrophysics Data System (ADS)
Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel
2017-07-01
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, mapping its inherent fine-grained parallelism onto multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
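The core numerical kernel, Newton iteration in the Lagrange multipliers of the dual entropy-maximization problem with moments evaluated by quadrature, can be illustrated in one velocity dimension. This is a deliberately simplified sketch (damped Newton, fixed quadrature grid, polynomial basis 1, v, v², ...), not the paper's 35-moment GPU implementation.

```python
import numpy as np

def maxent_multipliers(moments, v, w, iters=100, tol=1e-10):
    """Solve the dual of a 1-D maximum-entropy closure: find lam so that
    f(v) = exp(sum_k lam_k v^k) reproduces the target moments
    m_k = int v^k f(v) dv (k = 0..K-1) on the quadrature rule (v, w).
    Damped Newton on the convex dual  J(lam) = int f dv - lam . m ."""
    K = len(moments)
    V = np.vander(v, K, increasing=True)            # V[:, k] = v**k

    def dual(lam):
        with np.errstate(over="ignore"):
            return np.dot(w, np.exp(V @ lam)) - np.dot(lam, moments)

    lam = np.zeros(K)
    lam[0] = np.log(moments[0] / np.dot(w, np.ones_like(v)))
    if K > 2:
        lam[2] = -1.0                               # keep f integrable early on
    for _ in range(iters):
        f = w * np.exp(V @ lam)
        grad = V.T @ f - moments                    # quadrature moments - targets
        step = np.linalg.solve(V.T @ (V * f[:, None]), grad)
        t = 1.0
        for _ in range(50):                         # Armijo backtracking
            if dual(lam - t * step) <= dual(lam) - 0.25 * t * np.dot(grad, step):
                break
            t *= 0.5
        lam -= t * step
        if np.linalg.norm(t * step) < tol:
            break
    return lam

# usage: match (1, 0, 1); lam -> [-0.5*log(2*pi), 0, -0.5] for a standard Gaussian
v = np.linspace(-8.0, 8.0, 401)
w = np.full_like(v, v[1] - v[0])
print(maxent_multipliers(np.array([1.0, 0.0, 1.0]), v, w))
```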
Blakely, Richard J.
1981-01-01
Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
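The per-window computation pairs Burg's maximum entropy (autoregressive) spectrum with a linearized least-squares fit; a compact sketch follows. The ln P(k) ≈ A - 2kd linearization (power decaying as e^(-2kd) with angular wavenumber k) is the standard spectral depth-to-source convention; the band limits are data-dependent assumptions, and Blakely's windowing and weighting details are omitted.

```python
import numpy as np

def burg_psd(x, order, nfreq=256):
    """Burg (maximum entropy) power spectrum of one detrended data window."""
    x = np.asarray(x, float) - np.mean(x)
    ef, eb = x.copy(), x.copy()          # forward / backward prediction errors
    a = np.zeros(0)                      # AR coefficients of 1 + sum_i a_i z^-i
    e = np.dot(x, x) / len(x)            # driving-noise variance estimate
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        k = -2.0 * np.dot(ebp, efp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        ef, eb = efp + k * ebp, ebp + k * efp
        a = np.concatenate((a + k * a[::-1], [k]))   # Levinson update
        e *= 1.0 - k * k
    f = np.linspace(0.0, 0.5, nfreq)     # cycles per sample
    z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
    return f, e / np.abs(1.0 + z @ a) ** 2

# linearized depth estimate for one window (sample spacing = 1 unit):
# ln P(k) ~ A - 2 k d  =>  d = -slope / 2, with k the angular wavenumber
f, p = burg_psd(np.random.randn(128), order=12)   # stand-in for a data window
k = 2 * np.pi * f
band = (k > 0.2) & (k < 2.0)                      # band choice is data-dependent
depth = -0.5 * np.polyfit(k[band], np.log(p[band]), 1)[0]
```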
Multiple Diffusion Mechanisms Due to Nanostructuring in Crowded Environments
Sanabria, Hugo; Kubota, Yoshihisa; Waxham, M. Neal
2007-01-01
One of the key questions regarding intracellular diffusion is how the environment affects molecular mobility. Mostly, intracellular diffusion has been described as hindered, and the physical reasons for this behavior are: immobile barriers, molecular crowding, and binding interactions with immobile or mobile molecules. Using results from multi-photon fluorescence correlation spectroscopy, we describe how immobile barriers and crowding agents affect translational mobility. To study the hindrance produced by immobile barriers, we used sol-gels (silica nanostructures) that consist of a continuous solid phase and aqueous phase in which fluorescently tagged molecules diffuse. In the case of molecular crowding, translational mobility was assessed in increasing concentrations of 500 kDa dextran solutions. Diffusion of fluorescent tracers in both sol-gels and dextran solutions shows clear evidence of anomalous subdiffusion. In addition, data from the autocorrelation function were analyzed using the maximum entropy method as adapted to fluorescence correlation spectroscopy data and compared with the standard model that incorporates anomalous diffusion. The maximum entropy method revealed evidence of different diffusion mechanisms that had not been revealed using the anomalous diffusion model. These mechanisms likely correspond to nanostructuring in crowded environments and to the relative dimensions of the crowding agent with respect to the tracer molecule. Analysis with the maximum entropy method also revealed information about the degree of heterogeneity in the environment as reported by the behavior of diffusive molecules. PMID:17040979
Possible dynamical explanations for Paltridge's principle of maximum entropy production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virgo, Nathaniel; Ikegami, Takashi
2014-12-05
Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
A multi wavelength study of the circumnuclear region of NGC 1365
NASA Astrophysics Data System (ADS)
Kristen, H.; Sandqvist, A. A.; Lindblad, P. O.
We select a sample of five barred spiral galaxies in order to test previous findings concerning NGC 1365. The H I halos of the investigated objects are found to be compact. The effect of companions on the H I extent is illustrated. The central region of NGC 1365 has been mapped in the J = 3-2 CO emission line with the 15-m SEST, which has a HPBW of 15" at the frequency of this transition. The observing grid has a 5"-spacing in the inner and a 10"-spacing in the outer region. A Maximum Entropy Method (MEM) deconvolution has been performed on the inner region observations. A circumnuclear molecular torus with a radius of about 5" is the dominant feature. Molecular emission is also seen coming from various dust streamers in the bar of the galaxy. The velocity field of the molecular region agrees well with predictions of models of gas streaming in the bar and nuclear region. Comparisons with 7"-resolution VLA observations of H I absorption in the nuclear region are discussed. The morphology and kinematics of the high-excitation outflow cone in the nuclear region of the Seyfert 1.5 galaxy NGC 1365 are investigated. The opening angle of the cone is 100 degrees, and the orientation is such that the line of sight to the Seyfert 1.5 nucleus falls inside the cone. The outflow velocities within the cone are accelerated and fall off towards the edge. An HST FOC exposure of the nuclear region in [OIII] emission shows the cone to contain a large number of discrete clouds. Seen in the B band, a number of star-forming regions surround the nucleus, of which the brightest seem to have absolute magnitudes of about -16.6.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Wang, Rongjiang; Parolai, Stefano; Zschau, Jochen
2013-04-01
Based on the principle of phased-array interference, we have developed an Iterative Deconvolution Stacking (IDS) method for real-time kinematic source inversion using near-field strong-motion and GPS networks. In this method, the seismic and GPS stations work like an array radar. The whole potential fault area is scanned patch by patch by stacking the apparent source time functions, which are obtained through deconvolution between the recorded seismograms and synthetic Green's functions. Whenever and wherever significant source signals are detected, their signatures are removed from the observed seismograms. The procedure is repeated until the cumulative seismic moment converges and the residual seismograms are reduced below the noise level. The new approach does not need any artificial constraint used in source parameterization such as, for example, fixing the hypocentre, restricting the rupture velocity and rise time, etc. Thus, it can be used for automatic real-time source inversion. In the application to the 2011 Tohoku earthquake, the IDS method proved to be robust and reliable for the fast estimation of moment magnitude, fault area, rupture direction, maximum slip, etc. At about 100 s after the rupture initiation, we can obtain the information that the rupture mainly propagates along the up-dip direction and causes a maximum slip of 17 m, which is sufficient to issue a tsunami early warning. About two minutes after the earthquake occurrence, the maximum slip is found to be 31 m, and the moment magnitude reaches Mw 8.9, which is very close to the final moment magnitude (Mw 9.0) of this earthquake.
A maximum entropy reconstruction technique for tomographic particle image velocimetry
NASA Astrophysics Data System (ADS)
Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.
2013-04-01
This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urniezius, Renaldas
2011-03-14
The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.
LensEnt2: Maximum-entropy weak lens reconstruction
NASA Astrophysics Data System (ADS)
Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.
2013-08-01
LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity, and smoothness on scales of w arcsec where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.
Halo-independence with quantified maximum entropy at DAMA/LIBRA
NASA Astrophysics Data System (ADS)
Fowlie, Andrew
2017-10-01
Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.
Comparison of two views of maximum entropy in biodiversity: Frank (2011) and Pueyo et al. (2007).
Pueyo, Salvador
2012-05-01
An increasing number of authors agree in that the maximum entropy principle (MaxEnt) is essential for the understanding of macroecological patterns. However, there are subtle but crucial differences among the approaches by several of these authors. This poses a major obstacle for anyone interested in applying the methodology of MaxEnt in this context. In a recent publication, Frank (2011) gives some arguments why his own approach would represent an improvement as compared to the earlier paper by Pueyo et al. (2007) and also to the views by Edwin T. Jaynes, who first formulated MaxEnt in the context of statistical physics. Here I show that his criticisms are flawed and that there are fundamental reasons to prefer the original approach.
Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation
NASA Astrophysics Data System (ADS)
Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting
2014-12-01
This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the average vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is formulated in terms of loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
On the sufficiency of pairwise interactions in maximum entropy models of networks
NASA Astrophysics Data System (ADS)
Nemenman, Ilya; Merchan, Lina
Biological information processing networks consist of many components, which are coupled by an even larger number of complex multivariate interactions. However, analyses of data sets from fields as diverse as neuroscience, molecular biology, and behavior have reported that observed statistics of states of some biological networks can be approximated well by maximum entropy models with only pairwise interactions among the components. Based on simulations of random Ising spin networks with p-spin (p > 2) interactions, here we argue that this reduction in complexity can be thought of as a natural property of some densely interacting networks in certain regimes, and not necessarily as a special property of living systems. This work was supported in part by James S. McDonnell Foundation Grant No. 220020321.
Maximum entropy models as a tool for building precise neural controls.
Savin, Cristina; Tkačik, Gašper
2017-10-01
Neural responses are highly structured, with population activity restricted to a small subset of the astronomical range of possible activity patterns. Characterizing these statistical regularities is important for understanding circuit computation, but challenging in practice. Here we review recent approaches based on the maximum entropy principle used for quantifying collective behavior in neural activity. We highlight recent models that capture population-level statistics of neural data, yielding insights into the organization of the neural code and its biological substrate. Furthermore, the MaxEnt framework provides a general recipe for constructing surrogate ensembles that preserve aspects of the data, but are otherwise maximally unstructured. This idea can be used to generate a hierarchy of controls against which rigorous statistical tests are possible. Copyright © 2017 Elsevier Ltd. All rights reserved.
MaxEnt-Based Ecological Theory: A Template for Integrated Catchment Theory
NASA Astrophysics Data System (ADS)
Harte, J.
2017-12-01
The maximum information entropy procedure (MaxEnt) is both a powerful tool for inferring least-biased probability distributions from limited data and a framework for the construction of complex systems theory. The maximum entropy theory of ecology (METE) describes remarkably well widely observed patterns in the distribution, abundance and energetics of individuals and taxa in relatively static ecosystems. An extension to ecosystems undergoing change in response to disturbance or natural succession (DynaMETE) is in progress. I describe the structure of both the static and the dynamic theory and show a range of comparisons with census data. I then propose a generalization of the MaxEnt approach that could provide a framework for a predictive theory of both static and dynamic, fully-coupled, eco-socio-hydrological catchment systems.
Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.
2010-01-01
One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives. PMID:20299257
A maximum entropy model for chromatin structure
NASA Astrophysics Data System (ADS)
Farre, Pau; Emberly, Eldon; Emberly Group Team
The DNA inside the nucleus of eukaryotic cells shows a variety of conserved structures at different length scales. These structures are formed by interactions between protein complexes that bind to the DNA and regulate gene activity. Recent high-throughput sequencing techniques allow for the measurement both of the genome-wide contact map of the folded DNA within a cell (HiC) and of where various proteins are bound to the DNA (ChIP-seq). In this talk I will present a maximum-entropy method capable of both predicting HiC contact maps from binding data, and binding data from HiC contact maps. This method results in an intuitive Ising-type model that is able to predict how altering the presence of binding factors can modify chromosome conformation, without the need for polymer simulations.
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
Deconvolution method for accurate determination of overlapping peak areas in chromatograms.
Nelson, T J
1991-12-20
A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.
2015-03-26
performing. All reasonable permutations of factors will be used to develop a multitude of unique combinations. These combinations are considered different... are seen below (Duda et al., 2001). Entropy impurity: i(N) = −Σj P(ωj) log₂ P(ωj) (9). Gini impurity: i(N) = Σ_{i≠j} P(ωi) P(ωj) = ½ [1 − Σj P²(ωj)] ... as the proportion of one class to another approaches 0.5, the impurity measure reaches its maximum, which for entropy is 1.0, while it is 0.5 for Gini and
Regularization of Grad's 13-Moment-Equations in Kinetic Gas Theory
2011-01-01
variant of the moment method has been proposed by Eu (1980) and is used, e.g., in Myong (2001). Recently, a maximum-entropy 10-moment system has been used... small-amplitude linear waves, the R13 system is linearly stable in time for all modes and wavelengths. The instability of the Burnett system indicates... Boltzmann equation. Related to the problem of global hyperbolicity is the question of the existence of an entropy law for the R13 system. In the linear
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
NASA Astrophysics Data System (ADS)
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
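For a finite-state chain both quantities in this trade-off are straightforward to compute; the sketch below uses the standard formulas, with the relaxation time 1/(1 − |λ₂|) as the usual stand-in for the mixing time.

```python
import numpy as np

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate of an ergodic Markov chain:
    h = -sum_i pi_i sum_j P_ij log P_ij, pi the stationary distribution."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])   # eigenvector for eigenvalue 1
    pi = pi / pi.sum()
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -float(pi @ plogp.sum(axis=1))

def relaxation_time(P):
    """1 / spectral gap, the standard proxy for the mixing time."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))
    return 1.0 / (1.0 - mods[-2])

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(ks_entropy(P), relaxation_time(P))
```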
The Adiabatic Piston and the Second Law of Thermodynamics
NASA Astrophysics Data System (ADS)
Crosignani, Bruno; Di Porto, Paolo; Conti, Claudio
2002-11-01
A detailed analysis of the adiabatic-piston problem reveals peculiar dynamical features that challenge the general belief that isolated systems necessarily reach a static equilibrium state. In particular, the fact that the piston behaves like a perpetuum mobile, i.e., it never stops but keeps wandering, undergoing sizable oscillations, around the position corresponding to maximum entropy, has remarkable implications on the entropy variations of the system and on the validity of the second law when dealing with systems of mesoscopic dimensions.
Maximum Entropy Calculations on a Discrete Probability Space
1986-01-01
constraints acting besides normalization. Statement 3: "The aim of this paper is to show that the die experiment just spoken of has solutions by classical ...analysis." Statement 4: "We shall solve this problem in a purely classical way, without the need for recourse to any exotic estimator, such as ME." Note... The Maximum Entropy Principle. In a remarkable series of papers beginning in 1957, E. T. Jaynes (1957) began a revolution in inductive
2013-10-01
cancer for improving the overall specificity. Our recent work has focused on testing retrospective Maximum Entropy and Compressed Sensing of the 4D... terparts and increases the entropy or sparsity of the reconstructed spectrum by narrowing the peak linewidths and de-noising smaller features. This, in... tightened' beyond the standard deviation of the noise in an effort to reduce the RMSE and reconstruction non-linearity, but this prevents the
NASA Astrophysics Data System (ADS)
di Liberto, Francesco; Pastore, Raffaele; Peruggi, Fulvio
2011-05-01
When some entropy is transferred, by means of a reversible engine, from a hot heat source to a colder one, the maximum efficiency occurs, i.e. the maximum available work is obtained. Similarly, reversible heat pumps transfer entropy from a cold heat source to a hotter one with the minimum expense of energy. In contrast, if we are faced with non-reversible devices, there is some lost work for heat engines, and some extra work for heat pumps. These quantities are both related to entropy production. The lost work, i.e. ?, is also called 'degraded energy' or 'energy unavailable to do work'. The extra work, i.e. ?, is the excess of work performed on the system in the irreversible process with respect to the reversible one (or the excess of heat given to the hotter source in the irreversible process). Both quantities are analysed in detail and are evaluated for a complex process, i.e. the stepwise circular cycle, which is similar to the stepwise Carnot cycle. The stepwise circular cycle is a cycle performed by means of N small weights, dw, which are first added to and then removed from the piston of the vessel containing the gas, or vice versa. The work performed by the gas can be found as the increase in the potential energy of the dw's. Each single dw is identified and its increase, i.e. its increase in potential energy, is evaluated. In this way it is found how the energy output of the cycle is distributed among the dw's. The size of the dw's affects the entropy production and therefore the lost and extra work. The distribution of increases depends on the chosen removal process.
Sample entropy analysis of cervical neoplasia gene-expression signatures
Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R
2009-01-01
Background We introduce approximate entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the approximate sample entropy approach in discerning alterations in complexity from a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
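Sample entropy itself is simple to compute; a sketch of the standard Richman-Moorman estimator follows. The paper's ApSE variant may differ in details (template counting conventions, tolerance choice), so treat this as the generic starting point rather than the authors' exact procedure.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Standard sample entropy: -ln(A/B), with B the number of template pairs of
    length m within Chebyshev tolerance r, and A the same count for length m+1."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * np.std(x)              # common default tolerance
    def pairs(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(t) - 1):
            c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r)
        return c
    B, A = pairs(m), pairs(m + 1)
    return -np.log(A / B) if A and B else float("inf")

# toy usage: a flat profile scores low, an erratic one scores high
print(sample_entropy(np.sin(np.linspace(0, 20, 200))))
print(sample_entropy(np.random.default_rng(0).standard_normal(200)))
```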
Pfeiffer, Keram; French, Andrew S
2009-09-02
Neurotransmitter chemicals excite or inhibit a range of sensory afferents and sensory pathways. These changes in firing rate or static sensitivity can also be associated with changes in dynamic sensitivity or membrane noise and thus action potential timing. We measured action potential firing produced by random mechanical stimulation of spider mechanoreceptor neurons during long-duration excitation by the GABAA agonist muscimol. Information capacity was estimated from signal-to-noise ratio by averaging responses to repeated identical stimulation sequences. Information capacity was also estimated from the coherence function between input and output signals. Entropy rate was estimated by a data compression algorithm and maximum entropy rate from the firing rate. Action potential timing variability, or jitter, was measured as normalized interspike interval distance. Muscimol increased firing rate, information capacity, and entropy rate, but jitter was unchanged. We compared these data with the effects of increasing firing rate by current injection. Our results indicate that the major increase in information capacity by neurotransmitter action arose from the increased entropy rate produced by increased firing rate, not from reduction in membrane noise and action potential jitter.
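The coherence-based capacity estimate mentioned here is usually the Gaussian-channel lower bound R = −∫ log₂(1 − γ²(f)) df; a sketch using SciPy's coherence estimator is below. The segment length and the toy linear-response example are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from scipy.signal import coherence

def info_rate_from_coherence(stimulus, response, fs, nperseg=1024):
    """Information-rate estimate (bits/s) from stimulus-response coherence:
    R = -integral log2(1 - gamma^2(f)) df, the usual Gaussian-channel bound."""
    f, g2 = coherence(stimulus, response, fs=fs, nperseg=nperseg)
    g2 = np.clip(g2, 0.0, 1.0 - 1e-12)      # guard the logarithm
    return -np.trapz(np.log2(1.0 - g2), f)

# toy usage: noisy linear response to a random stimulus
fs = 1000.0
stim = np.random.default_rng(0).standard_normal(100000)
resp = 0.8 * stim + 0.6 * np.random.default_rng(1).standard_normal(stim.size)
print(info_rate_from_coherence(stim, resp, fs))
```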
NASA Astrophysics Data System (ADS)
Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.
2018-03-01
The article proposes a forecasting method that allows one to determine, from given values of entropy and of the error levels of the first and second kind, the allowable time horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method under consideration is that changes in the characteristic parameters of the developing information system are expressed as increments in the ratios of its entropy. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the depth of prediction in time are estimated. The resulting values of the characteristics will then be optimal, since at that moment the system possesses the best ratio of entropy as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the depth of prediction, it is expedient to use the principle of maximum entropy.
Force-Time Entropy of Isometric Impulse.
Hsieh, Tsung-Yu; Newell, Karl M
2016-01-01
The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single-finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that peak force variability increased either with the increment of force level or through a shorter time to peak force, which also reduced timing error variability. The peak force entropy and the entropy of time to peak force increased on the respective dimension as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework, with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse production.
NASA Astrophysics Data System (ADS)
Kollet, S. J.
2015-05-01
In this study, entropy production optimization and inference principles are applied to a synthetic semi-arid hillslope in high-resolution, physics-based simulations. The results suggest that entropy production or power is indeed maximized, because of the strong nonlinearity of variably saturated flow and competing processes related to soil moisture fluxes, the depletion of gradients, and the movement of a free water table. Thus, it appears that the maximum entropy production (MEP) principle may indeed be applicable to hydrologic systems. In the application to hydrologic systems, the free water table constitutes an important degree of freedom in the optimization of entropy production and may also relate the theory to actual observations. In an ensuing analysis, an attempt is made to transfer the complex, "microscopic" hillslope model into a macroscopic model of reduced complexity, using the MEP principle as an inference tool to obtain effective conductance coefficients and forces/gradients. The results demonstrate a new approach for the application of MEP to hydrologic systems and may form the basis for fruitful discussions and research in the future.
NASA Astrophysics Data System (ADS)
Oda, Hirokuni; Xuan, Chuang
2014-10-01
The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
NASA Astrophysics Data System (ADS)
Xuan, Chuang; Oda, Hirokuni
2015-11-01
The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
A neural network approach for the blind deconvolution of turbulent flows
NASA Astrophysics Data System (ADS)
Maulik, R.; San, O.
2017-11-01
We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
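The basic idea, a single hidden layer mapping a local stencil of the filtered field to the pointwise unfiltered value, without ever seeing the filter, fits in a short NumPy sketch. The 1-D signal, stencil width, hidden width, and learning rate below are illustrative assumptions; the paper works with 2-D/3-D turbulence fields and its own architecture and training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-D stand-in for the a priori test: a "true" field u and its box-filtered
# version ub; the network sees only a local stencil of ub and learns to return
# the pointwise u. Blind: the 5-point filter is never shown to the network.
n, half = 4096, 2
u = np.cumsum(rng.standard_normal(n)); u -= u.mean(); u /= u.std()
ub = np.convolve(u, np.ones(5) / 5, mode="same")
X = np.array([ub[i - half:i + half + 1] for i in range(half, n - half)])
y = u[half:n - half, None]

# single hidden layer with tanh units, trained by plain full-batch gradient
# descent on the mean squared deconvolution error
W1 = 0.1 * rng.standard_normal((2 * half + 1, 20)); b1 = np.zeros(20)
W2 = 0.1 * rng.standard_normal((20, 1));            b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    g = 2.0 * (pred - y) / len(y)            # d(MSE)/d(pred)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - h * h)          # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(np.mean((pred - y) ** 2), np.mean((ub[half:n - half, None] - y) ** 2))
```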
NASA Astrophysics Data System (ADS)
Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn
2016-06-01
Forests are a very important ecosystem and natural resource for living things. Based on forest inventories, the government is able to make decisions to conserve, improve, and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially when surveying in remote mountainous regions. A reliable forest inventory can give us more accurate and timely information for developing new and efficient approaches to forest management. Remote sensing technology has recently been used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species, unavoidably need to be considered. The aim of this study is to classify forest tree species in Erdenebulgan County, Huwsgul province, Mongolia, using the Maximum Entropy method. The study area is covered by a dense forest comprising almost 70% of the total territorial extent of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. A forest tree species inventory map was obtained from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and was also used as ground truth to assess the accuracy of the tree species classification. The Landsat images and DEM were processed for maximum entropy modeling, and this study applied the model in two experiments. The first used Landsat surface reflectance alone for tree species classification; the second incorporated terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second experiment, which used Landsat surface reflectance coupled with terrain variables, produced the better result, with higher overall accuracy and kappa coefficient than the first experiment. The results indicate that the Maximum Entropy method is applicable, and that classifying tree species using satellite imagery coupled with terrain information can improve the classification of tree species in the study area.
Crowded field photometry with deconvolved images.
NASA Astrophysics Data System (ADS)
Linde, P.; Spännare, S.
A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits, as well as on photometric and astrometric properties, have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves the detection of faint stars, although artifacts are also found. Deconvolution makes more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways: errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
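The Lucy-Richardson iteration itself is compact; a generic sketch follows. The paper's local implementation and stopping criteria are not reproduced here, and the iteration count is exactly the trade-off parameter the abstract investigates: more iterations sharpen faint stars but amplify artifacts.

```python
import numpy as np
from scipy.signal import fftconvolve

def lucy_richardson(image, psf, iterations=20):
    """Plain Lucy-Richardson deconvolution for a shift-invariant PSF:
    u <- u * ( (image / (u conv psf)) conv flipped(psf) ),
    a positivity-preserving maximum-likelihood iteration for Poisson noise."""
    u = np.full_like(image, image.mean(), dtype=float)   # flat initial estimate
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(u, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)       # guard division
        u *= fftconvolve(ratio, psf_flip, mode="same")
    return u
```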
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive ground truth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
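Deterministic deconvolution of this kind amounts to spectral division of each trace by the wavelet measured in air. A common stabilized (water-level) form is sketched below; the water-level fraction is an assumption for illustration, as the paper does not specify its stabilization scheme.

```python
import numpy as np

def deterministic_deconv(trace, wavelet, water=0.01):
    """Deterministic (water-level) deconvolution: divide the trace spectrum by
    the spectrum of the measured source wavelet, flooring small |W(f)| values
    to stabilise spectral notches. `water` is a fraction of max |W(f)|."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    D = np.fft.rfft(trace, n)
    floor = water * np.max(np.abs(W))
    Wsafe = np.where(np.abs(W) < floor, floor * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(D / Wsafe, n)
```

The water level trades resolution against noise amplification: a smaller fraction compresses the wavelet more aggressively but boosts noise at frequencies where the wavelet carries little energy.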
Quantifying the entropic cost of cellular growth control
NASA Astrophysics Data System (ADS)
De Martino, Daniele; Capuani, Fabrizio; De Martino, Andrea
2017-07-01
Viewing the ways a living cell can organize its metabolism as the phase space of a physical system, regulation can be seen as the ability to reduce the entropy of that space by selecting specific cellular configurations that are, in some sense, optimal. Here we quantify the amount of regulation required to control a cell's growth rate by a maximum-entropy approach to the space of underlying metabolic phenotypes, where a configuration corresponds to a metabolic flux pattern as described by genome-scale models. We link the mean growth rate achieved by a population of cells to the minimal amount of metabolic regulation needed to achieve it through a phase diagram that highlights how growth suppression can be as costly (in regulatory terms) as growth enhancement. Moreover, we provide an interpretation of the inverse temperature β controlling maximum-entropy distributions based on the underlying growth dynamics. Specifically, we show that the asymptotic value of β for a cell population can be expected to depend on (i) the carrying capacity of the environment, (ii) the initial size of the colony, and (iii) the probability distribution from which the inoculum was sampled. Results obtained for E. coli and human cells are found to be remarkably consistent with empirical evidence.
Bahreinizad, Hossein; Salimi Bani, Milad; Hasani, Mojtaba; Karimi, Mohammad Taghi; Sharifmoradi, Keyvan; Karimi, Alireza
2017-08-09
The influence of various musculoskeletal disorders has been evaluated using different kinetic and kinematic parameters, but the efficiency of walking can also be evaluated by measuring the effort of the subject, in other words the energy required to walk. The aim of this study was to identify mechanical energy differences between normal and pathological groups. Four groups of 15 healthy subjects, 13 Parkinson subjects, 4 osteoarthritis (OA) subjects, and 4 ACL-reconstructed subjects participated in this study. The motions of the foot, shank, and thigh were recorded using a three-dimensional motion analysis system. The kinetic, potential, and total mechanical energy of each segment was calculated using 3D marker positions and anthropometric measurements. The maximum value and sample entropy of the energies were compared between the normal and abnormal subjects. The maximum potential energy of the OA subjects was lower than that of the normal subjects. Furthermore, the sample entropy of mechanical energy for the Parkinson subjects was low in comparison to the normal subjects, while that of the ACL subjects was higher than that of the normal subjects. The findings of this study suggest that subjects with different abilities show different mechanical energy profiles during walking.
Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R
2015-01-01
Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation involving deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from frequency-dependent hydrophone sensitivity) was investigated for improvement of accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for 6 fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced mean bias (for the 6 fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced the mean coefficient of variation (COV) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.
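The inverse filter described can be sketched as division of the measured voltage spectrum by the calibrated complex sensitivity, restricted to the calibrated band. The interpolation onto the FFT grid, the band-limiting mask, and all names are assumptions; the paper's spectral-asymmetry filter design is not reproduced.

```python
# Complex deconvolution of a hydrophone voltage trace to pressure (sketch).
import numpy as np

def hydrophone_deconv(voltage, fs, cal_freqs, cal_sens, band=(1e6, 40e6)):
    """cal_sens: complex sensitivity (V/Pa) measured at cal_freqs (Hz)."""
    n = len(voltage)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    V = np.fft.rfft(voltage)
    # interpolate the calibrated complex sensitivity onto the FFT grid
    M = np.interp(f, cal_freqs, cal_sens.real) + 1j * np.interp(f, cal_freqs, cal_sens.imag)
    P = np.zeros_like(V)
    in_band = (f >= band[0]) & (f <= band[1]) & (np.abs(M) > 0)
    P[in_band] = V[in_band] / M[in_band]       # band-limited inverse filter
    return np.fft.irfft(P, n)
```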
Non-life insurance pricing: multi-agent model
NASA Astrophysics Data System (ADS)
Darooneh, A. H.
2004-11-01
We use the maximum entropy principle for the pricing of non-life insurance and recover the Bühlmann results for the economic premium principle. The concept of economic equilibrium is revised in this respect.
Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.
Ford, Ian J
2015-11-01
The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.
NASA Astrophysics Data System (ADS)
Gadjiev, Bahruz; Progulova, Tatiana
2015-01-01
We consider a multifractal structure as a mixture of fractal substructures and introduce a distribution function f (α), where α is a fractal dimension. Then we can introduce g(p)˜
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Halo-independence with quantified maximum entropy at DAMA/LIBRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com
2017-10-01
Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.
Kim, Hea-Jung
2014-01-01
This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint, accounted for by using the HSGM, is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.
Multi-Group Maximum Entropy Model for Translational Non-Equilibrium
NASA Technical Reports Server (NTRS)
Jayaraman, Vegnesh; Liu, Yen; Panesi, Marco
2017-01-01
The aim of the current work is to describe a new model for flows in translational non-equilibrium. Starting from the statistical description of a gas proposed by Boltzmann, the model relies on a domain decomposition technique in velocity space. Using the maximum entropy principle, the logarithm of the distribution function in each velocity sub-domain (group) is expressed as a power series in molecular velocity. New governing equations are obtained using the method of weighted residuals by taking the velocity moments of the Boltzmann equation. The model is applied to a spatially homogeneous Boltzmann equation with a Bhatnagar-Gross-Krook (BGK) model collision operator, and the relaxation of an initial non-equilibrium distribution to a Maxwellian is studied using the model. In addition, numerical results obtained using the model for a 1D shock tube problem are also reported.
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method
NASA Astrophysics Data System (ADS)
Pei-Jui, Wu; Hwa-Lung, Yu
2016-04-01
Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan, resulting in significant loss of human lives and property. On average, 3.5 typhoons strike Taiwan every year; Typhoon Morakot in 2009 was among the most severe in recorded history. Because the duration, path, and intensity of a typhoon affect the temporal and spatial rainfall pattern in a given region, identifying the characteristics of typhoon rainfall patterns is advantageous when estimating rainfall quantities. This study developed a rainfall prediction model in three parts. First, we use extended empirical orthogonal functions (EEOF) to classify typhoon events, decomposing the standardized rainfall pattern at all stations for each event into EOFs and principal components (PCs), so that events that vary similarly in time and space are grouped into similar typhoon types. Next, based on this classification, we construct probability density functions (PDFs) across space and time by means of multivariate maximum entropy using the first through fourth statistical moments, yielding a probability for each station at each time. Finally, we use the Bayesian maximum entropy (BME) method to construct the typhoon rainfall prediction model and estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.
How the Second Law of Thermodynamics Has Informed Ecosystem Ecology through Its History
NASA Astrophysics Data System (ADS)
Chapman, E. J.; Childers, D. L.; Vallino, J. J.
2014-12-01
Throughout the history of ecosystem ecology many attempts have been made to develop a general principle governing how systems develop and organize. We reviewed the historical developments that led to conceptualization of several goal-oriented principles in ecosystem ecology and the relationships among them. We focused our review on two prominent principles, the Maximum Power Principle (MPP) and the Maximum Entropy Production Principle (MEPP), and the literature that applies to both. While these principles have considerable conceptual overlap and both use concepts from physics (power and entropy), we found considerable differences in their historical development, the disciplines that apply these principles, and their adoption in the literature. We reviewed the literature using Web of Science keyword searches for the MPP and the MEPP, as well as for papers that cited pioneers in the development of each. From the 6000 papers that our keyword searches returned, we limited our further meta-analysis to 32 papers by focusing on studies with a foundation in ecosystems research. Despite these seemingly disparate pasts, we concluded that the conceptual approaches of these two principles are more similar than dissimilar and that maximization of power in ecosystems occurs with maximum entropy production. We also found that these two principles have great potential to explain how systems develop, organize, and function, but there are no widely agreed upon theoretical derivations for the MEPP or the MPP, possibly hindering their broader use in ecological research. We end with recommendations for how ecosystems-level studies may better use these principles.
Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador
NASA Astrophysics Data System (ADS)
Bishop, J. W.; Lees, J. M.; Ruiz, M. C.
2017-12-01
Cotopaxi is a large andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. In August 2015, Cotopaxi erupted for the first time in 73 years. This eruptive cycle (VEI = 1) featured phreatic explosions and the ejection of an ash column 9 km above the volcanic edifice. Following this event, ash covered approximately 500 km2 of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source. However, stratigraphic evidence spanning the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution; iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions were then visually inspected, and those with anomalous pulses before the initial P arrival or with later peaks larger than the initial P-correlated pulse were discarded. Using these data, initial estimates of crustal thickness and slab depth beneath the volcano were obtained. Estimates of the crustal Vp/Vs ratio for the region were also calculated.
Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.
Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K
2016-08-01
Deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying MR perfusion parameters, and the application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
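Generic FDD, division of the tissue curve spectrum by the arterial input function (AIF) spectrum with a stabilizing floor, can be sketched as below; the floor parameter and the 1/dt scaling are assumptions, and the paper's analytical Fourier and Showalter filters are not reproduced here.

```python
# Frequency-domain deconvolution of a perfusion tissue curve by the AIF (sketch).
import numpy as np

def fdd_residue(tissue, aif, dt, eps=1e-3):
    """Return the (scaled) residue function estimate R(t)."""
    n = len(tissue)
    T = np.fft.rfft(tissue, n)
    A = np.fft.rfft(aif, n)
    floor = eps * np.abs(A).max()              # avoid dividing by tiny spectra
    R = T * np.conj(A) / np.maximum(np.abs(A) ** 2, floor ** 2)
    return np.fft.irfft(R, n) / dt             # discrete-convolution scaling
```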
Jo, Javier A; Fang, Qiyin; Papaioannou, Thanassis; Baker, J Dennis; Dorafshar, Amir H; Reil, Todd; Qiao, Jian-Hua; Fishbein, Michael C; Freischlag, Julie A; Marcu, Laura
2006-01-01
We report the application of the Laguerre deconvolution technique (LDT) to the analysis of in-vivo time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data and the diagnosis of atherosclerotic plaques. TR-LIFS measurements were obtained in vivo from normal and atherosclerotic aortas (eight rabbits, 73 areas), and subsequently analyzed using LDT. Spectral and time-resolved features were used to develop four classification algorithms: linear discriminant analysis (LDA), stepwise LDA (SLDA), principal component analysis (PCA), and artificial neural network (ANN). Accurate deconvolution of TR-LIFS in-vivo measurements from normal and atherosclerotic arteries was provided by LDT. The derived Laguerre expansion coefficients reflected changes in the arterial biochemical composition, and provided a means to discriminate lesions rich in macrophages with high sensitivity (>85%) and specificity (>95%). Classification algorithms (SLDA and PCA) using a selected number of features with maximum discriminating power provided the best performance. This study demonstrates the potential of the LDT for in-vivo tissue diagnosis, and specifically for the detection of macrophage infiltration in atherosclerotic lesions, a key marker of plaque vulnerability.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Jin, Chao; Lee, Jay
2017-01-01
Minimum Entropy Deconvolution (MED) filtering, a non-parametric approach for impulsive signature detection, has been widely studied recently. Although the merits of the MED filter are manifold, the method tends to over-highlight the dominant peaks, and its performance becomes less stable when strong noise exists. In order to better understand the behavior of the MED filter, this study first investigated its mathematical fundamentals and then explained why the MED filter tends to over-highlight the dominant peaks. In order to pursue finer solutions for weak impulsive signature enhancement, the Convolutional Sparse Filter (CSF) is newly proposed in this work and its derivation is presented in detail. The superiority of the proposed CSF over the MED filter is validated on both simulated and experimental data. The results demonstrate that the CSF is an effective method for impulsive signature enhancement that could be applied in rotating machines for incipient fault detection.
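For contrast with the proposed CSF, the classical iterative MED filter that the paper analyzes can be sketched as the standard Wiggins-style update: solve a Toeplitz system whose right-hand side is built from the cubed filter output. Variable names and the delayed-spike initialization are assumptions.

```python
# Classical iterative MED (Wiggins-style) sketch.
import numpy as np
from scipy.linalg import toeplitz, solve

def med_filter(x, L=30, n_iter=30):
    """Find an FIR filter f maximizing a kurtosis-like norm of y = x * f."""
    N = len(x)
    r = np.correlate(x, x, mode="full")[N - 1 : N - 1 + L]
    R = toeplitz(r)                      # autocorrelation (Toeplitz) matrix
    f = np.zeros(L)
    f[L // 2] = 1.0                      # delayed-spike initial filter
    for _ in range(n_iter):
        y = np.convolve(x, f)[:N]
        b = np.array([np.dot(y[k:] ** 3, x[: N - k]) for k in range(L)])
        f = solve(R, b)
        f /= np.linalg.norm(f)           # fix the scale ambiguity
    return np.convolve(x, f)[:N], f
```

The cubic term in b is what makes the filter chase the largest peaks, which is the over-highlighting behavior the paper sets out to explain.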
Ohisa, Noriko; Ogawa, Hiromasa; Murayama, Nobuki; Yoshida, Katsumi
2010-02-01
Polysomnography (PSG) is the gold standard for the diagnosis of sleep apnea hypopnea syndrome (SAHS), but PSG takes time to analyze and cannot be performed repeatedly because of the effort and cost involved. Therefore, simplified sleep respiratory disorder indices that reflect the PSG results are needed. The Memcalc method, a combination of the maximum entropy method for spectral analysis and the non-linear least squares method for fitting analysis (Makin2, Suwa Trust, Tokyo, Japan), has recently been developed. Spectral entropy derived by the Memcalc method might be useful for expressing the trend of time-series behavior. Here, the spectral entropy of the ECG calculated with the Memcalc method was evaluated by comparison with PSG results. ECGs of obstructive SAHS patients (n = 79) and control volunteers (n = 7) were recorded using MemCalc-Makin2 (GMS) together with PSG recording using Alice IV (Respironics) from 20:00 to 6:00. Spectral entropy of the ECG, calculated every 2 seconds using the Memcalc method, was compared to sleep stages analyzed manually from the PSG recordings. Spectral entropy values were significantly higher in the OSAHS group than in the controls (-0.473 vs. -0.418, p < 0.05). For an entropy cutoff level of -0.423, sensitivity and specificity for OSAHS were 86.1% and 71.4%, respectively, yielding a receiver operating characteristic curve with an area under the curve of 0.837. The absolute value of the entropy was inversely correlated with stage 3 sleep. Spectral entropy calculated with the Memcalc method may thus be a possible index for evaluating the quality of sleep.
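Memcalc itself is proprietary, but the generic notion of spectral entropy, the Shannon entropy of a normalized power spectrum, can be illustrated as follows. The Welch estimator, sampling rate, and window length are assumptions, and sign/normalization conventions differ between implementations (which is why the study's values are negative).

```python
# Generic spectral entropy of a physiological time series (sketch).
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs):
    """Shannon entropy of the normalized power spectrum of x."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    p = pxx / pxx.sum()                  # treat the spectrum as a distribution
    p = p[p > 0]
    return -(p * np.log(p)).sum()
```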
NASA Astrophysics Data System (ADS)
Das, Soma; Dey, T. K.
2006-08-01
The magnetocaloric effect (MCE) in fine grained perovskite manganites of the type La1-xKxMnO3 (0
1/f noise from the laws of thermodynamics for finite-size fluctuations.
Chamberlin, Ralph V; Nasir, Derek M
2014-07-01
Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone, whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.
Entropy in an expanding universe.
Frautschi, S
1982-08-13
The question of how the observed evolution of organized structures from initial chaos in the expanding universe can be reconciled with the laws of statistical mechanics is studied, with emphasis on effects of the expansion and gravity. Some major sources of entropy increase are listed. An expanding "causal" region is defined in which the entropy, though increasing, tends to fall further and further behind its maximum possible value, thus allowing for the development of order. The related questions of whether entropy will continue increasing without limit in the future, and whether such increase in the form of Hawking radiation or radiation from positronium might enable life to maintain itself permanently, are considered. Attempts to find a scheme for preserving life based on solid structures fail because events such as quantum tunneling recurrently disorganize matter on a very long but fixed time scale, whereas all energy sources slow down progressively in an expanding universe. However, there remains hope that other modes of life capable of maintaining themselves permanently can be found.
Quantum and Ecosystem Entropies
NASA Astrophysics Data System (ADS)
Kirwan, A. D.
2008-06-01
Ecosystems and quantum gases share a number of superficial similarities, including enormous numbers of interacting elements and the fundamental role of energy in such interactions. A theory for the synthesis of data and prediction of new phenomena is well established in quantum statistical mechanics. The premise of this paper is that the reason a comparable unifying theory has not emerged in ecology is that a proper role for entropy has yet to be assigned. To this end, a phase space entropy model of ecosystems is developed. Specification of an ecosystem phase space cell size based on microbial mass, length, and time scales gives an ecosystem uncertainty parameter only about three orders of magnitude larger than Planck's constant. Ecosystem equilibrium is specified by conservation of biomass and total metabolic energy, along with the principle of maximum entropy at equilibrium. Both Bose-Einstein and Fermi-Dirac equilibrium conditions arise in ecosystem applications. The paper concludes with a discussion of some broader aspects of an ecosystem phase space.
Application of digital image processing techniques to astronomical imagery 1980
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1981-01-01
Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.
Nonextensivity in a Dark Maximum Entropy Landscape
NASA Astrophysics Data System (ADS)
Leubner, M. P.
2011-03-01
Nonextensive statistics along with network science, an emerging branch of graph theory, are increasingly recognized as potential interdisciplinary frameworks whenever systems are subject to long-range interactions and memory. Such settings are characterized by non-local interactions evolving in a non-Euclidean fractal/multi-fractal space-time making their behavior nonextensive. After summarizing the theoretical foundations from first principles, along with a discussion of entropy bifurcation and duality in nonextensive systems, we focus on selected significant astrophysical consequences. Those include the gravitational equilibria of dark matter (DM) and hot gas in clustered structures, the dark energy (DE) negative pressure landscape governed by the highest degree of mutual correlations and the hierarchy of discrete cosmic structure scales, available upon extremizing the generalized nonextensive link entropy in a homogeneous growing network.
Lezon, Timothy R; Banavar, Jayanth R; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V
2006-12-12
We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems.
Entropy, Ergodicity, and Stem Cell Multipotency
NASA Astrophysics Data System (ADS)
Ridden, Sonya J.; Chang, Hannah H.; Zygalakis, Konstantinos C.; MacArthur, Ben D.
2015-11-01
Populations of mammalian stem cells commonly exhibit considerable cell-cell variability. However, the functional role of this diversity is unclear. Here, we analyze expression fluctuations of the stem cell surface marker Sca1 in mouse hematopoietic progenitor cells using a simple stochastic model and find that the observed dynamics naturally lie close to a critical state, thereby producing a diverse population that is able to respond rapidly to environmental changes. We propose an information-theoretic interpretation of these results that views cellular multipotency as an instance of maximum entropy statistical inference.
Nonequilibrium-thermodynamics approach to open quantum systems
NASA Astrophysics Data System (ADS)
Semin, Vitalii; Petruccione, Francesco
2014-11-01
Open quantum systems are studied from the thermodynamical point of view unifying the principle of maximum informational entropy and the hypothesis of relaxation times hierarchy. The result of the unification is a non-Markovian and local-in-time master equation that provides a direct connection for dynamical and thermodynamical properties of open quantum systems. The power of the approach is illustrated by the application to the damped harmonic oscillator and the damped driven two-level system, resulting in analytical expressions for the non-Markovian and nonequilibrium entropy and inverse temperature.
Third law of thermodynamics in the presence of a heat flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camacho, J.
1995-01-01
Following a maximum entropy formalism, we study a one-dimensional crystal under a heat flux. We obtain the phonon distribution function and evaluate the nonequilibrium temperature, the specific heat, and the entropy as functions of the internal energy and the heat flux, in both the quantum and the classical limits. Some analogies between the behavior of equilibrium systems at low absolute temperature and nonequilibrium steady states under high values of the heat flux are shown, which point to a possible generalization of the third law in nonequilibrium situations.
The existence of negative absolute temperatures in Axelrod’s social influence model
NASA Astrophysics Data System (ADS)
Villegas-Febres, J. C.; Olivares-Rivas, W.
2008-06-01
We introduce the concept of temperature as an order parameter in the standard Axelrod's social influence model. It is defined as the relation between suitably defined entropy and energy functions, T = (∂E/∂S). We show that at the critical point, where the order/disorder transition occurs, this absolute temperature changes sign. At this point, which corresponds to the homogeneous/heterogeneous culture transition, the entropy of the system shows a maximum. We discuss the relationship between the temperature and other properties of the model in terms of cultural traits.
Entropy of spatial network ensembles
NASA Astrophysics Data System (ADS)
Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis
2018-04-01
We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muthukumaran, M; Manigandan, D; Murali, V
2016-06-15
Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolving measurements made with different volume ionization chambers. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber, and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for field sizes from 2×2 cm to 30×30 cm. PTW TBA scan software was used for the measurements, and the software's "deconvolution" functionality was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2×2 cm to 20×20 cm, along both lateral and longitudinal directions. For field sizes from 20×20 cm to 30×30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers in the study. The differences in penumbral values between the deconvolved lateral and longitudinal profiles were on the order of 0.1 to 0.3 mm for all the chambers. Deconvolution of the longitudinal profiles for the Farmer chamber performed poorly and was not comparable with the other deconvolved profiles. Conclusion: The deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.
NASA Astrophysics Data System (ADS)
Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang
2017-03-01
In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed from the estimated results. Third, the split augmented Lagrangian shrinkage algorithm (SALSA) is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
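The sparse-deconvolution step can be illustrated with a simpler solver than SALSA: l1-regularized deconvolution by ISTA (slower to converge, but only a few lines). The echo prototype h would come from the Gaussian echo model fit; the names, the regularization weight, and the solver choice are all assumptions.

```python
# l1-regularized deconvolution by ISTA (a stand-in for the SALSA solver).
import numpy as np
from scipy.linalg import toeplitz

def l1_deconv_ista(y, h, lam=0.1, n_iter=200):
    """Recover a sparse reflector sequence x with y ~ H x, H built from h."""
    N = len(y)
    col = np.r_[h, np.zeros(N - len(h))]        # prototype in each column
    H = toeplitz(col, np.r_[h[0], np.zeros(N - 1)])
    alpha = np.linalg.norm(H, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(N)
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)
        z = x - grad / alpha
        x = np.sign(z) * np.maximum(np.abs(z) - lam / alpha, 0.0)  # soft threshold
    return x
```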
Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data
NASA Astrophysics Data System (ADS)
Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam
2018-06-01
Extracting information from seismic data depends on deconvolution, an important processing step that provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods are implemented on seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied by chaining sparse deconvolution (the MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm. The MM algorithm is based on minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it is necessary to implement it on post-stack or pre-stack seismic data from regions of complex structure.
Bustamante, P; Romero, S; Pena, A; Escalera, B; Reillo, A
1998-12-01
In earlier work, a nonlinear enthalpy-entropy compensation was observed for the solubility of phenacetin in dioxane-water mixtures. This effect had not previously been reported for the solubility of drugs in solvent mixtures. To gain insight into the compensation effect, the behavior of the apparent thermodynamic magnitudes for the solubility of paracetamol, acetanilide, and nalidixic acid is studied in this work. The solubility of these drugs was measured at several temperatures in dioxane-water mixtures. DSC analysis was performed on the original powders and on the solid phases after equilibration with the solvent mixture. The thermal properties of the solid phases did not show significant changes. The three drugs display a solubility maximum against the cosolvent ratio. The solubility peaks of acetanilide and nalidixic acid shift to a more polar region at the higher temperatures. Nonlinear van't Hoff plots were observed for nalidixic acid, whereas acetanilide and paracetamol show linear behavior in the temperature range studied. The apparent enthalpies of solution are endothermic, going through a maximum at 50% dioxane. Two different mechanisms, entropic and enthalpic, are suggested to be the driving forces that increase the solubility of the three drugs. Solubility is entropy controlled in the water-rich region (0-50% dioxane) and enthalpy controlled in the dioxane-rich region (50-100% dioxane). The enthalpy-entropy compensation analysis also suggests that two different mechanisms, dependent on cosolvent ratio, are involved in the solubility enhancement of the three drugs. The plots of ΔH versus ΔG are nonlinear, and the slope changes from positive to negative above 50% dioxane. The compensation effect for the thermodynamic magnitudes of transfer from water to the aqueous mixtures can be described by a common empirical nonlinear relationship, with the exception of paracetamol, which follows a separate linear relationship at dioxane ratios above 50%. The results corroborate earlier findings with phenacetin. The similar pattern shown by the drugs studied suggests that the nonlinear enthalpy-entropy compensation effect may be characteristic of the solubility of semipolar drugs in dioxane-water mixtures.
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
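The maximum-entropy step, fitting a density whose fractional moments match estimated targets, has a convenient convex dual that can be sketched directly; the exponents, targets, positive support grid, and optimizer choice are assumptions, and the RQ-SPM moment estimation itself is not reproduced.

```python
# Maximum-entropy density from fractional moment constraints (dual sketch).
import numpy as np
from scipy.optimize import minimize

def maxent_pdf(alphas, moments, grid):
    """Fit p(x) ~ exp(-sum_i lam_i * x**alphas[i]) on a positive grid so that
    its fractional moments match `moments`; returns p evaluated on `grid`."""
    def dual(lams):
        q = np.exp(-sum(l * grid ** a for l, a in zip(lams, alphas)))
        return np.log(np.trapz(q, grid)) + np.dot(lams, moments)
    res = minimize(dual, np.zeros(len(alphas)), method="Nelder-Mead")
    q = np.exp(-sum(l * grid ** a for l, a in zip(res.x, alphas)))
    return q / np.trapz(q, grid)
```

At the dual minimum the fitted density reproduces the target moments, and the failure probability then follows by integrating the returned PDF over the failure domain.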
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between both force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
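The re-weighting itself, a minimum relative-entropy correction of prior ensemble weights so that ensemble-averaged observables match the measured RDCs, has a standard dual form that can be sketched independently of the authors' distributed code; the array names and the unconstrained BFGS solve are assumptions.

```python
# Maximum-entropy re-weighting of a structural ensemble (sketch).
import numpy as np
from scipy.optimize import minimize

def maxent_reweight(obs, targets, w0=None):
    """obs: (n_structures, n_observables) back-calculated RDCs.
    Returns weights close to the prior whose averages match `targets`."""
    n, m = obs.shape
    w0 = np.full(n, 1.0 / n) if w0 is None else w0
    def dual(lam):
        logw = np.log(w0) - obs @ lam
        return np.logaddexp.reduce(logw) + lam @ targets
    lam = minimize(dual, np.zeros(m), method="BFGS").x
    w = w0 * np.exp(-obs @ lam)
    return w / w.sum()
```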
NASA Astrophysics Data System (ADS)
Caldarelli, Stefano; Catalano, Donata; Di Bari, Lorenzo; Lumetti, Marco; Ciofalo, Maurizio; Alberto Veracini, Carlo
1994-07-01
The dipolar couplings observed by NMR spectroscopy of solutes in nematic solvents (LX-NMR) are used to build up the maximum entropy (ME) probability distribution function of the variables describing the orientational and internal motion of the molecule. The ME conformational distributions of 2,2'- and 3,3'-dithiophene and 2,2':5',2″-terthiophene (α-terthienyl) thus obtained are compared with the results of previous studies. The 2,2'- and 3,3'-dithiophene molecules exhibit equilibria among cisoid and transoid forms; the probability maxima correspond to planar and twisted conformers for 2,2'- and 3,3'-dithiophene, respectively. 2,2':5',2″-Terthiophene has two internal degrees of freedom; the ME approach indicates that the trans,trans and cis,trans planar conformations are the most probable. The correlation between the two intramolecular rotations is also discussed.
Image reconstruction of IRAS survey scans
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. Romke
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as a result of this more systematic approach.
Energy and maximum norm estimates for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Olsson, Pelle; Oliger, Joseph
1994-01-01
We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
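For reference, the standard compatibility condition defining an entropy function/entropy flux pair (η, q) for a system u_t + f(u)_x = 0 is recalled below; this is textbook background consistent with the abstract, not the paper's explicit expressions for symmetrizable systems.

```latex
\eta'(u)^{\top} f'(u) = q'(u)^{\top},
\qquad
\partial_t\, \eta(u) + \partial_x\, q(u) \le 0
\quad \text{(for admissible weak solutions).}
```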
Test images for the maximum entropy image restoration method
NASA Technical Reports Server (NTRS)
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics (AO) can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame selection technique, and multi-frame blind deconvolution is then performed. No prior knowledge is assumed except for a positivity constraint in the blind deconvolution. The use of multiple frames benefits the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
Moisture sorption isotherms and thermodynamic properties of mexican mennonite-style cheese.
Martinez-Monteagudo, Sergio I; Salais-Fierro, Fabiola
2014-10-01
Moisture adsorption isotherms of fresh and ripened Mexican Mennonite-style cheese were investigated using the static gravimetric method at 4, 8, and 12 °C over a water activity (aw) range of 0.08-0.96. These isotherms were modeled using the GAB, BET, Oswin, and Halsey equations through weighted non-linear regression. All isotherms were sigmoid in shape, corresponding to a type II BET isotherm, and the data were best described by the GAB model. The GAB model coefficients revealed that water adsorption by the cheese matrix is a multilayer process characterized by molecules that are strongly bound in the monolayer and molecules that are slightly structured in a multilayer. Using the GAB model, it was possible to estimate thermodynamic functions (net isosteric heat, differential entropy, integral enthalpy and entropy, and enthalpy-entropy compensation) as functions of moisture content. For both samples, the isosteric heat and differential entropy decreased exponentially with moisture content. The integral enthalpy gradually decreased with increasing moisture content after reaching a maximum value, while the integral entropy decreased with increasing moisture content after reaching a minimum value. A linear compensation was found between integral enthalpy and entropy, suggesting enthalpy-controlled adsorption. Determination of the moisture content and aw relationship yields important information for controlling the ripening, drying, and storage operations as well as for understanding the state of water within a cheese matrix.
Blind source deconvolution for deep Earth seismology
NASA Astrophysics Data System (ADS)
Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.
2007-12-01
We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically, and to subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component, with a weighting scheme based on their deviation from this shape, and we then use this shape as an estimate of the earthquake source; (2) we compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water level deconvolution, Tikhonov deconvolution, and L1 norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
NASA Astrophysics Data System (ADS)
Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina
2016-05-01
Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV, although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulation to full nonequilibrium flow calculations.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
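Written out, the canonical form referred to here is the Gibbs-type distribution obtained by maximizing entropy subject to a fixed expected error; the notation (parameter vector m, error function E) is assumed:

```latex
p(\mathbf{m}\mid\beta) \;=\; \frac{1}{Z(\beta)}\, e^{-\beta\, E(\mathbf{m})},
\qquad
Z(\beta) = \int e^{-\beta\, E(\mathbf{m})}\, d\mathbf{m},
\qquad
\langle E \rangle = -\,\frac{\partial}{\partial\beta}\ln Z(\beta),
```

with β fixed by the specified expectation value of the error function, and marginals for individual parameters obtained by integrating p over the others.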
NASA Astrophysics Data System (ADS)
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is constructing the interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to extract adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in E-learning programs as examples, we compare the results obtained from the respondents and from our method, and discuss the differences between the impact-relations maps produced by the two approaches.
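For orientation, the core DEMATEL computation that precedes any threshold choice can be sketched as follows. The direct-influence matrix and the naive mean threshold below are placeholders; the paper's maximum mean de-entropy algorithm would replace the threshold line with a principled selection.

```python
import numpy as np

def dematel_total_relation(A):
    """Standard DEMATEL steps (sketch): normalize the direct-influence
    matrix A by its largest row sum, then compute the total-relation
    matrix T = D (I - D)^-1."""
    s = A.sum(axis=1).max()              # common normalization constant
    D = A / s
    n = A.shape[0]
    return D @ np.linalg.inv(np.eye(n) - D)

# toy 4-criteria direct-influence matrix (hypothetical survey data)
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)
T = dematel_total_relation(A)

# an impact-relations map keeps only links above a threshold; the mean of
# T is a naive placeholder for the maximum mean de-entropy choice
threshold = T.mean()
links = np.argwhere(T > threshold)
```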
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2014-06-01
We give evidence for the Big Fix. The theory of wormholes and multiverse suggests that the parameters of the Standard Model are fixed in such a way that the total entropy at the late stage of the universe is maximized, which we call the maximum entropy principle. In this paper, we discuss how it can be confirmed by experimental data, and we show that it is indeed true for the Higgs vacuum expectation value vh. We assume that the baryon number is produced by the sphaleron process, and that the current quark masses, the gauge couplings and the Higgs self-coupling are fixed when we vary vh. It turns out that the existence of atomic nuclei plays a crucial role in maximizing the entropy. This is reminiscent of the anthropic principle; however, in our case it is required by the fundamental law.
Lezon, Timothy R.; Banavar, Jayanth R.; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V.
2006-01-01
We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems. PMID:17138668
Magnetic and thermodynamic properties of the Pr-based ferromagnet PrGe2-δ
NASA Astrophysics Data System (ADS)
Matsumoto, Keisuke T.; Morioka, Naoya; Hiraoka, Koichi
2018-03-01
We investigated the magnetization, M, and specific heat, C, of ThSi2-type PrGe2-δ. A polycrystalline sample of PrGe2-δ was prepared by arc-melting. The magnetization divided by magnetic field, M/B, increased sharply and C showed a clear jump at the Curie temperature, TC, of 14.6 K; these results indicate that PrGe2-δ ordered ferromagnetically. The magnetic entropy at TC reached R ln 3, indicating a quasi-triplet crystalline electric field (CEF) ground state. The maximum value of the magnetic entropy change was 11.5 J/(kg K) for a field change of 7 T, which is comparable to those of other light rare-earth based magnetocaloric materials. This large magnetic entropy change was attributed to the quasi-triplet ground state of the CEF.
MaxEnt alternatives to Pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N., E-mail: sergei.molotkov@gmail.com
2012-12-15
Any key-generation session contains a finite number of quantum-state messages, and it is therefore important to understand the fundamental restrictions imposed on the minimal length of a string required to obtain a secret key with a specified length. The entropy uncertainty relations for smooth min and max entropies considerably simplify and shorten the proof of security. A proof of security of quantum key distribution with phase-temporal encryption is presented. Compared to other protocols, this protocol provides the maximum critical error up to which secure key distribution is guaranteed. In addition, unlike other basic protocols (of the BB84 type), which are vulnerable to an attack by 'blinding' of avalanche photodetectors, this protocol is stable with respect to such an attack and guarantees key security.
Negative specific heat of a magnetically self-confined plasma torus
Kiessling, Michael K.-H.; Neukirch, Thomas
2003-01-01
It is shown that the thermodynamic maximum-entropy principle predicts negative specific heat for a stationary, magnetically self-confined current-carrying plasma torus. Implications for the magnetic self-confinement of fusion plasma are considered. PMID:12576553
Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda
2015-02-01
In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm to allow their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing for the quantification of the eight targeted compounds. Because kahweol esters could be quantified either using the chromatogram obtained by setting the wavelength at 290 nm or using the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
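Spectral deconvolution of diode array data rests on the bilinear Beer-Lambert model D ≈ C S (data = concentrations × spectra). A minimal sketch under that assumption, using nonnegative least squares, is shown below; this is not the authors' implementation, and the reference spectra are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_chromatogram(D, S):
    """Spectral deconvolution sketch: D is the (times x wavelengths)
    diode-array data matrix and S is the (components x wavelengths)
    matrix of reference spectra.  Solving a nonnegative least-squares
    problem at each time point yields the (times x components)
    deconvoluted chromatograms."""
    C = np.zeros((D.shape[0], S.shape[0]))
    for t in range(D.shape[0]):
        C[t], _ = nnls(S.T, D[t])
    return C

# toy example: two hypothetical reference spectra and three time points
wl = np.linspace(250, 400, 151)
s1 = np.exp(-0.5 * ((wl - 290) / 10) ** 2)   # kahweol-like band
s2 = np.exp(-0.5 * ((wl - 270) / 15) ** 2)   # cafestol-like band
S = np.vstack([s1, s2])
D = np.outer([0.2, 0.8, 0.5], s1) + np.outer([0.7, 0.1, 0.4], s2)
C = deconvolve_chromatogram(D, S)
```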
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with the R program. The results are then displayed in tables to facilitate the comparisons.
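For the maximum likelihood part, the Rayleigh scale estimator has a closed form, σ̂ = sqrt(Σx²/(2n)). A small Monte Carlo sketch of the bias/MSE comparison follows (written in Python rather than the R program used in the paper; the sample size and true parameter are hypothetical):

```python
import numpy as np

def rayleigh_mle(x):
    """Closed-form maximum likelihood estimate of the Rayleigh scale:
    sigma_hat = sqrt(sum(x^2) / (2 n))."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

# Monte Carlo comparison by bias and MSE, in the spirit of the abstract
rng = np.random.default_rng(1)
sigma_true, n, reps = 2.0, 50, 2000
est = np.array([rayleigh_mle(rng.rayleigh(sigma_true, n))
                for _ in range(reps)])
bias = est.mean() - sigma_true
mse = np.mean((est - sigma_true) ** 2)
```

The Bayes estimators under the various loss functions would be benchmarked against the same bias/MSE figures.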
Kune, Christopher; Far, Johann; De Pauw, Edwin
2016-12-06
Ion mobility spectrometry (IMS) is a gas-phase separation technique which relies on differences in the collision cross sections (CCS) of ions. Ionic clouds of unresolved conformers overlap if the CCS difference is below the instrumental resolution expressed as CCS/ΔCCS. The experimental arrival time distribution (ATD) peak is then a superimposition of the various contributions weighted by their relative intensities. This paper introduces a strategy for accurate drift time determination of poorly resolved or unresolved conformers using traveling wave ion mobility spectrometry (TWIMS). The method establishes, through a calibration procedure, the link between the peak full width at half-maximum (fwhm) and the drift time of model compounds over a wide range of settings for wave heights and velocities. We modified a Gaussian equation to achieve the deconvolution of ATD peaks in which the fwhm is fixed according to our calibration procedure. The new fitting Gaussian equation depends on only two parameters: the apex of the peak (A) and the mean drift time value (μ). The standard deviation parameter (correlated with the fwhm) becomes a function of the drift time. This correlation function between μ and the fwhm is obtained using the TWIMS calibration procedure, which determines the maximum instrumental ion beam diffusion under a limited and controlled space-charge effect using ionic compounds that are detected as single conformers in the gas phase. This deconvolution process has been used to highlight the presence of poorly resolved conformers of crown ether complexes and peptides, leading to more accurate CCS determinations in better agreement with quantum chemistry predictions.
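A sketch of the fitting idea follows, assuming a hypothetical linear fwhm calibration in place of the instrument-specific one: only the apex A and mean drift time μ of each conformer are free, and the peak width is derived from μ.

```python
import numpy as np
from scipy.optimize import curve_fit

def fwhm_cal(mu, a=0.05, b=0.2):
    # hypothetical calibration: fwhm as a linear function of drift time,
    # standing in for the TWIMS calibration described in the paper
    return a * mu + b

def atd_peak(t, A, mu):
    """Gaussian whose width is tied to its position via the calibration,
    so only the apex A and mean drift time mu are free parameters."""
    sigma = fwhm_cal(mu) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def two_conformers(t, A1, mu1, A2, mu2):
    return atd_peak(t, A1, mu1) + atd_peak(t, A2, mu2)

# deconvolve a synthetic, poorly resolved ATD into two contributions
t = np.linspace(0, 10, 500)
rng = np.random.default_rng(2)
y = two_conformers(t, 1.0, 4.0, 0.6, 4.5) + 0.01 * rng.standard_normal(t.size)
popt, _ = curve_fit(two_conformers, t, y, p0=[1.0, 3.8, 0.5, 4.7])
```

Fixing the width through the calibration is what keeps the two-peak fit well posed even when the peaks are heavily overlapped.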
The many faces of the second law
NASA Astrophysics Data System (ADS)
Van den Broeck, C.
2010-10-01
There exists no perpetuum mobile of the second kind. We review the implications of this observation on the second law, on the efficiency of thermal machines, on Onsager symmetry, on Brownian motors and Brownian refrigerators, and on the universality of efficiency of thermal machines at maximum power. We derive a microscopic expression for the stochastic entropy production, and obtain from it the detailed and integral fluctuation theorem. We close with the remarkable observation that the second law can be split in two: the total entropy production is the sum of two contributions each of which is growing independently in time.
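The detailed and integral fluctuation theorems mentioned here are commonly written as follows (standard forms, with kB Boltzmann's constant; notation assumed rather than taken from the paper):

```latex
\frac{P(+\Delta S_{\mathrm{tot}})}{P(-\Delta S_{\mathrm{tot}})}
  = e^{\Delta S_{\mathrm{tot}}/k_B}
\quad\text{(detailed)},
\qquad
\left\langle e^{-\Delta S_{\mathrm{tot}}/k_B} \right\rangle = 1
\quad\text{(integral)}.
```

Applying Jensen's inequality to the integral version gives ⟨ΔS_tot⟩ ≥ 0, i.e., the second law holds on average while individual trajectories may transiently violate it.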
Shock melting and vaporization of metals.
NASA Technical Reports Server (NTRS)
Ahrens, T. J.
1972-01-01
The effect of initial porosity on shock induction of melting and vaporization is investigated for Ba, Sr, Li, Fe, Al, U, and Th. For the less compressible of these metals, it is found that for a given strong shock-generation system (explosive in contact, or flyer-plate impact) an optimum initial specific volume exists such that the total entropy production, and hence the amount of metal liquid or vapor, is a maximum. Initial volumes from 1.4 to 2.0 times crystal volumes, depending on the metal sample and shock-inducing system, will result in optimum post-shock entropies.
The adiabatic piston: a perpetuum mobile in the mesoscopic realm
NASA Astrophysics Data System (ADS)
Crosignani, Bruno; Porto, Paolo; Conti, Claudio
2004-03-01
A detailed analysis of the adiabatic-piston problem reveals, for a finely-tuned choice of the spatial dimensions of the system, peculiar dynamical features that challenge the statement that an isolated system necessarily reaches a time-independent equilibrium state. In particular, the piston behaves like a perpetuum mobile, i.e., it never comes to a stop but keeps wandering, undergoing sizeable oscillations around the position corresponding to maximum entropy; this has remarkable implications on the entropy changes of a mesoscopic isolated system and on the limits of validity of the second law of thermodynamics in the mesoscopic realm.
A maximum (non-extensive) entropy approach to equity options bid-ask spread
NASA Astrophysics Data System (ADS)
Tapiero, Oren J.
2013-07-01
The cross-section of options bid-ask spreads with their strikes is modelled by maximising the Kaniadakis entropy. A theoretical model results in which the bid-ask spread depends explicitly on the implied volatility, the probability of expiring at-the-money, and an asymmetric information parameter (κ). Considering AIG as a test case for the period between January 2006 and October 2008, we find that information flows uniquely from the trading activity in the underlying asset to its derivatives, suggesting that κ is possibly an option-implied measure of the current state of trading liquidity in the underlying asset.
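For reference, the Kaniadakis (κ-) entropy being maximized is usually defined as follows (standard form, not taken from the paper):

```latex
S_\kappa = -\sum_i p_i \ln_\kappa p_i ,
\qquad
\ln_\kappa x = \frac{x^{\kappa}-x^{-\kappa}}{2\kappa},
\qquad
\lim_{\kappa\to 0}\ln_\kappa x = \ln x ,
```

so that the Shannon (Boltzmann-Gibbs) entropy is recovered in the κ → 0 limit, with κ controlling the heaviness of the distribution's tails.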
Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging
Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.
2014-01-01
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
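As an illustration of the kind of check described here, an iterative Richardson-Lucy deconvolution followed by an intensity-ratio measurement can be sketched with scikit-image. This is a generic sketch, not the AutoQuant X3 algorithm; the PSF and test image are synthetic.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# hypothetical 2D Gaussian PSF standing in for a measured wide-field PSF
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()

rng = np.random.default_rng(3)
truth = np.zeros((128, 128))
truth[40:50, 40:50] = 1.0           # bright "microsphere"
truth[80:86, 90:96] = 0.4           # dimmer "microsphere"
blurred = fftconvolve(truth, psf, mode='same')
blurred += 0.001 * rng.standard_normal(blurred.shape)

# iterative deconvolution; the parameter is named num_iter in recent
# scikit-image releases (iterations in older ones)
restored = richardson_lucy(np.clip(blurred, 0, None), psf, num_iter=30)

# quantitative check in the spirit of the calibration study: compare
# integrated intensities of a region before and after deconvolution
ratio = restored[38:52, 38:52].sum() / blurred[38:52, 38:52].sum()
```

A ratio near unity for regions of different brightness is the kind of evidence that relative quantitative relationships survive the deconvolution.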
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loading, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
Entropic bounds on currents in Langevin systems
NASA Astrophysics Data System (ADS)
Dechant, Andreas; Sasa, Shin-ichi
2018-06-01
We derive a bound on generalized currents for Langevin systems in terms of the total entropy production in the system and its environment. For overdamped dynamics, any generalized current is bounded by the total rate of entropy production. We show that this entropic bound on the magnitude of generalized currents imposes power-efficiency tradeoff relations for ratchets in contact with a heat bath: Maximum efficiency—Carnot efficiency for a Smoluchowski-Feynman ratchet and unity for a flashing or rocking ratchet—can only be reached at vanishing power output. For underdamped dynamics, while there may be reversible currents that are not bounded by the entropy production rate, we show that the output power and heat absorption rate are irreversible currents and thus obey the same bound. As a consequence, a power-efficiency tradeoff relation holds not only for underdamped ratchets but also for periodically driven heat engines. For weak driving, the bound results in additional constraints on the Onsager matrix beyond those imposed by the second law. Finally, we discuss the connection between heat and entropy in a nonthermal situation where the friction and noise intensity are state dependent.
Velázquez-Gutiérrez, Sandra Karina; Figueira, Ana Cristina; Rodríguez-Huezo, María Eva; Román-Guerrero, Angélica; Carrillo-Navas, Hector; Pérez-Alonso, César
2015-05-05
Freeze-dried chia mucilage adsorption isotherms were determined at 25, 35 and 40°C and fitted with the Guggenheim-Anderson-de Boer model. The integral thermodynamic properties (enthalpy and entropy) were estimated with the Clausius-Clapeyron equation. The pore radius of the mucilage, calculated with the Kelvin equation, varied from 0.87 to 6.44 nm in the temperature range studied. The point of maximum stability (minimum integral entropy) ranged between 7.56 and 7.63 kg H2O per 100 kg of dry solids (d.s.) (water activity of 0.34-0.53). Enthalpy-entropy compensation for the mucilage showed two isokinetic temperatures: (i) one occurring at low moisture contents (0-7.56 kg H2O per 100 kg d.s.), controlled by changes in water entropy; and (ii) another occurring in the moisture interval of 7.56-24 kg H2O per 100 kg d.s., which was enthalpy driven. The glass transition temperature Tg of the mucilage fluctuated between 42.93 and 57.93°C. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.
2008-12-01
It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
Diffusivity anomaly in modified Stillinger-Weber liquids
NASA Astrophysics Data System (ADS)
Sengupta, Shiladitya; Vasisht, Vishwas V.; Sastry, Srikanth
2014-01-01
By modifying the tetrahedrality (the strength of the three body interactions) in the well-known Stillinger-Weber model for silicon, we study the diffusivity of a series of model liquids as a function of tetrahedrality and temperature at fixed pressure. Previous work has shown that at constant temperature, the diffusivity exhibits a maximum as a function of tetrahedrality, which we refer to as the diffusivity anomaly, in analogy with the well-known anomaly in water upon variation of pressure at constant temperature. We explore to what extent the structural and thermodynamic changes accompanying changes in the interaction potential can help rationalize the diffusivity anomaly, by employing the Rosenfeld relation between diffusivity and the excess entropy (over the ideal gas reference value), and the pair correlation entropy, which provides an approximation to the excess entropy in terms of the pair correlation function. We find that in the modified Stillinger-Weber liquids, the Rosenfeld relation works well above the melting temperatures but exhibits deviations below, with the deviations becoming smaller for smaller tetrahedrality. Further we find that both the excess entropy and the pair correlation entropy at constant temperature go through maxima as a function of the tetrahedrality, thus demonstrating the close relationship between structural, thermodynamic, and dynamical anomalies in the modified Stillinger-Weber liquids.
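The two relations invoked here are commonly written as follows (standard forms of the Rosenfeld scaling and the two-body excess entropy; notation assumed):

```latex
D^{*} \approx a\, e^{\,b\, s_{\mathrm{ex}}},
\qquad
s_{2} = -2\pi\rho \int_{0}^{\infty}
\left[\, g(r)\ln g(r) - g(r) + 1 \,\right] r^{2}\, dr ,
```

where D* is the reduced diffusivity, s_ex the excess entropy per particle in units of kB, a and b material-dependent constants, ρ the number density, and g(r) the pair correlation function; s2 is the pair-correlation approximation to s_ex discussed in the abstract.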
Pfeiffer, Keram; French, Andrew S.
2015-01-01
Naturalistic signals were created from vibrations made by locusts walking on a Sansevieria plant. Both naturalistic and Gaussian noise signals were used to mechanically stimulate VS-3 slit-sense mechanoreceptor neurons of the spider, Cupiennius salei, with stimulus amplitudes adjusted to give similar firing rates for either stimulus. Intracellular microelectrodes recorded action potentials, receptor potential, and receptor current, using current clamp and voltage clamp. Frequency response analysis showed that naturalistic stimulation contained relatively more power at low frequencies, and caused increased neuronal sensitivity to higher frequencies. In contrast, varying the amplitude of Gaussian stimulation did not change neuronal dynamics. Naturalistic stimulation contained less entropy than Gaussian, but signal entropy was higher than stimulus in the resultant receptor current, indicating addition of uncorrelated noise during transduction. The presence of added noise was supported by measuring linear information capacity in the receptor current. Total entropy and information capacity in action potentials produced by either stimulus were much lower than in earlier stages, and limited to the maximum entropy of binary signals. We conclude that the dynamics of action potential encoding in VS-3 neurons are sensitive to the form of stimulation, but entropy and information capacity of action potentials are limited by firing rate. PMID:26578975
Informational temperature concept and the nature of self-organization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Shu-Kun
1996-12-31
Self-organization phenomena are spontaneous processes. Their behavior should be governed by the second law of thermodynamics. The dissipative structure theory of the Prigogine school of thermodynamics claims that "order out of chaos" arises through "self-organization" and challenges the validity of the second law of thermodynamics. Unfortunately this theory is questionable. Therefore we have to reconsider the related fundamental theoretical problems. Informational entropy (S) and information (I) are related by S = S_max - I, where S_max is the maximum informational entropy. This conforms with the broadly accepted definition that entropy is the information loss. As the informational entropy concept has been proved to be useful, it will be convenient to define an informational temperature, T_I. This can be related to the energy E and the informational entropy S. Information registration is a process with ΔI > 0, or ΔS < 0, and involves energetically excited states (ΔE > 0). Therefore, T_I is negative, and has the opposite sign of the conventional thermodynamic temperature, T. This concept is useful for clarifying the concepts of "order" and "disorder" of static structures and for characterizing many typical information-loss processes of self-organization.
NASA Astrophysics Data System (ADS)
Špiclin, Žiga; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2012-03-01
Spatial resolution of hyperspectral imaging systems can vary significantly due to axial optical aberrations that originate from wavelength-induced index-of-refraction variations of the imaging optics. For systems that have a broad spectral range, the spatial resolution will vary significantly both with respect to the acquisition wavelength and with respect to the spatial position within each spectral image. Variations of the spatial resolution can be effectively characterized as part of the calibration procedure by a local image-based estimation of the point-spread function (PSF) of the hyperspectral imaging system. The estimated PSF can then be used in image deconvolution methods to improve the spatial resolution of the spectral images. We estimated the PSFs from the spectral images of a line grid geometric caliber. From individual line segments of the line grid, the PSF was obtained by a non-parametric estimation procedure that used an orthogonal series representation of the PSF. By using the non-parametric estimation procedure, the PSFs were estimated at different spatial positions and at different wavelengths. The variations of the spatial resolution were characterized by the radius and the full-width at half-maximum of each PSF and by the modulation transfer function, computed from images of a USAF1951 resolution target. The estimation and characterization of the PSFs and the image deconvolution based spatial resolution enhancement were tested on images obtained by a hyperspectral imaging system with an acousto-optic tunable filter in the visible spectral range. The results demonstrate that the spatial resolution of the acquired spectral images can be significantly improved using the estimated PSFs and image deconvolution methods.
NASA Technical Reports Server (NTRS)
Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.
1984-01-01
Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
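A function-domain iterative scheme of the kind described is typically a Van Cittert-type iteration; the sketch below is generic rather than the authors' algorithm, and the optional smoothing kernel stands in for the data smoothing they discuss.

```python
import numpy as np

def van_cittert(g, h, n_iter=50, alpha=1.0, smooth=None):
    """Function-domain iterative deconvolution (Van Cittert-type sketch):
    repeatedly add the residual between the data g and the reblurred
    estimate.  h is the peak-shape (instrument) function; smoothing the
    residual tames the noise amplification inherent to the iteration."""
    f = g.copy()
    for _ in range(n_iter):
        residual = g - np.convolve(f, h, mode='same')
        if smooth is not None:
            residual = np.convolve(residual, smooth, mode='same')
        f = f + alpha * residual
    return f

# toy usage: sharpen two overlapping mass peaks
k = np.arange(-25, 26, dtype=float)
h = np.exp(-0.5 * (k / 3.0) ** 2)
h /= h.sum()
peaks = np.zeros(200)
peaks[[95, 105]] = [1.0, 0.7]
spectrum = np.convolve(peaks, h, mode='same')
sharpened = van_cittert(spectrum, h, n_iter=100)
```

The Fourier transform route in the paper reaches a similar end by spectral division with function-continuation; both share the need for smoothing to control noise and, in the Fourier case, Gibbs oscillations.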
Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne
2017-02-15
In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that allows improving the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
Broadband ion mobility deconvolution for rapid analysis of complex mixtures.
Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj
2018-05-04
High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.
NASA Astrophysics Data System (ADS)
Wang, Yi Jiao; Feng, Qing Yi; Chai, Li He
As one of the most important financial markets and one of the main parts of the economic system, the stock market has become a research focus in economics. The stock market is a typical complex open system far from equilibrium. Many available models that have contributed greatly to market research are strong in describing the market; however, they ignore the strong nonlinear interactions among active agents and are weak in revealing the underlying dynamic mechanisms of the market's structural evolution. From an econophysical perspective, this paper analyzes the complex interactions among agents and defines the generalized entropy in stock markets. A nonlinear evolutionary dynamic equation for stock markets is then derived from the Maximum Generalized Entropy Principle. Simulations are accordingly conducted for a typical case with the given data, by which the structural evolution of the stock market system is demonstrated. Some discussions and implications are finally provided.
Entropy generation minimization for the sloshing phenomenon in half-full elliptical storage tanks
NASA Astrophysics Data System (ADS)
Saghi, Hassan
2018-02-01
In this paper, the entropy generation in the sloshing phenomenon was obtained for elliptical storage tanks and the optimum tank geometry was suggested. To do this, a numerical model was developed to simulate the sloshing phenomenon using a coupled Reynolds-Averaged Navier-Stokes (RANS) solver and the Volume-of-Fluid (VOF) method. The RANS equations were discretized and solved using the staggered-grid finite difference and SMAC methods, and the available data were used for model validation. Some parameters consisting of the maximum free surface displacement (MFSD), the maximum horizontal force exerted on the tank perimeter (MHF), the tank perimeter (TP), and the total entropy generation (Sgen) were introduced as design criteria for elliptical storage tanks. The entropy generation distribution provides designers with useful information about the causes of energy loss. In this step, horizontal periodic sway motions of the form X = a_m sin(ωt) were applied to elliptical storage tanks with different aspect ratios, namely ratios of the large diameter to the small diameter of the elliptical tank (AR). Then, the effect of a_m and ω on the results was studied. The results show that the MFSD and MHF vary almost linearly with the sway motion amplitude, and that an increase in the AR causes a decrease in the MFSD and MHF. The results also show that the MFSD and MHF vary nonlinearly with the sway motion angular frequency, and that an increase in the AR makes this dependence nearly linear. In addition, the MFSD and MHF were minimized for a sway motion with a 7 rad/s angular frequency. Finally, the results show that an elliptical storage tank with AR = 1.2-1.4 is the optimum section.
Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory
NASA Astrophysics Data System (ADS)
Taylor, Jamie M.
2016-09-01
This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.
Dong, Xinzhe; Xing, Ligang; Wu, Peipei; Fu, Zheng; Wan, Honglin; Li, Dengwang; Yin, Yong; Sun, Xiaorong; Yu, Jinming
2013-01-01
To explore the relationship of a new PET image parameter, (18)F-fluorodeoxyglucose ((18)F-FDG) uptake heterogeneity assessed by texture analysis, with the maximum standardized uptake value (SUV(max)) and tumor TNM staging. Forty consecutive patients with esophageal squamous cell carcinoma were enrolled. All patients underwent whole-body preoperative (18)F-FDG PET/CT. Heterogeneity of intratumoral (18)F-FDG uptake was assessed on the basis of the textural features (entropy and energy) of the three-dimensional images using MATLAB software. The correlations between the textural parameters and SUV(max), histological grade, tumor location, and TNM stage were analyzed. Tumors with higher SUV(max) were seen to be more heterogeneous in (18)F-FDG uptake. Significant correlations were observed between T stage and SUV(max) (r_s=0.390, P=0.013), entropy (r_s=0.693, P<0.001), and energy (r_s=-0.469, P=0.002). Correlations were also found between SUV(max), entropy, energy, and N stage (r_s=0.326, P=0.04; r_s=0.501, P=0.001; r_s=-0.413, P=0.008). The American Joint Committee on Cancer stage correlated significantly with all metabolic parameters. The receiver-operating characteristic (ROC) curve demonstrated an entropy of 4.699 as the optimal cutoff point for detecting tumors above stage IIb, with an area under the ROC curve of 0.789 (P<0.001). This study provides initial evidence for the relationship between the new parameter of tumor uptake heterogeneity and the commonly used simplistic parameters of SUV and tumor stage. Our findings suggest a complementary role of these parameters in the staging and prognosis of esophageal squamous cell carcinoma.
ERIC Educational Resources Information Center
Smith, Michael J.; Vincent, Colin A.
1989-01-01
Uses reversible electrochemical cells near equilibrium to study basic thermodynamic concepts such as maximum work and free energy. Selects sealed, miniature, commercial cells to obtain accurate measurement of enthalpy, entropy, and Gibbs free energy. (MVL)
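The thermodynamic relations exploited in such experiments are the standard ones (E the cell emf, n the number of electrons transferred, F the Faraday constant):

```latex
\Delta G = -nFE,
\qquad
\Delta S = nF\left(\frac{\partial E}{\partial T}\right)_{p},
\qquad
\Delta H = \Delta G + T\,\Delta S ,
```

so that measuring the open-circuit emf and its temperature coefficient for a cell near equilibrium yields all three quantities at once.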
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quigley, B; Smith, C; La Riviere, P
2016-06-15
Purpose: To evaluate the resolution and sensitivity of XIL imaging using a surface radiance simulation based on optical diffusion and maximum likelihood expectation maximization (MLEM) image reconstruction. XIL imaging seeks to determine the distribution of luminescent nanophosphors, which could be used as nanodosimeters or radiosensitizers. Methods: The XIL simulation generated a homogeneous slab with optical properties similar to tissue. X-ray activated nanophosphors were placed at 1.0 cm depth in the tissue in concentrations of 10⁻⁴ g/mL in two volumes of 10 mm³ with varying separations between each other. An analytical optical diffusion model determined the surface radiance from the photon distributions generated at depth in the tissue by the nanophosphors. The simulation then determined the detected luminescent signal collected with an f/1.0 aperture lens and a back-illuminated EMCCD camera. The surface radiance was deconvolved using an MLEM algorithm to estimate the nanophosphor distribution and the resolution. To account for both Poisson and Gaussian noise, a shifted-Poisson imaging model was used in the deconvolution. The deconvolved distributions were fitted to a Gaussian after radial averaging to measure the full width at half maximum (FWHM), and the peak-to-peak distance between distributions was measured to determine the resolving power. Results: Simulated surface radiances for doses from 1 mGy to 100 cGy were computed. Each image was deconvolved using 1000 iterations. At 1 mGy, deconvolution reduced the FWHM of the nanophosphor distribution by 65% and the resolving power was 3.84 mm. Decreasing the dose from 100 cGy to 1 mGy increased the FWHM by 22% but allowed for a dose reduction of a factor of 1000. Conclusion: Deconvolving the detected surface radiance allows for dose reduction while maintaining the resolution of the nanophosphors. It proves to be a useful technique for overcoming the resolution limitations of diffuse optical imaging in tissue. C. S. acknowledges support from the NIH National Institute of General Medical Sciences (Award number R25GM109439, Project Title: University of Chicago Initiative for Maximizing Student Development, IMSD). B. Q. and P. L. acknowledge support from NIH grant R01EB017293.
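The MLEM update used in this kind of deconvolution is conventionally written as follows (standard form; H the blurring matrix, g the measured radiance, f the nanophosphor distribution; notation assumed):

```latex
f_j^{(k+1)} = \frac{f_j^{(k)}}{\sum_i H_{ij}}
\sum_i H_{ij}\,\frac{g_i}{\sum_{j'} H_{ij'}\, f_{j'}^{(k)}} .
```

The shifted-Poisson model mentioned in the abstract amounts to adding 2σᵢ² (twice the Gaussian readout noise variance) to both the data gᵢ and the forward projection, so that the mixed Poisson-Gaussian statistics are treated approximately as Poisson within the same update.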
Using deconvolution to improve the metrological performance of the grid method
NASA Astrophysics Data System (ADS)
Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis
2013-06-01
The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
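Wiener filtering with the noise autocorrelation taken into account, the best performer above, has the standard frequency-domain form (notation assumed: G the observed map spectrum, H the transfer function of the analysis-window envelope, S_n and S_f the noise and signal power spectra):

```latex
\widehat{F}(\nu) = \frac{H^{*}(\nu)}{|H(\nu)|^{2} + S_{n}(\nu)/S_{f}(\nu)}\; G(\nu) .
```

Because the noise in the strain maps is spatially correlated, S_n(ν) is not flat, which is why ignoring the autocorrelation (white-noise Wiener filtering) destabilizes the restoration as reported in the study.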
NASA Astrophysics Data System (ADS)
Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert
2014-08-01
Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies 8, 4, 2, 1, 0.5, 0.25 kHz separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. The results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
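The matrix formulation behind LS deconvolution can be sketched directly. This is a generic sketch, not the authors' code; circular boundary handling is used for brevity, and the toy onsets are hypothetical.

```python
import numpy as np

def ls_deconvolution(eeg, onsets_by_stim, resp_len):
    """Least-squares deconvolution sketch for overlapping evoked
    responses: build a 0/1 design matrix M whose columns are lagged
    copies of each stimulus onset train, then solve min ||eeg - M b||^2.
    b concatenates the recovered responses, one per stimulus type."""
    n = len(eeg)
    cols = []
    for onsets in onsets_by_stim:
        train = np.zeros(n)
        train[onsets] = 1.0
        # lagged copies of the onset train span one response duration
        # (np.roll wraps around: circular boundary, fine for a sketch)
        cols += [np.roll(train, lag) for lag in range(resp_len)]
    M = np.stack(cols, axis=1)
    # a well-conditioned M, controlled by the jittered SOAs, is what
    # makes the recovery stable, as the abstract emphasizes
    cond = np.linalg.cond(M)
    b, *_ = np.linalg.lstsq(M, eeg, rcond=None)
    return b.reshape(len(onsets_by_stim), resp_len), cond

# minimal usage: one stimulus type with jittered onsets
eeg = np.zeros(2000)
onsets = np.array([100, 320, 560, 790, 1050, 1400, 1680])
true_resp = np.hanning(150)
for o in onsets:
    eeg[o:o + 150] += true_resp
resp, cond = ls_deconvolution(eeg, [onsets], 150)
```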
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed as the same continuous GRBF model; thus image degradation is simplified as the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. In order to overcome the drawback of long computation times, the methods of graphics processing unit multithreading or increasing the space interval of control points are adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of an important geotechnical parameter, the compression modulus Es, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates the multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between the Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts
Onken, Arno; Dragoi, Valentin; Obermayer, Klaus
2012-01-01
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required in order to estimate higher-order correlations and resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests - for a given divergence measure of interest - whether the experimental data lead to the rejection of the null hypothesis that it was generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping which allows regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature and not precipitation were the variables most important for model production. This fine-scale result highlights the importance of habitat suitability modelling as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species’ distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
Scalar flux modeling in turbulent flames using iterative deconvolution
NASA Astrophysics Data System (ADS)
Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.
2018-04-01
In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application to reacting flows is comparatively limited. Deconvolution methods originate from clearly defined operations, and in theory they can be used to model any unclosed term in the filtered equations, including the scalar flux. In this study, an iterative deconvolution algorithm is used to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
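As a concrete illustration of this closure strategy, here is a minimal sketch assuming a periodic 1-D field, a Gaussian filter, and plain Van Cittert iteration; the synthetic fields, filter width, and iteration count are illustrative choices, not those of the study. The deconvolved velocity and scalar are multiplied and explicitly re-filtered to estimate the unclosed scalar flux.

```python
import numpy as np

def gaussian_filter_1d(field, width, dx):
    """Explicitly filter a periodic 1-D field with a Gaussian LES filter."""
    k = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)
    G = np.exp(-(k * width) ** 2 / 24.0)   # Gaussian filter transfer function
    return np.real(np.fft.ifft(np.fft.fft(field) * G))

def van_cittert_deconvolve(filtered, width, dx, n_iter=5):
    """Approximate inverse filtering by Van Cittert iteration:
    phi_{m+1} = phi_m + (filtered - G(phi_m))."""
    phi = filtered.copy()
    for _ in range(n_iter):
        phi += filtered - gaussian_filter_1d(phi, width, dx)
    return phi

# Synthetic velocity and scalar fields on a periodic unit interval
n = 256
dx, width = 1.0 / n, 4.0 / n
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x) + 0.1 * np.sin(20 * np.pi * x)
c = np.exp(-((x - 0.5) / 0.1) ** 2)        # smooth scalar bump

u_bar = gaussian_filter_1d(u, width, dx)
c_bar = gaussian_filter_1d(c, width, dx)
u_star = van_cittert_deconvolve(u_bar, width, dx)
c_star = van_cittert_deconvolve(c_bar, width, dx)

# Closure: filter the product of deconvolved fields, subtract the resolved part
flux_model = gaussian_filter_1d(u_star * c_star, width, dx) - u_bar * c_bar
flux_exact = gaussian_filter_1d(u * c, width, dx) - u_bar * c_bar
print("model/exact flux correlation:", np.corrcoef(flux_model, flux_exact)[0, 1])
```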
NASA Astrophysics Data System (ADS)
Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies and demonstrated marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
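To make the deconvolution step concrete, here is a minimal sketch of frequency-domain deconvolution of a tissue concentration time curve by an arterial input function, using a simple Tikhonov-style analytical filter as a stand-in for the paper's AFF/ASSF filters; the AIF model, residue function, and regularization parameter `alpha` are illustrative assumptions, not the authors' values.

```python
import numpy as np

def fourier_deconvolve(tissue_ctc, aif, dt, alpha=0.1):
    """Estimate the scaled residue function k(t) = BF * R(t) from the
    convolution model tissue_ctc = aif (*) k, via a regularized
    frequency-domain inverse (a stand-in for FDD-style filtering)."""
    n = 2 * len(aif)                       # zero-pad to reduce wrap-around
    A = np.fft.rfft(aif, n)
    C = np.fft.rfft(tissue_ctc, n)
    # Tikhonov-style analytical filter: A* / (|A|^2 + (alpha |A|_max)^2)
    denom = np.abs(A) ** 2 + (alpha * np.abs(A).max()) ** 2
    return np.fft.irfft(C * np.conj(A) / denom, n)[: len(aif)] / dt

# Synthetic example: gamma-variate AIF, exponential residue function
dt = 1.0
t = np.arange(60.0)
aif = (t / 5.0) ** 3 * np.exp(-t / 5.0)
bf, mtt = 0.02, 8.0
residue = bf * np.exp(-t / mtt)
ctc = np.convolve(aif, residue)[: len(t)] * dt

k = fourier_deconvolve(ctc, aif, dt)
print("BF  (max of k):", k.max())                  # blood flow estimate
print("BV  (sum k dt):", k.sum() * dt)             # blood volume estimate
print("MTT (BV / BF): ", k.sum() * dt / k.max())   # mean transit time
```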
An Integrated Theory of Everything (TOE)
NASA Astrophysics Data System (ADS)
Colella, Antonio
2014-03-01
An Integrated TOE unifies all known physical phenomena from the Planck cube to the Super Universe (multiverse). Each matter/force particle is represented by a Planck cube string. Any Super Universe object is a volume of contiguous Planck cubes. Super force Planck cube string singularities existed at the start of all universes. The foundations of an Integrated TOE are twenty independent existing theories, which, without sacrificing their integrity, are replaced by twenty interrelated amplified theories. Amplifications of Higgs force theory are key to an Integrated TOE and include: 64 supersymmetric Higgs particles; super force condensations to 17 matter particles/associated Higgs forces; bidirectional spontaneous symmetry breaking; and the sum of 8 permanent Higgs force energies as dark energy. Stellar black hole theory was amplified to include a quark star (matter) with mass, volume, near zero temperature, and maximum entropy. A black hole (energy) has energy, minimal volume (singularity), near infinite temperature, and minimum entropy. Our precursor universe's super supermassive quark star (matter) evaporated to a super supermassive black hole (energy). This transferred total conserved energy/mass and transformed entropy from maximum to minimum. Integrated Theory of Everything Book Video: https://www.youtube.com/watch?v=4a1c9IvdoGY Research Article Video: http://www.youtube.com/watch?v=CD-QoLeVbSY Research Article: http://toncolella.files.wordpress.com/2012/07/m080112.pdf.
Electrical and morphological properties of magnetocaloric nano ZnNi ferrite
NASA Astrophysics Data System (ADS)
Hemeda, O. M.; Mostafa, Nasser Y.; Abd Elkader, Omar H.; Hemeda, D. M.; Tawfik, A.; Mostafa, M.
2015-11-01
A series of Zn1-xNixFe2O4 nano ferrite compositions (with x=0, 0.2, 0.4, 0.6, 0.8, and 1) were synthesized using the combustion technique. The powder samples were characterized by XRD. The X-ray analysis showed that the samples were single-phase cubic spinel in structure. The AC resistivity decreases with increasing frequency from 1 kHz to 10 kHz: as the frequency of the applied field increases, the hopping of charge carriers also increases, thereby decreasing the resistivity. A shift in the dielectric maximum toward higher temperature, from 536 K to 560 K at 1 kHz, is observed with increasing Ni content. The HRTEM (high resolution TEM) images of four compositions show lattice spacings which confirm the crystalline nature of the samples. The SEM surface morphology of the samples consists of grains with a relatively homogeneous distribution and an average size varying from 0.85 to 0.92 μm. The values for entropy change in this work are still small but are significantly higher than the values that have been reported for iron oxide nanoparticles. The magnetic entropy change was calculated from measurements of M(H, T), where H is the magnetic field and T is the temperature. The maximum value of the entropy change (ΔS) is obtained near the Curie temperature, which makes these materials candidates for magnetocaloric applications.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and show, with the help of simple examples of well-known chemical and physical systems, that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of ℝ^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and the number of cells per projection grow, indicating that fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
On the maximum-entropy/autoregressive modeling of time series
NASA Technical Reports Server (NTRS)
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is a convenient but ambiguous visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias in models run at larger scales that neglect subgrid-scale variability. In the present study, we investigate whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 × 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Image construction from the IRAS survey and data fusion
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. R.
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51, with an average spatial resolution of 45 arc seconds, is presented using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters, temperature, density and composition, can be made from the data without prior image (re)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
NASA Astrophysics Data System (ADS)
Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei
2015-12-01
Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.
Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick
2013-01-01
Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. Copyright © 2012 Elsevier Ltd. All rights reserved.
Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.
O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao
2017-07-01
Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
Maximum-entropy reconstruction method for moment-based solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Summy, Dustin; Pullin, Dale
2013-11-01
We describe a method for a moment-based solution of the Boltzmann equation. This starts with moment equations for a (10 + 9N)-moment representation, N = 0, 1, 2, .... The partial-differential equations (PDEs) for these moments are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy construction of the velocity distribution function f(c, x, t), using the known moments, within a finite-box domain of single-particle-velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature, while collision terms are calculated using a Monte-Carlo method. This allows integration of the moment PDEs in time. Illustrative examples will include zero-space-dimensional relaxation of f(c, t) from a Mott-Smith-like initial condition toward equilibrium, and one-space-dimensional, finite Knudsen number, planar Couette flow. Comparison with results using the direct-simulation Monte-Carlo method will be presented.
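The finite-domain maximum-entropy reconstruction at the heart of this closure can be sketched compactly. The following is a 1-D illustration under stated assumptions (power-moment constraints, a fixed velocity box, and a convex-dual solve via BFGS); the function names and the Maxwellian test case are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_reconstruct(c_grid, moments):
    """Maximum-entropy reconstruction of f(c) on a finite velocity box,
    matching the given power moments.  f = exp(-lam . (1, c, c^2, ...)),
    with multipliers found by minimizing the convex dual potential
    Gamma(lam) = int exp(-lam . T(c)) dc + lam . moments."""
    m = len(moments)
    dc = c_grid[1] - c_grid[0]
    powers = np.vstack([c_grid ** k for k in range(m)])

    def dual(lam):
        return np.exp(-lam @ powers).sum() * dc + lam @ np.asarray(moments)

    def grad(lam):
        f = np.exp(-lam @ powers)
        return -(powers @ f) * dc + np.asarray(moments)

    res = minimize(dual, np.zeros(m), jac=grad, method="BFGS")
    return np.exp(-res.x @ powers)

# Test: recover a Maxwellian restricted to the finite box [-5, 5]
c = np.linspace(-5.0, 5.0, 400)
dc = c[1] - c[0]
target = np.exp(-c ** 2) / np.sqrt(np.pi)
moms = [np.sum(c ** k * target) * dc for k in range(4)]   # moments 0..3
f_rec = maxent_reconstruct(c, moms)
print("max pointwise error:", np.abs(f_rec - target).max())
```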
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
NASA Astrophysics Data System (ADS)
Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang
2017-10-01
The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142 and National Natural Science Foundation Abstract of China under 11575014 and 11375053.
Zhang, Liguo; Sun, Jianguo; Yin, Guisheng; Zhao, Jing; Han, Qilong
2015-01-01
In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where the structured light sources are always disturbed by various noise sources, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noise in structured light images. An adaptive monochromatic space is applied to preprocess the image with ambient noise. In the monochromatic image, the laser stripe is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall-climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and that the proposed algorithm contributes to the weld line inspection and the robot navigation. PMID:26110403
Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms
Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2014-01-01
This study presents fault detection of roller bearings through signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. The study found that, depending on the execution sequence of MED and the TKEO, the kurtosis sensitivity towards a defect on bearings could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of the IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system. PMID:24368701
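To make the two signal-processing steps concrete, here is a minimal sketch of Wiggins-style MED (iteratively solving for an FIR filter that maximizes a kurtosis-like norm of the output) followed by the TKEO; the filter length, fault period, and simulated signal are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter
from scipy.stats import kurtosis

def med_filter(x, L=30, n_iter=30):
    """Wiggins-style minimum entropy deconvolution: iterate for an FIR filter
    f that maximizes a kurtosis-like (varimax) norm of y = f * x."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) - 1 + L]
    R = toeplitz(r)                       # input autocorrelation matrix
    f = np.zeros(L)
    f[L // 2] = 1.0                       # delayed-spike initial filter
    for _ in range(n_iter):
        y = lfilter(f, 1.0, x)
        y3 = y ** 3
        # Cross-correlation of the input with y^3 drives the update
        g = np.array([np.dot(y3[l:], x[: len(x) - l]) for l in range(L)])
        f = np.linalg.solve(R, g)
        f /= np.linalg.norm(f)
    return f

def tkeo(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Simulated bearing vibration: periodic fault impulses through a resonance
rng = np.random.default_rng(0)
fs, n = 20000, 20000
t = np.arange(n) / fs
impulses = np.zeros(n)
impulses[::160] = 1.0                                    # fault impact train
h = np.exp(-800.0 * t[:200]) * np.sin(2 * np.pi * 3000.0 * t[:200])
x = np.convolve(impulses, h)[:n] + 0.2 * rng.standard_normal(n)

y = lfilter(med_filter(x), 1.0, x)
print("kurtosis raw / MED / MED+TKEO:",
      kurtosis(x), kurtosis(y), kurtosis(tkeo(y)))
```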
Universal features of fluctuations in globular proteins.
Erman, Burak
2016-06-01
Using data from 2000 non-homologous protein crystal structures, we show that the distribution of residue B factors of proteins collapses onto a single master curve. We show by maximum entropy arguments that this curve is a Gamma function whose order and dispersion are obtained from experimental data. The distribution for any given specific protein can be generated from the master curve by a linear transformation. Any perturbation of the B factor distribution of a protein, imposed at constant energy, causes a decrease in the entropy of the protein relative to that of the reference state. Proteins 2016; 84:721-725. © 2016 Wiley Periodicals, Inc.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are poorly suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which exploits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
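For reference, one of the baselines compared here, maximum entropy thresholding, can be sketched in a few lines (the classical Kapur formulation; the histogram range, bin count, and synthetic test image below are illustrative assumptions):

```python
import numpy as np

def kapur_threshold(image, bins=256):
    """Maximum entropy thresholding (Kapur et al.): pick the threshold t that
    maximizes the sum of Shannon entropies of the two histogram halves."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = P[t], 1.0 - P[t]
        if p0 <= 0.0 or p1 <= 0.0:
            continue
        q0, q1 = p[: t + 1] / p0, p[t + 1 :] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Unimodal background with a small bright "defect" patch
rng = np.random.default_rng(1)
img = rng.normal(120.0, 10.0, size=(200, 200))
img[90:110, 90:110] += 80.0
img = np.clip(img, 0, 255)
print("Kapur threshold:", kapur_threshold(img))
```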
Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory
NASA Astrophysics Data System (ADS)
Rahimi, A.; Zhang, L.
2012-12-01
Rainfall-runoff analysis is the key component of many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. It is known that conventional bivariate distributions are often unable to model the rainfall-runoff variables because they either have constraints on the range of the dependence or a fixed form for the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The distribution derived can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of marginal distributions, which includes two steps, (a) using the nonparametric statistics approach to detect modes and the underlying probability density, and (b) fitting the appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in the semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that: (1) the joint distribution derived successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions capture the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach for capturing dependence structures that cannot be captured by conventional bivariate joint distributions. (Figure: joint rainfall-runoff entropy-based PDF with corresponding marginal PDFs and histograms for the W12 watershed; Table: K-S test results and RMSE for univariate distributions derived from the maximum-entropy-based joint probability distribution.)
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel
2010-05-01
In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications, a simple crosscorrelation of the responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it additionally involves a summation over the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated, and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of these shortcomings by replacing the correlation process with deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates for the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches, but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multidimensional deconvolution and discuss applications of both approaches.
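The trace-by-trace variant discussed above is simple enough to sketch. The snippet below is a minimal 1-D illustration with water-level stabilization; the stabilization constant and the synthetic two-receiver test are assumptions of ours, and this is explicitly not a substitute for the multidimensional scheme the authors advocate.

```python
import numpy as np

def trace_deconvolution(u_b, u_a, eps=0.01):
    """Trace-by-trace interferometry by deconvolution: spectral division of
    the response at receiver B by that at receiver A, stabilized by a water
    level.  (A 1-D stand-in; multidimensional deconvolution instead inverts
    a full wavefield matrix over sources and receivers.)"""
    n = 2 * len(u_a)                                   # zero-pad
    A, B = np.fft.rfft(u_a, n), np.fft.rfft(u_b, n)
    denom = np.abs(A) ** 2
    denom = np.maximum(denom, eps * denom.max())       # water level
    return np.fft.irfft(B * np.conj(A) / denom, n)[: len(u_a)]

# Receiver B records a delayed, attenuated copy of what A records
rng = np.random.default_rng(2)
u_a = rng.standard_normal(500)
u_b = 0.6 * np.roll(u_a, 40)              # 40-sample propagation delay
g = trace_deconvolution(u_b, u_a)
print("retrieved delay (samples):", np.argmax(g))   # expect ~40
```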
Information Flow Between Resting-State Networks.
Diez, Ibai; Erramuzpe, Asier; Escudero, Iñaki; Mateos, Beatriz; Cabrera, Alberto; Marinazzo, Daniele; Sanz-Arigita, Ernesto J; Stramaglia, Sebastiano; Cortes Diaz, Jesus M
2015-11-01
The resting brain dynamics self-organize into a finite number of correlated patterns known as resting-state networks (RSNs). It is well known that techniques such as independent component analysis can separate the brain activity at rest to provide such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting-state magnetic resonance imaging. After hemodynamic response function blind deconvolution of all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN and then computes the IF (estimated here in terms of transfer entropy) between the different RSNs while systematically increasing k (the number of principal components used in the calculation). When k=1, this method is equivalent to computing IF using the average of all voxel activities in each RSN. For k≥1, our method calculates the k multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension dependent, increasing from k=1 (i.e., the average voxel activity) up to a maximum occurring at k=5, and finally decaying to zero for k≥10. This suggests that a small number of components (close to five) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for any generic data, can be used for group comparison in health or disease. To illustrate this, we have calculated the inter-RSN IF in a data set of Alzheimer's disease (AD) and found that the most significant differences between AD and controls occurred for k=2, with AD showing increased IF with respect to controls. The spatial localization of the k=2 component within RSNs allows the characterization of IF differences between AD and controls.
Habitat suitability of patch types: A case study of the Yosemite toad
Christina T. Liang; Thomas J. Stohlgren
2011-01-01
Understanding patch variability is crucial in understanding the spatial population structure of wildlife species, especially for rare or threatened species.We used a well-tested maximum entropy species distribution model (Maxent) to map the Yosemite toad (Anaxyrus (= Bufo) canorus) in the Sierra Nevada...
Entropy production rate as a criterion for inconsistency in decision theory
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-05-01
Individual and group decisions are complex, often involving choosing an apt alternative from a multitude of options. Evaluating pairwise comparisons breaks down such complex decision problems into tractable ones. Pairwise comparison matrices (PCMs) are regularly used to solve multiple-criteria decision-making problems, for example, using Saaty’s analytic hierarchy process (AHP) framework. However, there are two significant drawbacks of using PCMs. First, humans evaluate PCMs in an inconsistent manner. Second, not all entries of a large PCM can be reliably filled by human decision makers. We address these two issues by first establishing a novel connection between PCMs and time-irreversible Markov processes. Specifically, we show that every PCM induces a family of dissipative maximum path entropy random walks (MERW) over the set of alternatives. We show that only ‘consistent’ PCMs correspond to detailed balanced MERWs. We identify the non-equilibrium entropy production in the induced MERWs as a metric of inconsistency of the underlying PCMs. Notably, the entropy production satisfies all of the recently laid out criteria for reasonable consistency indices. We also propose an approach to use incompletely filled PCMs in AHP. Potential future avenues are discussed as well.
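The construction the abstract describes can be prototyped directly. Below is a hedged sketch, assuming (as one plausible reading) that the PCM itself serves as the weight matrix of the induced maximum path entropy random walk; the eigenvector construction of the walk and the standard entropy production rate formula are the only ingredients, and all names and test matrices are illustrative. A consistent PCM yields zero entropy production; perturbing one pairwise judgment makes it positive.

```python
import numpy as np

def pcm_entropy_production(A):
    """Entropy production rate of the random walk induced by a pairwise
    comparison matrix A.  Hedged reading of the construction: the PCM itself
    is taken as the weight matrix of the maximum path entropy random walk
    P_ij = A_ij psi_j / (lambda psi_i), with psi the Perron eigenvector.
    A consistent PCM (A_ij = w_i / w_j) gives detailed balance and sigma = 0."""
    evals, vecs = np.linalg.eig(A)
    k = np.argmax(evals.real)
    lam = evals[k].real
    psi = np.abs(vecs[:, k].real)                          # right Perron vector
    evals_l, vecs_l = np.linalg.eig(A.T)
    phi = np.abs(vecs_l[:, np.argmax(evals_l.real)].real)  # left Perron vector
    P = A * psi[None, :] / (lam * psi[:, None])            # transition matrix
    pi = phi * psi / (phi @ psi)                           # stationary distribution
    F = pi[:, None] * P                                    # probability fluxes
    return float(np.sum(F * np.log(F / F.T)))              # >= 0; 0 iff consistent

w = np.array([1.0, 2.0, 4.0])
consistent = w[:, None] / w[None, :]                       # A_ij = w_i / w_j
inconsistent = consistent.copy()
inconsistent[0, 2], inconsistent[2, 0] = 9.0, 1.0 / 9.0    # distort one judgment
print(pcm_entropy_production(consistent))                  # ~0
print(pcm_entropy_production(inconsistent))                # > 0
```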
Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media
NASA Astrophysics Data System (ADS)
Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.
2011-10-01
We study the Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.
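A minimal sketch of the cluster-size entropy on a square lattice follows. Here each site holds a single integer "culture" label (a simplification of the Axelrod trait vector), and clusters are 4-connected regions of identical labels; the function names and random test grid are illustrative.

```python
import numpy as np
from scipy import ndimage

def cluster_size_entropy(grid):
    """Cluster-size entropy S_c = -sum_s p_s ln p_s, where p_s is the fraction
    of cultural clusters of size s.  Clusters are 4-connected regions of equal
    state on the square lattice."""
    sizes = []
    for state in np.unique(grid):
        labels, n = ndimage.label(grid == state)
        if n:
            sizes.extend(np.bincount(labels.ravel())[1:])  # sizes of labels 1..n
    counts = np.bincount(sizes)[1:]          # number of clusters of each size
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Disordered (multicultural) versus ordered (monocultural) configurations
rng = np.random.default_rng(3)
disordered = rng.integers(0, 10, size=(50, 50))
ordered = np.zeros((50, 50), dtype=int)
print(cluster_size_entropy(disordered))   # large: many cluster sizes present
print(cluster_size_entropy(ordered))      # 0: a single system-spanning cluster
```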
Information entropy of humpback whale songs.
Suzuki, Ryuji; Buck, John R; Tyack, Peter L
2006-03-01
The structure of humpback whale (Megaptera novaeangliae) songs was examined using information theory techniques. The song is an ordered sequence of individual sound elements separated by gaps of silence. Song samples were converted into sequences of discrete symbols by both human and automated classifiers. This paper analyzes the song structure in these symbol sequences using information entropy estimators and autocorrelation estimators. Both parametric and nonparametric entropy estimators are applied to the symbol sequences representing the songs. The results provide quantitative evidence consistent with the hierarchical structure proposed for these songs by Payne and McVay [Science 173, 587-597 (1971)]. Specifically, this analysis demonstrates that: (1) There is a strong structural constraint, or syntax, in the generation of the songs, and (2) the structural constraints exhibit periodicities with periods of 6-8 and 180-400 units. This implies that no empirical Markov model is capable of representing the songs' structure. The results are robust to the choice of either human or automated song-to-symbol classifiers. In addition, the entropy estimates indicate that the maximum amount of information that could be communicated by the sequence of sounds made is less than 1 bit per second.
Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
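The maximization the passage describes can be written down compactly. The following LaTeX fragment restates the derivation for the velocity and acceleration cases under the stated moment constraints (the symbols ū and b are generic placeholders, not the authors' notation):

```latex
% Exponential velocity distribution from normalization and a mean constraint:
\max_{f\ge 0} \; -\int_0^\infty f(u)\,\ln f(u)\,\mathrm{d}u
\quad\text{s.t.}\quad
\int_0^\infty f\,\mathrm{d}u = 1,\qquad
\int_0^\infty u\,f\,\mathrm{d}u = \bar{u}
\;\;\Longrightarrow\;\;
f(u)=\frac{1}{\bar{u}}\,e^{-u/\bar{u}}.
% Laplace acceleration distribution from zero mean and fixed mean magnitude:
\max_{f\ge 0} \; -\int_{-\infty}^\infty f(a)\,\ln f(a)\,\mathrm{d}a
\quad\text{s.t.}\quad
\langle a\rangle = 0,\qquad
\langle |a|\rangle = b
\;\;\Longrightarrow\;\;
f(a)=\frac{1}{2b}\,e^{-|a|/b}.
```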
NASA Astrophysics Data System (ADS)
Mechi, Nesrine; Alzahrani, Bandar; Hcini, Sobhi; Bouazizi, Mohamed Lamjed; Dhahri, Abdessalem
2018-06-01
We have investigated the correlation between the magnetocaloric and electrical properties of the La0.47Pr0.2Pb0.33MnO3 perovskite prepared using the sol-gel method. Rietveld analysis of the X-ray diffraction (XRD) pattern shows a pure crystalline phase with rhombohedral structure. The magnetic entropy change, relative cooling power (RCP) and specific heat were predicted from M(T, μ0H) data at different magnetic fields with the help of the phenomenological model. The magnetic entropy change reaches a maximum value of about 3.96 J kg-1 K-1 for μ0H = 5 T, corresponding to an RCP of 183 J kg-1. These values are relatively high, making our sample a promising candidate for magnetic refrigeration. Electrical-resistivity measurements were well fitted with the phenomenological percolation model, which is based on the phase segregation of ferromagnetic-metallic clusters and paramagnetic-semiconductor regions. The temperature and magnetic field dependences of the resistivity data, ρ(T, μ0H), allowed us to determine the magnetic entropy change. Results show that the magnetic entropy change values obtained in this way are similar to those determined from the phenomenological model.
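The standard route from M(T, μ0H) isotherms to the magnetic entropy change is the Maxwell relation ΔS(T) = ∫ (∂M/∂T)_H dH over the field sweep. The following is a minimal numerical sketch with a synthetic, mean-field-like magnetization surface; the model for M and all parameter values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def magnetic_entropy_change(T, H, M):
    """Magnetic entropy change from magnetization isotherms via the Maxwell
    relation: dS(T) = integral over H of (dM/dT) at constant field.
    T: (nT,) temperatures [K]; H: (nH,) fields [T]; M: (nT, nH) magnetization."""
    dMdT = np.gradient(M, T, axis=0)   # (dM/dT)_H by finite differences
    return np.trapz(dMdT, H, axis=1)   # integrate over the field sweep

# Synthetic mean-field-like M(T, H) with a transition near Tc = 300 K
T = np.linspace(250.0, 350.0, 101)
H = np.linspace(0.0, 5.0, 51)
Tc = 300.0
M = 50.0 * (np.tanh((Tc - T[:, None]) / 20.0 + H[None, :] / 2.0) + 1.0)

dS = magnetic_entropy_change(T, H, M)
print("peak |dS| =", -dS.min(), "near T =", T[np.argmin(dS)], "K")
```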
Tsallis entropy and decoherence of CsI quantum pseudo dot qubit
NASA Astrophysics Data System (ADS)
Tiotsop, M.; Fotue, A. J.; Fotsin, H. B.; Fai, L. C.
2017-05-01
A polaron in a CsI quantum pseudo dot under an electromagnetic field was considered, and the ground and first excited state energies were derived by employing the combined Pekar variational and unitary transformation methods. With the two-level system obtained, a single qubit was envisioned and the decoherence was studied using non-extensive entropy (Tsallis entropy). Numerical results showed: (i) the energy levels increase (and the period of oscillation decreases) with increasing chemical potential, zero point of the pseudo dot, cyclotron frequency, and transverse and longitudinal confinement; (ii) the Tsallis entropy evolves as a wave envelope that grows with the non-extensive parameter and with the electric field strength, while with increasing zero point of the pseudo dot and cyclotron frequency the wave envelope evolves periodically with a reduced period; (iii) the transition probability increases from the boundary to the centre of the dot, where it reaches its maximum value. It was also noted that the probability density oscillates with period T0 = ℏ/ΔE with the tunnelling of the chemical potential and the zero point of the pseudo dot. These results are helpful in the control of decoherence in quantum systems and may also be useful for the design of quantum computers.
Kim, Eugene; Ryu, Jae Hun; Byun, Sung Hye
2016-06-01
According to several studies investigating the relationship between muscle activity and electroencephalogram results, reversal of neuromuscular blockade (NMB) may affect depth of anesthesia indices. Therefore, we investigated the effect of pyridostigmine on these indices via spectral entropy. Fifty-six patients scheduled for thyroidectomy or parotidectomy were included in this study and randomized into two groups. At the start of skin suturing, the desflurane concentration was adjusted to 4.2 vol% in both groups. Following this, the pyridostigmine group (group P, n = 28) was administered pyridostigmine 0.2 mg/kg mixed with glycopyrrolate 0.04 mg/kg, while the control group (group C, n = 28) received normal saline. Entropy values (response entropy [RE] and state entropy [SE]), train of four (TOF) ratio, and end-tidal desflurane concentration were recorded from point of drug administration to 15 minutes post-drug administration. Mean RE values at 15 minutes, when the maximum effect of pyridostigmine was anticipated, showed a statistically significant difference between groups (53.8 ± 10.5 in group P and 48.0 ± 8.8 in group C; P = 0.030). However, mean SE at 15 minutes showed no significant difference between the two groups (P = 0.066). At 15 minutes, there were significant differences in the TOF ratio between the two groups (P < 0.001). NMB reversal by pyridostigmine significantly increased RE values but not SE values. This finding suggests that spectral entropy may be a useful alternative tool for monitoring anesthetic depth during recovery from anesthesia in the presence of electromyogram activity.
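For intuition, the entropy indices discussed here are, at their core, normalized Shannon entropies of the EEG power spectrum over fixed frequency bands. The snippet below is a minimal sketch of that computation; the band edges mimic the state entropy range (response entropy would extend the upper edge to roughly 47 Hz so that frontal EMG contributes), and the test signals are synthetic illustrations, not clinical data.

```python
import numpy as np
from scipy.signal import welch

def spectral_entropy(signal, fs, f_lo=0.8, f_hi=32.0):
    """Normalized spectral entropy of an EEG segment: Shannon entropy of the
    normalized power spectrum within [f_lo, f_hi], scaled to [0, 1]."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 1024))
    band = (f >= f_lo) & (f <= f_hi)
    p = pxx[band] / pxx[band].sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(band.sum()))

rng = np.random.default_rng(4)
fs = 200
deep = np.sin(2 * np.pi * 2.0 * np.arange(4 * fs) / fs)   # slow, regular EEG
awake = rng.standard_normal(4 * fs)                       # broadband activity
print("deep:", spectral_entropy(deep, fs), "awake:", spectral_entropy(awake, fs))
```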
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
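The entropy-lowering effect of remapping is easy to demonstrate. Below is a minimal sketch comparing first-order entropy before and after a DPCM-style difference remap on a synthetic smooth scanline; the signal model is an illustrative assumption, not NASA data.

```python
import numpy as np

def entropy_bits(values):
    """First-order Shannon entropy in bits per sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# DPCM-style remapping: encode pixel differences instead of raw values
rng = np.random.default_rng(5)
row = np.cumsum(rng.integers(-2, 3, size=4096)) + 128   # smooth scanline
diff = np.diff(row, prepend=0)                          # lossless remap
print("raw entropy  (bits/sample):", entropy_bits(row))
print("DPCM entropy (bits/sample):", entropy_bits(diff))
assert np.array_equal(np.cumsum(diff), row)             # exactly invertible
```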
1981-03-03
Government Agencies. The views and conclusions contained in this document are those of the contractor and should not be interpreted as necessarily... resolving closely spaced optical point targets are compared using Monte Carlo simulation results for three different examples. It is found that the MEM is... although no direct comparison was given. The objective of this report is to compare the capabilities of MLE and MEM in resolving two optical CSO's.
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle the color image channels independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Parabolic replicator dynamics and the principle of minimum Tsallis information gain
2013-01-01
Background Non-linear, parabolic (sub-exponential) and hyperbolic (super-exponential) models of prebiological evolution of molecular replicators have been proposed and extensively studied. The parabolic models appear to be the most realistic approximations of real-life replicator systems due primarily to product inhibition. Unlike the more traditional exponential models, the distribution of individual frequencies in an evolving parabolic population is not described by the Maximum Entropy (MaxEnt) Principle in its traditional form, whereby the distribution with the maximum Shannon entropy is chosen among all the distributions that are possible under the given constraints. We sought to identify a more general form of the MaxEnt principle that would be applicable to parabolic growth. Results We consider a model of a population that reproduces according to the parabolic growth law and show that the frequencies of individuals in the population minimize the Tsallis relative entropy (non-additive information gain) at each time moment. Next, we consider a model of a parabolically growing population that maintains a constant total size and provide an "implicit" solution for this system. We show that in this case, the frequencies of the individuals in the population also minimize the Tsallis information gain at each moment of the "internal time" of the population. Conclusions The results of this analysis show that the general MaxEnt principle is the underlying law for the evolution of a broad class of replicator systems including not only exponential but also parabolic and hyperbolic systems. The choice of the appropriate entropy (information) function depends on the growth dynamics of a particular class of systems. The Tsallis entropy is non-additive for independent subsystems, i.e., the information on the subsystems is insufficient to describe the system as a whole. In the context of prebiotic evolution, this "non-reductionist" nature of parabolic replicator systems might reflect the importance of group selection and competition between ensembles of cooperating replicators. Reviewers This article was reviewed by Viswanadham Sridhara (nominated by Claus Wilke), Purushottam Dixit (nominated by Sergei Maslov), and Nick Grishin. For the complete reviews, see the Reviewers' Reports section. PMID:23937956
NASA Astrophysics Data System (ADS)
Beretta, Gian Paolo
2014-10-01
By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to the Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium states. The mathematical frameworks we consider are the following: (A) statistical or information-theoretic models of relaxation; (B) small-scale and rarefied gas dynamics (i.e., kinetic models for the Boltzmann equation); (C) rational extended thermodynamics, macroscopic nonequilibrium thermodynamics, and chemical kinetics; (D) mesoscopic nonequilibrium thermodynamics, continuum mechanics with fluctuations; and (E) quantum statistical mechanics, quantum thermodynamics, mesoscopic nonequilibrium quantum thermodynamics, and intrinsic quantum thermodynamics.
Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)
1999-01-01
A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.
Improving space debris detection in GEO ring using image deconvolution
NASA Astrophysics Data System (ADS)
Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta
2015-07-01
In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) method, as the one that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.
Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H
2014-03-17
We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently to difficulties in interpreting waveform morphology. This paper presents a deconvolution method that allows overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squared-error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions; it controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover overlapping evoked responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
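A minimal numerical sketch of the role the condition number plays here, with a hypothetical jittered stimulus sequence (the onset times and response length below are made up): the overlapped recording is modeled as y = M r, and cond(M) bounds how much noise the least-squares recovery can amplify.

import numpy as np

resp_len, n_samples = 50, 400
onsets = np.array([0, 37, 81, 122, 170, 213, 260, 308])  # jittered onsets (illustrative)

# Each stimulus adds a shifted copy of the unknown response r to the recording.
M = np.zeros((n_samples, resp_len))
for t0 in onsets:
    M[t0:t0 + resp_len, :] += np.eye(resp_len)

print("condition number:", np.linalg.cond(M))  # small is good: less noise amplification

true_r = np.hanning(resp_len)                  # toy evoked response
y = M @ true_r + 0.01 * np.random.randn(n_samples)
r_hat, *_ = np.linalg.lstsq(M, y, rcond=None)  # least-squares recovery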
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has become a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method shows its superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.
Pituitary-hormone secretion by thyrotropinomas.
Roelfsema, Ferdinand; Kok, Simon; Kok, Petra; Pereira, Alberto M; Biermasz, Nienke R; Smit, Jan W; Frolich, Marijke; Keenan, Daniel M; Veldhuis, Johannes D; Romijn, Johannes A
2009-01-01
Hormone secretion by somatotropinomas, corticotropinomas and prolactinomas exhibits increased pulse frequency and increased basal and pulsatile secretion, accompanied by greater disorderliness. Increased concentrations of growth hormone (GH) or prolactin (PRL) are observed in about 30% of thyrotropinomas, leading to acromegaly or disturbed sexual functions beyond thyrotropin (TSH)-induced hyperthyroidism. Regulation of non-TSH pituitary hormones in this context is not well understood. We therefore evaluated TSH, GH and PRL secretion in 6 patients with up-to-date analytical and mathematical tools, by 24-h blood sampling at 10-min intervals in a clinical research laboratory. The profiles were analyzed with a new deconvolution method, approximate entropy, cross-approximate entropy, cross-correlation and cosinor regression. TSH burst frequency and basal and pulsatile secretion were increased in patients compared with controls. TSH secretion patterns in patients were more irregular, but the diurnal rhythm was preserved at a higher mean with a 2.5 h phase delay. Although only one patient had clinical acromegaly, GH secretion and IGF-I levels were increased in two other patients, and all three had a significant cross-correlation between GH and TSH. PRL secretion was increased in one patient, but all patients had a significant cross-correlation with TSH and showed decreased PRL regularity. Cross-ApEn synchrony between TSH and GH did not differ between patients and controls, but TSH and PRL synchrony was reduced in patients. We conclude that TSH secretion by thyrotropinomas shares many characteristics of other pituitary hormone-secreting adenomas. In addition, abnormalities in GH and PRL secretion exist, ranging from decreased (joint) regularity to overt hypersecretion, although not always clinically obvious, suggesting tumoral transformation of thyrotrope lineage cells.
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited support region of the filters, where the coefficients play a major role for multiple removal in the filter coefficient space. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve for the 2D predictive filter in the multichannel predictive deconvolution with a non-Gaussian maximization (L1 norm minimization) constraint on the primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited support region of the filters. Compared with FIST-based multichannel predictive deconvolution without the limited support region, the proposed method can reduce the computational burden effectively while achieving similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
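As a sketch of the optimization machinery named above, here is generic FISTA for an L1-regularized linear least-squares problem. This is a simplified stand-in for the paper's sparsity-constrained predictive-filter estimation, with A, d and lam as placeholder names.

import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, d, lam, n_iter=200):
    """Minimize 0.5*||A f - d||^2 + lam*||f||_1 by FISTA."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    f = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - d)
        f_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = f_new + ((t - 1.0) / t_new) * (f_new - f)   # momentum step
        f, t = f_new, t_new
    return f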
Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure
NASA Astrophysics Data System (ADS)
Xie, J. "; Schaff, D. P.
2010-12-01
Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green's function (EGF) method can be used for estimating the STF, but it imposes a strict recording condition: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the "spikiness" of the deconvolutions by calculating their "sdc", which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green's functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9 which, provided they have a sufficiently broad frequency band, can be used to estimate the STF of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
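A direct sketch of the "sdc" measure as defined above: the deconvolution peak divided by the mean absolute background, with a 10 s window around the source time function excluded. The peak index and sampling interval are assumed inputs.

import numpy as np

def sdc(decon, dt, peak_index, exclude_s=10.0):
    """Spikiness of a deconvolution: peak over mean absolute background."""
    half = int(exclude_s / dt / 2)                      # half-window in samples
    mask = np.ones(decon.size, dtype=bool)
    mask[max(0, peak_index - half):peak_index + half + 1] = False
    background = np.mean(np.abs(decon[mask]))           # excludes the STF window
    return np.abs(decon[peak_index]) / background

# Per the abstract, values of about 10 or higher indicate a sufficiently
# spiky (pulse-like) deconvolution and hence a usable STF estimate.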
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the application of the Lucy–Richardson deconvolution algorithm to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
Huang, Cheng-Yen; Hsieh, Ming-Ching; Zhou, Qinwei
2017-04-01
Monoclonal antibodies have become the fastest growing class of protein therapeutics in recent years. The stability and heterogeneity pertaining to their physical and chemical structures remain a major challenge. Tryptophan fluorescence has proven to be a versatile tool for monitoring protein tertiary structure. By modeling the tryptophan fluorescence emission envelope with log-normal distribution curves, a quantitative measure can be applied in the routine characterization of the overall tertiary structure of monoclonal antibodies. Furthermore, the log-normal deconvolution results can be presented as a two-dimensional plot of tryptophan emission bandwidth vs. emission maximum to enhance the resolution when comparing samples or as a function of applied perturbations. We demonstrate this by studying four different monoclonal antibodies, which show distinct positions on the emission bandwidth-maximum plot despite their similarity in overall amino acid sequences and tertiary structures. This strategy is also used to demonstrate the tertiary-structure comparability between different lots manufactured for one of the monoclonal antibodies (mAb2). In addition, in unfolding transition studies of mAb2 as a function of guanidine hydrochloride concentration, the evolution of the tertiary structure can be clearly traced in the emission bandwidth-maximum plot.
NASA Astrophysics Data System (ADS)
Lemos, José P. S.; Minamitsuji, Masato; Zaslavskii, Oleg B.
2017-02-01
In a (2+1)-dimensional spacetime with a negative cosmological constant, the thermodynamics and the entropy of an extremal rotating thin shell, i.e., an extremal rotating ring, are investigated. The outer and inner regions with respect to the shell are taken to be the Bañados-Teitelboim-Zanelli (BTZ) spacetime and the vacuum ground state anti-de Sitter spacetime, respectively. By applying the first law of thermodynamics to the extremal thin shell, one shows that the entropy of the shell is an arbitrary well-behaved function of the gravitational area A+ alone, S = S(A+). When the thin shell approaches its own gravitational radius r+ and turns into an extremal rotating BTZ black hole, it is found that the entropy of the spacetime remains such a function of A+, both when the local temperature of the shell at the gravitational radius is zero and nonzero. It is thus vindicated by this analysis that extremal black holes, here extremal BTZ black holes, have different properties from the corresponding nonextremal black holes, which have a definite entropy, the Bekenstein-Hawking entropy S(A+) = A+/(4G), where G is the gravitational constant. It is argued that for extremal black holes, in particular for extremal BTZ black holes, one should set 0 ≤ S(A+) ≤ A+/(4G); i.e., the extremal black hole entropy has values in between zero and the maximum Bekenstein-Hawking entropy A+/(4G). Thus, rather than having just two entropies for extremal black holes, as previous results have debated, namely 0 and A+/(4G), it is shown here that extremal black holes, in particular extremal BTZ black holes, may have a continuous range of entropies, limited precisely by those two values. Surely, the entropy that a particular extremal black hole picks must depend on past processes, notably on how it was formed. A remarkable relation between the third law of thermodynamics and the impossibility for a massive body to reach the velocity of light is also found. In addition, in the procedure, it becomes clear that there are two distinct angular velocities for the shell, the mechanical and the thermodynamic angular velocity. We comment on the relationship between these two velocities. In passing, we clarify, for a static spacetime with a thermal shell, the meaning of the Tolman temperature formula at a generic radius and at the shell.
Quan, H T
2014-06-01
We study the maximum efficiency of a heat engine based on a small system. It is revealed that, due to the finiteness of the system, irreversibility may arise when the working substance makes contact with a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot-cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically when the size of the working substance increases to the thermodynamic limit. Our study extends Carnot's result of the maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.
A robot control formalism based on an information quality concept
NASA Technical Reports Server (NTRS)
Ekman, A.; Torne, A.; Stromberg, D.
1994-01-01
A relevance measure based on Jaynes' maximum entropy principle is introduced. Information quality is the conjunction of accuracy and relevance. The formalism based on information quality is developed for one-agent applications. The robot requires a well-defined working environment where the properties of each object must be accurately specified.
Coupling diffusion and maximum entropy models to estimate thermal inertia
USDA-ARS?s Scientific Manuscript database
Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...
Biological evolution of replicator systems: towards a quantitative approach.
Martin, Osmel; Horvath, J E
2013-04-01
The aim of this work is to study, in a simple replicator chemical model, the relation between kinetic stability and entropy production under the action of external perturbations. We quantitatively explore the different paths leading to evolution in a toy model where two independent replicators compete for the same substrate. To do so, the scenario described originally by Pross (J Phys Org Chem 17:312-316, 2004) is revisited and new criteria to define kinetic stability are proposed. Our results suggest that fast replicator populations are continually favored by the effects of strong stochastic environmental fluctuations capable of determining the global population, these fluctuations being assumed to be the only evolutionary force at work. We demonstrate that the process is driven by strong perturbations only, and that population crashes may be useful proxies for these catastrophic environmental fluctuations. As expected, such behavior is particularly enhanced under very large scale perturbations, suggesting a likely dynamical footprint in the recovery patterns of new species after mass extinction events in the Earth's geological past. Furthermore, the hypothesis that natural selection always favors the faster processes may give theoretical support to different studies that claim the applicability of maximum principles like the Maximum Metabolic Flux (MMF) or the Maximum Entropy Production Principle (MEPP), seen as the main goal of biological evolution.
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and, for the first time, accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario, using ozone observations only, in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach is able to extract more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the R² increase is more than 12 times larger for the DM8A and more than 3.5 times larger for the D24A ozone concentrations.
NASA Technical Reports Server (NTRS)
Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.
1981-01-01
Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.
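The three instruments share the same underlying measurement model; as a point of reference (the standard textbook form, not quoted from the report), the convolution integral being solved is

    s(t) \;=\; \int_{0}^{t} h(t-\tau)\, x(\tau)\, d\tau \;+\; n(t) ,

where x is the true input signal, h the instrument broadening (e.g., time-constant) response, n the noise, and deconvolution estimates x from the recorded s and a known or measured h.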
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer negative effects compared to state-of-the-art methods.
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
Improved deconvolution of very weak confocal signals.
Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
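A minimal sketch of the prefiltering idea described above, assuming scipy is available; the sigma value is an illustrative assumption, not the paper's setting.

import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter_stack(stack, sigma=0.7):
    """Blur each z-section of a (z, y, x) confocal stack in-plane."""
    return np.stack([gaussian_filter(section, sigma) for section in stack])

# The blurred stack would then be passed to the deconvolution software
# (Huygens in the paper) in place of the raw data, so that very weak
# structures survive the background-removal step.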
Is applicable thermodynamics of negative temperature for living organisms?
NASA Astrophysics Data System (ADS)
Atanasov, Atanas Todorov
2017-11-01
During organismal development, the moment of sexual maturity can be characterized by a nearly maximal basal metabolic rate and body mass. Once the living organism reaches extreme values of mass and basal metabolic rate, it reaches a near-equilibrium thermodynamic steady-state physiological level with maximum organismal complexity. Thermodynamic systems that reach an equilibrium steady-state level at maximum mass-energy characteristics can be regarded from the perspective of the thermodynamics of negative temperature. In these systems the increase of the internal and free energy is accompanied by a decrease of the entropy. In our study we show the possibility of regarding living organisms as thermodynamic systems with negative temperature.
Multichannel blind deconvolution of spatially misaligned images.
Sroubek, Filip; Flusser, Jan
2005-07-01
Existing multichannel blind restoration techniques assume perfect spatial alignment of channels and correct estimation of the blur size, and they are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with an a priori distribution of blurs derived from the multichannel framework and an a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
NASA Astrophysics Data System (ADS)
Luo, Lin; Fan, Min; Shen, Mang-zuo
2008-01-01
Atmospheric turbulence severely restricts the spatial resolution of astronomical images obtained by a large ground-based telescope. In order to reduce this effect effectively, we propose a method of blind deconvolution with a bandwidth constraint determined by the parameters of the telescope's optical system, based on the principle of maximum likelihood estimation, in which the convolution error function is minimized by using the conjugate gradient algorithm. A relation between the parameters of the telescope optical system and the image's frequency-domain bandwidth is established, and the speed of convergence of the algorithm is improved by using a positivity constraint on the variables and a limited-bandwidth constraint on the point spread function. To prevent the effective Fourier frequencies from exceeding the cut-off frequency, each single image element (e.g., a pixel in CCD imaging) in the sampling focal plane should be smaller than one fourth of the diameter of the diffraction spot. In the algorithm, no object-centered constraint is used, so the proposed method is suitable for the image restoration of a whole field of objects. Through computer simulation and the restoration of an observed image of α Piscium, the effectiveness of the proposed method is demonstrated.
Research of MPPT for photovoltaic generation based on two-dimensional cloud model
NASA Astrophysics Data System (ADS)
Liu, Shuping; Fan, Wei
2013-03-01
The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with expected value Ex, entropy En and hyper entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way. The model is a new method for transformation between qualitative and quantitative knowledge. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, developed through an analysis of auto-optimizing MPPT control of photovoltaic power systems combined with cloud model theory. Simulation results show that the cloud controller is simple, intuitive, strongly robust, and gives good control performance.
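For illustration, a sketch of the standard one-dimensional normal cloud generator, which turns a qualitative concept (Ex, En, He) into "cloud drops" with membership degrees; the two-dimensional controller in the paper would combine two such concepts, and the numerical values below are placeholders.

import numpy as np

def cloud_drops(Ex, En, He, n=1000, seed=0):
    """Generate n drops of the normal cloud defined by (Ex, En, He)."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, size=n)        # entropy perturbed by hyper entropy
    x = rng.normal(Ex, np.abs(En_prime))         # drop positions
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))  # membership degrees
    return x, mu

# Example: the concept "voltage error is roughly zero" (illustrative numbers)
drops, memberships = cloud_drops(Ex=0.0, En=0.5, He=0.05)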
Computing the Energy Cost of the Information Transmitted by Model Biological Neurons
NASA Astrophysics Data System (ADS)
Torrealdea, F. J.; Sarasola, C.; d'Anjou, A.; Moujahid, A.
2009-08-01
We assign an energy function to a Hindmarsh-Rose model of a neuron and use it to compute values of average energy consumption during its signalling activity. We also compute values of information entropy of an isolated neuron and of mutual information between two electrically coupled neurons. We find that for the isolated neuron the chaotic signalling regime is the one with the largest ratio of information entropy to energy consumption. We also find that in the case of electrically coupled neurons there are values of the coupling strength at which the ratio of mutual information to energy consumption is maximal; that is, transmitting under those coupling conditions is energetically less expensive.
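A minimal sketch of the three-variable Hindmarsh-Rose neuron referred to above, integrated with explicit Euler steps. The parameter values are common textbook choices (I = 3.0 gives chaotic bursting) rather than the paper's settings, and the energy function assigned in the paper is not reproduced here.

import numpy as np

def hindmarsh_rose(I=3.0, r=0.005, s=4.0, x0=-1.6, dt=0.01, steps=100_000):
    """Integrate the Hindmarsh-Rose model; returns the membrane-potential trace."""
    x, y, z = -1.0, 0.0, 0.0
    trace = np.empty(steps)
    for k in range(steps):
        dx = y + 3.0 * x**2 - x**3 - z + I   # fast membrane dynamics
        dy = 1.0 - 5.0 * x**2 - y            # fast recovery variable
        dz = r * (s * (x - x0) - z)          # slow adaptation current
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[k] = x
    return trace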
Minimization of a free-energy-like potential for non-equilibrium flow systems at steady state
Niven, Robert K.
2010-01-01
This study examines a new formulation of non-equilibrium thermodynamics, which gives a conditional derivation of the ‘maximum entropy production’ (MEP) principle for flow and/or chemical reaction systems at steady state. The analysis uses a dimensionless potential function ϕst for non-equilibrium systems, analogous to the free energy concept of equilibrium thermodynamics. Spontaneous reductions in ϕst arise from increases in the ‘flux entropy’ of the system—a measure of the variability of the fluxes—or in the local entropy production; conditionally, depending on the behaviour of the flux entropy, the formulation reduces to the MEP principle. The inferred steady state is also shown to exhibit high variability in its instantaneous fluxes and rates, consistent with the observed behaviour of turbulent fluid flow, heat convection and biological systems; one consequence is the coexistence of energy producers and consumers in ecological systems. The different paths for attaining steady state are also classified. PMID:20368250
Near room temperature magnetocaloric properties and the universal curve of MnCoGe1-xCux
NASA Astrophysics Data System (ADS)
Si, Xiaodong; Liu, Yongsheng; Lu, Xiaofei; Shen, Yulong; Wang, Wenli; Yu, Wenying; Zhou, Tao; Gao, Tian
2017-05-01
Intermetallic compounds based on MnCoGe have drawn attention due to their coupled magnetic and structural transformations and large magnetocaloric entropy changes. Here, we provide a systematic comparison of experimental data under different magnetic fields with the magnetic and magnetocaloric properties. The ferromagnetic transition temperature (TC) increases from 353.4(6) K for x = 0.01 to 363.4(4) K for x = 0.04 with increasing nominal copper content. The maximum magnetic entropy change |ΔSM| in a magnetic field change of 5 T is found to be 18.3(2) J/(kg K), with a large relative cooling power (RCP) value of 292.5(4) J/kg for x = 0.01, revealing that the present system can provide an acceptable magnetocaloric effect at a cheaper price for magnetic refrigeration materials. In an attempt to construct a master curve for the present system, we find that the experimental values of the magnetic field dependence of the magnetic entropy change are consistent with a phenomenological universal curve.
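For reference, the magnetic entropy change and RCP quoted above are conventionally computed from magnetization isotherms via the Maxwell relation (standard practice in the magnetocaloric literature; the paper's exact procedure is not detailed in the abstract):

    \Delta S_M(T, \Delta H) \;=\; \mu_0 \int_{0}^{H_{\max}} \left( \frac{\partial M}{\partial T} \right)_{H} dH ,
    \qquad \mathrm{RCP} \;=\; \left| \Delta S_M^{\max} \right| \, \delta T_{\mathrm{FWHM}} ,

where \delta T_{\mathrm{FWHM}} is the full width at half maximum of the entropy-change peak.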
Gravitational Thermodynamics for Interstellar Gas and Weakly Degenerate Quantum Gas
NASA Astrophysics Data System (ADS)
Zhu, Ding Yu; Shen, Jian Qi
2016-03-01
The temperature distribution of an ideal gas in gravitational fields has been identified as a longstanding problem in thermodynamics and statistical physics. According to the principle of entropy increase (i.e., the principle of maximum entropy), we apply a variational principle to the thermodynamical entropy functional of an ideal gas and establish a relationship between temperature gradient and gravitational field strength. As an illustrative example, the temperature and density distributions of an ideal gas in two simple but typical gravitational fields (i.e., a uniform gravitational field and an inverse-square gravitational field) are considered on the basis of entropic and hydrostatic equilibrium conditions. The effect of temperature inhomogeneity in gravitational fields is also addressed for a weakly degenerate quantum gas (e.g., Fermi and Bose gas). The present gravitational thermodynamics of a gas would have potential applications in quantum fluids, e.g., Bose-Einstein condensates in Earth’s gravitational field and the temperature fluctuation spectrum in cosmic microwave background radiation.
Technology in the high entropy world.
Tambo, N
2006-01-01
Modern growing society is mainly driven by oil and may be designated a "petroleum civilisation". However, the basic energy driving the global ecosystem is solar radiation; the amount of fossil energy consumption is minimal in the whole global energy balance. Economic growth is mainly controlled by the fossil (commercial) energy consumption rate in urban areas. Water and sanitation systems bridge economic activities and global ecosystems. Therefore, vast amounts of high-entropy solar energy should always be taken into account in the water industry. Only in urban/industrial areas, where most of the GDP is earned, are commercial-energy-driven systems inevitably introduced, with maximum effort for energy saving. A water district concept to ensure appropriate quality of use with the least deterioration of the environment is proposed. In other areas, decentralised water and sanitation systems driven on soft energy paths would be recommended. A process and system designed on a high-entropy energy system would be the foundation for a future urban metabolic system revolution for when oil-based energy becomes scarce.
Santana Vieira, Alexsandro; Desidério Fernandes, Wedson; Fernando Antonialli-Junior, William
2010-05-01
We investigated the changes in the behavioral repertoire over the course of life, and determined the life expectancy and entropy, of workers of the ant Ectatomma vizottoi. Newly emerged ants were individually marked with model airplane paint for observation of behaviors and determination of age and life expectancy. Ants were divided into two groups: young and old workers. The 36 behaviors observed were divided into eight categories. Workers exhibit a clear division of tasks throughout their lives, with young workers performing more tasks inside the colony and old workers more tasks outside, unlike species that have small colonies. This species also exhibits an intermediate life expectancy compared to workers of other species that are also intermediate in size. This supports the hypothesis of a relationship between size and maximum life expectancy, but it also suggests that other factors may be acting in concert. The entropy value indicates a high mortality rate during the first life intervals.
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II
1984-01-01
A study area near Ribeirao Preto in Sao Paulo state, with a predominance of sugar cane, was selected. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites from which the necessary parameters were acquired. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that, for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
Septal penetration correction in I-131 imaging following thyroid cancer treatment
NASA Astrophysics Data System (ADS)
Barrack, Fiona; Scuffham, James; McQuaid, Sarah
2018-04-01
Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets to demonstrate the method's utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient planar gamma camera images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
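For context, the triple-energy-window (TEW) scatter estimate mentioned above is conventionally formed from counts in two narrow windows flanking the photopeak (the standard TEW form; the paper's exact window widths are not given in the abstract):

    C_{\mathrm{scatter}} \;=\; \left( \frac{C_{\mathrm{lower}}}{w_{\mathrm{lower}}} + \frac{C_{\mathrm{upper}}}{w_{\mathrm{upper}}} \right) \frac{w_{\mathrm{peak}}}{2} ,
    \qquad C_{\mathrm{primary}} \;=\; C_{\mathrm{peak}} - C_{\mathrm{scatter}} ,

where w denotes the width of each energy window and C the counts acquired in it.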
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
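As background to the closure (the standard entropy-maximization ansatz of Levermore-type moment closures, rather than anything specific to this implementation), the distribution function is an exponential of the moment basis, and the quadrature discussed above evaluates its moments:

    f_{\boldsymbol{\alpha}}(\mathbf{c}) \;=\; \exp\!\big( \boldsymbol{\alpha} \cdot \mathbf{m}(\mathbf{c}) \big) ,
    \qquad \boldsymbol{\rho} \;=\; \int \mathbf{m}(\mathbf{c})\, f_{\boldsymbol{\alpha}}(\mathbf{c})\, d\mathbf{c} ,

where m(c) collects the 35 polynomial basis functions and the multipliers α are obtained from the dual convex optimization problem referred to in the abstract.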
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for the identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. The values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10⁻³, 1.11×10⁻⁷, and 5.50×10⁻⁷ for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, suggesting a potential use for chaotic system analysis.
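As a point of reference for qSampEn, here is a minimal sketch of standard sample entropy, SampEn = -ln(A/B), in Python; the Tsallis q-generalization introduced in the paper replaces the logarithm and is not reproduced here. The embedding dimension m and tolerance fraction are conventional choices, not the paper's.

import numpy as np

def sampen(x, m=2, r_frac=0.2):
    """Standard sample entropy of a 1D series (assumes enough template matches)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()                  # tolerance as a fraction of the SD

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        n = 0
        for i in range(len(templates)):   # Chebyshev distance between templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            n += np.sum(d <= r)
        return n

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)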
Source Pulse Estimation of Mine Shock by Blind Deconvolution
NASA Astrophysics Data System (ADS)
Makowski, R.
The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal at the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many deconvolution methods used in exploration seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which these methods are based with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, only if these assumptions are fulfilled may we expect reliable results. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) the signal emitted by the shock source is a short-term signal; (2) the signal transmitting system (rockmass) constitutes a parallel connection of elementary systems; (3) the elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.
Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution
2010-09-01
[Garbled front-matter fragment: thesis table-of-contents entries (2.1.4 Optical Imaging as a Linear and Nonlinear System; 2.1.5 Coherence Theory and Laser Light Statistics; 2.2 Deconvolution) and excerpts noting that Section 2.1.5, drawing on [24] and [25], provides background on coherence theory and the statistics of the laser light incident on the detector surface, whose spatial coherence governs the image intensity.]
Evaluation of deconvolution modelling applied to numerical combustion
NASA Astrophysics Data System (ADS)
Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît
2018-01-01
A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of the filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor-decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid-scale profiles. Two methodologies are proposed: the first relies on subgrid-scale interpolation of deconvolved profiles and the second uses parametric functions to describe the small scales. The tests conducted analyse the ability of the methods to capture the filtered flame's chemical structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
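A minimal sketch of Van Cittert iterative deconvolution for a one-dimensional filtered profile, assuming a Gaussian LES filter; the update is u_{k+1} = u_k + beta*(f - G*u_k). The filter width, relaxation factor and iteration count are placeholder choices.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def van_cittert(filtered, sigma_filter, beta=0.5, n_iter=20):
    """Approximately invert a Gaussian filter G by fixed-point iteration."""
    u = filtered.copy()
    for _ in range(n_iter):
        residual = filtered - gaussian_filter1d(u, sigma_filter)
        u = u + beta * residual   # correct the estimate by the filtered mismatch
    return u

# As the abstract notes, the inversion is ill-posed: too many iterations or too
# large a filter size amplify perturbations, hence the need for regularisation.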
Faceting for direction-dependent spectral deconvolution
NASA Astrophysics Data System (ADS)
Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.
2018-04-01
The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. Synthesizing the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms, based on hybrid matching pursuit and on sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.
NASA Astrophysics Data System (ADS)
Pompa, P. P.; Cingolani, R.; Rinaldi, R.
2003-07-01
In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands Gj. The relative changes of the Gj parameters are directly related to the conformational changes of the enzyme and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method results in an excellent fit to all the spectra obtained with GDH under a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamer association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermally induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.
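A minimal sketch of the three-band decomposition on synthetic data: fit a sum of three Gaussians to an emission spectrum with scipy.optimize.curve_fit. The wavelength range, amplitudes and initial guesses are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    """Sum of three Gaussian bands G1 + G2 + G3."""
    g = lambda a, c, w: a * np.exp(-(x - c) ** 2 / (2.0 * w ** 2))
    return g(a1, c1, w1) + g(a2, c2, w2) + g(a3, c3, w3)

wl = np.linspace(300.0, 420.0, 400)                 # emission wavelengths (nm)
true_p = [1.0, 325.0, 9.0, 0.8, 340.0, 12.0, 0.5, 360.0, 15.0]
spectrum = three_gaussians(wl, *true_p) + 0.01 * np.random.randn(wl.size)

p0 = [1, 320, 10, 1, 345, 10, 0.5, 365, 12]         # rough initial guesses
popt, pcov = curve_fit(three_gaussians, wl, spectrum, p0=p0)
# popt holds the fitted (amplitude, center, width) of each band; tracking their
# relative changes across conditions mirrors the analysis described above.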
4Pi microscopy deconvolution with a variable point-spread function.
Baddeley, David; Carl, Christian; Cremer, Christoph
2006-09-20
Deconvolution forms an integral part of 4Pi microscopy because it removes the axial sidelobes from 4Pi images. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift-invariant case, a method for estimating the PSF phase from an acquired image, and a deconvolution procedure built on these techniques.
Application of SNODAS and hydrologic models to enhance entropy-based snow monitoring network design
NASA Astrophysics Data System (ADS)
Keum, Jongho; Coulibaly, Paulin; Razavi, Tara; Tapsoba, Dominique; Gobena, Adam; Weber, Frank; Pietroniro, Alain
2018-06-01
Snow has a unique characteristic in the water cycle: snow falls during the entire winter season, but the discharge from snowmelt is typically delayed until the melting period and occurs over a relatively short time. Therefore, reliable observations from an optimal snow monitoring network are necessary for efficient management of snowmelt water for flood prevention and hydropower generation. Dual Entropy and Multiobjective Optimization is applied to design snow monitoring networks in the La Grande River Basin in Québec and the Columbia River Basin in British Columbia. While the networks are optimized to have the maximum amount of information with minimum redundancy based on entropy concepts, this study extends traditional entropy applications to hydrometric network design with several improvements. First, several data quantization cases and their effects on the snow network design problems were explored. Second, the applicability of the Snow Data Assimilation System (SNODAS) products as synthetic datasets for potential stations was demonstrated in the design of the snow monitoring network of the Columbia River Basin. Third, beyond finding Pareto-optimal networks from entropy with multi-objective optimization, the networks obtained for the La Grande River Basin were further evaluated by applying three hydrologic models. The calibrated hydrologic models simulated discharges using the updated snow water equivalent data from the Pareto-optimal networks. The model performances for high flows were then compared to determine the best optimal network for enhanced spring runoff forecasting.
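A minimal sketch of the entropy quantities underlying such a design: marginal entropy and transinformation of quantized (binned) records, where the bin width stands in for the "data quantization cases" the study explores. Function names and the bin-width choice are illustrative assumptions.

import numpy as np

def entropy(x, bin_width):
    """Shannon entropy (bits) of a record quantized with a fixed bin width."""
    bins = np.floor(np.asarray(x) / bin_width).astype(int)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transinformation(x, y, bin_width):
    """Mutual information T(X;Y) = H(X) + H(Y) - H(X,Y) of two quantized records."""
    bx = np.floor(np.asarray(x) / bin_width).astype(int)
    by = np.floor(np.asarray(y) / bin_width).astype(int)
    _, counts = np.unique(np.stack([bx, by]), axis=1, return_counts=True)
    p = counts / counts.sum()
    H_xy = -np.sum(p * np.log2(p))
    return entropy(x, bin_width) + entropy(y, bin_width) - H_xy

# In the design problem, candidate networks maximize joint entropy (information
# content) while minimizing total transinformation (redundancy) between stations.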
Sensitivity of Tropical Cyclone Spinup Time to the Initial Entropy Deficit
NASA Astrophysics Data System (ADS)
Tang, B.; Corbosiero, K. L.; Rios-Berrios, R.; Alland, J.; Berman, J.
2014-12-01
The development timescale of a tropical cyclone from genesis to the start of rapid intensification in an axisymmetric model is hypothesized to be a function of the initial entropy deficit. We run a set of idealized simulations in which the initial entropy deficit between the boundary layer and free troposphere varies from 0 to 100 J kg^-1 K^-1. The development timescale is measured by changes in the integrated kinetic energy of the low-level vortex. This timescale is inversely related to the mean mass flux during the tropical cyclone gestation period. The mean mass flux, in turn, is a function of the statistics of convective updrafts and downdrafts. Contour frequency by altitude diagrams show that entrainment of dry air into updrafts is predominantly responsible for differences in the mass flux between the experiments, while downdrafts play a secondary role. Analyses of the potential and kinetic energy budgets indicate less efficient conversion of available potential energy to kinetic energy in the experiments with higher entropy deficits. Entrainment leads to the loss of buoyancy and the destruction of available potential energy; in the presence of strong downdrafts, the conversion term can even reverse sign. Weaker and more radially confined inflow results in less convergence of angular momentum in the experiments with higher entropy deficits. The result is a slower vortex spinup and a reduction in steady-state vortex size, despite similar steady-state maximum intensities among the experiments.
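For orientation, one simple reading of the controlling parameter is the difference in specific moist entropy between a boundary-layer parcel and the free troposphere; the sketch below uses a common approximate moist-entropy formula with invented sounding values, and the paper's exact definition may differ:

```python
# Approximate specific moist entropy; constants in SI units.
import numpy as np

CP, RD, RV, LV = 1005.7, 287.04, 461.5, 2.555e6
T0, P0 = 273.15, 1.0e5

def moist_entropy(T, p, q, rh):
    """Approximate moist entropy in J kg^-1 K^-1 (T in K, p in Pa, q in kg/kg)."""
    return CP * np.log(T / T0) - RD * np.log(p / P0) + LV * q / T - RV * q * np.log(rh)

s_bl = moist_entropy(T=298.0, p=9.5e4, q=0.017, rh=0.85)  # boundary-layer parcel
s_ft = moist_entropy(T=270.0, p=6.0e4, q=0.004, rh=0.50)  # ~600 hPa parcel
print(f"entropy deficit ~ {s_bl - s_ft:.0f} J kg^-1 K^-1")  # ~75, within 0-100
```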
First Self-Consistent, Two-Layer Model of Near-Surface Water-Equivalent-Hydrogen on Mars
NASA Astrophysics Data System (ADS)
Feldman, W. C.; Pathare, A.; Prettyman, T. H.; Maurice, S.
2015-12-01
This study uses 9.5 years of Mars Odyssey Neutron Spectrometer (MONS) data [1]. We have used the epithermal and fast neutron count rates to determine the water-equivalent-hydrogen (WEH) content of an upper layer, Wup, having thickness D. The "crossover" technique we utilized is an improvement over previous work [2,3]. We then used Monte Carlo simulated grids of epithermal and thermal count rates spanning Wup = 1% to 15% [4] to determine the WEH content, Wdn, of a semi-infinite lower layer buried at depth D. We also advance upon previous work by using improved deconvolution methods to reduce spatial blurring in the fast and epithermal maps [5]. The resultant count rates were digitized onto a 2° x 2° cylindrical grid for all WEH computations. Two sets of WEH maps will be shown. The first uses the one-layer model developed initially by Feldman et al. [6]. Comparison of the undeconvolved and deconvolved versions clearly illustrates the improvement obtained by deconvolution. We will also present the full two-layer maps of Wup, Wdn, and D for the deconvolved data sets, which show: 1) contrary to our previous preliminary mapping [3], the fresh icy mid-latitude craters identified by [7] are NOT exclusively found in regions with average Wdn values that exceed the pore-filling threshold for regolith ice; 2) a maximum Wdn of about 80% by weight at the Phoenix site; 3) an isolated Wdn maximum just east of Gale crater that is centered on Aeolis Mensae; 4) a resolved Wdn maximum that overlies the Orson Welles crater on Xanthe Terra; 5) Wdn local maxima that hug the western flanks of Olympus Mons and Elysium Mons; and 6) several Wdn maxima that cover Arabia Terra. We will present and interpret regional maps of all of these features. Refs: [1] Maurice et al., JGR, 2011; [2] Feldman et al., JGR, 2011; [3] Pathare et al., 8th Mars Conf., 2014; [4] Prettyman et al., JGR, 2004; [5] Prettyman et al., JGR, 2009; [6] Feldman et al., JGR, 2004; [7] Dundas et al., JGR, 2014.
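The table-inversion idea behind the two-layer retrieval can be sketched as follows (a conceptual stand-in, not the MONS pipeline; the grid arrays and the least-squares matching rule are assumptions):

```python
# Match observed count rates against Monte Carlo-simulated grids over (Wdn, D).
import numpy as np

def invert_two_layer(obs_epi, obs_th, wdn_axis, d_axis, sim_epi, sim_th):
    """Return the (Wdn, D) grid point whose simulated epithermal and thermal
    count rates best reproduce the observed pair (brute-force least squares).

    sim_epi and sim_th are 2D arrays indexed by (Wdn, D).
    """
    misfit = (sim_epi - obs_epi) ** 2 + (sim_th - obs_th) ** 2
    i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
    return wdn_axis[i], d_axis[j]
```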
2008-07-01
consider a proof as a composition relative to some system of music or as a painting. From the Bayesian perspective, any sufficiently complex problem has...these types of algorithms based on maximum entropy analysis. An example is the Bar-Shalom-Campo fusion rule: $\hat{x}_f(k|k) = \hat{x}_2(k|k) + (P_{22} - P_{21})\,U^{-1}\,[\hat{x}_1(k|k) - \hat{x}_2(k|k)]$, where $U = P_{11} + P_{22} - P_{12} - P_{21}$.
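The cited fusion rule is standard enough to state as runnable code (a NumPy sketch; the variable names are ours, not the report's):

```python
# Bar-Shalom-Campo fusion of two track estimates with correlated errors.
import numpy as np

def bar_shalom_campo(x1, P1, x2, P2, P12):
    """Fused estimate x_f = x2 + (P2 - P21) U^{-1} (x1 - x2),
    with U = P1 + P2 - P12 - P21 and P21 = P12^T."""
    P21 = P12.T
    U = P1 + P2 - P12 - P21
    gain = (P2 - P21) @ np.linalg.inv(U)
    return x2 + gain @ (x1 - x2)
```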
Grammars Leak: Modeling How Phonotactic Generalizations Interact within the Grammar
ERIC Educational Resources Information Center
Martin, Andrew
2011-01-01
I present evidence from Navajo and English that weaker, gradient versions of morpheme-internal phonotactic constraints, such as the ban on geminate consonants in English, hold even across prosodic word boundaries. I argue that these lexical biases are the result of a MAXIMUM ENTROPY phonotactic learning algorithm that maximizes the probability of…
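The learning target can be made concrete with a toy log-linear (maximum entropy) grammar; the constraint set, violation counts, and weights below are invented for illustration:

```python
# Toy maxent phonotactic grammar: P(form) proportional to exp(-sum_i w_i * v_i).
import numpy as np

forms = ["VkV", "Vk.kV (morpheme-internal geminate)", "Vk#kV (geminate across boundary)"]
violations = np.array([
    [0, 0],   # violates neither constraint
    [1, 0],   # violates the internal geminate ban
    [0, 1],   # violates the weaker cross-boundary version
])
weights = np.array([3.0, 1.0])  # the cross-boundary "leak" gets a lower weight

harmony = -violations @ weights
probs = np.exp(harmony) / np.exp(harmony).sum()
for f, p in zip(forms, probs):
    print(f"{f}: {p:.3f}")
```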
LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.
Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu
2005-01-01
Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
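The final rescoring step amounts to a weighted sum of log scores; a minimal sketch (the score names and weights are illustrative, not the workshop systems' values):

```python
# Log-linear combination of per-hypothesis log scores for second-pass rescoring.
def combine_scores(log_scores, weights):
    """total = sum_i w_i * log_score_i; the hypothesis with the highest total wins."""
    return sum(weights[name] * s for name, s in log_scores.items())

hyp = {"acoustic": -120.3, "language_model": -45.1, "pronunciation": -60.7}
w = {"acoustic": 1.0, "language_model": 0.8, "pronunciation": 0.5}
print(combine_scores(hyp, w))
```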