DOE Office of Scientific and Technical Information (OSTI.GOV)
Giovannetti, Vittorio; Maccone, Lorenzo; Shapiro, Jeffrey H.
The minimum Rényi and Wehrl output entropies are found for bosonic channels in which the signal photons are either randomly displaced by a Gaussian distribution (classical-noise channel), or coupled to a thermal environment through lossy propagation (thermal-noise channel). It is shown that the Rényi output entropies of integer orders z ≥ 2 and the Wehrl output entropy are minimized when the channel input is a coherent state.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
The Conditional Entropy Power Inequality for Bosonic Quantum Systems
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario
2018-06-01
We prove the conditional Entropy Power Inequality for Gaussian quantum systems. This fundamental inequality determines the minimum quantum conditional von Neumann entropy of the output of the beam-splitter or of the squeezing among all the input states where the two inputs are conditionally independent given the memory and have given quantum conditional entropies. We also prove that, for any couple of values of the quantum conditional entropies of the two inputs, the minimum of the quantum conditional entropy of the output given by the conditional Entropy Power Inequality is asymptotically achieved by a suitable sequence of quantum Gaussian input states. Our proof of the conditional Entropy Power Inequality is based on a new Stam inequality for the quantum conditional Fisher information and on the determination of the universal asymptotic behaviour of the quantum conditional entropy under the heat semigroup evolution. The beam-splitter and the squeezing are the central elements of quantum optics, and can model the attenuation, the amplification and the noise of electromagnetic signals. This conditional Entropy Power Inequality will have a strong impact in quantum information and quantum cryptography. Among its many possible applications there is the proof of a new uncertainty relation for the conditional Wehrl entropy.
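For reference, the inequality in question can be stated compactly (our transcription, hedged; λ denotes the beam-splitter transmissivity, A and B the n-mode inputs conditionally independent given the memory M, and C the output):

$$
\exp\!\left(\frac{S(C\mid M)}{n}\right)\;\ge\;\lambda\,\exp\!\left(\frac{S(A\mid M)}{n}\right)+(1-\lambda)\,\exp\!\left(\frac{S(B\mid M)}{n}\right).
$$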
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, Saikat; Shapiro, Jeffrey H.; Erkmen, Baris I.
Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giovannetti, Vittorio; Lloyd, Seth; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Rényi entropies at the output of a channel. The conjecture is proven true for all Rényi entropies of integer order greater than two in a class of Gaussian bosonic channels where the input signal is randomly displaced or where it is coupled linearly to an external environment.
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the components of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of the uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of the fuzzy inputs but also reflect, to a certain degree, the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a useful reference for engineering design and the optimization of structural systems.
Unbiased All-Optical Random-Number Generator
NASA Astrophysics Data System (ADS)
Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja
2017-10-01
The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.
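The closing claim, that randomness is confirmed via conditional entropies, can be illustrated with a minimal sketch (not the authors' analysis code; the simulated stream and context lengths k are placeholders): estimate H(X_{n+1} | last k bits) from empirical counts, which stays near 1 bit per symbol for an unbiased, memoryless source.

```python
import numpy as np
from collections import Counter

def conditional_entropy(bits, k):
    """Estimate H(X_{n+1} | previous k bits) in bits per symbol."""
    ctx, joint = Counter(), Counter()
    for i in range(len(bits) - k):
        c = tuple(bits[i:i + k])
        ctx[c] += 1
        joint[c + (bits[i + k],)] += 1
    n = sum(joint.values())
    return -sum(cnt / n * np.log2(cnt / ctx[key[:k]])
                for key, cnt in joint.items())

rng = np.random.default_rng(0)
stream = list(rng.integers(0, 2, 200_000))  # stand-in for a measured bit stream
for k in (1, 2, 4, 8):
    print(k, conditional_entropy(stream, k))  # ~1.0 bit/symbol when memoryless
```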
Highly Entangled, Non-random Subspaces of Tensor Products from Quantum Groups
NASA Astrophysics Data System (ADS)
Brannan, Michael; Collins, Benoît
2018-03-01
In this paper we describe a class of highly entangled subspaces of a tensor product of finite-dimensional Hilbert spaces arising from the representation theory of free orthogonal quantum groups. We determine their largest singular values and obtain lower bounds for the minimum output entropy of the corresponding quantum channels. An application to the construction of d-positive maps on matrix algebras is also presented.
First-order irreversible thermodynamic approach to a simple energy converter
NASA Astrophysics Data System (ADS)
Arias-Hernandez, L. A.; Angulo-Brown, F.; Paez-Hernandez, R. T.
2008-01-01
Several authors have shown that dissipative thermal cycle models based on finite-time thermodynamics exhibit loop-shaped curves of power output versus efficiency, as occurs in actual dissipative thermal engines. Within the context of first-order irreversible thermodynamics (FOIT), in this work we show that for an energy converter consisting of two coupled fluxes it is also possible to find loop-shaped curves of both power output and the so-called ecological function versus efficiency. In a previous work, Stucki [J. W. Stucki, Eur. J. Biochem. 109, 269 (1980)] used a FOIT approach to describe the modes of thermodynamic performance of oxidative phosphorylation involved in adenosine triphosphate (ATP) synthesis within mitochondria. In that work, the author did not use such loop-shaped curves; he proposed that oxidative phosphorylation operates in a steady state at both minimum entropy production and maximum efficiency simultaneously, by means of a conductance-matching condition between extreme states of zero and infinite conductances, respectively. In the present work we show that all of Stucki's results on the energetics of oxidative phosphorylation can be obtained without the so-called conductance-matching condition. On the other hand, we also show that the minimum entropy production state implies both null power output and null efficiency, and therefore this state is not attained by oxidative phosphorylation. Our results suggest that actual efficiency values of oxidative phosphorylation are better described by a mode of operation consisting of the simultaneous maximization of both the so-called ecological function and the efficiency.
A minimum entropy principle in the gas dynamics equations
NASA Technical Reports Server (NTRS)
Tadmor, E.
1986-01-01
Let u(x̄,t) be a weak solution of the Euler equations, governing the inviscid polytropic gas dynamics; in addition, u(x̄,t) is assumed to respect the usual entropy conditions connected with the conservative Euler equations. We show that such entropy solutions of the gas dynamics equations satisfy a minimum entropy principle, namely, that the spatial minimum of their specific entropy, ess inf_x s(u(x,t)), is an increasing function of time. This principle equally applies to discrete approximations of the Euler equations such as the Godunov-type and Lax-Friedrichs schemes. Our derivation of this minimum principle makes use of the fact that there is a family of generalized entropy functions connected with the conservative Euler equations.
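A minimal numerical illustration of the principle (our sketch, not Tadmor's; the grid size, CFL number, and Sod initial data are assumptions): evolve the 1D Euler equations with the Lax-Friedrichs scheme and check that the spatial minimum of the specific entropy s = log(p/ρ^γ) never decreases.

```python
import numpy as np

gamma = 1.4

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def specific_entropy(U):
    rho, mom, E = U
    p = (gamma - 1.0) * (E - 0.5 * mom**2 / rho)
    return np.log(p / rho**gamma)

# Sod shock-tube initial data on [0, 1]
N = 400
x = np.linspace(0.0, 1.0, N)
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(N), p / (gamma - 1.0)])

dx, t, T = x[1] - x[0], 0.0, 0.15
mins = []
while t < T:
    u = U[1] / U[0]
    c = np.sqrt(gamma * (gamma - 1.0) * (U[2] - 0.5 * U[0] * u**2) / U[0])
    dt = min(0.45 * dx / np.max(np.abs(u) + c), T - t)   # CFL-limited step
    F = flux(U)
    Unew = U.copy()                                      # boundary cells held fixed
    Unew[:, 1:-1] = 0.5 * (U[:, :-2] + U[:, 2:]) \
        - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])         # Lax-Friedrichs update
    U = Unew
    t += dt
    mins.append(specific_entropy(U).min())

# Minimum entropy principle: the spatial minimum is non-decreasing in time.
print(np.all(np.diff(mins) >= -1e-10))
```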
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis, because it enhances the impulsive component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients using the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. In optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, for faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
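For context, the classical MED iteration that the proposed method is benchmarked against can be sketched as follows (a Wiggins-style fixed-point iteration in plain numpy; the filter length, iteration count, and ridge term are illustrative assumptions, and this is not the authors' PSO-based variant).

```python
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import lfilter

def med_filter(x, L=30, iters=30):
    """Classical iterative Minimum Entropy Deconvolution (Wiggins-style).

    Designs an FIR filter f sharpening y = f * x by repeatedly solving the
    normal equations R f = b, where R is the input autocorrelation matrix
    and b cross-correlates the input x with y**3.
    """
    x = np.asarray(x, float)
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) - 1 + L]
    R = toeplitz(r) + 1e-9 * np.eye(L)   # small ridge for conditioning
    f = np.zeros(L)
    f[L // 2] = 1.0                      # delta-like initial filter
    for _ in range(iters):
        y = lfilter(f, 1.0, x)
        g = y**3
        b = np.array([np.dot(g[l:], x[:len(x) - l]) for l in range(L)])
        f = solve(R, b)
        f /= np.linalg.norm(f)           # fix the scale ambiguity
    return f

# Usage: lfilter(med_filter(x), 1.0, x) enhances impulsive content in x.
```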
Minimum entropy deconvolution and blind equalisation
NASA Technical Reports Server (NTRS)
Satorius, E. H.; Mulligan, J. J.
1992-01-01
Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
Low Streamflow Forecasting using Minimum Relative Entropy
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation such that the relative entropy of the underlying process is minimized, allowing the time series to be forecasted. Different priors, such as uniform, exponential, and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of the low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
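The MESA baseline mentioned above can be sketched with Burg's recursion (our minimal implementation under standard AR-model assumptions; the model order, toy series, and forecast horizon are placeholders, not the authors' minimum-relative-entropy code).

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a (with a[0] = 1) and noise variance E."""
    x = np.asarray(x, float)
    a = np.array([1.0])
    E = np.mean(x**2)
    ef, eb = x[1:].copy(), x[:-1].copy()       # forward / backward errors
    for _ in range(order):
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        ef, eb = ef + k * eb, eb + k * ef      # update with old values
        ef, eb = ef[1:], eb[:-1]               # shrink valid range by one
        E *= 1.0 - k * k
    return a, E

def ar_forecast(x, a, steps):
    """Iterate x_hat[n] = -sum_{i>=1} a[i] * x[n-i]."""
    hist = list(x)
    out = []
    for _ in range(steps):
        nxt = -np.dot(a[1:], hist[-1:-len(a):-1])
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

# Example: fit an AR(4) model to a toy series and forecast 12 steps ahead.
rng = np.random.default_rng(0)
x = np.sin(np.arange(300) * 0.3) + 0.2 * rng.normal(size=300)
a, noise_var = burg_ar(x - x.mean(), 4)
forecast = ar_forecast(x - x.mean(), a, 12) + x.mean()
```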
Uncertainty relations with quantum memory for the Wehrl entropy
NASA Astrophysics Data System (ADS)
De Palma, Giacomo
2018-03-01
We prove two new fundamental uncertainty relations with quantum memory for the Wehrl entropy. The first relation applies to the bipartite memory scenario. It determines the minimum conditional Wehrl entropy among all the quantum states with a given conditional von Neumann entropy and proves that this minimum is asymptotically achieved by a suitable sequence of quantum Gaussian states. The second relation applies to the tripartite memory scenario. It determines the minimum of the sum of the Wehrl entropy of a quantum state conditioned on the first memory quantum system with the Wehrl entropy of the same state conditioned on the second memory quantum system and proves that also this minimum is asymptotically achieved by a suitable sequence of quantum Gaussian states. The Wehrl entropy of a quantum state is the Shannon differential entropy of the outcome of a heterodyne measurement performed on the state. The heterodyne measurement is one of the main measurements in quantum optics and lies at the basis of one of the most promising protocols for quantum key distribution. These fundamental entropic uncertainty relations will be a valuable tool in quantum information and will, for example, find application in security proofs of quantum key distribution protocols in the asymptotic regime and in entanglement witnessing in quantum optics.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
NASA Astrophysics Data System (ADS)
McDonald, Geoff L.; Zhao, Qing
2017-01-01
Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, the method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A further problem with the MED solution is that it is an iterative selection process and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it achieves the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem should target an impulse train as the output goal and should solve for the optimal filter directly, in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, for which the optimal filter solution can be solved directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed according to the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
2013-01-01
A new variational principle of steady states is found by introducing an integrated type of energy dissipation (or entropy production) instead of the instantaneous energy dissipation. This new principle is valid both in linear and nonlinear transport phenomena. Prigogine's dream has now been realized through this new general principle of minimum "integrated" entropy production (or energy dissipation). The new principle does not contradict the Onsager-Prigogine principle of minimum instantaneous entropy production in the linear regime, but it is conceptually different from the latter, which does not hold in the nonlinear regime. Applications of this theory to electric conduction, heat conduction, particle diffusion, and chemical reactions are presented. The irreversibility (or positive entropy production) and the long-time tail problem in Kubo's formula are also discussed in the Introduction and the last section. This constitutes the complementary explanation of our theory of entropy production given in the previous papers (M. Suzuki, Physica A 390 (2011) 1904 and M. Suzuki, Physica A 391 (2012) 1074), which motivated the present investigation of the variational principle.
Maximum and minimum entropy states yielding local continuity bounds
NASA Astrophysics Data System (ADS)
Hanson, Eric P.; Datta, Nilanjana
2018-04-01
Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.
Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei
2017-01-20
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic that is of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, which is a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation from the minimum entropy to the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and it achieves high accuracy and efficiency.
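The core scoring step can be sketched as follows (our simplification, not the authors' IME implementation; the cell size, yaw grid, and GPS-derived offset are placeholder assumptions): a candidate yaw is scored by the Shannon entropy of the 2D occupancy histogram of the merged clouds, and the minimum-entropy candidate wins.

```python
import numpy as np

def grid_entropy_2d(points_xy, cell=0.5):
    """Shannon entropy of the 2D occupancy histogram of projected points."""
    ij = np.floor(points_xy / cell).astype(int)
    _, counts = np.unique(ij, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def best_yaw(ref_xy, mov_xy, offset_xy, angles_deg):
    """Pick the yaw minimizing the entropy of the merged 2D distribution."""
    best = (np.inf, None)
    for a in np.deg2rad(angles_deg):
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        merged = np.vstack([ref_xy, mov_xy @ R.T + offset_xy])
        h = grid_entropy_2d(merged)
        if h < best[0]:
            best = (h, a)
    return best  # (minimum entropy, yaw in radians)

# Toy demo: a cloud rotated by +10 degrees is best re-aligned near -10 degrees.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 20, size=(2000, 2))
th = np.deg2rad(10.0)
mov = ref @ np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]).T
print(best_yaw(ref, mov, np.zeros(2), np.arange(-20, 21)))
```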
NASA Astrophysics Data System (ADS)
Açıkkalp, Emin; Caner, Necmettin
2015-09-01
In this study, a nano-scale irreversible Brayton cycle operating with quantum gases, including Bose and Fermi gases, is investigated. Developments in nanotechnology make the study of nano-scale machines, including thermal systems, unavoidable. A thermodynamic analysis of a nano-scale irreversible Brayton cycle operating with Bose and Fermi gases was performed (notably using the exergetic sustainability index), alongside classical evaluation parameters such as work output, exergy output, entropy generation, and energy and exergy efficiencies. Results are presented numerically, and some useful recommendations are made. Among the important results: entropy generation and the exergetic sustainability index are most affected by x for the Bose gas, while power output and exergy output are most affected by x for the Fermi gas. At high-temperature conditions, work output and entropy generation have high values compared with other degeneracy conditions.
On S-mixing entropy of quantum channels
NASA Astrophysics Data System (ADS)
Mukhamedov, Farrukh; Watanabe, Noboru
2018-06-01
In this paper, an S-mixing entropy of quantum channels is introduced as a generalization of Ohya's S-mixing entropy, and several of its properties are investigated. Moreover, certain relations between the S-mixing entropy and the existing map and output entropies of quantum channels are established. These relations allow us to find connections between separable states and the introduced entropy, yielding a sufficient condition to detect entangled states. Finally, the entropies of qubit and phase-damping channels are calculated.
Neuronal Entropy-Rate Feature of Entopeduncular Nucleus in Rat Model of Parkinson's Disease.
Darbin, Olivier; Jin, Xingxing; Von Wrangel, Christof; Schwabe, Kerstin; Nambu, Atsushi; Naritoku, Dean K; Krauss, Joachim K; Alam, Mesbah
2016-03-01
The function of the nigro-striatal pathway on neuronal entropy in the basal ganglia (BG) output nucleus, i.e. the entopeduncular nucleus (EPN), was investigated in the unilaterally 6-hydroxydopamine (6-OHDA)-lesioned rat model of Parkinson's disease (PD). In both control subjects and subjects with a 6-OHDA lesion of the dopamine (DA) nigro-striatal pathway, a histological hallmark of parkinsonism, neuronal entropy in the EPN was maximal in neurons with firing rates ranging between 15 and 25 Hz. In 6-OHDA-lesioned rats, neuronal entropy in the EPN was specifically higher in neurons with firing rates above 25 Hz. Our data establish that the nigro-striatal pathway controls neuronal entropy in motor circuitry and that the parkinsonian condition is associated with an abnormal relationship between firing rate and neuronal entropy in BG output nuclei. The firing rate and entropy relationship of neurons provides putatively relevant electrophysiological information for investigating sensory-motor processing in the normal condition and in conditions such as movement disorders.
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination, and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, the joint entropy was obtained by multiplying the square root of the spatial extent entropy by the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
Connectivity in the human brain dissociates entropy and complexity of auditory inputs
Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri
2015-01-01
Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
Zotin, A A
2012-01-01
Realization of the principle of minimum energy dissipation (Prigogine's theorem) during individual development has been analyzed. This analysis has suggested the following reformulation of the principle for living objects: when environmental conditions are constant, the living system evolves to a current steady state in such a way that the difference between entropy production and entropy flow (the ψ_u function) is positive and constantly decreases near the steady state, approaching zero. In turn, the current steady state tends to a final steady state in such a way that the difference between the specific entropy productions of an organism and its environment tends to a minimum. In general, individual development agrees completely with the law of entropy increase (the second law of thermodynamics).
Entropy considerations applied to shock unsteadiness in hypersonic inlets
NASA Astrophysics Data System (ADS)
Bussey, Gillian Mary Harding
The stability of curved or rectangular shocks in hypersonic inlets in response to flow perturbations can be determined analytically from the principle of minimum entropy. Unsteady shock wave motion can have a significant effect on the flow in a hypersonic inlet or combustor. According to the principle of minimum entropy, a stable thermodynamic state is one with the lowest entropy gain. A model based on piston theory, together with its limits, has been developed for applying the principle of minimum entropy to quasi-steady flow. Relations are derived for analyzing the time-averaged entropy gain flux across a shock for quasi-steady perturbations in atmospheric conditions and angle, treated as a perturbation in entropy gain flux from the steady state. Initial results from sweeping a wedge at Mach 10 through several degrees in AEDC's Tunnel 9 indicate that the bow shock becomes unsteady near the predicted normal Mach number. Several curved shocks of varying curvature are compared to a straight shock with the same mean normal Mach number, pressure ratio, or temperature ratio. The present work provides analysis and guidelines for designing an inlet robust to off-design flight or to the perturbations in flow conditions an inlet is likely to face. It also suggests that inlets with curved shocks are less robust to off-design flight than those with straight shocks, such as rectangular inlets. Relations for evaluating entropy perturbations for highly unsteady flow across a shock, and limits on their use, were also developed. The normal Mach number at which a shock can be stable to high-frequency upstream perturbations increases as the speed of the shock motion increases and slightly decreases as the perturbation size increases. The present work advances the minimum entropy principle by providing additional validity for using the theory for time-varying flows and by applying it to shocks, specifically those in inlets. While this analytic tool is applied here to evaluating the stability of shocks in hypersonic inlets, it can be used for any application involving a shock.
Minimum entropy density method for the time series analysis
NASA Astrophysics Data System (ADS)
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes arising in physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and hence the pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard and Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.
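A minimal sketch of the idea (our reading; the sign symbolization, scale range, and block-entropy-per-symbol estimator are assumptions, and the paper's exact entropy-density definition may differ): symbolize the series and find the word length L at which the entropy per symbol is smallest.

```python
import numpy as np
from collections import Counter

def entropy_density(symbols, L):
    """Block entropy per symbol, H(L)/L, for words of length L (bits/symbol)."""
    words = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1))
    n = sum(words.values())
    return -sum(c / n * np.log2(c / n) for c in words.values()) / L

rng = np.random.default_rng(1)
symbols = list((rng.normal(size=20_000) > 0).astype(int))  # sign-symbolized "returns"
scales = range(1, 11)   # keep L well below log2(len(symbols)) to limit sampling bias
h = [entropy_density(symbols, L) for L in scales]
structure_scale = scales[int(np.argmin(h))]
print(structure_scale, h)
```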
A compositional framework for Markov processes
NASA Astrophysics Data System (ADS)
Baez, John C.; Fong, Brendan; Pollard, Blake S.
2016-03-01
We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
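The steady-state "black box" behavior described above can be computed directly for a toy open Markov process (our sketch; the generator and clamped boundary probabilities are arbitrary assumptions): clamp the probabilities of the input/output states and solve the master equation for the interior.

```python
import numpy as np

# Toy open Markov process: 4 states in a chain; states 0 and 3 are boundary
# ("input"/"output") states whose probabilities are clamped from outside.
Q = np.array([   # infinitesimal generator, Q[i, j] = rate j -> i for i != j
    [-1.0,  0.5,  0.0,  0.0],
    [ 1.0, -1.5,  0.7,  0.0],
    [ 0.0,  1.0, -1.7,  0.3],
    [ 0.0,  0.0,  1.0, -0.3],
])
boundary, interior = [0, 3], [1, 2]
p_boundary = np.array([0.4, 0.1])    # clamped boundary probabilities

# Steady state: 0 = Q_II p_I + Q_IB p_B, so solve for the interior p_I.
Q_II = Q[np.ix_(interior, interior)]
Q_IB = Q[np.ix_(interior, boundary)]
p_interior = np.linalg.solve(Q_II, -Q_IB @ p_boundary)

# Net probability flows through the boundary (nonzero away from equilibrium).
p = np.zeros(4)
p[boundary], p[interior] = p_boundary, p_interior
print("interior steady state:", p_interior)
print("boundary inflows:", (Q @ p)[boundary])
```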
NASA Astrophysics Data System (ADS)
di Liberto, Francesco; Pastore, Raffaele; Peruggi, Fulvio
2011-05-01
When some entropy is transferred, by means of a reversible engine, from a hot heat source to a colder one, the maximum efficiency occurs, i.e. the maximum available work is obtained. Similarly, a reversible heat pump transfers entropy from a cold heat source to a hotter one with the minimum expense of energy. In contrast, if we are faced with non-reversible devices, there is some lost work for heat engines, and some extra work for heat pumps. These quantities are both related to entropy production. The lost work is also called 'degraded energy' or 'energy unavailable to do work'. The extra work is the excess of work performed on the system in the irreversible process with respect to the reversible one (or the excess of heat given to the hotter source in the irreversible process). Both quantities are analysed in detail and are evaluated for a complex process, i.e. the stepwise circular cycle, which is similar to the stepwise Carnot cycle. The stepwise circular cycle is a cycle performed by means of N small weights, dw, which are first added to and then removed from the piston of the vessel containing the gas, or vice versa. The work performed by the gas can be found as the increase of the potential energy of the dw's. Each single dw is identified and its increase in potential energy evaluated. In such a way it is found how the energy output of the cycle is distributed among the dw's. The size of the dw's affects the entropy production and therefore the lost and extra work. The distribution of increases depends on the chosen removal process.
Sadeghi Ghuchani, Mostafa
2018-02-08
This comment argues against the view that cancer cells produce less entropy than normal cells as stated in a recent paper by Marín and Sabater. The basic principle of estimation of entropy production rate in a living cell is discussed, emphasizing the fact that entropy production depends on both the amount of heat exchange during the metabolism and the entropy difference between products and substrates.
NASA Astrophysics Data System (ADS)
Whitney, Robert S.
2015-03-01
We investigate the nonlinear scattering theory for quantum systems with strong Seebeck and Peltier effects, and consider their use as heat engines and refrigerators with finite power outputs. This paper gives detailed derivations of the results summarized in a previous paper [R. S. Whitney, Phys. Rev. Lett. 112, 130601 (2014), 10.1103/PhysRevLett.112.130601]. It shows how to use the scattering theory to find (i) the quantum thermoelectric with maximum possible power output, and (ii) the quantum thermoelectric with maximum efficiency at given power output. The latter corresponds to a minimal entropy production at that power output. These quantities are of quantum origin since they depend on system size over electronic wavelength, and so have no analog in classical thermodynamics. The maximal efficiency coincides with Carnot efficiency at zero power output, but decreases with increasing power output. This gives a fundamental lower bound on entropy production, which means that reversibility (in the thermodynamic sense) is impossible for finite power output. The suppression of efficiency by (nonlinear) phonon and photon effects is addressed in detail; when these effects are strong, maximum efficiency coincides with maximum power. Finally, we show in particular limits (typically without magnetic fields) that relaxation within the quantum system does not allow the system to exceed the bounds derived for relaxation-free systems; however, a general proof of this remains elusive.
NASA Astrophysics Data System (ADS)
Feidt, Michel; Costea, Monica
2018-04-01
Many works have been devoted to finite-time thermodynamics since the contribution of Curzon and Ahlborn [1], which is generally considered its origin. Nevertheless, earlier works in this domain have since come to light [2], [3], and recently, results of an attempt to connect finite-time thermodynamics with linear irreversible thermodynamics according to Onsager's theory were reported [4]. The aim of the present paper is to extend and improve the approach to the thermodynamic optimization of generic objective functions of a Carnot engine in the linear response regime presented in [4]. The case study of the Carnot engine is revisited under the steady-state hypothesis, with the non-adiabaticity of the system considered and heat loss accounted for by an overall heat leak between the engine heat reservoirs. The optimization focuses on the main objective functions connected to engineering conditions, namely maximum efficiency or power output, besides the more fundamental one relative to entropy. Results given in [4] relative to maximum power output and minimum entropy production as objective functions are reconsidered and clarified, and the change from finite time to finite physical dimensions is shown to be effected through the heat flow rate at the source. Our modeling has led to new results for the Carnot engine optimization and shows that the primary interest for an engineer is mainly connected to what we call Finite Physical Dimensions Optimal Thermodynamics.
Delchini, Marc O.; Ragusa, Jean C.; Ferguson, Jim
2017-02-17
A viscous regularization technique, based on the local entropy residual, was proposed by Delchini et al. (2015) to stabilize the nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations using an artificial viscosity technique. This viscous regularization is modulated by the local entropy production and is consistent with the entropy minimum principle. However, Delchini et al. (2015) based their work only on the hyperbolic parts of the Grey Radiation-Hydrodynamic equations and thus omitted the relaxation and diffusion terms present in the material energy and radiation energy equations. In this paper, we extend the theoretical grounds for the method and derive an entropy minimum principle for the full set of nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations. This further strengthens the applicability of the entropy viscosity method as a stabilization technique for radiation-hydrodynamic shock simulations. Radiative shock calculations using constant and temperature-dependent opacities are compared against semi-analytical reference solutions, and we present a procedure for performing spatial convergence studies of such simulations.
On variational definition of quantum entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belavkin, Roman V.
Entropy of a distribution P can be defined in at least three different ways: 1) as the expectation of the Kullback-Leibler (KL) divergence of P from elementary δ-measures (in this case, it is interpreted as expected surprise); 2) as a negative KL-divergence of some reference measure ν from the probability measure P; 3) as the supremum of Shannon's mutual information taken over all channels such that P is the output probability, in which case it is the dual of a transportation problem. In classical (i.e. commutative) probability, all three definitions lead to the same quantity, providing only different interpretations of entropy. In non-commutative (i.e. quantum) probability, however, these definitions are not equivalent. In particular, the third definition, where the supremum is taken over all entanglements of two quantum systems with P being the output state, leads to a quantity that can be twice the von Neumann entropy. It was proposed originally by V. Belavkin and Ohya [1] and called the proper quantum entropy, because it allows one to define a quantum conditional entropy that is always non-negative. Here we extend these ideas to define also the quantum counterparts of proper cross-entropy and cross-information. We also show an inequality for the values of classical and quantum information.
Gaussian States Minimize the Output Entropy of One-Mode Quantum Gaussian Channels
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario; Giovannetti, Vittorio
2017-04-01
We prove the long-standing conjecture stating that Gaussian thermal input states minimize the output von Neumann entropy of one-mode phase-covariant quantum Gaussian channels among all the input states with a given entropy. Phase-covariant quantum Gaussian channels model the attenuation and the noise that affect any electromagnetic signal in the quantum regime. Our result is crucial to prove the converse theorems for both the triple trade-off region and the capacity region for broadcast communication of the Gaussian quantum-limited amplifier. Our result extends to the quantum regime the entropy power inequality that plays a key role in classical information theory. Our proof exploits a completely new technique based on the recent determination of the p → q norms of the quantum-limited amplifier [De Palma et al., arXiv:1610.09967]. This technique can be applied to any quantum channel.
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
NASA Astrophysics Data System (ADS)
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
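Both quantities are easy to compute for a small chain (our sketch; the relaxation-time proxy 1/(1-|λ₂|) stands in for the mixing time, and the example matrix is arbitrary):

```python
import numpy as np

def kse_and_mixing(P):
    """Entropy rate (KSE) and spectral-gap mixing-time proxy of a
    row-stochastic, irreducible transition matrix P."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()                        # stationary distribution
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    kse = -np.sum(pi[:, None] * P * logP)     # -sum_i pi_i sum_j P_ij log P_ij
    lam2 = sorted(np.abs(evals))[-2]          # second-largest eigenvalue modulus
    t_mix = 1.0 / (1.0 - lam2)                # relaxation-time proxy
    return kse, t_mix

P = np.array([[0.9, 0.1], [0.2, 0.8]])
print(kse_and_mixing(P))
```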
Free Energy in Introductory Physics
NASA Astrophysics Data System (ADS)
Prentis, Jeffrey J.; Obsniuk, Michael J.
2016-02-01
Energy and entropy are two of the most important concepts in science. For all natural processes where a system exchanges energy with its environment, the energy of the system tends to decrease and the entropy of the system tends to increase. Free energy is the special concept that specifies how to balance the opposing tendencies to minimize energy and maximize entropy. There are many pedagogical articles on energy and entropy. Here we present a simple model to illustrate the concept of free energy and the principle of minimum free energy.
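A minimal numerical companion (our sketch; the two-level system, level spacing, and temperature are illustrative): scan the excited-state probability p of a two-level system and locate the minimum of F = U - TS, which lands on the Boltzmann occupation.

```python
import numpy as np

eps, kT = 1.0, 0.5                        # level spacing and temperature
p = np.linspace(1e-6, 1 - 1e-6, 100_001)  # excited-state occupation probability
U = p * eps                               # average energy
S = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # entropy (k_B = 1)
F = U - kT * S                            # free energy to be minimized

p_min = p[np.argmin(F)]
p_boltzmann = np.exp(-eps / kT) / (1 + np.exp(-eps / kT))
print(p_min, p_boltzmann)                 # the two agree to grid resolution
```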
Force-Time Entropy of Isometric Impulse.
Hsieh, Tsung-Yu; Newell, Karl M
2016-01-01
The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that the peak force variability increased either with the increment of force level or through a shorter time to peak force that also reduced timing error variability. The peak force entropy and entropy of time to peak force increased on the respective dimension as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse.
Ratio of shear viscosity to entropy density in multifragmentation of Au + Au
NASA Astrophysics Data System (ADS)
Zhou, C. L.; Ma, Y. G.; Fang, D. Q.; Li, S. X.; Zhang, G. Q.
2012-06-01
The ratio of the shear viscosity (η) to entropy density (s) for intermediate-energy heavy-ion collisions has been calculated using the Green-Kubo method in the framework of the quantum molecular dynamics model. The curve of η/s as a function of incident energy for head-on Au + Au collisions shows that a minimum region of η/s is approached at higher incident energies, where the minimum η/s value is about 7 times the Kovtun-Son-Starinets (KSS) bound (1/4π). We argue that the onset of the minimum-η/s region at higher incident energies corresponds to the nuclear liquid-gas phase transition in nuclear multifragmentation.
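For reference, the Green-Kubo relation underlying such calculations has the schematic form (our transcription; normalization conventions vary across the literature):

$$
\eta \;=\; \frac{1}{T\,V}\int_0^\infty \left\langle \pi_{xy}(0)\,\pi_{xy}(t)\right\rangle\,dt,
$$

where π_{xy} is an off-diagonal component of the stress tensor, V the volume, and T the temperature (with k_B = 1).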
Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states
NASA Technical Reports Server (NTRS)
Aragone, C.; Mundarain, D.
1993-01-01
We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for the angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima of the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used in system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
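As a hedged sketch of the minimum-entropy criterion (the PDF-shape-control variant is not reproduced here), Renyi's quadratic entropy of the modeling errors can be estimated with a Parzen window and used to rank candidate model parameters; residuals_for below is a hypothetical helper returning the modeling errors for a given parameter value.

    import numpy as np

    def renyi_quadratic_error_entropy(errors, sigma=0.5):
        # Parzen estimate: H2 = -log V, where the information potential is
        # V = (1/N^2) sum_ij G(e_i - e_j), G a Gaussian of variance 2*sigma^2
        e = np.asarray(errors, dtype=float)
        d = e[:, None] - e[None, :]
        var = 2.0 * sigma ** 2
        V = np.mean(np.exp(-d ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var))
        return -np.log(V)

    # hypothetical usage: pick the parameter whose error entropy is smallest
    # best = min(grid, key=lambda th: renyi_quadratic_error_entropy(residuals_for(th)))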
Numerical estimation of the relative entropy of entanglement
NASA Astrophysics Data System (ADS)
Zinchenko, Yuriy; Friedland, Shmuel; Gour, Gilad
2010-11-01
We propose a practical algorithm for the calculation of the relative entropy of entanglement (REE), defined as the minimum relative entropy between a state and the set of states with positive partial transpose. Our algorithm is based on a practical semidefinite cutting-plane approach. In low dimensions, the implementation of the algorithm in MATLAB provides an estimate of the REE with an absolute error smaller than 10^-3.
Intrinsic Information Processing and Energy Dissipation in Stochastic Input-Output Dynamical Systems
2015-07-09
Publications recovered from the report front matter: J. Crutchfield, "Information Anatomy of Stochastic Equilibria," Entropy (2014), doi: 10.3390/e16094713; V. Griffith, E. Chong, R. James, C. Ellison, and J. Crutchfield, "Intersection Information Based on Common Randomness," Entropy (2014), doi: 10.3390/e16041985.
Technological Illusions and the Entropy of American Defense
2014-04-11
Only report front matter survives extraction. Recoverable fragments: an abstract opening, "In physics, entropy constitutes..."; and figure titles: Figure 1 (Reversible & Irreversible), Figure 2: Active Duty Personnel & DOD Budget Trends (1953-2013), Figure 3: DOD Entropy vs Output.
A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2013-02-01
Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization, where a code is used for quantization-interval labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine the separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of the binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff as code length increases. Extensive experimental results vindicate the superiority of our schemes over existing encoding schemes in discretization performance. This opens up the possibility of achieving much greater classification performance with high output entropy.
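The exact LSSC construction is not spelled out in the abstract; one simple code with the full separability property it describes is the unary (thermometer) code, in which the Hamming distance between the labels of intervals i and j is exactly |i - j|, at the cost of a code length that grows linearly with the number of intervals, illustrating the entropy-redundancy tradeoff mentioned above.

    def thermometer_code(index, n_intervals):
        # (n_intervals - 1) bits, the first `index` of them set to 1;
        # Hamming distance between codewords i and j is exactly |i - j|
        length = n_intervals - 1
        return [1] * index + [0] * (length - index)

    codes = [thermometer_code(i, 5) for i in range(5)]
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    assert all(hamming(codes[i], codes[j]) == abs(i - j)
               for i in range(5) for j in range(5))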
Entropy coders for image compression based on binary forward classification
NASA Astrophysics Data System (ADS)
Yoo, Hoon; Jeong, Jechang
2000-12-01
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but the total amount of classified output information equals the amount of input information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
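As a reminder of the second stage, a minimal Golomb-Rice encoder with a power-of-two parameter m = 2^k is sketched below; the BFC classification stage itself is not reproduced.

    def rice_encode(n, k):
        # Golomb-Rice code, m = 2**k (k >= 1): quotient in unary, remainder in k bits
        q = n >> k                      # unary part: q ones plus a terminating zero
        r = n & ((1 << k) - 1)          # k low-order remainder bits
        return "1" * q + "0" + format(r, "0{}b".format(k))

    print([rice_encode(n, 2) for n in range(6)])
    # ['000', '001', '010', '011', '1000', '1001']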
NASA Astrophysics Data System (ADS)
Li, Gang; Zhao, Qing
2017-03-01
In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model demonstrates superior performance compared to the regular SS method, and it shows comparable or better performance, with much less computational intensity, than the ARMED method.
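A heavily simplified sketch of the MED idea, not the paper's MEDSS construction and not Wiggins' classical Toeplitz iteration: choose an FIR filter whose output maximizes a normalized fourth-moment (kurtosis-like) impulsiveness measure, here with a generic optimizer from scipy.

    import numpy as np
    from scipy.optimize import minimize

    def med_filter(x, L=16):
        # maximize sum(y^4) / sum(y^2)^2 over FIR coefficients f, y = f * x;
        # impulsive (low-entropy) outputs score high on this measure
        def neg_impulsiveness(f):
            y = np.convolve(x, f, mode="valid")
            return -np.sum(y ** 4) / np.sum(y ** 2) ** 2
        f0 = np.zeros(L)
        f0[L // 2] = 1.0                 # start from a pass-through filter
        return minimize(neg_impulsiveness, f0, method="Nelder-Mead").x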
Aeroacoustic and aerodynamic applications of the theory of nonequilibrium thermodynamics
NASA Technical Reports Server (NTRS)
Horne, W. Clifton; Smith, Charles A.; Karamcheti, Krishnamurty
1991-01-01
Recent developments in the field of nonequilibrium thermodynamics associated with viscous flows are examined and related to the understanding of specific phenomena in aerodynamics and aeroacoustics. A key element of the nonequilibrium theory is the principle of minimum entropy production rate for steady dissipative processes near equilibrium, and variational calculus is used to apply this principle to several examples of viscous flow. A review of nonequilibrium thermodynamics and its role in fluid motion is presented. Several formulations are presented of the local entropy production rate and the local energy dissipation rate, two quantities that are of central importance to the theory. These expressions and the principle of minimum entropy production rate for steady viscous flows are used to identify parallel-wall channel flow and irrotational flow as having minimally dissipative velocity distributions. Features of irrotational, steady, viscous flow near an airfoil, such as the effect of trailing-edge radius on circulation, are also found to be compatible with the minimum principle. Finally, the minimum principle is used to interpret the stability of infinitesimal and finite-amplitude disturbances in an initially laminar, parallel shear flow, with results that are consistent with experiment and linearized hydrodynamic stability theory. These results suggest that a thermodynamic approach may be useful in unifying the understanding of many diverse phenomena in aerodynamics and aeroacoustics.
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
NASA Technical Reports Server (NTRS)
Shebalin, John V.
1997-01-01
The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed, and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium, while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.
Efficient optimization of the quantum relative entropy
NASA Astrophysics Data System (ADS)
Fawzi, Hamza; Fawzi, Omar
2018-04-01
Many quantum information measures can be written as an optimization of the quantum relative entropy between sets of states. For example, the relative entropy of entanglement of a state is the minimum relative entropy to the set of separable states. The various capacities of quantum channels can also be written in this way. We propose a unified framework to numerically compute these quantities using off-the-shelf semidefinite programming solvers, exploiting the approximation method proposed in Fawzi, Saunderson and Parrilo (2017 arXiv: 1705.00812). As a notable application, this method allows us to provide numerical counterexamples for a proposed lower bound on the quantum conditional mutual information in terms of the relative entropy of recovery.
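For context, the quantity being optimized is S(rho||sigma) = Tr[rho (log rho - log sigma)]; a direct numerical evaluation (not the semidefinite approximation of the paper) is straightforward.

    import numpy as np

    def safe_logm(A):
        # matrix log via eigendecomposition, treating 0*log(0) as 0
        w, v = np.linalg.eigh(A)
        logw = np.where(w > 1e-12, np.log(np.maximum(w, 1e-12)), 0.0)
        return (v * logw) @ v.conj().T

    def quantum_relative_entropy(rho, sigma):
        # finite only when the support of rho lies inside the support of sigma
        return np.real(np.trace(rho @ (safe_logm(rho) - safe_logm(sigma))))

    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # two-qubit Bell state
    rho = np.outer(psi, psi)
    sigma = np.eye(4) / 4                          # maximally mixed state
    print(quantum_relative_entropy(rho, sigma))    # log 4 ~ 1.386 nats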
NASA Astrophysics Data System (ADS)
Li, Jimeng; Li, Ming; Zhang, Jinfeng
2017-08-01
Rolling bearings are key components in modern machinery, and tough operating environments often make them prone to failure. However, due to the influence of the transmission path and background noise, the useful feature information relevant to the bearing fault contained in the vibration signals is weak, which makes it difficult to identify the fault symptom of rolling bearings in time. Therefore, this paper proposes a novel weak signal detection method based on a time-delayed feedback monostable stochastic resonance (TFMSR) system and adaptive minimum entropy deconvolution (MED) to realize the fault diagnosis of rolling bearings. The MED method is employed to preprocess the vibration signals, which can deconvolve the effect of the transmission path and clarify the defect-induced impulses. A modified power spectrum kurtosis (MPSK) index is constructed to realize adaptive selection of the filter length in the MED algorithm. By introducing a time-delayed feedback term into an overdamped monostable system, the TFMSR method can effectively utilize the historical information of the input signal to enhance the periodicity of the SR output, which is beneficial to the detection of periodic signals. Furthermore, the influence of time delay and feedback intensity on the SR phenomenon is analyzed, and by selecting an appropriate time delay, feedback intensity and re-scaling ratio with a genetic algorithm, the SR can be produced to realize the resonance detection of weak signals. The combination of the adaptive MED (AMED) method and the TFMSR method is conducive to extracting feature information from strong background noise and realizing the fault diagnosis of rolling bearings. Finally, experiments and an engineering application are performed to evaluate the effectiveness of the proposed AMED-TFMSR method in comparison with a traditional bistable SR method.
Wang, Haiqin; Liu, Wenlong; He, Fuyuan; Chen, Zuohong; Zhang, Xili; Xie, Xianggui; Zeng, Jiaoli; Duan, Xiaopeng
2012-02-01
To explore a one-time sampling quantitation of Houttuynia cordata through the information entropy carried by its polymorphic DNA bands, as another form of expressing the polymorphism, namely genetic polymorphism, of traditional Chinese medicine. The inter simple sequence repeat (ISSR) technique was applied to analyze the genetic polymorphism of H. cordata samples from the same GAP producing area; the DNA bands were transformed into information entropy, and the minimum one-time sampling quantitation was determined with a mathematical model. One hundred and thirty-four DNA bands were obtained by using 9 screened ISSR primers to amplify 46 strains of H. cordata from the same GAP area; the information entropy was H = 0.365 6-0.978 6, with an RSD of 14.75%. The one-time sampling quantitation was W = 11.22 kg (863 strains). The "minimum one-time sampling quantitation" was thus calculated from the angle of the genetic polymorphism of H. cordata, and a great difference was found between this amount and that obtained from the angle of fingerprinting.
Entropic bounds on currents in Langevin systems
NASA Astrophysics Data System (ADS)
Dechant, Andreas; Sasa, Shin-ichi
2018-06-01
We derive a bound on generalized currents for Langevin systems in terms of the total entropy production in the system and its environment. For overdamped dynamics, any generalized current is bounded by the total rate of entropy production. We show that this entropic bound on the magnitude of generalized currents imposes power-efficiency tradeoff relations for ratchets in contact with a heat bath: Maximum efficiency—Carnot efficiency for a Smoluchowski-Feynman ratchet and unity for a flashing or rocking ratchet—can only be reached at vanishing power output. For underdamped dynamics, while there may be reversible currents that are not bounded by the entropy production rate, we show that the output power and heat absorption rate are irreversible currents and thus obey the same bound. As a consequence, a power-efficiency tradeoff relation holds not only for underdamped ratchets but also for periodically driven heat engines. For weak driving, the bound results in additional constraints on the Onsager matrix beyond those imposed by the second law. Finally, we discuss the connection between heat and entropy in a nonthermal situation where the friction and noise intensity are state dependent.
Rényi-Fisher entropy product as a marker of topological phase transitions
NASA Astrophysics Data System (ADS)
Bolívar, J. C.; Nagy, Ágnes; Romera, Elvira
2018-05-01
The combined Rényi-Fisher entropy product of electrons plus holes displays a minimum at the charge neutrality points. The Stam-Rényi difference and the Stam-Rényi uncertainty product of the electrons plus holes show maxima at the charge neutrality points. Topological quantum numbers capable of detecting the topological insulator and band insulator phases are defined. Upper and lower bounds for the position and momentum space Rényi-Fisher entropy products are derived.
Statistical mechanical theory for steady state systems. VI. Variational principles
NASA Astrophysics Data System (ADS)
Attard, Phil
2006-12-01
Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretations and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Pisin; Hsin, Po-Shen; Niu, Yuezhen, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: r01222031@ntu.edu.tw, E-mail: yuezhenniu@gmail.com
We investigate the entropy evolution in the early universe by computing the change of the entanglement entropy in Friedmann-Robertson-Walker quantum cosmology in the presence of a particle horizon. The matter is modeled by a Chaplygin gas so as to provide a smooth interpolation between the inflationary and radiation epochs, rendering the evolution of entropy from early time to late time trackable. We find that soon after the onset of inflation, the total entanglement entropy rapidly decreases to a minimum. It then rises monotonically in the remainder of the inflation epoch as well as the radiation epoch. Our result is in qualitative agreement with the area law of Ryu and Takayanagi, including the logarithmic correction. We comment on the possible implications of our finding for the cosmological entropy problem.
Entropy of adsorption of mixed surfactants from solutions onto the air/water interface
Chen, L.-W.; Chen, J.-H.; Zhou, N.-F.
1995-01-01
The partial molar entropy change for mixed surfactant molecules adsorbed from solution at the air/water interface has been investigated by surface thermodynamics based upon experimental surface tension isotherms at various temperatures. Results for different surfactant mixtures of sodium dodecyl sulfate and sodium tetradecyl sulfate, decylpyridinium chloride and sodium alkylsulfonates have shown that the partial molar entropy changes for adsorption of the mixed surfactants were generally negative and decreased with increasing adsorption to a minimum near the maximum adsorption, and then increased abruptly. The entropy decrease can be explained by the adsorption orientation of surfactant molecules in the adsorbed monolayer, and the abrupt entropy increase at the maximum adsorption is possibly due to the strong repulsion between the adsorbed molecules.
Bimodal behavior of post-measured entropy and one-way quantum deficit for two-qubit X states
NASA Astrophysics Data System (ADS)
Yurischev, Mikhail A.
2018-01-01
A method for calculating the one-way quantum deficit is developed. It involves a careful study of post-measured entropy shapes. We discovered that in some regions of X-state space the post-measured entropy S̃ as a function of the measurement angle θ ∈ [0, π/2] exhibits bimodal behavior inside the open interval (0, π/2), i.e., it has two interior extrema: one minimum and one maximum. Furthermore, cases are found where the interior minimum of such a bimodal function S̃(θ) is less than the one at the endpoint θ = 0 or π/2. This leads to the formation of a boundary between the phases of the one-way quantum deficit via finite jumps of the optimal measurement angle from the endpoint to the interior minimum. A phase diagram is built up for a two-parameter family of X states. The subregions with variable optimal measurement angle are around 1% of the total region, with their relative linear sizes reaching 17.5%, and the fidelity between the states of those subregions can be reduced to F = 0.968. In addition, a correction to the one-way deficit due to the interior minimum can reach 2.3%. Such conditions are favorable for detecting the subregions with variable optimal measurement angle of the one-way quantum deficit in an experiment.
Mauda, R.; Pinchas, M.
2014-01-01
Recently, a new blind equalization method, inspired by the maximum entropy density approximation technique, was proposed for the 16QAM constellation input, with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to a minimum. Since differentiating the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part of the MSE expression that caused the nonlinearity was ignored. Thus, the obtained Lagrange multipliers were not those that bring the approximated MSE to a minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal-to-noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing high initial intersymbol interference (ISI), while the same equalization performance is obtained for an easy channel (low initial ISI). PMID:24723813
Nonequilibrium Thermodynamics in Biological Systems
NASA Astrophysics Data System (ADS)
Aoki, I.
2005-12-01
1. Respiration. Oxygen uptake by respiration in organisms decomposes macromolecules such as carbohydrates, proteins and lipids and liberates chemical energy of high quality, which is then used for chemical reactions and the motion of matter in organisms to support living order in their structure and function. Finally, this chemical energy becomes heat energy of low quality and is discarded to the outside (dissipation function). Accompanying this heat energy, the entropy production that inevitably occurs through irreversibility is also discarded to the outside. Dissipation function and entropy production are estimated from respiration data.

2. Human body. From observed respiration data (oxygen absorption), the entropy production in the human body can be estimated. Entropy production from 0 to 75 years of age has been obtained, and extrapolated to the fertilized egg (beginning of human life) and to 120 years of age (maximum period of human life). Entropy production shows characteristic behavior over the human life span: an early rapid increase in the short growing phase and a later slow decrease in the long aging phase. It is proposed that this tendency is ubiquitous and constitutes a Principle of Organization in complex biotic systems.

3. Ecological communities. From respiration data of eighteen aquatic communities, specific (i.e., per biomass) entropy productions are obtained. They show a two-phase character with respect to trophic diversity: an early increase and a later decrease as trophic diversity increases. The trophic diversity in these aquatic ecosystems is shown to be positively correlated with the degree of eutrophication, and the degree of eutrophication is an "arrow of time" in the hierarchy of aquatic ecosystems. Hence specific entropy production has two phases: an early increase and a later decrease with time.

4. Entropy principle for living systems. The Second Law of Thermodynamics has been expressed as follows. 1) In isolated systems, entropy increases with time and approaches a maximum value. This is the well-known classical Clausius principle. 2) In open systems near equilibrium, entropy production always decreases with time, approaching a minimum stationary level. This is the minimum entropy production principle of Prigogine. These two principles are well established. However, living systems are neither isolated nor near equilibrium, so neither principle can be applied to them. What, then, is the entropy principle for living systems? Answer: entropy production in living systems consists of multiple stages in time: early increasing, later decreasing and/or intermediate stages. This tendency is supported by various living systems.
Increased temperature and entropy production in cancer: the role of anti-inflammatory drugs.
Pitt, Michael A
2015-02-01
Some cancers have been shown to have a higher temperature than surrounding normal tissue. This higher temperature is due to heat generated internally in the cancer. The higher temperature of cancer (compared to surrounding tissue) enables a thermodynamic analysis to be carried out. Here I show that there is increased entropy production in cancer compared with surrounding tissue. This is termed excess entropy production. The excess entropy production is expressed in terms of heat flow from the cancer to surrounding tissue and enzymic reactions in the cancer and surrounding tissue. The excess entropy production in cancer drives it away from the stationary state that is characterised by minimum entropy production. Treatments that reduce inflammation (and therefore temperature) should drive a cancer towards the stationary state. Anti-inflammatory agents, such as aspirin, other non-steroidal anti-inflammatory drugs, corticosteroids and also thyroxine analogues have been shown (using various criteria) to reduce the progress of cancer.
NASA Astrophysics Data System (ADS)
Sabater, Bartolomé; Marín, Dolores
2018-03-01
The minimum rate principle is applied to the chemical reaction in a steady-state open cell system where, under constant supply of the glucose precursor, reference to time or to glucose consumption does not affect the conclusions.
An entropy method for induced drag minimization
NASA Technical Reports Server (NTRS)
Greene, George C.
1989-01-01
A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed-form solution is obtained for several wing configurations, including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportional to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map
NASA Astrophysics Data System (ADS)
Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad
2016-06-01
In recent years, there has been increasing interest in the security of digital images. This study focuses on grayscale image encryption using dynamic harmony search (DHS). In this research, a chaotic map is first used to create cipher images, and then maximum entropy and a minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, diffusion of a plain image is performed using DHS to maximize the entropy as a fitness function. In the second step, a horizontal and vertical permutation is applied to the best cipher image obtained in the previous step, with DHS used to minimize the correlation coefficient as the fitness function. The simulation results show that with the proposed method, maximum entropy and minimum correlation coefficient of approximately 7.9998 and 0.0001, respectively, are obtained.
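The two fitness functions named above are easy to state concretely. The sketch below scores a candidate cipher image by its 8-bit Shannon entropy (ideal: close to 8 bits/pixel) and by the correlation of horizontally adjacent pixels (ideal: close to 0); a random image stands in for a cipher image, and the DHS search itself is not reproduced.

    import numpy as np

    def shannon_entropy(img):
        # entropy of an 8-bit grayscale image, in bits per pixel
        hist = np.bincount(img.ravel(), minlength=256) / img.size
        nz = hist[hist > 0]
        return -np.sum(nz * np.log2(nz))

    def adjacent_correlation(img):
        # correlation coefficient of horizontally adjacent pixel pairs
        a = img[:, :-1].ravel().astype(float)
        b = img[:, 1:].ravel().astype(float)
        return np.corrcoef(a, b)[0, 1]

    cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in
    print(shannon_entropy(cipher), adjacent_correlation(cipher))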
Quantum Entanglement and the Topological Order of Fractional Hall States
NASA Astrophysics Data System (ADS)
Rezayi, Edward
2015-03-01
Fractional quantum Hall states or, more generally, topological phases of matter defy Landau classification based on order parameter and broken symmetry. Instead they have been characterized by their topological order. Quantum information concepts, such as quantum entanglement, appear to provide the most efficient method of detecting topological order solely from the knowledge of the ground state wave function. This talk will focus on real-space bi-partitioning of quantum Hall states and will present both exact diagonalization and quantum Monte Carlo studies of topological entanglement entropy in various geometries. Results on the torus for non-contractible cuts are quite rich and, through the use of minimum entropy states, yield the modular S-matrix and hence uniquely determine the topological order, as shown in recent literature. Concrete examples of minimum entropy states from known quantum Hall wave functions and their corresponding quantum numbers, used in exact diagonalizations, will be given. In collaboration with Clare Abreu and Raul Herrera. Supported by DOE Grant DE-SC0002140.
The minimum control authority of a system of actuators with applications to Gravity Probe-B
NASA Technical Reports Server (NTRS)
Wiktor, Peter; Debra, Dan
1991-01-01
The forcing capabilities of systems composed of many actuators are analyzed in this paper. Multiactuator systems can generate higher forces in some directions than in others. Techniques are developed to find the force in the weakest direction. This corresponds to the worst-case output and is defined as the 'minimum control authority'. The minimum control authority is a function of three things: the actuator configuration, the actuator controller and the way in which the output of the system is limited. Three output limits are studied: (1) fuel-flow rate, (2) power, and (3) actuator output. The three corresponding actuator controllers are derived. These controllers generate the desired force while minimizing either fuel flow rate, power or actuator output. It is shown that using the optimal controller can substantially increase the minimum control authority. The techniques for calculating the minimum control authority are applied to the Gravity Probe-B spacecraft thruster system. This example shows that the minimum control authority can be used to design the individual actuators, choose actuator configuration, actuator controller, and study redundancy.
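One of these cases has a closed form worth noting: if the actuator outputs are jointly limited in the quadratic norm, the reachable force set {Bu : ||u||_2 <= 1} is an ellipsoid, and the minimum control authority is the smallest singular value of the actuator matrix B. The sketch below uses a hypothetical two-axis, three-thruster layout; the fuel-flow and power limits studied in the paper lead to different geometries.

    import numpy as np

    # columns of B map individual actuator outputs to net force on the vehicle
    B = np.array([[1.0, 0.5, 0.0],
                  [0.0, 0.5, 1.0]])     # hypothetical thruster geometry

    # min over unit directions d of max_{||u||_2 <= 1} d.T @ B @ u = sigma_min(B)
    authority = np.linalg.svd(B, compute_uv=False).min()
    print(authority)                    # force achievable in the weakest direction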
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves better mapping quality in terms of the topology preservation measure q(m).
Minimum energy dissipation required for a logically irreversible operation
NASA Astrophysics Data System (ADS)
Takeuchi, Naoki; Yoshikawa, Nobuyuki
2018-01-01
According to Landauer's principle, the minimum heat emission required for computing is linked to logical entropy, or logical reversibility. The validity of Landauer's principle has been investigated for several decades and was finally demonstrated in recent experiments showing that the minimum heat emission is associated with the reduction in logical entropy during a logically irreversible operation. Although the relationship between minimum heat emission and logical reversibility is being revealed, it is not clear how much free energy must be dissipated in a logically irreversible operation. In the present study, in order to reveal the connection between logical reversibility and free energy dissipation, we numerically demonstrated logically irreversible protocols using adiabatic superconductor logic. The calculated work during the protocol showed that, while the minimum heat emission conforms to Landauer's principle, the free energy dissipation can be arbitrarily reduced by performing the protocol quasistatically. These results show that logical reversibility is not associated with thermodynamic reversibility, and that heat is not only emitted from logic devices but also absorbed by them. We also formulated the heat emission from adiabatic superconductor logic during a logically irreversible operation at a finite operation speed.
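For scale, the Landauer bound per erased bit is k_B T ln 2, which at the liquid-helium temperatures typical of superconductor logic is a few times 10^-23 J:

    import math

    k_B = 1.380649e-23            # Boltzmann constant, J/K (exact SI value)
    T = 4.2                       # K, a typical liquid-helium operating point
    print(k_B * T * math.log(2))  # ~ 4.0e-23 J per erased bit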
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian error distributions, this approach is not optimal. Therefore, rather than using probabilistic modeling, we propose an alternative non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts), we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper, we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters necessary in motor decoding is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA), and it scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture achieves sublinear increases in execution time with respect to both window size and filter order.
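A compact sketch of one MEE adaptation step for a linear decoder follows (a generic information-potential gradient ascent, not the paper's FPGA pipeline): the weights ascend the gradient of the Parzen-estimated information potential of the errors, which is equivalent to descending Renyi's quadratic error entropy.

    import numpy as np

    def mee_update(w, X, d, sigma=1.0, lr=0.1):
        # X: (N, L) window of inputs, d: (N,) desired outputs, w: (L,) weights
        e = d - X @ w                               # errors over the window
        de = e[:, None] - e[None, :]                # pairwise error differences
        dX = X[:, None, :] - X[None, :, :]          # pairwise input differences
        G = np.exp(-de ** 2 / (4 * sigma ** 2))     # Gaussian kernel values
        # gradient of V = mean_ij G(e_i - e_j) with respect to w
        grad = np.mean((G * de)[:, :, None] * dX, axis=(0, 1)) / (2 * sigma ** 2)
        return w + lr * grad                        # ascend V = descend entropy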
Conditional quantum entropy power inequality for d-level quantum systems
NASA Astrophysics Data System (ADS)
Jeong, Kabgyun; Lee, Soojoon; Jeong, Hyunseok
2018-04-01
We propose an extension of the quantum entropy power inequality for finite dimensional quantum systems, and prove a conditional quantum entropy power inequality by using the majorization relation as well as the concavity of entropic functions also given by Audenaert et al (2016 J. Math. Phys. 57 052202). Here, we make particular use of the fact that a specific local measurement after a partial swap operation (or partial swap quantum channel) acting only on finite dimensional bipartite subsystems does not affect the majorization relation for the conditional output states when a separable ancillary subsystem is involved. We expect our conditional quantum entropy power inequality to be useful, and applicable in bounding and analyzing several capacity problems for quantum channels.
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm is proposed based on population entropy diversity. In the improved algorithm, when the population entropy is higher than the set maximum threshold, the convergence strategy is adopted; when the population entropy is lower than the set minimum threshold, the divergence strategy is adopted; when the population entropy is between the two thresholds, the self-adaptive adjustment strategy is maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and then validated on another database containing 20 molecules. The validation results showed that the model has good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
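The paper's precise diversity measure and thresholds are not given in the abstract; the sketch below uses a per-dimension histogram entropy of the particle positions with hypothetical thresholds to illustrate the three-way strategy switch.

    import numpy as np

    def population_entropy(positions, bins=10):
        # Shannon entropy of swarm positions, histogrammed per dimension and averaged
        H = 0.0
        for dim in positions.T:
            hist, _ = np.histogram(dim, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            H -= np.sum(p * np.log(p))
        return H / positions.shape[1]

    def select_strategy(H, H_min=0.8, H_max=2.0):   # thresholds hypothetical
        if H > H_max:
            return "convergence"
        if H < H_min:
            return "divergence"
        return "self-adaptive"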
A new approach for minimum phase output definition
NASA Astrophysics Data System (ADS)
Jahangiri, Fatemeh; Talebi, Heidar Ali; Menhaj, Mohammad Bagher; Ebenbauer, Christian
2017-01-01
This paper presents a novel method of output redefinition for linear systems. The approach also determines the possible relative degrees of the system corresponding to any new output vector. To guarantee the minimum phase property with a prescribed relative degree, a set of new conditions is introduced. A key feature of these conditions is that no transformations are needed, which makes the scheme suitable for optimization problems in control that must ensure the minimum phase property. Moreover, the results are useful for sensor placement problems and for obtaining minimum phase approximations of non-minimum phase systems. Numerical examples, including an example of unmanned aerial vehicle systems, are given to demonstrate the effectiveness of the methodology.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
Open Markov Processes and Reaction Networks
NASA Astrophysics Data System (ADS)
Swistock Pollard, Blake Stephen
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
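The boundary-driven steady states described here can be computed directly: writing the master equation dp/dt = Hp with H infinitesimal stochastic, clamping the boundary probabilities, and requiring dp/dt = 0 on the interior states gives a linear system. A small sketch with illustrative 3-state rates not taken from the thesis:

    import numpy as np

    def boundary_driven_steady_state(H, boundary, p_boundary):
        # solve H[int, int] p_int = -H[int, bnd] p_bnd so dp/dt = 0 on the interior
        n = H.shape[0]
        interior = [i for i in range(n) if i not in boundary]
        A = H[np.ix_(interior, interior)]
        b = -H[np.ix_(interior, boundary)] @ p_boundary
        p = np.zeros(n)
        p[interior] = np.linalg.solve(A, b)
        p[boundary] = p_boundary
        return p

    # 3-state chain; states 0 and 2 are boundary states clamped by external couplings
    H = np.array([[-1.0,  1.0,  0.0],
                  [ 1.0, -2.0,  1.0],
                  [ 0.0,  1.0, -1.0]])   # columns sum to zero (infinitesimal stochastic)
    p = boundary_driven_steady_state(H, [0, 2], np.array([0.7, 0.1]))
    print(p)   # the interior state settles where its net probability inflow vanishes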
Boron Carbide Filled Neutron Shielding Textile Polymers
NASA Astrophysics Data System (ADS)
Manzlak, Derrick Anthony
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Parallel Unstructured Grid Generation for Complex Real-World Aerodynamic Simulations
NASA Astrophysics Data System (ADS)
Zagaris, George
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
NASA Astrophysics Data System (ADS)
Schiavone, Clinton Cleveland
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Processing and Conversion of Algae to Bioethanol
NASA Astrophysics Data System (ADS)
Kampfe, Sara Katherine
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
The Development of the CALIPSO LiDAR Simulator
NASA Astrophysics Data System (ADS)
Powell, Kathleen A.
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Exploring a Novel Approach to Technical Nuclear Forensics Utilizing Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Peeke, Richard Scot
NASA Astrophysics Data System (ADS)
Scully, Malcolm E.
Production of Cyclohexylene-Containing Diamines in Pursuit of Novel Radiation Shielding Materials
NASA Astrophysics Data System (ADS)
Bate, Norah G.
Development of Boron-Containing Polyimide Materials and Poly(arylene Ether)s for Radiation Shielding
NASA Astrophysics Data System (ADS)
Collins, Brittani May
Magnetization Dynamics and Anisotropy in Ferromagnetic/Antiferromagnetic Ni/NiO Bilayers
NASA Astrophysics Data System (ADS)
Petersen, Andreas
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm optimization algorithm is proposed in this study to overcome the tendency of traditional particle swarm optimization to become trapped in local optima when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure global optimization of the particle swarm and to prevent it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved minimum-entropy algorithm, the proposed method yielded a smaller entropy for the corrected image and a more accurate estimate of the bias field. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved minimum-entropy algorithm. This algorithm can be applied to the correction of MR image bias fields.
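A minimal sketch of the kind of adaptive inertia-weight scheme described above. The premature-convergence indicator and the weight schedule here are illustrative assumptions, not the authors' exact formulas, and the function name is hypothetical.

```python
import numpy as np

def adaptive_inertia_pso(f, bounds, n_particles=30, n_iter=200,
                         w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, seed=0):
    """Minimize f over a box using PSO with an adaptive inertia weight.

    The premature-convergence indicator used here is the normalized
    spread of particle fitness values; a small spread raises the inertia
    weight to restore global exploration (an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        fit = np.array([f(p) for p in x])
        better = fit < pbest_f
        pbest[better], pbest_f[better] = x[better], fit[better]
        g = pbest[np.argmin(pbest_f)]
        spread = np.std(fit) / (abs(np.mean(fit)) + 1e-12)
        w = w_min + (w_max - w_min) * np.exp(-spread)  # small spread -> large w
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
    return g, pbest_f.min()

# Usage: minimize a simple quadratic in 2-D.
g, fg = adaptive_inertia_pso(lambda p: np.sum(p ** 2), ([-5, -5], [5, 5]))
```

In the paper's setting, each particle would encode Legendre-polynomial coefficients and f would be the entropy of the bias-corrected image.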
Scaling of the entropy budget with surface temperature in radiative-convective equilibrium
NASA Astrophysics Data System (ADS)
Singh, Martin S.; O'Gorman, Paul A.
2016-09-01
The entropy budget of the atmosphere is examined in simulations of radiative-convective equilibrium with a cloud-system resolving model over a wide range of surface temperatures from 281 to 311 K. Irreversible phase changes and the diffusion of water vapor account for more than half of the irreversible entropy production within the atmosphere, even in the coldest simulation. As the surface temperature is increased, the atmospheric radiative cooling rate increases, driving a greater entropy sink that must be matched by greater irreversible entropy production. The entropy production resulting from irreversible moist processes increases at a similar fractional rate as the entropy sink and at a lower rate than that implied by Clausius-Clapeyron scaling. This allows the entropy production from frictional drag on hydrometeors and on the atmospheric flow to also increase with warming, in contrast to recent results for simulations with global climate models in which the work output decreases with warming. A set of approximate scaling relations is introduced for the terms in the entropy budget as the surface temperature is varied, and many of the terms are found to scale with the mean surface precipitation rate. The entropy budget provides some insight into changes in frictional dissipation in response to warming or changes in model resolution, but it is argued that frictional dissipation is not closely linked to other measures of convective vigor.
NASA Astrophysics Data System (ADS)
Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.
2016-12-01
NASA Astrophysics Data System (ADS)
Keum, J.; Coulibaly, P. D.
2017-12-01
Obtaining quality hydrologic observations is the first step toward successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has not diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the selection of the quantization method should be considered carefully because the rankings and optimal networks are subject to change accordingly.
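A minimal sketch of the entropy terms used in such a design. Fixed-width binning is one of several possible quantization choices, which is exactly the sensitivity the abstract points out; the function names are hypothetical.

```python
import numpy as np
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def quantize(x, n_bins=10):
    """Fixed-width binning; the choice of scheme affects station rankings."""
    edges = np.linspace(np.min(x), np.max(x), n_bins + 1)
    return np.digitize(x, edges[1:-1])

def joint_entropy(series, n_bins=10):
    """Joint entropy H(X1, ..., Xk) of several quantized series."""
    codes = [quantize(s, n_bins) for s in series]
    return shannon_entropy(list(zip(*codes)))

def total_correlation(series, n_bins=10):
    """C = sum_i H(Xi) - H(X1, ..., Xk): redundancy among stations."""
    marginal = sum(shannon_entropy(quantize(s, n_bins)) for s in series)
    return marginal - joint_entropy(series, n_bins)
```

A network design in this spirit would search for station subsets that maximize `joint_entropy` while minimizing `total_correlation`; the conditional entropy of one network given another follows as H(Q, P) - H(P) from the same building blocks.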
The cancer Warburg effect may be a testable example of the minimum entropy production rate principle
NASA Astrophysics Data System (ADS)
Marín, Dolores; Sabater, Bartolomé
2017-04-01
Cancer cells consume more glucose by glycolytic fermentation to lactate than by respiration, a characteristic known as the Warburg effect. In contrast with the 36 moles of ATP produced by respiration, fermentation produces two moles of ATP per mole of glucose consumed, which poses a puzzle with regard to the function of the Warburg effect. The production of free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) per mole varies linearly with the fraction (x) of glucose consumed by fermentation, which is frequently estimated at around 0.9. Hence, calculation shows that, with respect to pure respiration, the predominantly fermentative metabolism decreases the production of entropy per mole of glucose consumed in cancer cells by around 10%. We hypothesize that increased fermentation could allow cancer cells to conform to Prigogine's theorem on the tendency to minimize the rate of entropy production. According to the theorem, open cellular systems near the steady state could evolve to minimize their rates of entropy production, a state that may be reached by modified replicating cells producing entropy at a low rate. Remarkably, at CO2 concentrations above 930 ppm, glucose respiration produces less entropy than fermentation, which suggests experimental tests to validate the hypothesis that the rate of entropy production is minimized through the Warburg effect.
Marín, Dolores; Sabater, Bartolomé
2017-04-28
Cancer cells consume more glucose by glycolytic fermentation to lactate than by respiration, a characteristic known as the Warburg effect. In contrast with the 36 moles of ATP produced by respiration, fermentation produces two moles of ATP per mole of glucose consumed, which poses a puzzle with regard to the function of the Warburg effect. The production of free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) per mole varies linearly with the fraction (x) of glucose consumed by fermentation, which is frequently estimated at around 0.9. Hence, calculation shows that, with respect to pure respiration, the predominantly fermentative metabolism decreases the production of entropy per mole of glucose consumed in cancer cells by around 10%. We hypothesize that increased fermentation could allow cancer cells to conform to Prigogine's theorem on the tendency to minimize the rate of entropy production. According to the theorem, open cellular systems near the steady state could evolve to minimize their rates of entropy production, a state that may be reached by modified replicating cells producing entropy at a low rate. Remarkably, at CO2 concentrations above 930 ppm, glucose respiration produces less entropy than fermentation, which suggests experimental tests to validate the hypothesis that the rate of entropy production is minimized through the Warburg effect.
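The linear dependence on the fermentation fraction x invoked in the two records above can be written out explicitly (a sketch of the stated relationship; ΔS_resp and ΔS_ferm denote the entropy productions per mole of glucose for pure respiration and pure fermentation):

```latex
\Delta S(x) \;=\; (1-x)\,\Delta S_{\mathrm{resp}} \;+\; x\,\Delta S_{\mathrm{ferm}},
\qquad 0 \le x \le 1,
```

so with x ≈ 0.9 the overall entropy production sits close to the pure-fermentation endpoint, and the reported ~10% reduction relative to pure respiration follows from the difference between the two endpoint values.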
Minimum relative entropy distributions with a large mean are Gaussian
NASA Astrophysics Data System (ADS)
Smerlak, Matteo
2016-12-01
Entropy optimization principles are versatile tools with wide-ranging applications from statistical physics to engineering to ecology. Here we consider the following constrained problem: given a prior probability distribution q, find the posterior distribution p minimizing the relative entropy (also known as the Kullback-Leibler divergence) with respect to q under the constraint that mean(p) is fixed and large. We show that solutions to this problem are approximately Gaussian. We discuss two applications of this result. In the context of dissipative dynamics, the equilibrium distribution of a Brownian particle confined in a strong external field is independent of the shape of the confining potential. We also derive an H-type theorem for evolutionary dynamics: the entropy of the (standardized) distribution of fitness of a population evolving under natural selection is eventually increasing in time.
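A numerical illustration of this constrained problem, under assumptions of my own choosing (a log-normal prior on a grid, with the tilting parameter found by bisection): the minimizer of KL(p‖q) with a fixed mean is the exponentially tilted prior p(x) ∝ q(x) e^{λx}, and pushing the mean far into the tail makes the result look increasingly Gaussian, as the paper proves in general.

```python
import numpy as np

def tilted_posterior(x, q, target_mean, lam_lo=-1.0, lam_hi=1.0):
    """Minimize KL(p||q) on a grid subject to E_p[x] = target_mean.

    The minimizer is the exponentially tilted prior
    p(x) proportional to q(x) * exp(lam * x); the mean is monotone
    in lam, so lam can be found by bisection.
    """
    logq = np.log(q)

    def tilt(lam):
        logw = logq + lam * x
        w = np.exp(logw - logw.max())  # stabilized exponentiation
        return w / w.sum()

    for _ in range(100):
        lam = 0.5 * (lam_lo + lam_hi)
        if np.sum(tilt(lam) * x) < target_mean:
            lam_lo = lam
        else:
            lam_hi = lam
    return tilt(0.5 * (lam_lo + lam_hi))

# Log-normal prior; a target mean deep in the tail yields a near-Gaussian posterior.
x = np.linspace(0.01, 200.0, 20000)
q = np.exp(-0.5 * np.log(x) ** 2) / x
q /= q.sum()
p = tilted_posterior(x, q, target_mean=50.0)
```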
Entropy information of heart rate variability and its power spectrum during day and night
NASA Astrophysics Data System (ADS)
Jin, Li; Jun, Wang
2013-07-01
Physiologic systems generate complex fluctuations in their output signals that reflect the underlying dynamics. We employed the base-scale entropy method and power spectral analysis to study 24-hour heart rate variability (HRV) signals. The results show that profound circadian-, age- and pathology-dependent changes are accompanied by changes in base-scale entropy and power spectral distribution. Moreover, the base-scale entropy changes reflect the corresponding changes in autonomic nerve outflow. With the suppression of vagal tone and the dominance of sympathetic tone in congestive heart failure (CHF) subjects, there is more variability in the data fluctuation mode, so the higher base-scale entropy belongs to the CHF subjects. With the decrease of sympathetic tone and respiratory sinus arrhythmia (RSA) becoming more pronounced with slower breathing during sleep, the base-scale entropy drops in CHF subjects. The HRV series of the two healthy groups have the same diurnal/nocturnal trend as the CHF series. The fluctuation dynamics trend of the data in the three groups can be described as an “HF effect”.
Post, Richard F.
2016-02-23
A circuit-based technique enhances the power output of electrostatic generators employing an array of axially oriented rods or tubes or azimuthal corrugated metal surfaces for their electrodes. During generator operation, the peak voltage across the electrodes occurs at an azimuthal position that is intermediate between the position of minimum gap and maximum gap. If this position is also close to the azimuthal angle where the rate of change of capacity is a maximum, then the highest rf power output possible for a given maximum allowable voltage at the minimum gap can be attained. This rf power output is then coupled to the generator load through a coupling condenser that prevents suppression of the dc charging potential by conduction through the load. Optimized circuit values produce phase shifts in the rf output voltage that allow higher power output to occur at the same voltage limit at the minimum gap position.
Method and system for managing an electrical output of a turbogenerator
Stahlhut, Ronnie Dean; Vuk, Carl Thomas
2009-06-02
The system and method manages an electrical output of a turbogenerator in accordance with multiple modes. In a first mode, a direct current (DC) bus receives power from a turbogenerator output via a rectifier where turbogenerator revolutions per unit time (e.g., revolutions per minute (RPM)) or an electrical output level of a turbogenerator output meet or exceed a minimum threshold. In a second mode, if the turbogenerator revolutions per unit time or electrical output level of a turbogenerator output are less than the minimum threshold, the electric drive motor or a generator mechanically powered by the engine provides electrical energy to the direct current bus.
Method and system for managing an electrical output of a turbogenerator
Stahlhut, Ronnie Dean; Vuk, Carl Thomas
2010-08-24
The system and method manages an electrical output of a turbogenerator in accordance with multiple modes. In a first mode, a direct current (DC) bus receives power from a turbogenerator output via a rectifier where turbogenerator revolutions per unit time (e.g., revolutions per minute (RPM)) or an electrical output level of a turbogenerator output meet or exceed a minimum threshold. In a second mode, if the turbogenerator revolutions per unit time or electrical output level of a turbogenerator output are less than the minimum threshold, the electric drive motor or a generator mechanically powered by the engine provides electrical energy to the direct current bus.
Use and validity of principles of extremum of entropy production in the study of complex systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitor Reis, A., E-mail: ahr@uevora.pt
2014-07-15
It is shown how both the principles of extremum of entropy production, which are often used in the study of complex systems, follow from the maximization of overall system conductivities, under appropriate constraints. In this way, the maximum rate of entropy production (MEP) occurs when all the forces in the system are kept constant. On the other hand, the minimum rate of entropy production (mEP) occurs when all the currents that cross the system are kept constant. A brief discussion on the validity of the application of the mEP and MEP principles in several cases, and in particular to the Earth's climate, is also presented. -- Highlights: •The principles of extremum of entropy production are not first principles. •They result from the maximization of conductivities under appropriate constraints. •The conditions of their validity are set explicitly. •Some long-standing controversies are discussed and clarified.
Morgaz, Juan; Granados, María del Mar; Domínguez, Juan Manuel; Navarrete, Rocío; Fernández, Andrés; Galán, Alba; Muñoz, Pilar; Gómez-Villamandos, Rafael J
2011-06-01
The use of spectral entropy to determine anaesthetic depth and antinociception was evaluated in sevoflurane-anaesthetised Beagle dogs. Dogs were anaesthetised at each of five multiples of their individual minimum alveolar concentrations (MAC; 0.75, 1, 1.25, 1.5 and 1.75 MAC), and response entropy (RE), state entropy (SE), RE-SE difference, burst suppression rate (BSR) and cardiorespiratory parameters were recorded before and after a painful stimulus. RE, SE and RE-SE difference did not change significantly after the stimuli. The correlation between MAC-entropy parameters was weak, but these values increased when 1.75 MAC results were excluded from the analysis. BSR was different to zero at 1.5 and 1.75 MAC. It was concluded that RE and RE-SE differences were not adequate indicators of antinociception and SE and RE were unable to detect deep planes of anaesthesia in dogs, although they both distinguished the awake and unconscious states. Copyright © 2010 Elsevier Ltd. All rights reserved.
Entropy-based artificial viscosity stabilization for non-equilibrium Grey Radiation-Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delchini, Marc O., E-mail: delchinm@email.tamu.edu; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim, E-mail: jim.morel@tamu.edu
2015-09-01
The entropy viscosity method is extended to the non-equilibrium Grey Radiation-Hydrodynamic equations. The method employs a viscous regularization to stabilize the numerical solution. The artificial viscosity coefficient is modulated by the entropy production and peaks at shock locations. The added dissipative terms are consistent with the entropy minimum principle. A new functional form of the entropy residual, suitable for the Radiation-Hydrodynamic equations, is derived. We demonstrate that the viscous regularization preserves the equilibrium diffusion limit. The equations are discretized with a standard Continuous Galerkin Finite Element Method and a fully implicit temporal integrator within the MOOSE multiphysics framework. The method of manufactured solutions is employed to demonstrate second-order accuracy in both the equilibrium diffusion and streaming limits. Several typical 1-D radiation-hydrodynamic test cases with shocks (from Mach 1.05 to Mach 50) are presented to establish the ability of the technique to capture and resolve shocks.
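A minimal 1-D sketch of the entropy-viscosity idea, for Burgers' equation rather than the paper's Radiation-Hydrodynamic system: the artificial viscosity is capped by a first-order value and modulated by the local entropy residual, so it peaks at shocks and vanishes where the solution is smooth. Coefficients and normalization are common generic choices, not the paper's.

```python
import numpy as np

def entropy_viscosity_burgers(u, u_prev, dx, dt, c_e=1.0, c_max=0.5):
    """Entropy-viscosity coefficient for 1-D Burgers on a periodic grid.

    Entropy pair: S(u) = u^2/2 with entropy flux F(u) = u^3/3. The
    residual R = dS/dt + dF/dx vanishes for smooth solutions and
    spikes at shocks, where it switches on the dissipation.
    """
    S, S_prev = 0.5 * u ** 2, 0.5 * u_prev ** 2
    F = u ** 3 / 3.0
    dSdt = (S - S_prev) / dt
    dFdx = (np.roll(F, -1) - np.roll(F, 1)) / (2 * dx)
    residual = np.abs(dSdt + dFdx)
    norm = np.max(np.abs(S - S.mean())) + 1e-14
    nu_e = c_e * dx ** 2 * residual / norm   # entropy-based viscosity
    nu_max = c_max * dx * np.abs(u)          # first-order upper bound
    return np.minimum(nu_e, nu_max)
```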
New Insights into the Fractional Order Diffusion Equation Using Entropy and Kurtosis.
Ingo, Carson; Magin, Richard L; Parrish, Todd B
2014-11-01
Fractional order derivative operators offer a concise description to model multi-scale, heterogeneous and non-local systems. Specifically, in magnetic resonance imaging, there has been recent work to apply fractional order derivatives to model the non-Gaussian diffusion signal, which is ubiquitous in the movement of water protons within biological tissue. To provide a new perspective for establishing the utility of fractional order models, we apply entropy for the case of anomalous diffusion governed by a fractional order diffusion equation generalized in space and in time. This fractional order representation, in the form of the Mittag-Leffler function, gives an entropy minimum for the integer case of Gaussian diffusion and greater values of spectral entropy for non-integer values of the space and time derivatives. Furthermore, we consider kurtosis, defined as the normalized fourth moment, as another probabilistic description of the fractional time derivative. Finally, we demonstrate the implementation of anomalous diffusion, entropy and kurtosis measurements in diffusion weighted magnetic resonance imaging in the brain of a chronic ischemic stroke patient.
Zhao, Yong; Hong, Wen-Xue
2011-11-01
Fast, nondestructive and accurate identification of special-quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbolic entropy to identify near-infrared spectra of special-quality eggs. The authors selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured their near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm^-1. Raw spectra were symbolically represented with an aggregation approximation algorithm, and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that the identification of special-quality eggs using near-infrared spectroscopy is feasible and that symbolic entropy can be used as a new feature extraction method for near-infrared spectra.
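A sketch of the symbolization-plus-entropy feature described above. The aggregation and alphabet details here are generic choices in the spirit of piecewise aggregate approximation, not necessarily the authors' exact algorithm.

```python
import numpy as np
from collections import Counter

def symbol_entropy(spectrum, n_segments=64, n_symbols=8):
    """Symbolize a spectrum by piecewise aggregate approximation, then
    return the Shannon entropy (bits) of the symbol distribution."""
    x = (spectrum - spectrum.mean()) / (spectrum.std() + 1e-12)
    segments = np.array_split(x, n_segments)
    paa = np.array([s.mean() for s in segments])
    # Equal-width alphabet over the observed range of the PAA values.
    edges = np.linspace(paa.min(), paa.max(), n_symbols + 1)[1:-1]
    symbols = np.digitize(paa, edges)
    p = np.array(list(Counter(symbols).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))
```

One such scalar (or a small vector of them at several segment/alphabet sizes) per spectrum would then feed the multiclass SVM classifier.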
McMahon, Christopher J; Toomey, Joshua P; Kane, Deb M
2017-01-01
We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space external cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity, including systematically varying complexity in some regions. In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the 'minimum gradient detection algorithm'. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Applying the new 'minimum gradient detection algorithm' CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems, and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, and conventional physical measurements.
McMahon, Christopher J.; Toomey, Joshua P.
2017-01-01
Background We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space external cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity, including systematically varying complexity in some regions. Methods In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the ‘minimum gradient detection algorithm’. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Findings Applying the new ‘minimum gradient detection algorithm’ CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. Conclusions More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems, and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, and conventional physical measurements. PMID:28837602
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
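The Lagrangian formulation mentioned above trades average distortion against entropy (rate). In generic entropy-constrained quantizer notation (a sketch, not the paper's exact equations), the design minimizes

```latex
J \;=\; E\!\left[d(\mathbf{x},\hat{\mathbf{x}})\right] \;+\; \lambda\, H(\hat{\mathbf{x}}),
```

and the encoder correspondingly maps each input to the codeword minimizing d(x, x̂) + λ ℓ(x̂), where ℓ(x̂) is the codeword's entropy-coded length and λ ≥ 0 sweeps out the operational rate-distortion curve.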
Li, Jing Xin; Yang, Li; Yang, Lei; Zhang, Chao; Huo, Zhao Min; Chen, Min Hao; Luan, Xiao Feng
2018-03-01
Quantitative evaluation of ecosystem services is a primary premise for rational resource exploitation and sustainable development. Examining ecosystem service flows provides a scientific method to quantify ecosystem services. We built an assessment indicator system based on land cover/land use under the framework of four types of ecosystem services. The types of ecosystem service flows were reclassified. Using entropy theory, the degree of disorder and the developing trends of the indicators and the urban ecosystem were quantitatively assessed. Beijing was chosen as the study area, and twenty-four indicators were selected for evaluation. The results showed that the entropy value of the Beijing urban ecosystem during 2004 to 2015 was 0.794 and the entropy flow was -0.024, suggesting a high degree of disorder and a system near the verge of ill health. The system reached maximum values three times, while the mean annual variation of the system entropy value increased gradually over three periods, indicating that human activities had negative effects on the urban ecosystem. Entropy flow reached its minimum value in 2007, implying that environmental quality was best in 2007. The coefficient of determination for the fitted function of total permanent population in Beijing and urban ecosystem entropy flow was 0.921, indicating that urban ecosystem health was highly correlated with total permanent population.
Guastello, Stephen J; Gorin, Hillary; Huschen, Samuel; Peters, Natalie E; Fabisch, Megan; Poston, Kirsten
2012-10-01
It has become well established in laboratory experiments that switching tasks, perhaps due to interruptions at work, incurs costs in the response time to complete the next task. Conditions are also known that exaggerate or lessen the switching costs. Although switching costs can contribute to fatigue, task switching can also be an adaptive response to fatigue. The present study introduces a new research paradigm for studying the emergence of voluntary task switching regimes, self-organizing processes therein, and the possibly conflicting roles of switching costs and minimum entropy. Fifty-four undergraduates performed 7 different computer-based cognitive tasks, producing sets of 49 responses under instructional conditions requiring task quotas or no quotas. The sequences of task choices were analyzed using orbital decomposition to extract pattern types and lengths, which were then classified and compared with regard to Shannon entropy, topological entropy, number of task switches involved, and overall performance. Results indicated that similar but different patterns were generated under the two instructional conditions, and better performance was associated with lower topological entropy. Both entropy metrics were associated with the amount of voluntary task switching. Future research should explore conditions affecting the trade-off between switching costs and entropy, levels of automaticity between task elements, and the role of voluntary switching regimes in fatigue.
Optimization of a Circular Microchannel With Entropy Generation Minimization Method
NASA Astrophysics Data System (ADS)
Jafari, Arash; Ghazali, Normah Mohd
2010-06-01
New advances at micro and nano scales are being realized, and micro and nano heat dissipation devices make important contributions to this novel technology development. Past studies showed that microchannel design depends on thermal resistance and pressure drop. However, entropy generation minimization (EGM), as a more recent optimization theory, states that the rate of entropy generation should also be optimized. The application of EGM to microchannel heat sink design is reviewed and discussed in this paper. The latest principles for deriving the entropy generation relations are discussed to show how this approach can be carried out. An optimization procedure using the EGM method is derived for a circular microchannel heat sink based upon thermal resistance and pressure drop. The equations are solved using MATLAB and the results are compared to similar past studies. The effects of channel diameter, number of channels, heat flux, and pumping power on the entropy generation rate and Reynolds number are investigated. Analytical correlations are utilized for the heat transfer and friction coefficients. A minimum in entropy generation is observed for N = 40 and a channel diameter of 90 μm. It is concluded that for N = 40 and a channel hydraulic diameter of 90 μm, the circular microchannel heat sink is at its optimum operating point based on the second law of thermodynamics.
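In the EGM framework the quantity being minimized combines heat-transfer and fluid-friction irreversibilities. A common approximate form for a heat sink (a sketch in generic notation, not the paper's exact relation; Q is the heat load, R_th the sink thermal resistance, V̇ the volumetric flow rate, ΔP the pressure drop, and T the absolute coolant temperature):

```latex
\dot S_{\mathrm{gen}} \;\approx\; \frac{Q^{2} R_{\mathrm{th}}}{T^{2}} \;+\; \frac{\dot V\,\Delta P}{T}.
```

More or narrower channels lower R_th but raise ΔP, so an optimum such as the reported N = 40 at 90 μm balances the two terms.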
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong
The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao
2016-04-01
The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
ECOSYSTEM GROWTH AND DEVELOPMENT
Thermodynamically, ecosystem growth and development is the process by which energy throughflow and stored biomass increase. Several proposed hypotheses describe the natural tendencies that occur as an ecosystem matures, and here, we consider five: minimum entropy production, maxi...
Resting state fMRI entropy probes complexity of brain activity in adults with ADHD.
Sokunbi, Moses O; Fung, Wilson; Sawlani, Vijay; Choppin, Sabine; Linden, David E J; Thome, Johannes
2013-12-30
In patients with attention deficit hyperactivity disorder (ADHD), quantitative neuroimaging techniques have revealed abnormalities in various brain regions, including the frontal cortex, striatum, cerebellum, and occipital cortex. Nonlinear signal processing techniques such as sample entropy have been used to probe the regularity of brain magnetoencephalography signals in patients with ADHD. In the present study, we extend this technique to analyse the complex output patterns of 4-dimensional resting state functional magnetic resonance imaging signals in adult patients with ADHD. After adjusting for the effect of age, we found whole-brain entropy differences between groups (P=0.002) and a negative correlation (r=-0.45) between symptom scores and mean whole-brain entropy values, indicating lower complexity in patients. In the regional analysis, patients showed reduced entropy in frontal and occipital regions bilaterally and a significant negative correlation between the symptom scores and the entropy maps at a family-wise error corrected cluster level of P<0.05 (P=0.001, initial threshold). Our findings support the hypothesis of abnormal frontal-striatal-cerebellar circuits in ADHD and the suggestion that sample entropy is a useful tool in revealing abnormalities in the brain dynamics of patients with psychiatric disorders. © 2013 Elsevier Ireland Ltd. All rights reserved.
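A minimal sample entropy implementation of the kind used in such studies (brute-force Chebyshev matching; the parameter choices m = 2 and r = 0.2·SD are common conventions assumed here, not taken from the paper):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r) also
    match for m + 1 points. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += np.sum(d <= r)
        return total

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

Lower values indicate more regular (less complex) signals, which is the direction of the group difference reported above; in a voxelwise analysis the same calculation runs on each voxel's BOLD time series.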
Heat capacities and thermodynamic properties of annite (aluminous iron biotite)
Hemingway, B.S.; Robie, R.A.
1990-01-01
The heat capacities have been measured between 7 and 650 K by quasi-adiabatic calorimetry and differential scanning calorimetry. At 298.15 K and 1 bar, the calorimetric entropy for our sample is 354.9 ± 0.7 J/(mol·K). A minimum configurational entropy of 18.7 J/(mol·K) for full disorder of Al/Si in the tetrahedral sites should be added to the calorimetric entropy for third-law calculations. The heat capacity equation [Cp in units of J/(mol·K)] Cp° = 583.586 + 0.075246·T − 3420.60·T^(−1/2) − (4.4551 × 10^6)·T^(−2) fits the experimental and estimated heat capacities for our sample (valid range 250 to 1000 K) with an average deviation of 0.37%. -from Authors
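The quoted configurational term follows from ideal mixing of one Al and three Si over the four tetrahedral sites per formula unit; as a check of the arithmetic:

```latex
S_{\mathrm{conf}} \;=\; -4R\left(\tfrac{1}{4}\ln\tfrac{1}{4} + \tfrac{3}{4}\ln\tfrac{3}{4}\right)
\;\approx\; 18.7\ \mathrm{J\,mol^{-1}\,K^{-1}}.
```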
NASA Astrophysics Data System (ADS)
Björnbom, Pehr
2016-03-01
In the first part of this work equilibrium temperature profiles in fluid columns with ideal gas or ideal liquid were obtained by numerically minimizing the column energy at constant entropy, equivalent to maximizing column entropy at constant energy. A minimum in internal plus potential energy for an isothermal temperature profile was obtained in line with Gibbs' classical equilibrium criterion. However, a minimum in internal energy alone for adiabatic temperature profiles was also obtained. This led to a hypothesis that the adiabatic lapse rate corresponds to a restricted equilibrium state, a type of state in fact discussed already by Gibbs. In this paper similar numerical results for a fluid column with saturated air suggest that also the saturated adiabatic lapse rate corresponds to a restricted equilibrium state. The proposed hypothesis is further discussed and amended based on the previous and the present numerical results and a theoretical analysis based on Gibbs' equilibrium theory.
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
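A sketch of the iterative threshold search described above. The linear fuzzy entropy factor S(μ) = μ(1 − μ) is one simple choice consistent with the "simplified linear" description; the exact patented membership function is an assumption here, and the function name is hypothetical.

```python
import numpy as np

def fuzzy_threshold(image):
    """Pick the gray level minimizing a fuzzy entropy measure.

    Membership of each pixel in its class (foreground/background) is a
    triangular function of distance to the class mean, scaled by the
    image's own min and max gray levels (an illustrative assumption).
    """
    g = image.ravel().astype(float)
    span = max(g.max() - g.min(), 1e-12)
    best_t, best_h = None, np.inf
    for t in np.unique(g)[:-1]:
        fg, bg = g[g > t], g[g <= t]
        if fg.size == 0 or bg.size == 0:
            continue
        mu = np.where(g > t,
                      1.0 - np.abs(g - fg.mean()) / span,
                      1.0 - np.abs(g - bg.mean()) / span)
        mu = np.clip(mu, 0.0, 1.0)
        h = np.mean(mu * (1.0 - mu))  # linear fuzzy entropy factor
        if h < best_h:
            best_t, best_h = t, h
    return best_t
```

Minimizing the mean of μ(1 − μ) drives memberships toward crisp values of 0 or 1, i.e., toward the threshold at which pixels are least ambiguous about their class.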
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan
2018-01-01
This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.
Pfeiffer, Keram; French, Andrew S
2009-09-02
Neurotransmitter chemicals excite or inhibit a range of sensory afferents and sensory pathways. These changes in firing rate or static sensitivity can also be associated with changes in dynamic sensitivity or membrane noise and thus action potential timing. We measured action potential firing produced by random mechanical stimulation of spider mechanoreceptor neurons during long-duration excitation by the GABAA agonist muscimol. Information capacity was estimated from signal-to-noise ratio by averaging responses to repeated identical stimulation sequences. Information capacity was also estimated from the coherence function between input and output signals. Entropy rate was estimated by a data compression algorithm and maximum entropy rate from the firing rate. Action potential timing variability, or jitter, was measured as normalized interspike interval distance. Muscimol increased firing rate, information capacity, and entropy rate, but jitter was unchanged. We compared these data with the effects of increasing firing rate by current injection. Our results indicate that the major increase in information capacity by neurotransmitter action arose from the increased entropy rate produced by increased firing rate, not from reduction in membrane noise and action potential jitter.
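The coherence-based estimate referred to above is conventionally computed as a lower bound on the information rate from the magnitude-squared coherence γ²(f) between stimulus and response (a sketch of the standard formula, not a quotation from the paper):

```latex
R \;\ge\; -\int_{0}^{f_{c}} \log_{2}\!\left(1-\gamma^{2}(f)\right)\,df ,
```

with f_c the cutoff frequency of the stimulus bandwidth; perfect coherence at a frequency contributes unbounded rate, while γ² = 0 contributes nothing.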
NASA Astrophysics Data System (ADS)
Çakır, Süleyman
2017-10-01
In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select input and output variables to be used in the data envelopment analysis (DEA) application. In the second step, an interval inverse DEA model is executed for resource allocation in a short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure to handle input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.
Sample entropy analysis of cervical neoplasia gene-expression signatures
Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R
2009-01-01
Background We introduce approximate entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set, we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify the genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal, and 20 cancer and CIN3, samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the Approximate Sample Entropy approach in discerning alterations in complexity in a biological system from such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low a priori probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
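A hedged sketch of the core idea in the single-Gaussian case: replace a weighted particle ensemble with samples from the maximum-entropy density matching its first two moments. The function name and the toy 2-D ensemble are assumptions; the paper's mixture-of-Gaussians machinery is not reproduced here.

```python
import numpy as np

def maxent_gaussian_resample(particles, weights, rng):
    """Resample from the maximum-entropy (Gaussian) density that matches
    the weighted ensemble's mean and covariance; particles is (N, d)."""
    mean = np.average(particles, axis=0, weights=weights)
    cov = np.atleast_2d(np.cov(particles.T, aweights=weights))
    return rng.multivariate_normal(mean, cov, size=len(particles))

rng = np.random.default_rng(1)
parts = rng.normal(size=(500, 2))        # toy particle positions
w = rng.random(500); w /= w.sum()        # toy importance weights
fresh = maxent_gaussian_resample(parts, w, rng)   # new equal-weight ensemble
```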
The minimal work cost of information processing
NASA Astrophysics Data System (ADS)
Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato
2015-07-01
Irreversible information processing cannot be carried out without some inevitable thermodynamical work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices has become a limiting factor in their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditioned on the output of the computation. Our formula precisely accounts for the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.
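A small numeric illustration of the bound stated above, W ≥ k_B·T·ln 2·H(discarded | output). The joint distribution below is a made-up example, not taken from the paper.

```python
import numpy as np

k_B, T = 1.380649e-23, 300.0      # J/K and an assumed room temperature

# toy joint distribution p(discarded bit d, output bit o)
p = np.array([[0.40, 0.10],
              [0.10, 0.40]])
p_o = p.sum(axis=0)               # marginal over outputs

# conditional entropy H(D|O) in bits
H_cond = -sum(p[d, o] * np.log2(p[d, o] / p_o[o])
              for d in range(2) for o in range(2))

W_min = k_B * T * np.log(2) * H_cond   # minimal average work in joules
print(f"H(D|O) = {H_cond:.3f} bits  ->  W_min = {W_min:.3e} J")
```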
Adjusting protein graphs based on graph entropy.
Peng, Sheng-Lung; Tsay, Yu-Wei
2014-01-01
Measuring protein structural similarity attempts to establish a relationship of equivalence between polymer structures based on their conformations. In several recent studies, researchers have explored protein-graph remodeling, instead of looking for a minimum superimposition for pairwise proteins. When graphs are used to represent structured objects, the problem of measuring object similarity becomes one of computing the similarity between graphs. Graph theory provides an alternative perspective as well as efficiency. Once a protein graph has been created, its structural stability must be verified. Therefore, a criterion is needed to determine if a protein graph can be used for structural comparison. In this paper, we propose a measurement for protein graph remodeling based on graph entropy. We extend the concept of graph entropy to determine whether a graph is suitable for representing a protein. The experimental results suggest that, when applied, graph entropy helps in evaluating conformations in protein graph modeling. Furthermore, it indirectly contributes to protein structural comparison if a protein graph is solid.
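One simple, commonly used instance of a graph entropy, hedged as an illustration only: the Shannon entropy of a graph's normalized degree sequence. The paper's specific entropy measure may differ, and the random geometric graph below merely stands in for a protein graph.

```python
import networkx as nx
import numpy as np

def degree_entropy(G):
    """Shannon entropy (bits) of the normalized degree sequence of G."""
    deg = np.array([d for _, d in G.degree()], dtype=float)
    p = deg / deg.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

G = nx.random_geometric_graph(50, 0.25, seed=1)   # stand-in protein graph
print(f"graph entropy: {degree_entropy(G):.3f} bits")
```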
NASA Astrophysics Data System (ADS)
Auslander, Joseph Simcha
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Monitoring the Depth of Anesthesia Using a New Adaptive Neurofuzzy System.
Shalbaf, Ahmad; Saffar, Mohsen; Sleigh, Jamie W; Shalbaf, Reza
2018-05-01
Accurate and noninvasive monitoring of the depth of anesthesia (DoA) is highly desirable. Since anesthetic drugs act mainly on the central nervous system, the analysis of brain activity using the electroencephalogram (EEG) is very useful. This paper proposes a novel automated method for assessing the DoA using EEG. First, 11 features including spectral, fractal, and entropy measures are extracted from the EEG signal and then, by applying an algorithm based on an exhaustive search of all subsets of features, a combination of the best features (Beta-index, sample entropy, Shannon permutation entropy, and detrended fluctuation analysis) is selected. Accordingly, we feed these extracted features to a new neurofuzzy classification algorithm, the adaptive neurofuzzy inference system with linguistic hedges (ANFIS-LH). This structure can successfully model systems with nonlinear relationships between input and output, and also classify overlapping classes accurately. ANFIS-LH, which is based on modified classical fuzzy rules, reduces the effects of insignificant features in the input space, which cause overlapping, and modifies the output layer structure. The presented method classifies EEG data into awake, light, general, and deep states during anesthesia with sevoflurane in 17 patients. Its accuracy is 92% when compared to a commercial monitoring system (response entropy index). Moreover, this method reaches a classification accuracy of 93% in categorizing EEG signals into awake and general anesthesia states on another database of propofol and volatile anesthesia in 50 patients. To sum up, this method is potentially applicable to a new real-time monitoring system to help the anesthesiologist with continuous assessment of DoA quickly and accurately.
Entropy as a Gene-Like Performance Indicator Promoting Thermoelectric Materials.
Liu, Ruiheng; Chen, Hongyi; Zhao, Kunpeng; Qin, Yuting; Jiang, Binbin; Zhang, Tiansong; Sha, Gang; Shi, Xun; Uher, Ctirad; Zhang, Wenqing; Chen, Lidong
2017-10-01
High-throughput explorations of novel thermoelectric materials based on the Materials Genome Initiative paradigm focus only on digging into the structure-property space using nonglobal indicators to design materials with tunable electrical and thermal transport properties. As the genomic units, following the biogene tradition, such indicators include localized crystal structural blocks in real space or band degeneracy at certain points in reciprocal space. However, this nonglobal approach does not consider how real materials differentiate from others. Here, this study successfully develops a strategy of using entropy as a global gene-like performance indicator that shows how multicomponent thermoelectric materials with high entropy can be designed via a high-throughput screening method. Optimizing entropy works as an effective guide to greatly improve the thermoelectric performance, either by depressing the lattice thermal conductivity down to its theoretical minimum value or by enhancing the crystal-structure symmetry to yield large Seebeck coefficients. Entropy engineering using multicomponent crystal structures or other possible techniques provides a new avenue for improving the thermoelectric performance beyond current methods and approaches.
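The entropy optimized here is, at its simplest, the ideal configurational (mixing) entropy of a multicomponent lattice, S_config = -R Σ x_i ln x_i. A minimal sketch; the site occupancies below are hypothetical.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def config_entropy(x):
    """Ideal configurational entropy -R * sum(x_i ln x_i) per mole of sites."""
    x = np.asarray(x, dtype=float)
    x = x[x > 0]
    return -R * np.sum(x * np.log(x))

# hypothetical site occupancies of a multicomponent thermoelectric
print(config_entropy([0.25, 0.25, 0.25, 0.25]))  # equimolar 4-component: R ln 4
print(config_entropy([0.7, 0.1, 0.1, 0.1]))      # skewed mix: lower entropy
```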
A survey of the role of thermodynamic stability in viscous flow
NASA Technical Reports Server (NTRS)
Horne, W. C.; Smith, C. A.; Karamcheti, K.
1991-01-01
The stability of near-equilibrium states has been studied as a branch of the general field of nonequilibrium thermodynamics. By treating steady viscous flow as an open thermodynamic system, nonequilibrium principles such as the condition of minimum entropy-production rate for steady, near-equilibrium processes can be used to generate flow distributions from variational analyses. Examples considered in this paper are steady heat conduction, channel flow, and unconstrained three-dimensional flow. The entropy-production-rate condition has also been used for hydrodynamic stability criteria, and calculations of the stability of a laminar wall jet support this interpretation.
Investigating dynamical complexity in the magnetosphere using various entropy measures
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Daglis, Ioannis A.; Papadimitriou, Constantinos; Kalimeri, Maria; Anastasiadis, Anastasios; Eftaxias, Konstantinos
2009-09-01
The complex system of the Earth's magnetosphere corresponds to an open spatially extended nonequilibrium (input-output) dynamical system. The nonextensive Tsallis entropy has been recently introduced as an appropriate information measure to investigate dynamical complexity in the magnetosphere. The method has been employed for analyzing Dst time series and gave promising results, detecting the complexity dissimilarity among different physiological and pathological magnetospheric states (i.e., prestorm activity and intense magnetic storms, respectively). This paper explores the applicability and effectiveness of a variety of computable entropy measures (e.g., block entropy, Kolmogorov entropy, T complexity, and approximate entropy) to the investigation of dynamical complexity in the magnetosphere. We show that as the magnetic storm approaches there is clear evidence of significant lower complexity in the magnetosphere. The observed higher degree of organization of the system agrees with that inferred previously, from an independent linear fractal spectral analysis based on wavelet transforms. This convergence between nonlinear and linear analyses provides a more reliable detection of the transition from the quiet time to the storm time magnetosphere, thus showing evidence that the occurrence of an intense magnetic storm is imminent. More precisely, we claim that our results suggest an important principle: significant complexity decrease and accession of persistency in Dst time series can be confirmed as the magnetic storm approaches, which can be used as diagnostic tools for the magnetospheric injury (global instability). Overall, approximate entropy and Tsallis entropy yield superior results for detecting dynamical complexity changes in the magnetosphere in comparison to the other entropy measures presented herein. Ultimately, the analysis tools developed in the course of this study for the treatment of Dst index can provide convenience for space weather applications.
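For reference, the non-extensive Tsallis entropy used above has the closed form S_q = (1 - Σ p_i^q)/(q - 1). The histogram-based estimator, the choice q = 1.8, and the bin count below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def tsallis_entropy(series, q=1.8, bins=32):
    """Tsallis entropy S_q of a series, from the histogram of its values."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

rng = np.random.default_rng(5)
white = rng.standard_normal(5000)             # uncorrelated toy series
walk = np.cumsum(rng.standard_normal(5000))   # correlated, random-walk series
print(tsallis_entropy(white), tsallis_entropy(walk))
```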
Fuzzy geometry, entropy, and image information
NASA Technical Reports Server (NTRS)
Pal, Sankar K.
1991-01-01
Presented here are various uncertainty measures arising from grayness ambiguity and spatial ambiguity in an image, and their possible applications as image information measures. Definitions are given of an image in the light of fuzzy set theory, and of information measures and tools relevant for processing/analysis e.g., fuzzy geometrical properties, correlation, bound functions and entropy measures. Also given is a formulation of algorithms along with management of uncertainties for segmentation and object extraction, and edge detection. The output obtained here is both fuzzy and nonfuzzy. Ambiguity in evaluation and assessment of membership function are also described.
Divvy Economies Based On (An Abstract) Temperature
NASA Astrophysics Data System (ADS)
Collins, Dennis G.
2004-04-01
The Leontief Input-Output economic system can provide a model for a one-parameter family of economic systems based on an abstract temperature T. In particular, given a normalized input-output matrix R and taking R = R(1), a family of economic systems R(1/T) = R(α) is developed that represents heating (T>1) and cooling (T<1) of the economy relative to T=1. The economy for a given value of T represents the solution of a constrained maximum entropy problem.
Moisture sorption isotherms and thermodynamic properties of mexican mennonite-style cheese.
Martinez-Monteagudo, Sergio I; Salais-Fierro, Fabiola
2014-10-01
Moisture adsorption isotherms of fresh and ripened Mexican Mennonite-style cheese were investigated using the static gravimetric method at 4, 8, and 12 °C in a water activity (aw) range of 0.08-0.96. These isotherms were modeled using the GAB, BET, Oswin and Halsey equations through weighted non-linear regression. All isotherms were sigmoid in shape, showing a type II BET isotherm, and the data were best described by the GAB model. The GAB model coefficients revealed that water adsorption by the cheese matrix is a multilayer process characterized by molecules that are strongly bound in the monolayer and molecules that are slightly structured in a multilayer. Using the GAB model, it was possible to estimate thermodynamic functions (net isosteric heat, differential entropy, integral enthalpy and entropy, and enthalpy-entropy compensation) as functions of moisture content. For both samples, the isosteric heat and differential entropy decreased with moisture content in an exponential fashion. The integral enthalpy gradually decreased with increasing moisture content after reaching a maximum value, while the integral entropy decreased with increasing moisture content after reaching a minimum value. A linear compensation was found between integral enthalpy and entropy, suggesting enthalpy-controlled adsorption. Determining the moisture content-aw relationship yields important information for controlling the ripening, drying and storage operations, as well as for understanding the state of water within a cheese matrix.
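A hedged sketch of fitting the GAB isotherm, M(aw) = M0·C·K·aw / [(1 - K·aw)(1 - K·aw + C·K·aw)], by non-linear regression; the sorption data points and starting values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, M0, C, K):
    """Guggenheim-Anderson-de Boer isotherm: moisture vs water activity."""
    return M0 * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# illustrative sorption data (water activity, g H2O / 100 g dry solids)
aw = np.array([0.11, 0.23, 0.33, 0.44, 0.58, 0.69, 0.75, 0.85])
M  = np.array([3.1, 4.4, 5.3, 6.4, 8.2, 10.6, 12.5, 18.0])

(M0, C, K), _ = curve_fit(gab, aw, M, p0=(5.0, 10.0, 0.8), maxfev=10_000)
print(f"monolayer M0 = {M0:.2f}, C = {C:.1f}, K = {K:.3f}")
```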
An Integrated Theory of Everything (TOE)
NASA Astrophysics Data System (ADS)
Colella, Antonio
2014-03-01
An Integrated TOE unifies all known physical phenomena from the Planck cube to the Super Universe (multiverse). Each matter/force particle is represented by a Planck cube string. Any Super Universe object is a volume of contiguous Planck cubes. Super force Planck cube string singularities existed at the start of all universes. The foundations of an Integrated TOE are twenty independent existing theories, which, without sacrificing their integrity, are replaced by twenty interrelated amplified theories. Amplifications of Higgs force theory are key to an Integrated TOE and include: 64 supersymmetric Higgs particles; super force condensations to 17 matter particles/associated Higgs forces; bidirectional spontaneous symmetry breaking; and the sum of 8 permanent Higgs force energies as dark energy. Stellar black hole theory was amplified to include a quark star (matter) with mass, volume, near-zero temperature, and maximum entropy. A black hole (energy) has energy, minimal volume (singularity), near-infinite temperature, and minimum entropy. Our precursor universe's super supermassive quark star (matter) evaporated to a super supermassive black hole (energy). This transferred the total conserved energy/mass and transformed the entropy from maximum to minimum. Integrated Theory of Everything Book Video: https://www.youtube.com/watch?v=4a1c9IvdoGY Research Article Video: http://www.youtube.com/watch?v=CD-QoLeVbSY Research Article: http://toncolella.files.wordpress.com/2012/07/m080112.pdf.
Shock wave induced vaporization of porous solids
NASA Astrophysics Data System (ADS)
Shen, Andy H.; Ahrens, Thomas J.; O'Keefe, John D.
2003-05-01
Strong shock waves generated by hypervelocity impact can induce vaporization in solid materials. To pursue knowledge of the chemical species in shock-induced vapors, one needs to design experiments that drive the system to thermodynamic states at which sufficient vapor can be generated for investigation. It is common to use porous media to reach high-entropy, vaporized states in impact experiments. We extended calculations by Ahrens [J. Appl. Phys. 43, 2443 (1972)] and Ahrens and O'Keefe [The Moon 4, 214 (1972)] to higher distentions (up to five), improved their method with a different impedance match calculation scheme, and augmented their model with recent thermodynamic and Hugoniot data of metals, minerals, and polymers. Although we reconfirmed the competing effects reported in the previous studies, (1) increase of entropy production and (2) decrease of impedance match when impacting materials with increasing distentions, our calculations did not exhibit an optimal entropy-generating distention. For different materials, very different impact velocities are needed to initiate vaporization. For aluminum at distention (m)<2.2, a minimum impact velocity of 2.7 km/s is required using a tungsten projectile. For ionic solids such as NaCl at distention <2.2, 2.5 km/s is needed. For carbonate and sulfate minerals, the minimum impact velocities are much lower, ranging from less than 1 to 1.5 km/s.
The constructal law of design and evolution in nature
Bejan, Adrian; Lorente, Sylvie
2010-01-01
Constructal theory is the view that (i) the generation of images of design (pattern, rhythm) in nature is a phenomenon of physics and (ii) this phenomenon is covered by a principle (the constructal law): ‘for a finite-size flow system to persist in time (to live) it must evolve such that it provides greater and greater access to the currents that flow through it’. This law is about the necessity of design to occur, and about the time direction of the phenomenon: the tape of the design evolution ‘movie’ runs such that existing configurations are replaced by globally easier flowing configurations. The constructal law has two useful sides: the prediction of natural phenomena and the strategic engineering of novel architectures, based on the constructal law, i.e. not by mimicking nature. We show that the emergence of scaling laws in inanimate (geophysical) flow systems is the same phenomenon as the emergence of allometric laws in animate (biological) flow systems. Examples are lung design, animal locomotion, vegetation, river basins, turbulent flow structure, self-lubrication and natural multi-scale porous media. This article outlines the place of the constructal law as a self-standing law in physics, which covers all the ad hoc (and contradictory) statements of optimality such as minimum entropy generation, maximum entropy generation, minimum flow resistance, maximum flow resistance, minimum time, minimum weight, uniform maximum stresses and characteristic organ sizes. Nature is configured to flow and move as a conglomerate of ‘engine and brake’ designs. PMID:20368252
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
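A brute-force toy of the comparison drawn above, for a small random Ising model in a field: maximum-likelihood decoding reads bits off the ground state, while maximum-entropy decoding reads them off thermal (Boltzmann) bit marginals. Everything below (size, couplings, β) is an assumed toy, not the annealer experiment.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 8                                     # small enough to enumerate 2^n states
J = np.triu(rng.normal(size=(n, n)), 1)   # random pairwise couplings
h = rng.normal(size=n)                    # random local fields

states = np.array(list(itertools.product([-1, 1], repeat=n)))
E = np.array([-(s @ J @ s + h @ s) for s in states])

# maximum-likelihood decoding: spins of the minimum-energy configuration
ml_bits = states[np.argmin(E)]

# maximum-entropy decoding at finite temperature: sign of Boltzmann marginals
beta = 1.0
w = np.exp(-beta * (E - E.min())); w /= w.sum()
maxent_bits = np.sign(w @ states).astype(int)
print(ml_bits, maxent_bits)
```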
Velázquez-Gutiérrez, Sandra Karina; Figueira, Ana Cristina; Rodríguez-Huezo, María Eva; Román-Guerrero, Angélica; Carrillo-Navas, Hector; Pérez-Alonso, César
2015-05-05
Freeze-dried chia mucilage adsorption isotherms were determined at 25, 35 and 40 °C and fitted with the Guggenheim-Anderson-de Boer model. The integral thermodynamic properties (enthalpy and entropy) were estimated with the Clausius-Clapeyron equation. The pore radius of the mucilage, calculated with the Kelvin equation, varied from 0.87 to 6.44 nm in the temperature range studied. The point of maximum stability (minimum integral entropy) ranged between 7.56 and 7.63 kg H2O per 100 kg of dry solids (d.s.) (water activity of 0.34-0.53). Enthalpy-entropy compensation for the mucilage showed two isokinetic temperatures: (i) one occurring at low moisture contents (0-7.56 kg H2O per 100 kg d.s.), controlled by changes in water entropy; and (ii) another in the moisture interval of 7.56-24 kg H2O per 100 kg d.s., which was enthalpy driven. The glass transition temperature Tg of the mucilage fluctuated between 42.93 and 57.93 °C.
NASA Astrophysics Data System (ADS)
Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng
2018-02-01
Because islanding is easily confused with grid disturbance, an island detection device may misjudge events and take a photovoltaic system out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing step after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which we can extract the intrinsically different features of islanding and grid disturbance. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that photovoltaic systems are not mistakenly withdrawn from the power grid.
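A sketch of the feature-extraction stage described above, under stated assumptions: PyWavelets (`pywt`) for the multi-resolution decomposition, a trajectory-matrix embedding per band, and Shannon entropy of the normalized singular values. The wavelet choice, decomposition level, and embedding dimension are illustrative; the deep-learning classifier is omitted.

```python
import numpy as np
import pywt

def singular_spectrum_entropy(x, embed=16):
    """Shannon entropy of normalized singular values of a trajectory matrix."""
    X = np.array([x[i:i + embed] for i in range(len(x) - embed)])
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def multiresolution_sse(signal, wavelet="db4", level=4, embed=16):
    """One singular-spectrum entropy per wavelet band: the feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [singular_spectrum_entropy(c, embed) for c in coeffs if len(c) > 2 * embed]

rng = np.random.default_rng(4)
print(multiresolution_sse(rng.standard_normal(1024)))
```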
NASA Astrophysics Data System (ADS)
Balasis, G.; Daglis, I. A.; Papadimitriou, C.; Kalimeri, M.; Anastasiadis, A.; Eftaxias, K.
2008-12-01
Dynamical complexity detection for output time series of complex systems is one of the foremost problems in physics, biology, engineering, and economic sciences. Especially in magnetospheric physics, accurate detection of the dissimilarity between normal and abnormal states (e.g. pre-storm activity and magnetic storms) can vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. Herein, we examine the fractal spectral properties of the Dst data using a wavelet analysis technique. We show that distinct changes in associated scaling parameters occur (i.e., transition from anti-persistent to persistent behavior) as an intense magnetic storm approaches. We then analyze Dst time series by introducing the non-extensive Tsallis entropy, Sq, as an appropriate complexity measure. The Tsallis entropy sensitively shows the complexity dissimilarity among different "physiological" (normal) and "pathological" states (intense magnetic storms). The Tsallis entropy implies the emergence of two distinct patterns: (i) a pattern associated with the intense magnetic storms, which is characterized by a higher degree of organization, and (ii) a pattern associated with normal periods, which is characterized by a lower degree of organization.
Study of thermodynamic properties of liquid binary alloys by a pseudopotential method
NASA Astrophysics Data System (ADS)
Vora, Aditya M.
2010-11-01
On the basis of the Percus-Yevick hard-sphere model as a reference system and the Gibbs-Bogoliubov inequality, a thermodynamic perturbation method is applied with the use of the well-known model potential. By applying a variational method, the hard-core diameters are found which correspond to a minimum free energy. With this procedure, the thermodynamic properties such as the internal energy, entropy, Helmholtz free energy, entropy of mixing, and heat of mixing are computed for liquid NaK binary systems. The influence of the local-field correction functions of Hartree, Taylor, Ichimaru-Utsumi, Farid-Heine-Engel-Robertson, and Sarkar-Sen-Haldar-Roy is also investigated. The computed excess entropy is in agreement with available experimental data in the case of liquid alloys, whereas the agreement for the heat of mixing is poor. This may be due to the sensitivity of the latter to the potential parameters and dielectric function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis, Alfredo
The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.
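For concreteness, the order-z Rényi entropy of a discrete distribution is S_z = ln(Σ p_i^z)/(1 - z). The one-liner and example distribution below are illustrative only; the paper itself concerns the continuous (wave-function) case.

```python
import numpy as np

def renyi_entropy(p, z):
    """Order-z Renyi entropy of a discrete distribution (z != 1)."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** z)) / (1.0 - z)

p = np.array([0.5, 0.25, 0.125, 0.125])
print(renyi_entropy(p, 2), renyi_entropy(p, 0.5))  # decreases as z grows
```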
Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.
Frieden, B Roy; Gatenby, Robert A
2011-01-01
Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to minimum information state. The minimum leads to a predicted power-law governing the in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: of an information maximum to minimum. The progressive information loss is evident in accumulating mutations, disordered morphology, and functional decline characteristics of human cancers. The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
Nonlinear distortion analysis for single heterojunction GaAs HEMT with frequency and temperature
NASA Astrophysics Data System (ADS)
Alim, Mohammad A.; Ali, Mayahsa M.; Rezazadeh, Ali A.
2018-07-01
Nonlinearity analysis using the two-tone intermodulation distortion (IMD) technique for a 0.5 μm gate-length AlGaAs/GaAs based high electron mobility transistor has been investigated as a function of biasing conditions, input power, frequency and temperature. The outcomes indicate a significant modification of the output IMD power as well as of the minimum distortion level. The input IMD power affects the output current and subsequently the threshold voltage reduces, resulting in an increase in the output IMD power. Both frequency and temperature reduce the magnitude of the output IMDs. In addition, the threshold voltage response with temperature alters the notch point of the nonlinear output IMDs accordingly. The aforementioned investigation will help circuit designers to evaluate the best biasing option in terms of minimum distortion and maximum gain for future design optimizations.
Application of genetic algorithms in nonlinear heat conduction problems.
Kadri, Muhammad Bilal; Khan, Waqar A
2014-01-01
Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that GA gives the minimum dimensionless temperature in each selected geometry.
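A minimal, hedged sketch of the optimization loop: a real-coded genetic-style algorithm (truncation selection plus Gaussian mutation; crossover omitted for brevity) minimizing a stand-in one-dimensional objective. The objective function and all parameters are assumptions, not the paper's MATLAB formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# stand-in objective for the dimensionless temperature to be minimized
def objective(t):
    return (t - 0.37) ** 2 + 0.1 * np.sin(25 * t)

pop = rng.uniform(0.0, 1.0, size=40)          # initial population in [0, 1]
for _ in range(200):
    fit = objective(pop)
    parents = pop[np.argsort(fit)[:20]]       # keep the best half
    pop = np.clip(parents[rng.integers(0, 20, size=40)]
                  + rng.normal(0.0, 0.02, size=40), 0.0, 1.0)  # mutate offspring

best = pop[np.argmin(objective(pop))]
print(f"best dimensionless temperature: {best:.4f}")
```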
NASA Astrophysics Data System (ADS)
Li, Bo; Ling, Zongcheng; Zhang, Jiang; Chen, Jian; Wu, Zhongchen; Ni, Yuheng; Zhao, Haowei
2015-11-01
The lunar global texture maps of roughness and entropy are derived at kilometer scales from Digital Elevation Model (DEM) data obtained by the Lunar Orbiter Laser Altimeter (LOLA) aboard the Lunar Reconnaissance Orbiter (LRO) spacecraft. We use statistical moments of a gray-level histogram of elevations in a neighborhood to compute the roughness and entropy values. Our texture-descriptor measurements are shown in global maps at multiple square neighborhood sizes, with sides of 3, 5, 10, 20, 40 and 80 pixels, respectively. We found that large-scale topographic changes can only be displayed in maps with larger neighborhoods, whereas the small-scale global texture maps are more disorderly and unsystematic because of their more complicated texture details. The frequency curves of the texture maps were then computed; their shapes and distributions change as the spatial scale increases. The entropy frequency curve at the minimum 3-pixel scale has large fluctuations and six peaks. According to this entropy curve, we can preliminarily classify the lunar surface into maria, highlands, and different parts of craters. The most obvious textures in the middle-scale roughness and entropy maps are the two typical morphological units, smooth maria and rough highlands. For an impact crater, the roughness and entropy values are characterized by an obvious multiple-ring structure, and its different parts have different texture results. Finally, we made a 2D scatter plot of the two texture results for typical lunar maria and highlands. There are two clusters of highest point density, corresponding to the lunar highlands and maria, respectively. In the lunar mare regions (cluster A), there is a high correlation between roughness and entropy, but in the highlands (cluster B), the entropy shows little change. This could be attributed to the different geological processes of maria and highlands forming different landforms.
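A hedged sketch of the per-pixel texture computation described above: windowed standard deviation as the roughness moment and Shannon entropy of the window's gray-level elevation histogram. The window size, bin count, and edge handling are illustrative assumptions.

```python
import numpy as np

def texture_maps(dem, w=5, bins=16):
    """Roughness (std of elevations) and histogram entropy (bits) in a
    w x w neighborhood around each pixel; borders are left at zero."""
    rows, cols = dem.shape
    rough = np.zeros((rows, cols))
    ent = np.zeros((rows, cols))
    r = w // 2
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            win = dem[i - r:i + r + 1, j - r:j + r + 1]
            rough[i, j] = win.std()
            counts, _ = np.histogram(win, bins=bins)
            p = counts[counts > 0] / counts.sum()
            ent[i, j] = -np.sum(p * np.log2(p))
    return rough, ent

dem = np.random.default_rng(6).normal(size=(64, 64))  # stand-in DEM tile
roughness, entropy = texture_maps(dem)
```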
Capacities of quantum amplifier channels
NASA Astrophysics Data System (ADS)
Qi, Haoyu; Wilde, Mark M.
2017-01-01
Quantum amplifier channels are at the core of several physical processes. Not only do they model the optical process of spontaneous parametric down-conversion, but the transformation corresponding to an amplifier channel also describes the physics of the dynamical Casimir effect in superconducting circuits, the Unruh effect, and Hawking radiation. Here we study the communication capabilities of quantum amplifier channels. Invoking a recently established minimum output-entropy theorem for single-mode phase-insensitive Gaussian channels, we determine capacities of quantum-limited amplifier channels in three different scenarios. First, we establish the capacities of quantum-limited amplifier channels for one of the most general communication tasks, characterized by the trade-off between classical communication, quantum communication, and entanglement generation or consumption. Second, we establish capacities of quantum-limited amplifier channels for the trade-off between public classical communication, private classical communication, and secret key generation. Third, we determine the capacity region for a broadcast channel induced by the quantum-limited amplifier channel, and we also show that a fully quantum strategy outperforms those achieved by classical coherent-detection strategies. In all three scenarios, we find that the capacities significantly outperform communication rates achieved with a naive time-sharing strategy.
NASA Astrophysics Data System (ADS)
Cui, Shawn X.; Freedman, Michael H.; Sattath, Or; Stong, Richard; Minton, Greg
2016-06-01
The classical max-flow min-cut theorem describes transport through certain idealized classical networks. We consider the quantum analog for tensor networks. By associating an integral capacity to each edge and a tensor to each vertex in a flow network, we can also interpret it as a tensor network and, more specifically, as a linear map from the input space to the output space. The quantum max-flow is defined to be the maximal rank of this linear map over all choices of tensors. The quantum min-cut is defined to be the minimum product of the capacities of edges over all cuts of the tensor network. We show that unlike the classical case, the quantum max-flow=min-cut conjecture is not true in general. Under certain conditions, e.g., when the capacity on each edge is some power of a fixed integer, the quantum max-flow is proved to equal the quantum min-cut. However, concrete examples are also provided where the equality does not hold. We also found connections of quantum max-flow/min-cut with entropy of entanglement and the quantum satisfiability problem. We speculate that the phenomena revealed may be of interest both in spin systems in condensed matter and in quantum gravity.
An information-theoretical perspective on weighted ensemble forecasts
NASA Astrophysics Data System (ADS)
Weijs, Steven V.; van de Giesen, Nick
2013-08-01
This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information in an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
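A sketch of the MRE-update in its simplest form, assuming a single forecast-mean constraint: minimizing relative entropy to uniform weights under Σ w_i x_i = μ gives exponentially tilted weights w_i ∝ exp(λ x_i), with λ found by root-finding. The function name and the bracket for λ are assumptions, and the forecast mean must lie inside the ensemble range.

```python
import numpy as np
from scipy.optimize import brentq

def mre_weights(ensemble, forecast_mean, lam_max=30.0):
    """Minimum-relative-entropy ensemble weights w_i ~ exp(lam * x_i)
    matching a forecast mean; values are standardized for stability."""
    x = np.asarray(ensemble, dtype=float)
    z = (x - x.mean()) / x.std()

    def gap(lam):
        w = np.exp(lam * z)
        return (w / w.sum()) @ x - forecast_mean

    lam = brentq(gap, -lam_max, lam_max)   # root of the constraint equation
    w = np.exp(lam * z)
    return w / w.sum()

ens = np.linspace(0.0, 10.0, 21)           # stand-in historical ensemble
w = mre_weights(ens, forecast_mean=6.5)
print(w @ ens)                              # reproduces the forecast mean
```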
Spaar, Alexander; Helms, Volkhard
2005-07-01
Over the past years Brownian dynamics (BD) simulations have been proven to be a suitable tool for the analysis of protein-protein association. The computed rates and relative trends for protein mutants and different ionic strengths are generally in good agreement with experimental results, e.g. see ref 1. By design, BD simulations correspond to an intensive sampling over energetically favorable states, rather than to a systematic sampling over all possible states, which is feasible only at rather low resolution. On the example of barnase and barstar, a well characterized model system of electrostatically steered diffusional encounter, we report here the computation of the 6-dimensional free energy landscape for the encounter process of two proteins by a novel, careful analysis of the trajectories from BD simulations. The aim of these studies was the clarification of the encounter state. Along the trajectories, the individual positions and orientations of one protein (relative to the other) are recorded and stored in so-called occupancy maps. Since the number of simulated trajectories is sufficiently high, these occupancy maps can be interpreted as a probability distribution, which allows the calculation of the entropy landscape by the use of a locally defined entropy function. Additionally, the configuration-dependent electrostatic and desolvation energies are recorded in separate maps. The free energy landscape of protein-protein encounter is finally obtained by summing the energy and entropy contributions. In the free energy profile along the reaction path, which is defined as the path along the minima in the free energy landscape, a minimum shows up, suggesting that it be used as the definition of the encounter state. This minimum describes a state of reduced diffusion velocity where the electrostatic attraction is compensated by the repulsion due to the unfavorable desolvation of the charged residues and the entropy loss due to the increasing restriction of the motional freedom. In the simulations the orientational degrees of freedom at the encounter state are found to be less restricted than the translational degrees of freedom. Therefore, the orientational alignment of the two binding partners seems to take place beyond this free energy minimum. The free energy profiles along the reaction pathway are compared for different ionic strengths and temperatures. This novel analysis technique facilitates mechanistic interpretation of protein-protein encounter pathways, which should be useful for the interpretation of experimental results as well.
LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.
Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu
2005-01-01
Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
2013-01-01
Here we present a novel, end-point method using the dead-end-elimination and A* algorithms to efficiently and accurately calculate the change in free energy, enthalpy, and configurational entropy of binding for ligand–receptor association reactions. We apply the new approach to the binding of a series of human immunodeficiency virus (HIV-1) protease inhibitors to examine the effect ensemble reranking has on relative accuracy as well as to evaluate the role of the absolute and relative ligand configurational entropy losses upon binding in affinity differences for structurally related inhibitors. Our results suggest that most thermodynamic parameters can be estimated using only a small fraction of the full configurational space, and we see significant improvement in relative accuracy when using an ensemble versus single-conformer approach to ligand ranking. We also find that using approximate metrics based on the single-conformation enthalpy differences between the global minimum energy configuration in the bound as well as unbound states also correlates well with experiment. Using a novel, additive entropy expansion based on conditional mutual information, we also analyze the source of ligand configurational entropy loss upon binding in terms of both uncoupled per degree of freedom losses as well as changes in coupling between inhibitor degrees of freedom. We estimate entropic free energy losses of approximately +24 kcal/mol, 12 kcal/mol of which stems from loss of translational and rotational entropy. Coupling effects contribute only a small fraction to the overall entropy change (1–2 kcal/mol) but suggest differences in how inhibitor dihedral angles couple to each other in the bound versus unbound states. The importance of accounting for flexibility in drug optimization and design is also discussed. PMID:24250277
Statistical physics of self-replication.
England, Jeremy L
2013-09-28
Self-replication is a capacity common to every species of living thing, and simple physical intuition dictates that such a process must invariably be fueled by the production of entropy. Here, we undertake to make this intuition rigorous and quantitative by deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
Benford's law and the FSD distribution of economic behavioral micro data
NASA Astrophysics Data System (ADS)
Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George
2017-11-01
In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information theoretic-entropy based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
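For reference, Benford's law assigns P(d) = log10(1 + 1/d) to the first significant digit d. A minimal sketch comparing an empirical FSD distribution to this law via a divergence measure; the lognormal test data and the use of KL divergence (rather than the paper's empirical-likelihood machinery) are assumptions.

```python
import numpy as np

benford = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(first digit = 1..9)

def first_digits(values):
    """First significant digit (1-9) of each positive value."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

def kl_to_benford(values):
    """Kullback-Leibler divergence of the empirical FSD law from Benford's."""
    counts = np.bincount(first_digits(values), minlength=10)[1:]
    p = counts / counts.sum()
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / benford[mask]))

incomes = np.random.default_rng(7).lognormal(mean=10, sigma=1.5, size=20_000)
print(kl_to_benford(incomes))   # small value: distribution is close to Benford
```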
Scaling laws for ignition at the National Ignition Facility from first principles.
Cheng, Baolian; Kwan, Thomas J T; Wang, Yi-Ming; Batha, Steven H
2013-10-01
We have developed an analytical physics model from fundamental physics principles and used the reduced one-dimensional model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion capsules. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific low-entropy adiabat path is used for the cold fuel, our scaling laws recover the ignition threshold factor dependence on the implosion velocity, but when a high-entropy adiabat path is chosen, the model agrees with recent measurements.
Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.
Ferrie, Christopher; Blume-Kohout, Robin
2016-03-04
A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.
Consistent description of kinetic equation with triangle anomaly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu Shi; Gao Jianhua; Wang Qun
2011-05-01
We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in one-charge and two-charge cases by solving the constraining equations.
NASA Astrophysics Data System (ADS)
Sarma, Rajkumar; Jain, Manish; Mondal, Pranab Kumar
2017-10-01
We discuss the entropy generation minimization for electro-osmotic flow of a viscoelastic fluid through a parallel plate microchannel under the combined influences of interfacial slip and conjugate transport of heat. We use in this study the simplified Phan-Thien-Tanner model to describe the rheological behavior of the viscoelastic fluid. Using Navier's slip law and thermal boundary conditions of the third kind, we solve the transport equations analytically and evaluate the global entropy generation rate of the system. We examine the influential role of the following parameters on the entropy generation rate of the system, viz., the viscoelastic parameter (εDe²), the Debye-Hückel parameter (κ̄), the channel wall thickness (δ), the thermal conductivity of the wall (γ), the Biot number (Bi), the Peclet number (Pe), and the axial temperature gradient (B). This investigation finally establishes the optimum values of the abovementioned parameters, leading to the minimum entropy generation of the system. We believe that results of this analysis could be helpful in optimizing the second-law performance of microscale thermal management devices, including micro-heat exchangers, micro-reactors, and micro-heat pipes.
40 CFR 63.8688 - What are my monitoring installation, operation, and maintenance requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... following: (1) Locate the temperature sensor in a position that provides a representative temperature. (2) For a noncryogenic temperature range, use a temperature sensor with a minimum measurement sensitivity... output; or (iii) By comparing the sensor output to the output from a calibrated temperature measurement...
Perspective: Maximum caliber is a general variational principle for dynamical systems
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.
2018-01-01
We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
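To make the variational recipe concrete, here is a hedged two-state sketch: the path entropy rate is maximized subject to a single dynamical constraint (the mean switching rate), and the Lagrange multiplier is fixed by matching that constraint. The observed rate k_obs is a made-up number, and the tilted form below follows from this one-constraint setup, not from any specific system in the review.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum Caliber sketch for a two-state system in discrete time. The only
# dynamical constraint is the average switching rate <k> per step. Maximizing
# the path entropy rate S(p) = -p ln p - (1 - p) ln(1 - p) subject to <k> =
# k_obs with multiplier lam gives the tilted form p(lam) below; lam is then
# fixed by matching the constraint.

def switch_prob(lam):
    return np.exp(-lam) / (1.0 + np.exp(-lam))

def solve_multiplier(k_obs):
    # p(lam) is monotone in lam, so a bracketed root-finder suffices.
    return brentq(lambda lam: switch_prob(lam) - k_obs, -50.0, 50.0)

k_obs = 0.2                       # hypothetical observed switching rate
lam = solve_multiplier(k_obs)
p = switch_prob(lam)
caliber = -p * np.log(p) - (1 - p) * np.log(1 - p)
print(f"lambda = {lam:.3f}, p = {p:.3f}, caliber per step = {caliber:.3f}")
```

With more constraints (for instance state occupancies), the same construction yields a tilted Markov chain rather than a single switching probability.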
Divalent cation shrinks DNA but inhibits its compaction with trivalent cation.
Tongu, Chika; Kenmotsu, Takahiro; Yoshikawa, Yuko; Zinchenko, Anatoly; Chen, Ning; Yoshikawa, Kenichi
2016-05-28
Our observations reveal the effects of divalent and trivalent cations on the higher-order structure of giant DNA (T4 DNA, 166 kbp) by fluorescence microscopy. It was found that divalent cations, Mg(2+) and Ca(2+), inhibit DNA compaction induced by a trivalent cation, spermidine (SPD(3+)). On the other hand, in the absence of SPD(3+), divalent cations cause the shrinkage of DNA. As a control experiment, we confirmed the minimal effect of a monovalent cation, Na(+), on the DNA higher-order structure. We interpret the competition between 2+ and 3+ cations in terms of the change in the translational entropy of the counterions. For the compaction with SPD(3+), we consider the increase in translational entropy due to the ion-exchange of the intrinsic monovalent cations condensing on a highly charged polyelectrolyte, double-stranded DNA, by the 3+ cations. In contrast, the presence of a 2+ cation decreases the entropy gain from the ion-exchange between monovalent and 3+ ions.
Covariance hypotheses for LANDSAT data
NASA Technical Reports Server (NTRS)
Decell, H. P.; Peters, C.
1983-01-01
Two covariance hypotheses are considered for LANDSAT data acquired by sampling fields, one an autoregressive covariance structure and the other the hypothesis of exchangeability. A minimum entropy approximation of the first structure by the second is derived and shown to have desirable properties for incorporation into a mixture density estimation procedure. Results of a rough test of the exchangeability hypothesis are presented.
Minimum Entropy Autofocus Correction of Residual Range Cell Migration
2017-03-02
The method reduced the residual range cell migration to effectively a slowly varying bias on the order of a wavelength (~3 cm), which has negligible impact on the image focus.
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the Cross-Entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that implements a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function that the learning machine approximates during training, and on the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates after being trained to minimize the Cross-Entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
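A toy version of this construction, under the assumption of a known signal in white Gaussian noise: a logistic unit trained by minimizing cross-entropy, whose score is monotone in the likelihood ratio for this problem, with the detection threshold set empirically on noise-only data to fix the false-alarm probability. All dimensions, sample counts, and the signal amplitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim, amp = 4000, 8, 0.7
signal = amp * np.ones(dim)
x0 = rng.normal(size=(n, dim))                 # H0: noise only
x1 = signal + rng.normal(size=(n, dim))        # H1: signal + noise
X = np.vstack([x0, x1])
y = np.r_[np.zeros(n), np.ones(n)]

# Gradient descent on the cross-entropy error of a logistic discriminant.
w, b, lr = np.zeros(dim), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / y.size
    b -= lr * np.mean(p - y)

# Fix the threshold from fresh noise-only data so that P_fa is about 1e-2,
# then estimate the detection probability on fresh signal-plus-noise data.
scores_h0 = rng.normal(size=(20000, dim)) @ w + b
thresh = np.quantile(scores_h0, 1.0 - 1e-2)
scores_h1 = (signal + rng.normal(size=(20000, dim))) @ w + b
print("P_fa ~ 0.01, estimated P_d =", np.mean(scores_h1 > thresh))
```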
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT.
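Cole's representation theory itself is not reproduced here; the sketch below solves the same reconstruction problem with a simpler projection-onto-convex-sets iteration, alternating between the known DWT coefficients and the amplitude bounds. It assumes the PyWavelets package, and the signal and the ±1 bounds are invented for the demonstration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
true = np.clip(np.cumsum(rng.normal(size=256)) / 10.0, -1.0, 1.0)
coeffs = pywt.wavedec(true, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
known = rng.random(arr.size) < 0.5             # only half the DWT is sampled

x = np.zeros_like(true)
for _ in range(100):
    a, _ = pywt.coeffs_to_array(pywt.wavedec(x, "db4", level=4))
    a[known] = arr[known]                      # enforce the measured DWT samples
    x = pywt.waverec(pywt.array_to_coeffs(a, slices, output_format="wavedec"),
                     "db4")[: true.size]
    x = np.clip(x, -1.0, 1.0)                  # enforce the a priori bounds
print("relative error:", np.linalg.norm(x - true) / np.linalg.norm(true))
```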
Sasikala, Wilbee D; Mukherjee, Arnab
2012-10-11
DNA intercalation, a biophysical process of enormous clinical significance, has surprisingly eluded molecular understanding for several decades. With appropriate configurational restraint (to prevent dissociation) in all-atom metadynamics simulations, we capture the free energy surface of direct intercalation from minor groove-bound state for the first time using an anticancer agent proflavine. Mechanism along the minimum free energy path reveals that intercalation happens through a minimum base stacking penalty pathway where nonstacking parameters (Twist→Slide/Shift) change first, followed by base stacking parameters (Buckle/Roll→Rise). This mechanism defies the natural fluctuation hypothesis and provides molecular evidence for the drug-induced cavity formation hypothesis. The thermodynamic origin of the barrier is found to be a combination of entropy and desolvation energy.
Efficiency at maximum power output of linear irreversible Carnot-like heat engines.
Wang, Yang; Tu, Z C
2012-01-01
The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each "isothermal" process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in the two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η_mP = η_C/(2 - γη_C), where η_C is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and the two reservoirs. The value of η_mP is bounded between η_- ≡ η_C/2 and η_+ ≡ η_C/(2 - η_C). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines, which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at maximum power output under our fundamental assumption, and that the Carnot-like heat engines considered here do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but non-necessary condition for the validity of η_mP = η_C/(2 - γη_C) as well as for the existence of the two bounds η_- ≡ η_C/2 and η_+ ≡ η_C/(2 - η_C).
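A quick numeric check of the quoted formulas, with hypothetical reservoir temperatures and γ swept across the interval [0, 1] that produces the two bounds:

```python
# Efficiency at maximum power, eta_mP = eta_C / (2 - gamma * eta_C).
# For gamma in [0, 1] this runs from eta_C/2 up to eta_C/(2 - eta_C),
# the two bounds quoted in the abstract above.
T_hot, T_cold = 500.0, 300.0      # hypothetical reservoir temperatures, K
eta_C = 1.0 - T_cold / T_hot
for gamma in (0.0, 0.5, 1.0):
    eta_mP = eta_C / (2.0 - gamma * eta_C)
    print(f"gamma = {gamma:.1f}: eta_mP = {eta_mP:.4f}")
print("lower bound:", eta_C / 2.0, " upper bound:", eta_C / (2.0 - eta_C))
```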
Application of SNODAS and hydrologic models to enhance entropy-based snow monitoring network design
NASA Astrophysics Data System (ADS)
Keum, Jongho; Coulibaly, Paulin; Razavi, Tara; Tapsoba, Dominique; Gobena, Adam; Weber, Frank; Pietroniro, Alain
2018-06-01
Snow has a unique characteristic in the water cycle: snow falls during the entire winter season, but the discharge from snowmelt is typically delayed until the melting period and occurs over a relatively short time. Therefore, reliable observations from an optimal snow monitoring network are necessary for efficient management of snowmelt water for flood prevention and hydropower generation. The Dual Entropy and Multiobjective Optimization approach is applied to design snow monitoring networks in La Grande River Basin in Québec and Columbia River Basin in British Columbia. While the networks are optimized to have the maximum amount of information with minimum redundancy based on entropy concepts, this study extends traditional entropy applications to hydrometric network design by introducing several improvements. First, several data quantization cases and their effects on the snow network design problems were explored. Second, the applicability of Snow Data Assimilation System (SNODAS) products as synthetic datasets of potential stations was demonstrated in the design of the snow monitoring network of the Columbia River Basin. Third, beyond finding the Pareto-optimal networks from entropy with multi-objective optimization, the networks obtained for La Grande River Basin were further evaluated by applying three hydrologic models. The calibrated hydrologic models simulated discharges using the updated snow water equivalent data from the Pareto-optimal networks. Then, the model performances for high flows were compared to determine the best optimal network for enhanced spring runoff forecasting.
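A minimal greedy ranking in the spirit of the maximum-information, minimum-redundancy objective (not the study's exact formulation): each step adds the station that maximizes the joint entropy of the selected set minus its pairwise redundancy with the stations already chosen. The quantized snow records are fabricated; real data would be quantized first, as the abstract notes.

```python
import numpy as np

def entropy(*cols):
    # Joint Shannon entropy of one or more quantized series.
    counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)[1]
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(a, b):
    return entropy(a) + entropy(b) - entropy(a, b)

def greedy_rank(data, n_select):
    chosen, remaining = [], list(range(data.shape[1]))
    while len(chosen) < n_select:
        def score(j):
            joint = entropy(*(data[:, chosen + [j]].T))     # information
            redund = sum(mutual_info(data[:, i], data[:, j]) for i in chosen)
            return joint - redund                           # minus redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(3)
snow = rng.integers(0, 8, size=(365, 10))      # hypothetical quantized records
print("ranked stations:", greedy_rank(snow, 5))
```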
Systematic investigation of NLTE phenomena in the limit of small departures from LTE
NASA Astrophysics Data System (ADS)
Libby, S. B.; Graziani, F. R.; More, R. M.; Kato, T.
1997-04-01
In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near equilibrium (LTE) high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures from LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchhoff's law. We envisage applying our strategy to a number of test problems, which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giampaolo, Salvatore M.; CNR-INFM Coherentia, Naples; CNISM Unita di Salerno and INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi
2007-10-15
We investigate the geometric characterization of pure state bipartite entanglement of (2xD)- and (3xD)-dimensional composite quantum systems. To this aim, we analyze the relationship between states and their images under the action of particular classes of local unitary operations. We find that invariance of states under the action of single-qubit and single-qutrit transformations is a necessary and sufficient condition for separability. We demonstrate that in the (2xD)-dimensional case the von Neumann entropy of entanglement is a monotonic function of the minimum squared Euclidean distance between states and their images over the set of single qubit unitary transformations. Moreover, both in the (2xD)- and in the (3xD)-dimensional cases the minimum squared Euclidean distance exactly coincides with the linear entropy [and thus as well with the tangle measure of entanglement in the (2xD)-dimensional case]. These results provide a geometric characterization of entanglement measures originally established in informational frameworks. Consequences and applications of the formalism to quantum critical phenomena in spin systems are discussed.
Thermodynamics of an ideal generalized gas: II. Means of order alpha.
Lavenda, B H
2005-11-01
The property that power means are monotonically increasing functions of their order is shown to be the basis of the second laws not only for processes involving heat conduction, but also for processes involving deformations. This generalizes earlier work involving only pure heat conduction and underlines the incomparability of the internal energy and adiabatic potentials when expressed as powers of the adiabatic variable. In an L-potential equilibration, the final state will be one of maximum entropy, whereas in an entropy equilibration, the final state will be one of minimum L. Unlike classical equilibrium thermodynamic phase space, which lacks an intrinsic metric structure insofar as distances and other geometrical concepts do not have an intrinsic thermodynamic significance in such spaces, a metric space can be constructed for the power means: the distance between means of different order is related to the Carnot efficiency. In the ideal classical gas limit, the average change in the entropy is shown to be proportional to the difference between the Shannon and Rényi entropies for nonextensive systems that are multifractal in nature. The L potential, like the internal energy, is a Schur convex function of the empirical temperature, which satisfies Jensen's inequality, and serves as a measure of the tendency to uniformity in processes involving pure thermal conduction.
Characterizing Protease Specificity: How Many Substrates Do We Need?
Schauperl, Michael; Fuchs, Julian E.; Waldner, Birgit J.; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.
2015-01-01
Calculation of cleavage entropies allows one to quantify, map and compare protease substrate specificity by an information entropy based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). Thus a statistical analysis of its numerical stability is crucial to estimate the systematic error made by estimating specificity based on a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, a linear dependence of the metric with respect to 1/n was found. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors. Analyzing the errors, a minimum number of 30 substrates was found to be necessary to characterize substrate specificity, in terms of amino acid variability, for a protease (S4-S4') with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases, aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to be helpful in rationalizing biological functions of proteases and in assisting rational drug design. PMID:26559682
Entropy studies on beam distortion by atmospheric turbulence
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2015-09-01
When a beam propagates through atmospheric turbulence over a known distance, the target beam profile deviates from the projected profile of the beam on the receiver. Intuitively, the unwanted distortion provides information about the atmospheric turbulence. This information is crucial for guiding adaptive optic systems and improving beam propagation results. In this paper, we propose an entropy study based on the image from a plenoptic sensor to provide a measure of the information content of atmospheric turbulence. In general, lower levels of atmospheric turbulence will have a smaller information size, while higher levels of atmospheric turbulence will cause significant expansion of the information size, which may exceed the maximum capacity of a sensing system and jeopardize the reliability of an AO system. Therefore, the entropy function can be used to analyze the turbulence distortion and evaluate the performance of AO systems. In fact, it serves as a metric that can quantify the improvement of beam correction at each iteration step. In addition, it points out the limitation of an AO system at optimized correction, as well as the minimum information needed for wavefront sensing to achieve a given level of correction. In this paper, we demonstrate the definition of the entropy function and how it relates to evaluating the information (randomness) carried by atmospheric turbulence.
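One simple realization of such an entropy function, offered as an assumption rather than the authors' exact definition, is the Shannon entropy of the image's gray-level histogram: turbulence that scatters the plenoptic image spreads the histogram over more cells and drives this entropy up.

```python
import numpy as np

def image_entropy(img, bins=256):
    # Shannon entropy (bits) of the gray-level histogram of an image in [0, 1].
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(4)
calm = np.clip(rng.normal(0.5, 0.02, (64, 64)), 0, 1)   # tight, ordered spot
turbulent = rng.random((64, 64))                        # badly scattered spot
print(image_entropy(calm), "<", image_entropy(turbulent))
```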
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
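A stripped-down sketch of the underlying idea (the paper's XEF/JCE machinery is richer): score each candidate input lag by a histogram estimate of its mutual information with the output, and keep the top-ranked lags. The lag-3 nonlinear system is fabricated for the demonstration.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    # Histogram estimate of I(X; Y) in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def rank_lags(u, y, max_lag=10):
    # Score lag k by the dependence between u[t] and y[t + k].
    scores = {k: mutual_info(u[:-k], y[k:]) for k in range(1, max_lag + 1)}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical nonlinear system: the output responds to the input at lag 3.
rng = np.random.default_rng(5)
u = rng.normal(size=2000)
y = np.tanh(np.roll(u, 3)) + 0.1 * rng.normal(size=2000)
print("lags ranked by relevance:", rank_lags(u, y)[:3])
```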
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Thermodynamical transcription of density functional theory with minimum Fisher information
NASA Astrophysics Data System (ADS)
Nagy, Á.
2018-03-01
Ghosh, Berkowitz and Parr designed a thermodynamical transcription of the ground-state density functional theory and introduced a local temperature that varies from point to point. The theory, however, is not unique because the kinetic energy density is not uniquely defined. Here we derive the expression of the phase-space Fisher information in the GBP theory taking the inverse temperature as the Fisher parameter. It is proved that this Fisher information takes its minimum for the case of constant temperature. This result is consistent with the recently proven theorem that the phase-space Shannon information entropy attains its maximum at constant temperature.
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
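The "factor 2" baseline referenced above is the classic greedy list-scheduling rule for minimum makespan: assign each job, here a target's required illumination load, to the currently least-loaded machine (antenna). This is a sketch of that textbook baseline with made-up loads, not the paper's enhanced variant.

```python
import heapq

def greedy_makespan(job_loads, n_machines):
    # Keep a min-heap of (current load, machine id); each job goes to the
    # least-loaded machine. Guarantees a makespan within 2x of optimal.
    heap = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(heap)
    assignment = {m: [] for m in range(n_machines)}
    for j, load in enumerate(job_loads):
        total, m = heapq.heappop(heap)
        assignment[m].append(j)
        heapq.heappush(heap, (total + load, m))
    return max(t for t, _ in heap), assignment

makespan, plan = greedy_makespan([5.0, 3.5, 3.0, 2.0, 1.5, 1.0], n_machines=3)
print("makespan:", makespan, "assignment:", plan)
```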
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Shawn X., E-mail: xingshan@math.ucsb.edu; Quantum Architectures and Computation Group, Microsoft Research, Redmond, Washington 98052; Freedman, Michael H., E-mail: michaelf@microsoft.com
2016-06-15
The classical max-flow min-cut theorem describes transport through certain idealized classical networks. We consider the quantum analog for tensor networks. By associating an integral capacity to each edge and a tensor to each vertex in a flow network, we can also interpret it as a tensor network and, more specifically, as a linear map from the input space to the output space. The quantum max-flow is defined to be the maximal rank of this linear map over all choices of tensors. The quantum min-cut is defined to be the minimum product of the capacities of edges over all cuts of the tensor network. We show that unlike the classical case, the quantum max-flow=min-cut conjecture is not true in general. Under certain conditions, e.g., when the capacity on each edge is some power of a fixed integer, the quantum max-flow is proved to equal the quantum min-cut. However, concrete examples are also provided where the equality does not hold. We also found connections of quantum max-flow/min-cut with entropy of entanglement and the quantum satisfiability problem. We speculate that the phenomena revealed may be of interest both in spin systems in condensed matter and in quantum gravity.
NASA Astrophysics Data System (ADS)
Mishra, V.; Cruise, J. F.; Mecikalski, J. R.
2015-12-01
Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (Land Information System (LIS) and an agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.
In a related application, vertical soil moisture profiles were developed using the POME model to evaluate an irrigation schedule over a maize field in north central Alabama (USA). The model was validated using both field data and a physically based mathematical model. The results demonstrate that a simple two-constraint entropy model under the assumption of a uniform initial soil moisture distribution can simulate most soil moisture profiles within the field area for 6 different soil types. The results of the irrigation simulation demonstrated that the POME model produced a very efficient irrigation strategy, with a loss of about 1.9% of the total applied irrigation water. However, areas of fine-textured soil (i.e., silty clay) resulted in plant stress of nearly 30% of the available moisture content due to insufficient water supply on the last day of the drying phase of the irrigation cycle. Overall, the POME approach showed promise as a general strategy to guide irrigation in humid environments, with minimum input requirements.
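A hedged numeric rendering of the three-constraint POME construction (the published model is analytical; the depth grid, bounds, and constraint values here are invented): maximize the Shannon entropy of the normalized profile subject to the surface value, the column mean, and the bottom value.

```python
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.0, 1.0, 21)                  # hypothetical depth grid
surface, mean_val, bottom = 0.15, 0.25, 0.35   # hypothetical constraints

def neg_entropy(theta):
    # Negative Shannon entropy of the normalized moisture profile.
    p = theta / theta.sum()
    return np.sum(p * np.log(p))

cons = [
    {"type": "eq", "fun": lambda t: t[0] - surface},     # surface moisture
    {"type": "eq", "fun": lambda t: t[-1] - bottom},     # bottom moisture
    {"type": "eq", "fun": lambda t: t.mean() - mean_val} # column mean
]
res = minimize(neg_entropy, x0=np.full(z.size, mean_val),
               bounds=[(1e-6, 0.5)] * z.size, constraints=cons)
print("POME-style profile, surface to bottom:", np.round(res.x, 3))
```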
Detection of cracks in shafts with the Approximated Entropy algorithm
NASA Astrophysics Data System (ADS)
Sampaio, Diego Luchesi; Nicoletti, Rodrigo
2016-05-01
The Approximate Entropy is a statistical calculus used primarily in the fields of Medicine, Biology, and Telecommunication for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled by Fracture Mechanics. In this case, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth, irrespective of the unbalance of the rotating system and crack orientation in the shaft. The results also show that the algorithm can differentiate the occurrence of crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
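For reference, a standard Approximate Entropy implementation exposing exactly the two intrinsic parameters the study flags: m (the paper's p, points per sample vector) and r = f·std (the similarity tolerance). The test signals are synthetic; a regular signal should score low, an irregular one high.

```python
import numpy as np

def approx_entropy(signal, m=2, f=0.2):
    # ApEn = phi(m) - phi(m + 1), with phi the mean log fraction of template
    # vectors within Chebyshev distance r of each other (self-match included).
    x = np.asarray(signal, dtype=float)
    r = f * np.std(x)

    def phi(mm):
        n = x.size - mm + 1
        vecs = np.array([x[i:i + mm] for i in range(n)])
        dist = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)      # fraction of similar vectors
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(6)
t = np.linspace(0.0, 20.0 * np.pi, 1000)
print("regular signal ApEn  :", approx_entropy(np.sin(t)))
print("irregular signal ApEn:", approx_entropy(rng.normal(size=1000)))
```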
NASA Technical Reports Server (NTRS)
Dirmeyer, Paul A.; Wei, Jiangfeng; Bosilovich, Michael G.; Mocko, David M.
2014-01-01
A quasi-isentropic back trajectory scheme is applied to output from the Modern Era Retrospective-analysis for Research and Applications and a land-only replay with corrected precipitation to estimate surface evaporative sources of moisture supplying precipitation over every ice-free land location for the period 1979-2005. The evaporative source patterns for any location and time period are effectively two-dimensional probability distributions. As such, the evaporative sources for extreme situations like droughts or wet intervals can be compared to the corresponding climatological distributions using the method of relative entropy. Significant differences are found to be common and widespread for droughts, but not wet periods, when monthly data are examined. At pentad temporal resolution, which is better able to isolate floods and situations of atmospheric rivers, values of relative entropy over North America are typically 50-400% larger than at monthly time scales. Significant differences suggest that moisture transport may be the key to precipitation extremes. Where evaporative sources do not change significantly, it implies other local causes may underlie the extreme events.
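The comparison described above reduces to a relative entropy between two 2-D probability maps: the event-period evaporative-source field and its climatological counterpart. A sketch with fabricated gridded maps, standing in for the back-trajectory output:

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    # D(p || q) between two nonnegative 2-D source maps, normalized to pmfs.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(7)
clim = rng.random((20, 30)) + 1.0     # climatological evaporative-source map
drought = clim.copy()
drought[:, :10] *= 0.1                # upwind moisture supply shut off
print("D(drought || climatology) =", relative_entropy(drought, clim))
```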
Optimization of rainfall networks using information entropy and temporal variability analysis
NASA Astrophysics Data System (ADS)
Wang, Wenqi; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin
2018-04-01
Rainfall networks are the most direct sources of precipitation data, and their optimization and evaluation are essential and important. Information entropy can not only represent the uncertainty of rainfall distribution but can also reflect the correlation and information transmission between rainfall stations. Using entropy, this study performs optimization of rainfall networks that are of similar size located in two big cities in China, Shanghai (in the Yangtze River basin) and Xi'an (in the Yellow River basin), with respect to temporal variability analysis. Through an easy-to-implement greedy ranking algorithm based on the criterion called Maximum Information Minimum Redundancy (MIMR), stations of the networks in the two areas (each area is further divided into two subareas) are ranked during sliding inter-annual series and under different meteorological conditions. It is found that observation series with different starting days affect the ranking, alluding to the temporal variability during network evaluation. We propose a dynamic network evaluation framework for considering temporal variability, which ranks stations under different starting days with a fixed time window (1-year, 2-year, and 5-year). Therefore, we can identify rainfall stations which are temporarily of importance or redundancy and provide some useful suggestions for decision makers. The proposed framework can serve as a supplement to the primary MIMR optimization approach. In addition, during different periods (wet season or dry season) the optimal network from MIMR exhibits differences in entropy values, and the optimal network for the wet season tended to produce higher entropy values. Differences in spatial distribution of the optimal networks suggest that optimizing the rainfall network for changing meteorological conditions may be advisable.
Ayyildiz, Dilara; Gov, Esra; Sinha, Raghu; Arga, Kazim Yalcin
2017-05-01
Ovarian cancer is one of the most common cancers and has a high mortality rate due to insidious symptoms and lack of robust diagnostics. A hitherto understudied concept in cancer pathogenesis may offer new avenues for innovation in ovarian cancer biomarker development. Cancer cells are characterized by an increase in network entropy, and several studies have exploited this concept to identify disease-associated gene and protein modules. We report in this study the changes in protein-protein interactions (PPIs) in ovarian cancer within a differential network (interactome) analysis framework utilizing the entropy concept and gene expression data. A compendium of six transcriptome datasets that included 140 samples from laser microdissected epithelial cells of ovarian cancer patients and 51 samples from healthy population was obtained from Gene Expression Omnibus, and the high confidence human protein interactome (31,465 interactions among 10,681 proteins) was used. The uncertainties of the up- or downregulation of PPIs in ovarian cancer were estimated through an entropy formulation utilizing combined expression levels of genes, and the interacting protein pairs with minimum uncertainty were identified. We identified 105 proteins with differential PPI patterns scattered in 11 modules, each indicating significantly affected biological pathways in ovarian cancer such as DNA repair, cell proliferation-related mechanisms, nucleoplasmic translocation of estrogen receptor, extracellular matrix degradation, and inflammation response. In conclusion, we suggest several PPIs as biomarker candidates for ovarian cancer and discuss their future biological implications as potential molecular targets for pharmaceutical development as well. In addition, network entropy analysis is a concept that deserves greater research attention for diagnostic innovation in oncology and tumor pathogenesis.
NASA Technical Reports Server (NTRS)
Lei, Shaw-Min; Yao, Kung
1990-01-01
A class of infinite impulse response (IIR) digital filters with a systolizable structure is proposed and its synthesis is investigated. The systolizable structure consists of pipelineable regular modules with local connections and is suitable for VLSI implementation. It is capable of achieving high performance as well as high throughput. This class of filter structure provides certain degrees of freedom that can be used to obtain some desirable properties for the filter. Techniques of evaluating the internal signal powers and the output roundoff noise of the proposed filter structure are developed. Based upon these techniques, a well-scaled IIR digital filter with minimum output roundoff noise is designed using a local optimization approach. The internal signals of all the modes of this filter are scaled to unity in the l2-norm sense. Compared to the Rao-Kailath (1984) orthogonal digital filter and the Gray-Markel (1973) normalized-lattice digital filter, this filter has better scaling properties and lower output roundoff noise.
HMM for hyperspectral spectrum representation and classification with endmember entropy vectors
NASA Astrophysics Data System (ADS)
Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.
2015-10-01
Hyperspectral images, owing to their good spectral resolution, are extensively used for classification, but their high number of bands requires greater bandwidth for data transmission, greater data storage capability, and more computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectra classification is based on a Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and the conditional probabilities that each EM belongs to each other EM. The EM conditional probability is transformed into an EM entropy vector, and those vectors are used as reference vectors for the classes in the scene. The conditional probability of a spectrum to be classified is also transformed into a spectrum entropy vector, which is assigned to a given class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested with good results using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64, and 32 bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209 without significant loss in the classification process.
NASA Astrophysics Data System (ADS)
Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li
2017-01-01
In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for the system with pre-specified trajectories of the output and the control input, and additionally with both the input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional plus integral state-feedback LQDT design for non-square non-minimum phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.
de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L
2010-08-01
States in the USA are required to demonstrate future compliance of criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. Weight given to numerical models is diminished by integrating them in the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) Framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.
Honing Theory: A Complex Systems Framework for Creativity.
Gabora, Liane
2017-01-01
This paper proposes a theory of creativity, referred to as honing theory, which posits that creativity fuels the process by which culture evolves through communal exchange amongst minds that are self-organizing, self-maintaining, and self-reproducing. According to honing theory, minds, like other self-organizing systems, modify their contents and adapt to their environments to minimize entropy. Creativity begins with detection of high psychological entropy material, which provokes uncertainty and is arousal-inducing. The creative process involves recursively considering this material from new contexts until it is sufficiently restructured that arousal dissipates. Restructuring involves neural synchrony and dynamic binding, and may be facilitated by temporarily shifting to a more associative mode of thought. A creative work may similarly induce restructuring in others, and thereby contribute to the cultural evolution of more nuanced worldviews. Since lines of cultural descent connecting creative outputs may exhibit little continuity, it is proposed that cultural evolution occurs at the level of self-organizing minds; outputs reflect their evolutionary state. Honing theory addresses challenges not addressed by other theories of creativity, such as the factors that guide restructuring, and in what sense creative works evolve. Evidence comes from empirical studies, an agent-based computational model of cultural evolution, and a model of concept combination.
Maximum nonlocality and minimum uncertainty using magic states
NASA Astrophysics Data System (ADS)
Howard, Mark
2015-04-01
We prove that magic states from the Clifford hierarchy give optimal solutions for tasks involving nonlocality and entropic uncertainty with respect to Pauli measurements. For both the nonlocality and uncertainty tasks, stabilizer states are the worst possible pure states, so our solutions have an operational interpretation as being highly nonstabilizer. The optimal strategy for a qudit version of the Clauser-Horne-Shimony-Holt game in prime dimensions is achieved by measuring maximally entangled states that are isomorphic to single-qudit magic states. These magic states have an appealingly simple form, and our proof shows that they are "balanced" with respect to all but one of the mutually unbiased stabilizer bases. Of all equatorial qudit states, magic states minimize the average entropic uncertainties for collision entropy and also, for small prime dimensions, min-entropy, a fact that may have implications for cryptography.
Engineering entropy-driven reactions and networks catalyzed by DNA.
Zhang, David Yu; Turberfield, Andrew J; Yurke, Bernard; Winfree, Erik
2007-11-16
Artificial biochemical circuits are likely to play as large a role in biological engineering as electrical circuits have played in the engineering of electromechanical devices. Toward that end, nucleic acids provide a designable substrate for the regulation of biochemical reactions. However, it has been difficult to incorporate signal amplification components. We introduce a design strategy that allows a specified input oligonucleotide to catalyze the release of a specified output oligonucleotide, which in turn can serve as a catalyst for other reactions. This reaction, which is driven forward by the configurational entropy of the released molecule, provides an amplifying circuit element that is simple, fast, modular, composable, and robust. We have constructed and characterized several circuits that amplify nucleic acid signals, including a feedforward cascade with quadratic kinetics and a positive feedback circuit with exponential growth kinetics.
High Power Microwave (HPM) and Ionizing Radiation Effects on CMOS Devices
2010-03-01
From the report's list of symbols: VIH is the minimum input voltage for a proper high output voltage, and VOH is the output voltage corresponding to VIH. The high level at the input, VIH, along with VDD, defines the maximum permitted "Logic 1" region, which allows for proper state change.
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
2013-10-01
The mechanism of entropy production in transport phenomena is discussed again by emphasizing the role of symmetry of non-equilibrium states and by reformulating Einstein's theory of Brownian motion to derive entropy production from it. This yields conceptual reviews of the previous papers [M. Suzuki, Physica A 390 (2011) 1904; 391 (2012) 1074; 392 (2013) 314]. Separated variational principles of steady states for multiple external fields {X_i} and induced currents {J_i} are proposed by extending the principle of minimum integrated entropy production found by the present author for a single external field. The basic strategy of our theory of steady states is to take in all the intermediate processes from the equilibrium state to the final possible steady states in order to study the irreversible physics even in the steady states. As an application of this principle, the Glansdorff-Prigogine evolution criterion inequality (or stability condition) d_X P ≡ ∫dr Σ_i J_i dX_i ≤ 0 is derived in the stronger form d_X Q_i ≡ ∫dr J_i dX_i ≤ 0 for each individual force X_i and current J_i, even for nonlinear responses which depend nonlinearly on all the external forces {X_k}. This is called the "separated evolution criterion". Some explicit demonstrations of the present general theory for simple electric circuits with multiple external fields are given, in order to clarify the physical essence of the new theory and to establish the condition of its validity, namely the existence of solutions of the simultaneous equations obtained from the separated variational principles. It is also instructive to compare the two results obtained by the new variational theory and by the old scheme based on the instantaneous entropy production. This seems suggestive even for the energy problem in the world.
Entropy generation method to quantify thermal comfort.
Boregowda, S C; Tiwari, S N; Chaturvedi, S K
2001-12-01
The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves the development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "computational environmental chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, together with input data consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by inserting the corresponding air temperatures and vapor pressures used in the computer simulation into the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
Entropy generation method to quantify thermal comfort
NASA Technical Reports Server (NTRS)
Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.
2001-01-01
The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves the development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "computational environmental chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, together with input data consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by inserting the corresponding air temperatures and vapor pressures used in the computer simulation into the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
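As a hedged illustration of the underlying second-law bookkeeping (this is not the paper's OTCI formula, and the temperatures below are hypothetical), the entropy generated by metabolic heat crossing the body-environment boundary can be computed as:

    # Entropy generation for steady heat Q flowing from skin to ambient air;
    # it vanishes as t_amb approaches t_skin (thermal equilibrium, no discomfort driver).
    def entropy_generation(q_watts, t_skin=307.0, t_amb=295.0):
        """Rate of entropy generation (W/K) across the body-environment boundary."""
        return q_watts * (1.0 / t_amb - 1.0 / t_skin)

    print(entropy_generation(100.0))   # ~0.013 W/K for a 100 W metabolic heat load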
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khosla, D.; Singh, M.
The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum-norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image that has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques, such as functional magnetic resonance imaging (fMRI), can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.
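A minimal sketch of this MaxEnt-MAP idea, with a made-up lead-field matrix and a Skilling-style entropy term (the abstract does not give the authors' implementation details, so everything below is an assumption):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 32, 256                        # sensors, source pixels (toy sizes)
    L = rng.standard_normal((m, n))       # hypothetical lead-field matrix
    s_true = rng.exponential(1.0, n)      # nonnegative source image
    b = L @ s_true + 0.05 * rng.standard_normal(m)

    lam, eta = 5.0, 1e-2                  # likelihood weight, step size
    s = np.ones(n)                        # start from the flat (max-entropy) image
    for _ in range(20000):
        # objective: -sum(s log s) - lam * ||L s - b||^2  (entropy plus likelihood)
        grad = -(np.log(s) + 1.0) - 2.0 * lam * (L.T @ (L @ s - b))
        grad /= np.linalg.norm(grad) + 1e-12       # normalized ascent step
        s = np.clip(s + eta * grad, 1e-12, None)   # keep the image nonnegative

The likelihood weight lam plays the role of the noise-rejection term: larger values pull the solution toward the data, smaller values toward the flat maximum-entropy image.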
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Baron, A. K.; Peller, I. C.
1975-01-01
A FORTRAN IV subprogram called GASP is discussed which calculates the thermodynamic and transport properties for 10 pure fluids: parahydrogen, helium, neon, methane, nitrogen, carbon monoxide, oxygen, fluorine, argon, and carbon dioxide. The pressure range is generally from 0.1 to 400 atmospheres (to 100 atm for helium and to 1000 atm for hydrogen). The temperature ranges are from the triple point to 300 K for neon; to 500 K for carbon monoxide, oxygen, and fluorine; to 600 K for methane and nitrogen; to 1000 K for argon and carbon dioxide; to 2000 K for hydrogen; and from 6 to 500 K for helium. GASP accepts any two of pressure, temperature, and density as input conditions, as well as pressure and either entropy or enthalpy. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, and surface tension. The subprogram design is modular so that the user can choose only those subroutines necessary for the calculations.
2005-11-01
... more random. Autonomous systems can exchange entropy statistics for packet streams with no confidentiality concerns, potentially enabling timely and ... The analysis began with simulation results, which were validated by analysis of actual data from an Autonomous System (AS). A scale-free network is one ... traffic, for example, time series of flux at given nodes and mean path length; outputs the time series from any node queried; calculates ...
Potential of mean force between two hydrophobic solutes in water.
Southall, Noel T; Dill, Ken A
2002-12-10
We study the potential of mean force between two nonpolar solutes in the Mercedes Benz model of water. Using NPT Monte Carlo simulations, we find that the solute size determines the relative preference of two solute molecules to come into contact ('contact minimum') or to be separated by a single layer of water ('solvent-separated minimum'). Larger solutes more strongly prefer the contacting state, while smaller solutes have more tendency to become solvent-separated, particularly in cold water. The thermal driving forces oscillate with solute separation. Contacts are stabilized by entropy, whereas solvent-separated solute pairing is stabilized by enthalpy. The free energy of interaction for small solutes is well-approximated by scaled-particle theory. Copyright 2002 Elsevier Science B.V.
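The connection between simulation output and the reported minima is the standard relation W(r) = -k_B T ln g(r) between the potential of mean force and the pair correlation function. A small sketch with a synthetic g(r) (illustrative only; it does not reproduce the model's actual curves):

    import numpy as np

    kB_T = 1.0                        # energies in units of k_B * T
    r = np.linspace(3.0, 10.0, 200)   # solute-solute separation (arbitrary units)
    # toy pair correlation with a contact peak and a solvent-separated peak
    g = 1.0 + 1.5 * np.exp(-(r - 4.0) ** 2 / 0.3) + 0.4 * np.exp(-(r - 7.0) ** 2 / 0.5)

    w = -kB_T * np.log(g)             # potential of mean force: W(r) = -kT ln g(r)
    # local minima of W(r) mark the contact and solvent-separated states
    idx = np.where((w[1:-1] < w[:-2]) & (w[1:-1] < w[2:]))[0] + 1
    print(r[idx])                     # ~4.0 (contact minimum) and ~7.0 (solvent-separated)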
NASA Technical Reports Server (NTRS)
Chatterjee, Sharmista
1993-01-01
Our first goal in this project was to perform a systems analysis of a closed-loop Environmental Control and Life Support System (ECLSS). This pertains to the development of a model of an existing real system from which to assess its state and performance. Systems analysis is applied to conceptual models obtained from a system design effort. For our modelling purposes we used a simulator tool called ASPEN (Advanced System for Process Engineering). Our second goal was to evaluate the thermodynamic efficiency of the different components comprising an ECLSS. Use is made of the second law of thermodynamics to determine the amount of irreversibility, or energy loss, of each component. This will aid design scientists in selecting the components generating the least entropy, as our ultimate goal is to keep the entropy generation of the whole system at a minimum.
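The second-law bookkeeping described here is usually done per component via the Gouy-Stodola relation, lost work = T0 × S_gen. A sketch with hypothetical numbers (the abstract does not give the report's actual component values):

    # Rank ECLSS components by exergy destruction (irreversibility), I = T0 * S_gen.
    T0 = 298.15          # dead-state (ambient) temperature, K
    s_gen = {            # hypothetical entropy generation rates, W/K
        "CO2 removal bed": 0.42,
        "water recovery":  0.31,
        "heat exchanger":  0.18,
    }
    irr = {name: T0 * s for name, s in s_gen.items()}
    for name, i in sorted(irr.items(), key=lambda kv: -kv[1]):
        print(f"{name:>16}: {i:6.1f} W of lost work")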
NASA Astrophysics Data System (ADS)
Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu
2017-07-01
Health condition identification of planetary gearboxes is crucial to reduce downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearboxes. MMSDE is proposed to quantify the regularity of time series, assessing the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and in computational efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into a least squares support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
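As a rough sketch of the entropy feature (a simplified stand-in, not the authors' exact MMSDE), one can coarse-grain the vibration signal at several scales, symbolize it, and take the Shannon entropy of the symbol transitions:

    import numpy as np

    def symbolic_dynamic_entropy(x, n_symbols=4, scale=1):
        """Simplified symbolic dynamic entropy: coarse-grain, symbolize by
        uniform amplitude partition, then take the Shannon entropy of the
        symbol-transition distribution."""
        x = np.asarray(x, float)
        # multiscale coarse-graining: non-overlapping window means
        n = len(x) // scale
        cg = x[:n * scale].reshape(n, scale).mean(axis=1)
        # symbolization: uniform partition of the amplitude range
        edges = np.linspace(cg.min(), cg.max(), n_symbols + 1)[1:-1]
        sym = np.digitize(cg, edges)
        # empirical probabilities of symbol pairs (state transitions)
        pairs = sym[:-1] * n_symbols + sym[1:]
        p = np.bincount(pairs, minlength=n_symbols ** 2) / len(pairs)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(0)
    signal = np.sin(0.2 * np.arange(4096)) + 0.1 * rng.standard_normal(4096)
    print([symbolic_dynamic_entropy(signal, scale=s) for s in (1, 2, 4)])

The vector of entropies across scales would then be pruned by mRMR before classification.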
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and of consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
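For embedding dimension 1, both measures can be computed directly from the thresholded distance matrix. The sketch below uses one common convention for DET (definitions vary, for instance in whether the main diagonal is excluded):

    import numpy as np

    def rp_measures(x, eps, lmin=2):
        """Recurrence rate (REC) and percent determinism (DET) of a time
        series, embedding dimension 1."""
        x = np.asarray(x, float)
        R = (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)
        rec = R.mean()                          # recurrence rate (REC)
        n = len(x)
        det_pts = 0
        for k in range(-(n - 1), n):            # every diagonal of the RP
            run = 0
            for v in np.append(np.diagonal(R, k), 0):   # sentinel flushes last run
                if v:
                    run += 1
                else:
                    if run >= lmin:
                        det_pts += run          # points on diagonals of length >= lmin
                    run = 0
        det = det_pts / R.sum()                 # percent determinism (DET)
        return rec, det

    rng = np.random.default_rng(1)
    ar1 = np.zeros(500)
    for t in range(1, 500):                     # first-order autoregressive process
        ar1[t] = 0.8 * ar1[t - 1] + rng.standard_normal()
    print(rp_measures(ar1, eps=0.5))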
CPAP Devices for Emergency Prehospital Use: A Bench Study.
Brusasco, Claudia; Corradi, Francesco; De Ferrari, Alessandra; Ball, Lorenzo; Kacmarek, Robert M; Pelosi, Paolo
2015-12-01
CPAP is frequently used in prehospital and emergency settings. An air-flow output minimum of 60 L/min and a constant positive pressure are 2 important features for a successful CPAP device. Unlike hospital CPAP devices, which require electricity, CPAP devices for ambulance use need only an oxygen source to function. The aim of the study was to evaluate and compare on a bench model the performance of 3 orofacial mask devices (Ventumask, EasyVent, and Boussignac CPAP system) and 2 helmets (Ventukit and EVE Coulisse) used to apply CPAP in the prehospital setting. A static test evaluated air-flow output, positive pressure applied, and FIO2 delivered by each device. A dynamic test assessed airway pressure stability during simulated ventilation. Efficiency of devices was compared based on oxygen flow needed to generate a minimum air flow of 60 L/min at each CPAP setting. The EasyVent and EVE Coulisse devices delivered significantly higher mean air-flow outputs compared with the Ventumask and Ventukit under all CPAP conditions tested. The Boussignac CPAP system never reached an air-flow output of 60 L/min. The EasyVent had significantly lower pressure excursion than the Ventumask at all CPAP levels, and the EVE Coulisse had lower pressure excursion than the Ventukit at 5, 15, and 20 cm H2O, whereas at 10 cm H2O, no significant difference was observed between the 2 devices. Estimated oxygen consumption was lower for the EasyVent and EVE Coulisse compared with the Ventumask and Ventukit. Air-flow output, pressure applied, FIO2 delivered, device oxygen consumption, and ability to maintain air flow at 60 L/min differed significantly among the CPAP devices tested. Only the EasyVent and EVE Coulisse achieved the required minimum level of air-flow output needed to ensure an effective therapy under all CPAP conditions. Copyright © 2015 by Daedalus Enterprises.
Validation of a hybrid electromagnetic-piezoelectric vibration energy harvester
NASA Astrophysics Data System (ADS)
Edwards, Bryn; Hu, Patrick A.; Aw, Kean C.
2016-05-01
This paper presents a low frequency vibration energy harvester with contact-based frequency up-conversion and hybrid electromagnetic-piezoelectric transduction. An electromagnetic generator is proposed as a power source for low power wearable electronic devices, while a second piezoelectric generator is investigated as a potential power source for a power conditioning circuit for the electromagnetic transducer output. Simulations and experiments are conducted in order to verify the behaviour of the device under harmonic as well as wide-band excitations across two key design parameters, the length of the piezoelectric beam and the excitation frequency. Experimental results demonstrated that the device achieved a power output between 25.5 and 34 μW at a root-mean-squared (rms) voltage level between 16 and 18.5 mV for the electromagnetic transducer in the excitation frequency range of 3-7 Hz, while the output power of the piezoelectric transducer ranged from 5 to 10.5 μW with a minimum peak-to-peak output voltage of 6 V. A multivariate model validation was performed between experimental and simulation results under wide-band excitation in terms of the rms voltage outputs of the electromagnetic and piezoelectric transducers, as well as the peak-to-peak voltage output of the piezoelectric transducer, and it is found that the experimental data fit the model predictions with a minimum probability of 63.4% across the parameter space.
NASA Astrophysics Data System (ADS)
Brask, Jonatan Bohr; Martin, Anthony; Esposito, William; Houlmann, Raphael; Bowles, Joseph; Zbinden, Hugo; Brunner, Nicolas
2017-05-01
An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation achieving rates of 16.5 Mbit/s. Combining ease of implementation, a high rate, and real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.
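A toy calculation (not the protocol's certified bound) helps fix the idea: for optimal unambiguous discrimination of two equiprobable pure states, the minimum inconclusive probability equals the state overlap, and the conclusive/inconclusive flag then carries at most the corresponding binary entropy per round. The preparation angle below is hypothetical.

    import numpy as np

    theta = np.pi / 8                      # hypothetical preparation angle
    # for |psi±> = cos(theta)|0> ± sin(theta)|1>, the overlap is cos(2*theta)
    overlap = np.cos(2 * theta)
    p_inc = overlap                        # optimal USD inconclusive probability
    p = np.array([p_inc, 1 - p_inc])
    h_min = -np.log2(p.max())              # min-entropy of the binary flag (~0.5 bit)
    h_shannon = -np.sum(p * np.log2(p))    # Shannon entropy, for comparison (~0.87 bit)
    print(p_inc, h_min, h_shannon)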
Liu, Lin; Huo, Ju; Zhao, Ying; Tian, Yu
2012-03-25
The present study investigated the disease trajectory of vascular cognitive impairment using the entropy of information in a neural network mathematical simulation based on the free radical and excitatory amino acid theories. Glutamate, malondialdehyde, and inducible nitric oxide synthase content was significantly elevated, but acetylcholine, catalase, superoxide dismutase, glutathione peroxidase and constitutive nitric oxide synthase content was significantly decreased in our vascular cognitive impairment model. The fitting curves for each factor were obtained using Matlab software. Days 19, 30 and 49 post-ischemia were the main output time frames for the influence of these seven factors. Our results demonstrated that vascular cognitive impairment involves multiple factors, including excitatory amino acid toxicity and nitric oxide toxicity. These toxicities disrupt the dynamic equilibrium of the production and removal of oxygen free radicals after cerebral ischemia, reducing the ability to clear oxygen free radicals and worsening brain injury.
Solid state Ku-band spacecraft transmitters
NASA Technical Reports Server (NTRS)
Wisseman, W. R.; Tserng, H. Q.; Coleman, D. J.; Doerbeck, F. H.
1977-01-01
A transmitter is considered that consists of GaAs IMPATT and Read diodes operating in a microstrip circuit environment to provide amplification with a minimum of 63 dB small-signal gain and a minimum compressed gain at 5 W output of 57 dB. Reported are Schottky-Read diode design and fabrication, microstrip and circulator optimization, preamplifier development, power amplifier development, dc-to-dc converter design, and integration of the breadboard transmitter modules. A four-stage power amplifier in cascade with a three-stage preamplifier had an overall gain of 56.5 dB at 13.5 GHz with a power output of 4.5 W. A single-stage Read amplifier delivered 5.9 W with 4 dB gain at 22% efficiency.
Mariotti, Erika; Veronese, Mattia; Dunn, Joel T; Southworth, Richard; Eykyn, Thomas R
2015-06-01
To assess the feasibility of using a hybrid Maximum-Entropy/Nonlinear Least Squares (MEM/NLS) method for analyzing the kinetics of hyperpolarized dynamic data with minimum a priori knowledge. A continuous distribution of rates obtained through the Laplace inversion of the data is used as a constraint on the NLS fitting to derive a discrete spectrum of rates. Performance of the MEM/NLS algorithm was assessed through Monte Carlo simulations and validated by fitting the longitudinal relaxation time curves of hyperpolarized [1-13C] pyruvate acquired at 9.4 Tesla and at three different flip angles. The method was further used to assess the kinetics of hyperpolarized pyruvate-lactate exchange acquired in vitro in whole blood and to re-analyze the previously published in vitro reaction of hyperpolarized 15N choline with choline kinase. The MEM/NLS method was found to be adequate for the kinetic characterization of hyperpolarized in vitro time series. Additional insights were obtained from experimental data in blood as well as from the previously published 15N choline experimental data. The proposed method informs on the compartmental model that best approximates the biological system observed using hyperpolarized 13C MR, especially when the metabolic pathway assessed is complex or a new hyperpolarized probe is used. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.
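The inversion step can be imitated with a nonnegative least-squares fit of a multi-exponential kernel on a rate grid (a simplified stand-in for the MEM regularization; the times, rates, and noise level below are hypothetical):

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    t = np.linspace(0.05, 10.0, 120)           # acquisition times, s (hypothetical)
    y = 0.7 * np.exp(-0.5 * t) + 0.3 * np.exp(-3.0 * t)
    y += 0.01 * rng.standard_normal(t.size)    # two-rate decay plus noise

    rates = np.logspace(-2, 2, 200)            # candidate rate grid, 1/s
    K = np.exp(-np.outer(t, rates))            # Laplace kernel K[i,j] = exp(-r_j * t_i)
    amps, _ = nnls(K, y)                       # nonnegative amplitude spectrum
    print(rates[amps > 1e-3])                  # should cluster near 0.5 and 3.0 /s

The recovered clusters of nonzero amplitudes would then seed the discrete rates for the final NLS fit.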
Ge, Hao; Qian, Hong
2013-06-01
Nonequilibrium thermodynamics of a system situated in a sustained environment with influx and efflux is usually treated as a subsystem in a larger, closed "universe." A question remains with regard to what the minimally required description of the surroundings of such an open driven system is so that its nonequilibrium thermodynamics can be established solely from the internal stochastic kinetics. We provide a solution to this problem using insights from studies of molecular motors in a chemical nonequilibrium steady state (NESS) with sustained external drive through a regenerating system, or in a quasisteady state (QSS) with an excess amount of adenosine triphosphate (ATP), adenosine diphosphate (ADP), and inorganic phosphate (Pi). We introduce the key notion of the minimal work W_min that the external regenerating system needs to sustain a NESS (e.g., maintaining constant concentrations of ATP, ADP and Pi for a molecular motor). Using a Markov (master-equation) description of a motor protein, we illustrate that the NESS and QSS have identical kinetics as well as the same second law, in terms of the same positive entropy production rate. The heat dissipation of a NESS without mechanical output is exactly W_min. This provides a justification for introducing an ideal external regenerating system and yields a free-energy balance equation between the net free-energy input F_in and the total dissipation F_dis in a NESS: F_in consists of the chemical input minus the mechanical output; F_dis consists of dissipative heat, i.e., the amount of useful energy becoming heat, which also equals the NESS entropy production. Furthermore, we show that for nonstationary systems, F_dis and F_in correspond to the entropy production rate and the housekeeping heat in stochastic thermodynamics, and we identify a relative entropy H as a generalized free energy. We reach a new formulation of Markovian nonequilibrium thermodynamics based only on the internal kinetic equation, without further reference to the intrinsic degrees of freedom within each Markov state. It includes an extended free-energy balance and a second law which are valid for driven stochastic dynamics with an ideal external regenerating system. Our result suggests new ingredients for a generalized thermodynamics of self-organization in driven systems.
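In symbols, one plausible rendering of this balance (the notation is ours, not necessarily the paper's) is

    \dot{F}_{\mathrm{in}} = \dot{F}_{\mathrm{chem}} - \dot{W}_{\mathrm{mech}}, \qquad
    \dot{F}_{\mathrm{dis}} \ge 0, \qquad
    \frac{dH}{dt} = \dot{F}_{\mathrm{in}} - \dot{F}_{\mathrm{dis}},

where H is the relative entropy serving as the generalized free energy; in a NESS, dH/dt = 0 and the input exactly balances the dissipation, F_in = F_dis.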
Liu, Quan; Ma, Li; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2018-01-01
Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised in order to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels. Subsequently, analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial value of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated significant differences from the nearest levels. Furthermore, the Random Forest output was largely linear with respect to BIS, yielding better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
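Sample Entropy itself is compact enough to sketch. The form below is a standard textbook version (the tolerance r is taken as a fraction of the signal's standard deviation; parameters are illustrative, not the paper's settings):

    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """SampEn(m, r): negative log of the conditional probability that
        sequences matching for m points also match for m + 1 points."""
        x = np.asarray(x, float)
        r = r_frac * np.std(x)
        N = len(x)
        def match_count(mm):
            # templates of length mm; use N - m starts so counts are comparable
            templ = np.array([x[i:i + mm] for i in range(N - m)])
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
            return (np.sum(d <= r) - len(templ)) / 2   # pairs, excluding self-matches
        A, B = match_count(m + 1), match_count(m)
        return -np.log(A / B)

    rng = np.random.default_rng(0)
    eeg_like = np.sin(0.1 * np.arange(1000)) + 0.2 * rng.standard_normal(1000)
    print(sample_entropy(eeg_like))

In the paper's pipeline, per-epoch SampEn values like this one become the features fed to the Random Forest regressor with BIS as the target.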
Energy landscapes and properties of biomolecules.
Wales, David J
2005-11-09
Thermodynamic and dynamic properties of biomolecules can be calculated using a coarse-grained approach based upon sampling stationary points of the underlying potential energy surface. The superposition approximation provides an overall partition function as a sum of contributions from the local minima, and hence functions such as internal energy, entropy, free energy and the heat capacity. To obtain rates we must also sample transition states that link the local minima, and the discrete path sampling method provides a systematic means to achieve this goal. A coarse-grained picture is also helpful in locating the global minimum using the basin-hopping approach. Here we can exploit a fictitious dynamics between the basins of attraction of local minima, since the objective is to find the lowest minimum, rather than to reproduce the thermodynamics or dynamics.
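The basin-hopping idea (a random perturbation followed by local minimization, with a Metropolis accept/reject step between basins) is available off the shelf; a minimal illustration on a toy one-dimensional landscape, not a biomolecular potential:

    import numpy as np
    from scipy.optimize import basinhopping

    def landscape(x):
        """A rugged 1-D 'energy landscape' with many local minima."""
        return x ** 2 + 10.0 * np.sin(3.0 * x)

    result = basinhopping(landscape, x0=4.0, niter=200, seed=1,
                          minimizer_kwargs={"method": "L-BFGS-B"})
    print(result.x, result.fun)   # hops between basins; global minimum near x ~ -0.52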
Principles of time evolution in classical physics
NASA Astrophysics Data System (ADS)
Güémez, J.; Fiolhais, M.
2018-07-01
We address principles of time evolution in classical mechanical/thermodynamical systems in translational and rotational motion, in three cases: when there is conservation of mechanical energy, when there is energy dissipation and when there is mechanical energy production. In the first case, the time derivative of the Hamiltonian vanishes. In the second one, when dissipative forces are present, the time evolution is governed by the minimum potential energy principle, or, equivalently, maximum increase of the entropy of the universe. Finally, in the third situation, when internal sources of work are available to the system, it evolves in time according to the principle of minimum Gibbs function. We apply the Lagrangian formulation to the systems, dealing with the non-conservative forces using restriction functions such as the Rayleigh dissipative function.
Quantum heat engine with coupled superconducting resonators
NASA Astrophysics Data System (ADS)
Hardal, Ali Ü. C.; Aslan, Nur; Wilson, C. M.; Müstecaplıoǧlu, Özgür E.
2017-12-01
We propose a quantum heat engine composed of two superconducting transmission line resonators interacting with each other via an optomechanical-like coupling. One resonator is periodically excited by a thermal pump. The incoherently driven resonator induces coherent oscillations in the other one due to the coupling. A limit cycle, indicating finite power output, emerges in the thermodynamical phase space. The system implements an all-electrical analog of a photonic piston. Instead of mechanical motion, the power output is obtained as a coherent electrical charging in our case. We explore the differences between the quantum and classical descriptions of our system by solving the quantum master equation and classical Langevin equations. Specifically, we calculate the mean number of excitations, second-order coherence, as well as the entropy, temperature, power, and mean energy to reveal the signatures of quantum behavior in the statistical and thermodynamic properties of the system. We find evidence of a quantum enhancement in the power output of the engine at low temperatures.
Quantum heat engine with coupled superconducting resonators.
Hardal, Ali Ü C; Aslan, Nur; Wilson, C M; Müstecaplıoğlu, Özgür E
2017-12-01
We propose a quantum heat engine composed of two superconducting transmission line resonators interacting with each other via an optomechanical-like coupling. One resonator is periodically excited by a thermal pump. The incoherently driven resonator induces coherent oscillations in the other one due to the coupling. A limit cycle, indicating finite power output, emerges in the thermodynamical phase space. The system implements an all-electrical analog of a photonic piston. Instead of mechanical motion, the power output is obtained as a coherent electrical charging in our case. We explore the differences between the quantum and classical descriptions of our system by solving the quantum master equation and classical Langevin equations. Specifically, we calculate the mean number of excitations, second-order coherence, as well as the entropy, temperature, power, and mean energy to reveal the signatures of quantum behavior in the statistical and thermodynamic properties of the system. We find evidence of a quantum enhancement in the power output of the engine at low temperatures.
Imaging non-Gaussian output fields produced by Josephson parametric amplifiers: experiments
NASA Astrophysics Data System (ADS)
Toyli, D. M.; Venkatramani, A. V.; Boutin, S.; Eddins, A.; Didier, N.; Clerk, A. A.; Blais, A.; Siddiqi, I.
2015-03-01
In recent years, squeezed microwave states have become the focus of intense research motivated by applications in continuous-variables quantum computation and precision qubit measurement. Despite numerous demonstrations of vacuum squeezing with superconducting parametric amplifiers such as the Josephson parametric amplifier (JPA), most experiments have also suggested that the squeezed output field becomes non-ideal at the large (> 10dB) signal gains required for low-noise qubit measurement. Here we describe a systematic experimental study of JPA squeezing performance in this regime for varying lumped-element device designs and pumping methods. We reconstruct the JPA output fields through homodyne detection of the field moments and quantify the deviations from an ideal squeezed state using maximal entropy techniques. These methods provide a powerful diagnostic tool to understand how effects such as gain compression impact JPA squeezing. Our results highlight the importance of weak device nonlinearity for generating highly squeezed states. This work is supported by ARO and ONR.
Experimental Detection of Quantum Channel Capacities.
Cuevas, Álvaro; Proietti, Massimiliano; Ciampini, Mario Arnolfo; Duranti, Stefano; Mataloni, Paolo; Sacchi, Massimiliano F; Macchiavello, Chiara
2017-09-08
We present an efficient experimental procedure that certifies nonvanishing quantum capacities for qubit noisy channels. Our method is based on the use of a fixed bipartite entangled state, where the system qubit is sent to the channel input. A particular set of local measurements is performed at the channel output and on the ancilla qubit mode, obtaining lower bounds on the quantum capacities for any unknown channel with no need of quantum process tomography. The entangled qubits have a Bell state configuration and are encoded in photon polarization. The lower bounds are found by estimating the Shannon and von Neumann entropies at the output using an optimized basis, whose statistics are obtained by measuring only the three observables σ_x⊗σ_x, σ_y⊗σ_y, and σ_z⊗σ_z.
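For intuition, the quantity being lower-bounded can be computed exactly in a simple theoretical case: the coherent information of a qubit depolarizing channel acting on half of a Bell pair (our illustration, not one of the experiment's channels):

    import numpy as np

    def coherent_information_depolarizing(p):
        """Coherent information I_c = S(B) - S(AB) when a depolarizing channel
        with error probability p acts on half of a maximally entangled pair."""
        # joint output is an isotropic state: eigenvalues 1 - 3p/4 (x1) and p/4 (x3)
        ev = np.array([1 - 3 * p / 4, p / 4, p / 4, p / 4])
        ev = ev[ev > 0]
        s_ab = -np.sum(ev * np.log2(ev))   # joint output entropy S(AB)
        s_b = 1.0                          # reduced output is maximally mixed
        return s_b - s_ab

    for p in (0.0, 0.1, 0.2524):
        print(p, coherent_information_depolarizing(p))   # 1.0, ~0.50, ~0 (hashing bound)

A positive value of this quantity certifies a nonvanishing quantum capacity, which is exactly what the measured entropy estimates are used to establish for unknown channels.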
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
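The SISO building block is easy to sketch: an adaptive FIR feedforward filter driven by a reference signal and updated by LMS to minimize the variance of the error-sensor output. This is a generic sketch under simplifying assumptions (no secondary-path dynamics, which a real engine installation would require; the primary path is synthetic):

    import numpy as np

    rng = np.random.default_rng(0)
    n, L, mu = 20000, 16, 0.01
    x = rng.standard_normal(n)                           # reference (noise source) signal
    h = rng.standard_normal(L); h /= np.linalg.norm(h)   # unknown primary path
    d = np.convolve(x, h)[:n]                            # disturbance at the error sensor

    w = np.zeros(L)                                      # adaptive feedforward FIR filter
    err = np.empty(n - L + 1)
    for k in range(L - 1, n):
        xb = x[k - L + 1:k + 1][::-1]                    # most recent L reference samples
        e = d[k] - w @ xb                                # residual after anti-noise
        w += mu * e * xb                                 # LMS (stochastic gradient) update
        err[k - L + 1] = e
    print(np.mean(err[:500] ** 2), np.mean(err[-500:] ** 2))   # error variance drops

The MIMO extension described in the abstract replaces the scalar filter with a matrix of such filters, one per actuator-sensor pair.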
Adaptive feedforward control of non-minimum phase structural systems
NASA Astrophysics Data System (ADS)
Vipperman, J. S.; Burdisso, R. A.
1995-06-01
Adaptive feedforward control algorithms have been effectively applied to stationary disturbance rejection. For structural systems, the ideal feedforward compensator is a recursive filter which is a function of the transfer functions between the disturbance and control inputs and the error sensor output. Unfortunately, most control configurations result in a non-minimum phase control path; even a collocated control actuator and error sensor will not necessarily produce a minimum phase control path in the discrete domain. Therefore, the common practice is to choose a suitable approximation of the ideal compensator. In particular, all-zero finite impulse response (FIR) filters are desirable because of their inherent stability for adaptive control approaches. However, for highly resonant systems, large order filters are required for broadband applications. In this work, a control configuration is investigated for controlling non-minimum phase lightly damped structural systems. The control approach uses low order FIR filters as feedforward compensators in a configuration that has one more control actuator than error sensors. The performance of the controller was experimentally evaluated on a simply supported plate under white noise excitation for a two-input, one-output (2I1O) system. The results show excellent error signal reduction, attesting to the effectiveness of the method.
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Exact-Output Tracking Theory for Systems with Parameter Jumps
NASA Technical Reports Server (NTRS)
Devasia, Santosh; Paden, Brad; Rossi, Carlo
1997-01-01
We consider the exact output tracking problem for systems with parameter jumps. Necessary and sufficient conditions are derived for the elimination of switching-introduced output transients. Previous works have studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches). Such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is applicable to non-minimum-phase systems and obtains bounded but possibly non-causal solutions. If the reference trajectories are generated by an exosystem, then we develop an exact-tracking controller in a feedback form. As in standard regulator theory, we obtain a linear map from the states of the exosystem to the desired system state, which is defined via a matrix differential equation. The constant solution of this differential equation provides asymptotic tracking, and coincides with the feedback law used in standard regulator theory. The obtained results are applied to a simple flexible manipulator with jumps in the payload mass.
NASA Astrophysics Data System (ADS)
Inoshita, Kensuke; Hama, Yoshimitsu; Kishikawa, Hiroki; Goto, Nobuo
2016-12-01
In photonic label routers, various optical signal processing functions are required; these include optical label extraction, recognition of the label, optical switching and buffering controlled by signals based on the label information and network routing tables, and label rewriting. Among these functions, we focus on photonic label recognition. We have proposed two kinds of optical waveguide circuits to recognize 16 quadrature amplitude modulation codes, i.e., recognition from the minimum output port and from the maximum output port. The recognition function was theoretically analyzed and numerically simulated by the finite-difference beam-propagation method. We discuss noise tolerance in the circuit and show numerically simulated results to evaluate bit-error-rate (BER) characteristics against optical signal-to-noise ratio (OSNR). The OSNR required to obtain a BER less than 1.0×10⁻³ for the symbol rate of 2.5 GBaud was 14.5 and 27.0 dB for recognition from the minimum and maximum output, respectively.
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates, and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
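In symbols, one plausible formalization of this picture (our notation, not necessarily the authors') is

    Z(\beta) = \int e^{-\beta \, \chi^2(\theta)/2} \, d\theta, \qquad
    p(\theta) = Z^{-1} e^{-\beta \, \chi^2(\theta)/2}, \qquad
    F = -\beta^{-1} \ln Z, \qquad
    f_i = -\frac{\partial}{\partial \theta_i} \frac{\chi^2(\theta)}{2},

where the fit parameters θ play the role of generalized coordinates, the conjugate "forces" f_i vanish at the optimum, and minimizing the free energy F corresponds to maximizing entropy subject to the data constraints.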
Query construction, entropy, and generalization in neural-network models
NASA Astrophysics Data System (ADS)
Sollich, Peter
1994-05-01
We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and "noninvertible" versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
Neural-scaled entropy predicts the effects of nonlinear frequency compression on speech perception
Rallapalli, Varsha H.; Alexander, Joshua M.
2015-01-01
The Neural-Scaled Entropy (NSE) model quantifies information in the speech signal that has been altered beyond simple gain adjustments by sensorineural hearing loss (SNHL) and various signal processing. An extension of Cochlear-Scaled Entropy (CSE) [Stilp, Kiefte, Alexander, and Kluender (2010). J. Acoust. Soc. Am. 128(4), 2112–2126], NSE quantifies information as the change in 1-ms neural firing patterns across frequency. To evaluate the model, data from a study that examined nonlinear frequency compression (NFC) in listeners with SNHL were used because NFC can recode the same input information in multiple ways in the output, resulting in different outcomes for different speech classes. Overall, predictions were more accurate for NSE than CSE. The NSE model accurately described the observed degradation in recognition, and lack thereof, for consonants in a vowel-consonant-vowel context that had been processed in different ways by NFC. While NSE accurately predicted recognition of vowel stimuli processed with NFC, it underestimated them relative to a low-pass control condition without NFC. In addition, without modifications, it could not predict the observed improvement in recognition for word final /s/ and /z/. Findings suggest that model modifications that include information from slower modulations might improve predictions across a wider variety of conditions. PMID:26627780
Reversibility and stability of information processing systems
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1984-01-01
Classical and quantum models of dynamically reversible computers are considered. Instabilities in the evolution of the classical 'billiard ball computer' are analyzed and shown to result in a one-bit increase of entropy per step of computation. 'Quantum spin computers', on the other hand, are not only microscopically, but also operationally reversible. Readout of the output of quantum computation is shown not to interfere with this reversibility. Dissipation, while avoidable in principle, can be used in practice along with redundancy to prevent errors.
Quantum Random Number Generation Using a Quanta Image Sensor
Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.
2016-01-01
A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with remarkable data output rate. In this paper, the principle of photon statistics and theory of entropy are discussed. Sample data were collected with QIS jot device, and its randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
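The entropy budget of such a generator can be estimated from the photon-number statistics alone. A small sketch (the mean photon rate is hypothetical, and this is a source-model estimate, not the paper's full randomness assessment):

    import numpy as np
    from scipy.stats import poisson

    lam = 0.5                            # hypothetical mean photons per jot per readout
    k = np.arange(64)
    p = poisson.pmf(k, lam)              # Poisson photon-number statistics of the source
    p = p[p > 0]
    shannon = -np.sum(p * np.log2(p))    # Shannon entropy per jot readout (~1.34 bits)
    h_min = -np.log2(p.max())            # conservative min-entropy estimate (~0.72 bits)
    print(shannon, h_min)

The min-entropy figure is the more relevant one for extraction, since it bounds the uniformly random bits obtainable per readout.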
NASA Astrophysics Data System (ADS)
Aksenov, Andrey; Malysheva, Anna
2018-03-01
The paper gives an analytical solution to one of the pressing problems of modern hydromechanics and heat engineering: the distribution of the gas and liquid phases over the channel cross-section, the thickness of the annular layer, and their relation to the mass content of the gas phase in a gas-liquid flow. The analytical method is based on the fundamental laws of theoretical mechanics and thermophysics concerning the minimum of energy dissipation and the minimum rate of increase of the system entropy, which determine the stability of stationary states and processes. The obtained dependencies disclose the physical laws of the motion of two-phase media and can be used in hydraulic calculations during the design and operation of refrigeration and air-conditioning systems.
Amortized entanglement of a quantum channel and approximately teleportation-simulable channels
NASA Astrophysics Data System (ADS)
Kaur, Eneet; Wilde, Mark M.
2018-01-01
This paper defines the amortized entanglement of a quantum channel as the largest difference in entanglement between the output and the input of the channel, where entanglement is quantified by an arbitrary entanglement measure. We prove that the amortized entanglement of a channel obeys several desirable properties, and we also consider special cases such as the amortized relative entropy of entanglement and the amortized Rains relative entropy. These latter quantities are shown to be single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of a quantum channel, respectively. Of special interest is a uniform continuity bound for these latter two special cases of amortized entanglement, in which the deviation between the amortized entanglement of two channels is bounded from above by a simple function of the diamond norm of their difference and the output dimension of the channels. We then define approximately teleportation- and positive-partial-transpose-simulable (PPT-simulable) channels as those that are close in diamond norm to a channel which is either exactly teleportation- or PPT-simulable, respectively. These results then lead to single-letter upper bounds on the secret-key-agreement and PPT-assisted quantum capacities of channels that are approximately teleportation- or PPT-simulable, respectively. Finally, we generalize many of the concepts in the paper to the setting of general resource theories, defining the amortized resourcefulness of a channel and the notion of ν-freely-simulable channels, connecting these concepts in an operational way as well.
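In symbols, the opening definition reads (notation ours, following the abstract: the supremum is over input states ρ on systems A'AB, with the channel acting on A):

    E_{\mathrm{am}}(\mathcal{N}) \;=\; \sup_{\rho_{A'AB}}
    \Big[ E(A'; B'B)_{\sigma} \;-\; E(A'A; B)_{\rho} \Big], \qquad
    \sigma_{A'B'B} = \mathcal{N}_{A \to B'}(\rho_{A'AB}),

for an arbitrary entanglement measure E evaluated before and after a single use of the channel.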
NASA Technical Reports Server (NTRS)
Hendricks, R. C.
1994-01-01
A computer program, GASP, has been written to calculate the thermodynamic and transport properties of argon, carbon dioxide, carbon monoxide, fluorine, methane, neon, nitrogen, and oxygen. GASP accepts any two of pressure, temperature, or density as input. In addition, entropy and enthalpy are possible inputs. Outputs are temperature, density, pressure, entropy, enthalpy, specific heats, expansion coefficient, sonic velocity, viscosity, thermal conductivity, and surface tension. A special technique is provided to estimate the thermal conductivity near the thermodynamic critical point. GASP is a group of FORTRAN subroutines. The user typically would write a main program that invokes GASP to provide only the described outputs. Subroutines are structured so that the user may call only those subroutines needed for his particular calculations. Allowable pressures range from 0.1 atmosphere to 100 to 1,000 atmospheres, depending on the fluid. Similarly, allowable temperatures range from the triple point of each substance to 300 K to 2000 K, depending on the substance. The GASP package was developed to be used with heat transfer and fluid flow applications. It is particularly useful in applications involving cryogenic fluids. Some problems associated with the liquefaction, storage, and gasification of liquefied natural gas and liquefied petroleum gas can also be studied using GASP. This program is written in FORTRAN IV for batch execution and is available for implementation on IBM 7000 series computers. GASP was developed in 1971.
van Es, Andrew; Wiarda, Wim; Hordijk, Maarten; Alberink, Ivo; Vergeer, Peter
2017-05-01
For the comparative analysis of glass fragments, a method using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) is in use at the NFI, giving measurements of the concentration of 18 elements. An important question is how to evaluate the results as evidence that a glass sample originates from a known glass source or from an arbitrary different glass source. One approach is the use of matching criteria e.g. based on a t-test or overlap of confidence intervals. An important drawback of this method is the fact that the rarity of the glass composition is not taken into account. A similar match can have widely different evidential values. In addition the use of fixed matching criteria can give rise to a "fall off the cliff" effect. Small differences may result in a match or a non-match. In this work a likelihood ratio system is presented, largely based on the two-level model as proposed by Aitken and Lucy [1], and Aitken, Zadora and Lucy [2]. Results show that the output from the two-level model gives good discrimination between same and different source hypotheses, but a post-hoc calibration step is necessary to improve the accuracy of the likelihood ratios. Subsequently, the robustness and performance of the LR system are studied. Results indicate that the output of the LR system is robust to the sample properties of the dataset used for calibration. Furthermore, the empirical upper and lower bound method [3], designed to deal with extrapolation errors in the density models, results in minimum and maximum values of the LR outputted by the system of 3.1×10⁻³ and 3.4×10⁴. Calibration of the system, as measured by empirical cross-entropy, shows good behavior over the complete prior range. Rates of misleading evidence are small: for same-source comparisons, 0.3% of LRs support a different-source hypothesis; for different-source comparisons, 0.2% supports a same-source hypothesis. The authors use the LR system in reporting of glass cases to support expert opinion in the interpretation of glass evidence for origin of source questions. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
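A deliberately simplified, univariate stand-in for the two-level-model LR conveys the numerator/denominator structure (the real system is 18-dimensional with post-hoc calibration; all numbers below are hypothetical):

    import numpy as np
    from scipy.stats import norm

    def glass_lr(y, control, mu_pop, sd_within, sd_between):
        """Univariate sketch of a two-level-model likelihood ratio:
        numerator   - recovered value y modeled around the control mean;
        denominator - y modeled as coming from the background population."""
        xbar, n = np.mean(control), len(control)
        num = norm.pdf(y, loc=xbar,
                       scale=np.sqrt(sd_within ** 2 + sd_within ** 2 / n))
        den = norm.pdf(y, loc=mu_pop,
                       scale=np.sqrt(sd_between ** 2 + sd_within ** 2))
        return num / den

    control = [10.1, 10.3, 10.2]   # e.g. log-concentration of one element, 3 replicates
    print(glass_lr(10.25, control, mu_pop=9.0, sd_within=0.1, sd_between=0.8))

Because the denominator carries the between-source spread, a rare composition yields a much larger LR than a common one for the same closeness of match, which is exactly the rarity information that fixed matching criteria discard.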
Heat engine by exorcism of Maxwell Demon using spin angular momentum reservoir
NASA Astrophysics Data System (ADS)
Bedkihal, Salil; Wright, Jackson; Vaccaro, Joan; Gould, Tim
Landauer's erasure principle is a hallmark of thermodynamics and information theory. According to this principle, erasing one bit of information incurs a minimum energy cost. Recently, Vaccaro and Barnett (VB) have explored the role of multiple conserved quantities in memory erasure. They further illustrated that, for energy-degenerate spin reservoirs, the cost of erasure can be paid solely in spin angular momentum, with no energy cost. Motivated by VB erasure, in this work we propose a novel optical heat engine that operates under a single thermal reservoir and a spin angular momentum reservoir. The heat engine exploits ultrafast phonon-absorption processes to convert thermal phonon energy into coherent light. The entropy generated in this process then corresponds to a mixture of spin-up and spin-down populations of energy-degenerate electronic ground states, which acts as the demon's memory. This information is then erased using a polarised spin reservoir that acts as an entropy sink. The proposed heat engine goes beyond the traditional Carnot engine.
System Mass Variation and Entropy Generation in 100-kWe Closed-Brayton-Cycle Space Power Systems
NASA Technical Reports Server (NTRS)
Barrett, Michael J.; Reid, Bryan M.
2004-01-01
State-of-the-art closed-Brayton-cycle (CBC) space power systems were modeled to study performance trends in a trade space characteristic of interplanetary orbiters. For working-fluid molar masses of 48.6, 39.9, and 11.9 kg/kmol, peak system pressures of 1.38 and 3.0 MPa and compressor pressure ratios ranging from 1.6 to 2.4, total system masses were estimated. System mass increased as peak operating pressure increased for all compressor pressure ratios and molar mass values examined. Minimum mass point comparison between 72 percent He at 1.38 MPa peak and 94 percent He at 3.0 MPa peak showed an increase in system mass of 14 percent. Converter flow loop entropy generation rates were calculated for 1.38 and 3.0 MPa peak pressure cases. Physical system behavior was approximated using a pedigreed NASA Glenn modeling code, Closed Cycle Engine Program (CCEP), which included realistic performance prediction for heat exchangers, radiators and turbomachinery.
System Mass Variation and Entropy Generation in 100-kWe Closed-Brayton-Cycle Space Power Systems
NASA Technical Reports Server (NTRS)
Barrett, Michael J.; Reid, Bryan M.
2004-01-01
State-of-the-art closed-Brayton-cycle (CBC) space power systems were modeled to study performance trends in a trade space characteristic of interplanetary orbiters. For working-fluid molar masses of 48.6, 39.9, and 11.9 kg/kmol, peak system pressures of 1.38 and 3.0 MPa and compressor pressure ratios ranging from 1.6 to 2.4, total system masses were estimated. System mass increased as peak operating pressure increased for all compressor pressure ratios and molar mass values examined. Minimum mass point comparison between 72 percent He at 1.38 MPa peak and 94 percent He at 3.0 MPa peak showed an increase in system mass of 14 percent. Converter flow loop entropy generation rates were calculated for 1.38 and 3.0 MPa peak pressure cases. Physical system behavior was approximated using a pedigreed NASA Glenn modeling code, Closed Cycle Engine Program (CCEP), which included realistic performance prediction for heat exchangers, radiators and turbomachinery.
Study of Thermodynamics of Liquid Noble-Metals Alloys Through a Pseudopotential Theory
NASA Astrophysics Data System (ADS)
Vora, Aditya M.
2010-09-01
The Gibbs-Bogoliubov (GB) inequality is applied to investigate the thermodynamic properties of some equiatomic noble-metal alloys in the liquid phase, such as Au-Cu, Ag-Cu, and Ag-Au, using a well-recognized pseudopotential formalism. For the description of the structure, the well-known Percus-Yevick (PY) hard-sphere model is used as a reference system. By applying a variational method, the best hard-core diameters have been found which correspond to minimum free energy. With this procedure the thermodynamic properties such as entropy and heat of mixing have been computed. The influence of the local field correction functions, viz., Hartree (H), Taylor (T), Ichimaru-Utsumi (IU), Farid et al. (F), and Sarkar et al. (S), is also investigated. The computed results for the excess entropy compare favourably for the liquid alloys, while the agreement with experiment is poor for the heats of mixing. This may be due to the sensitivity of the heats of mixing to the potential parameters and the dielectric function.
NASA Astrophysics Data System (ADS)
Böer, Karl W.
2016-10-01
The solar cell does not use a pn-junction to separate electrons from holes, but uses an undoped CdS layer that is p-type inverted when attached to a p-type collector and collects the holes while rejecting the backflow of electrons and thereby prevents junction leakage. The operation of the solar cell is determined by the minimum entropy principle of the cell and its external circuit that determines the electrochemical potential, i.e., the Fermi-level of the base electrode to the operating (maximum power point) voltage. It leaves the Fermi level of the metal electrode of the CdS unchanged, since CdS does not participate in the photo-emf. All photoelectric actions are generated by the holes excited from the light that causes the shift of the quasi-Fermi levels in the generator and supports the diffusion current in operating conditions. It is responsible for the measured solar maximum power current. The open circuit voltage (Voc) can approach its theoretical limit of the band gap of the collector at 0 K and the cell increases the efficiency at AM1 to 21% for a thin-film CdS/CdTe that is given as an example here. However, a series resistance of the CdS forces a limitation of its thickness to preferably below 200 Å to avoid unnecessary reduction in efficiency or Voc. The operation of the CdS solar cell does not involve heated carriers. It is initiated by the field at the CdS/CdTe interface that exceeds 20 kV/cm that is sufficient to cause extraction of holes by the CdS that is inverted to become p-type. Here a strong doubly charged intrinsic donor can cause a negative differential conductivity that switches-on a high-field domain that is stabilized by the minimum entropy principle and permits an efficient transport of the holes from the CdTe to the base electrode. Experimental results of the band model of CdS/CdTe solar cells are given and show that the conduction bands are connected in the dark, where the electron current must be continuous, and the valence bands are connected with light where the hole currents are dominant and must be continuous through the junction. The major shifts of the bands in operating conditions are self-adjusting by a change in the junction dipole momentum.
Single molecule thermodynamics in biological motors.
Taniguchi, Yuichi; Karagiannis, Peter; Nishiyama, Masayoshi; Ishii, Yoshiharu; Yanagida, Toshio
2007-04-01
Biological molecular machines use thermal activation energy to carry out various functions. The process of thermal activation has the stochastic nature of output events that can be described according to the laws of thermodynamics. Recently developed single molecule detection techniques have allowed each distinct enzymatic event of single biological machines to be characterized providing clues to the underlying thermodynamics. In this study, the thermodynamic properties in the stepping movement of a biological molecular motor have been examined. A single molecule detection technique was used to measure the stepping movements at various loads and temperatures and a range of thermodynamic parameters associated with the production of each forward and backward step including free energy, enthalpy, entropy and characteristic distance were obtained. The results show that an asymmetry in entropy is a primary factor that controls the direction in which the motor will step. The investigation on single molecule thermodynamics has the potential to reveal dynamic properties underlying the mechanisms of how biological molecular machines work.
WASP: A flexible FORTRAN 4 computer code for calculating water and steam properties
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Peller, I. C.; Baron, A. K.
1973-01-01
A FORTRAN 4 subprogram, WASP, was developed to calculate the thermodynamic and transport properties of water and steam. The temperature range is from the triple point to 1750 K, and the pressure range is from 0.1 to 100 MN/m2 (1 to 1000 bars) for the thermodynamic properties and to 50 MN/m2 (500 bars) for thermal conductivity and to 80 MN/m2 (800 bars) for viscosity. WASP accepts any two of pressure, temperature, and density as input conditions. In addition, pressure and either entropy or enthalpy are also allowable input variables. This flexibility is especially useful in cycle analysis. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, surface tension, and the Laplace constant. The subroutine structure is modular so that the user can choose only those subroutines necessary to his calculations. Metastable calculations can also be made by using WASP.
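The input flexibility described above can be pictured with a small, hypothetical Python dispatcher; the property relations below are ideal-gas stand-ins, not the actual WASP correlations or subroutine names.

```python
# Hypothetical illustration of WASP-style input flexibility.
R_WATER = 461.5  # specific gas constant of water vapour, J/(kg.K)

def state(p=None, t=None, d=None):
    """Accept any two of pressure p [Pa], temperature t [K] and density
    d [kg/m^3], and return the completed state point."""
    given = sum(v is not None for v in (p, t, d))
    if given != 2:
        raise ValueError("exactly two of p, t, d must be supplied")
    if d is None:
        d = p / (R_WATER * t)      # density from (p, t)
    elif t is None:
        t = p / (R_WATER * d)      # temperature from (p, d)
    else:
        p = R_WATER * d * t        # pressure from (t, d)
    return {"p": p, "t": t, "d": d}

print(state(p=1.0e5, t=373.15))    # density filled in from (p, t)
```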
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
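For readers unfamiliar with the coding stage, the sketch below shows the textbook Golomb-Rice mapping that the chip implements in hardware; the zig-zag mapping of signed residuals is a common companion step and an assumption here, not a detail taken from the paper.

```python
# Textbook Golomb-Rice coding, a software analogue of the on-chip coder.
def zigzag(residual: int) -> int:
    """Map signed residuals to non-negative integers:
    0, -1, 1, -2, ... -> 0, 1, 2, 3, ..."""
    return (residual << 1) ^ (residual >> 31)   # assumes |residual| < 2**31

def rice_encode(value: int, k: int) -> str:
    """Unary quotient, a terminating '0', then the k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

residuals = [0, -1, 3, -2, 7]
print("".join(rice_encode(zigzag(r), k=2) for r in residuals))
```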
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
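The per-block adaptation can be sketched as picking the cheapest of a few candidate codes for each block of 21 samples. Here the candidates are Rice codes with different parameters; this is purely illustrative, since the abstract does not specify the three codes used by the Basic Compressor.

```python
# Per-block code selection in the spirit of the Basic Compressor.
def rice_cost(block, k):
    """Total bits: unary quotient + stop bit + k remainder bits per sample."""
    return sum((v >> k) + 1 + k for v in block)

def choose_code(block, options=(0, 1, 2)):
    return min(options, key=lambda k: rice_cost(block, k))

block = [0, 1, 0, 3, 2, 1, 0, 0, 5, 1, 0,
         2, 1, 0, 0, 1, 3, 0, 1, 0, 2]      # 21 mapped residuals
k = choose_code(block)
print(f"selected code k={k}, cost={rice_cost(block, k)} bits")
```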
NASA Astrophysics Data System (ADS)
Ebrahimi Orimi, H.; Esmaeili, M.; Refahi Oskouei, A.; Mirhadizadehd, S. A.; Tse, P. W.
2017-10-01
Condition monitoring of rotary devices such as helical gears is an issue of great significance in industrial projects. This paper introduces a feature extraction method for gear fault diagnosis using the wavelet packet transform, chosen for its higher frequency resolution. In this investigation, the Daubechies 10 (Db-10) mother wavelet was applied to calculate the entropy of the coefficients in each frequency band at the 5th decomposition level (32 frequency bands), and these entropies were used as features. The peak values of the signal entropies were selected as the applicable features in order to improve frequency-band differentiation and reduce the dimension of the feature vectors. Feature extraction is followed by a fusion network in which four differently structured multi-layer perceptron networks are trained to classify the recorded signals (healthy/faulty). The outputs of the fusion network are more robust than those of the individual perceptron networks. The results provided by the fusion network indicate classification accuracies of 98.88% and 97.95% for the healthy and faulty classes, respectively.
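A skeleton of this feature-extraction step, assuming the PyWavelets package; the preprocessing of the gear signals and the network stage are not given in the abstract and are omitted here.

```python
# Wavelet-packet band entropies as fault features (sketch).
import numpy as np
import pywt

def band_entropies(signal, wavelet="db10", level=5):
    """Shannon entropy of the normalized coefficient energies in each of
    the 2**level wavelet-packet frequency bands."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):   # 32 bands at level 5
        c = np.asarray(node.data)
        p = c ** 2 / np.sum(c ** 2)
        feats.append(-np.sum(p * np.log(p + 1e-12)))
    return np.array(feats)

sig = np.random.randn(4096)            # stand-in for a recorded signal
features = band_entropies(sig)
print(features.shape, features.max())  # peak entropy used as a feature
```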
Moras, Gerard; Fernández-Valdés, Bruno; Vázquez-Guerrero, Jairo; Tous-Fajardo, Julio; Exel, Juliana; Sampaio, Jaime
2018-05-24
This study described the variability in acceleration during a resistance training task performed on horizontal inertial flywheels without (NOBALL) or with (BALL) the constraint of catching and throwing a rugby ball. Twelve elite rugby players (mean±SD: age 25.6±3.0 years, height 1.82±0.07 m, weight 94.0±9.9 kg) performed the resistance training task in both conditions (NOBALL and BALL). Players had five minutes of a standardized warm-up, followed by two series of six repetitions of both conditions: during the first three repetitions the intensity was progressively increased, while the last three were performed at maximal voluntary effort. Thereafter, the participants performed two series of eight repetitions of each condition on two days and in a random order, with a minimum of 10 min between series. The structure of variability was analysed using non-linear measures of entropy. Mean changes (%; ±90% CL) of 4.64; ±3.1 g for mean acceleration and 39.48; ±36.63 a.u. for sample entropy indicated likely and very likely increases in the BALL condition. Multiscale entropy also showed higher unpredictability of acceleration under the BALL condition, especially at higher time scales. The application of match-specific constraints in resistance training for rugby players elicits different amounts of variability of body acceleration across multiple physiological time scales. Understanding the non-linear processes inherent to the manipulation of resistance training variables with constraints, and the resulting motor adaptations, may help coaches and trainers to enhance the effectiveness of physical training and, ultimately, better understand and maximize sports performance. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Energy and the capital of nations
NASA Astrophysics Data System (ADS)
Karakatsanis, Georgios
2016-04-01
The economically useful lifetime of fossil fuels on Earth is estimated at just ~160 years, while humanity itself counts ~150×10³ years. Within only ~0.15% of this time, humanity has used more energy, and accumulated more wealth, than within all the rest of its existence. According to this perspective, the availability of heat gradients is what fundamentally drives the evolution of economic systems, via the extensive enhancement -or even substitution- of human labor (Ayres and Warr 2009). In the modern industrial civilization it is estimated (Kümmel 2011) that the average human ability to generate wealth (productivity) has increased by ~40%-50% -including the effects of the growth of human population- further augmented by significant economies of scale achieved in the industrial era. This process led to significant accumulation of surpluses that generally take the form of capital. Although capital is frequently confused with the stock of mechanical equipment, capital can be generalized as any form of accumulated (not currently consumed) production factor that can deliver a benefit in the future. In that sense, capital is found in various forms, such as machinery, technology or natural resources and environmental capacities. While it is expected that anthropogenic forms of capital accumulate along with the increase of energy use, natural capital should be declining, due to the validity of the Second Law of Thermodynamics (2nd Law), entropy production and -in turn- the irreversible (monotonic) consumption of exergy (Wall 2005). Regressions of the LINear EXponential (LINEX) function (an economic growth function depending linearly on energy and exponentially on output elasticity quotients) (Lindenberger and Kümmel 2011) for a number of industrialized economies -like the USA, Germany and Japan- found that output elasticities were highest for energy (except for the US, where energy was second highest after capital), meaning that in industrial economies energy comprises the most significant production factor. This work enriches such studies by integrating into the analysis all forms of capital, and for a wider range of countries; estimating the trade-off -as output elasticity ratios- between the accumulation of various anthropogenic capital forms and the deterioration of natural capital -considered both as resource stocks and as carrying capacities of the environment. Keywords: energy, fossil fuels, industrial civilization, capital, production factor, natural capital, 2nd Law, entropy, irreversibility, exergy, LINEX function, output elasticity References 1. Ayres, Robert U. and Benjamin Warr (2009), The Economic Growth Engine: How Energy and Work Drive Material Prosperity, Edward Elgar and IIASA 2. Kümmel, Reiner (2011), The Second Law of Economics: Energy, Entropy and the Origins of Wealth, Springer 3. Lindenberger, Dietmar and Reiner Kümmel (2011), Energy and the state of nations, Energy 36, 6010-6018 4. Wall, Göran (2005), Exergy Capital and Sustainable Development, Proceedings of the Second International Exergy, Energy and Environment Symposium, Kos, Greece, Paper No. XII-I49
A Mechanism For Solar Forcing of Climate: Did the Maunder Minimum Cause the Little Ice Age?
NASA Technical Reports Server (NTRS)
Yung, Yuk L.
2004-01-01
The mechanism we wish to demonstrate exploits chemical, radiative, and dynamical sensitivities in the stratosphere to affect the climate of the troposphere. The sun, while its variability in total radiative output over the course of the solar cycle is on the order of 0.1%, exhibits variability in the UV output on the order of 5%. We expect to show that a substantially decreased solar UV output lessened the heating of the Earth's stratosphere during the Maunder Minimum, through decreased radiative absorption by ozone and oxygen. These changes in stratospheric heating would lead to major changes in the stratospheric zonal wind pattern which would in turn affect the propagation characteristics of planetary-scale waves launched in the winter hemisphere. Until recently, there was no quantitative data to relate the changes in the stratosphere to those at the surface. There is now empirical evidence from the NCEP Reanalysis data that a definitive effect of the solar cycle on climate in the troposphere exists. Our recent work is summarized as follows (see complete list of publications in later part of this report).
Parametric study of minimum reactor mass in energy-storage dc-to-dc converters
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.
1981-01-01
Closed-form analytical solutions for the design equations of a minimum-mass reactor for a two-winding voltage-or-current step-up converter are derived. A quantitative relationship between the three parameters - minimum total reactor mass, maximum output power, and switching frequency - is extracted from these analytical solutions. The validity of the closed-form solution is verified by a numerical minimization procedure. A computer-aided design procedure using commercially available toroidal cores and magnet wires is also used to examine how the results from practical designs follow the predictions of the analytical solutions.
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust chi-square statistic and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
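A minimal sketch of the maximum-entropy step for a single ordinal margin, assuming a 5-category variable and illustrative target moments (kurtosis here is the ordinary, non-excess kind); the paper's actual procedures also cover the multivariate and minimum cross-entropy cases.

```python
# Maximum-entropy pmf on categories 1..5 with fixed skewness and kurtosis.
import numpy as np
from scipy.optimize import minimize

x = np.arange(1, 6)                    # ordinal categories
target_skew, target_kurt = 0.0, 2.5    # illustrative targets

def skew_kurt(p):
    m = p @ x
    v = p @ (x - m) ** 2
    return (p @ (x - m) ** 3) / v ** 1.5, (p @ (x - m) ** 4) / v ** 2

def neg_entropy(p):
    return float(np.sum(p * np.log(p + 1e-12)))

cons = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: skew_kurt(p)[0] - target_skew},
    {"type": "eq", "fun": lambda p: skew_kurt(p)[1] - target_kurt},
]
res = minimize(neg_entropy, np.full(5, 0.2), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 5, constraints=cons)
print(res.success, np.round(res.x, 4), skew_kurt(res.x))
```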
Static inverter with synchronous output waveform synthesized by time-optimal-response feedback
NASA Technical Reports Server (NTRS)
Kernick, A.; Stechschulte, D. L.; Shireman, D. W.
1976-01-01
Time-optimal-response 'bang-bang' or 'bang-hang' technique, using four feedback control loops, synthesizes static-inverter sinusoidal output waveform by self-oscillatory but yet synchronous pulse-frequency-modulation (SPFM). A single modular power stage per phase of ac output entails the minimum of circuit complexity while providing by feedback synthesis individual phase voltage regulation, phase position control and inherent compensation simultaneously for line and load disturbances. Clipped sinewave performance is described under off-limit load or input voltage conditions. Also, approaches to high power levels, 3-phase arraying and parallel modular connection are given.
Area/latency optimized early output asynchronous full adders and relative-timed ripple carry adders.
Balasubramanian, P; Yamashita, S
2016-01-01
This article presents two area/latency optimized gate level asynchronous full adder designs which correspond to early output logic. The proposed full adders are constructed using the delay-insensitive dual-rail code and adhere to the four-phase return-to-zero handshaking. For an asynchronous ripple carry adder (RCA) constructed using the proposed early output full adders, the relative-timing assumption becomes necessary and the inherent advantages of the relative-timed RCA are: (1) computation with valid inputs, i.e., forward latency is data-dependent, and (2) computation with spacer inputs involves a bare minimum constant reverse latency of just one full adder delay, thus resulting in the optimal cycle time. With respect to different 32-bit RCA implementations, and in comparison with the optimized strong-indication, weak-indication, and early output full adder designs, one of the proposed early output full adders achieves respective reductions in latency by 67.8, 12.3 and 6.1 %, while the other proposed early output full adder achieves corresponding reductions in area by 32.6, 24.6 and 6.9 %, with practically no power penalty. Further, the proposed early output full adders based asynchronous RCAs enable minimum reductions in cycle time by 83.4, 15, and 8.8 % when considering carry-propagation over the entire RCA width of 32-bits, and maximum reductions in cycle time by 97.5, 27.4, and 22.4 % for the consideration of a typical carry chain length of 4 full adder stages, when compared to the least of the cycle time estimates of various strong-indication, weak-indication, and early output asynchronous RCAs of similar size. All the asynchronous full adders and RCAs were realized using standard cells in a semi-custom design fashion based on a 32/28 nm CMOS process technology.
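The early-output idea can be illustrated behaviorally: with dual-rail encoding, the carry is already determined once the two addend bits agree, before the incoming carry arrives. The Python sketch below models only this behavior, not the gate-level designs evaluated in the article.

```python
# Early-output carry under dual-rail, four-phase signaling (behavioral).
# A bit is a rail pair: (0, 0) = spacer, (1, 0) = logic 1, (0, 1) = logic 0.
SPACER, ONE, ZERO = (0, 0), (1, 0), (0, 1)

def carry_out(a, b, cin):
    """Majority function with early output: when a == b and both are
    valid, the carry is known without waiting for the incoming carry."""
    if a == b and a != SPACER:
        return a                     # carry = a = b; cin not needed yet
    if SPACER in (a, b, cin):
        return SPACER                # hold the return-to-zero spacer
    return cin                       # a != b, so the carry follows cin

print(carry_out(ONE, ONE, SPACER))   # (1, 0): valid carry before cin
print(carry_out(ONE, ZERO, SPACER))  # (0, 0): must wait for cin
print(carry_out(ONE, ZERO, ZERO))    # (0, 1): carry 0
```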
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
NASA Astrophysics Data System (ADS)
Zausner, Tobi
Chaos theory may provide models for creativity and for the personality of the artist. A collection of speculative hypotheses examines the connection between art and such fundamentals of non-linear dynamics as iteration, dissipative processes, open systems, entropy, sensitivity to stimuli, autocatalysis, subsystems, bifurcations, randomness, unpredictability, irreversibility, increasing levels of organization, far-from-equilibrium conditions, strange attractors, period doubling, intermittency and self-similar fractal organization. Non-linear dynamics may also explain why certain individuals suffer mental disorders while others remain intact during a lifetime of sustained creative output.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
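The cost of treating codeword bits as independent can be checked numerically: the rate achievable by the bit-wise model is the sum of the binary entropies of the bit planes, which is never below the true codeword entropy. The following sketch uses an illustrative Laplacian source and 8-bit quantizer, not the paper's exact simulation setup.

```python
# Bit-wise independence model vs. true codeword entropy (back-of-envelope).
import numpy as np

rng = np.random.default_rng(0)
samples = rng.laplace(scale=1.0, size=200_000)
codes = np.clip(np.round(samples / 0.05).astype(int) + 128, 0, 255)  # 8-bit

def h2(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

bits = (codes[:, None] >> np.arange(8)) & 1          # bit planes, LSB first
rate_bitwise = sum(h2(bits[:, i].mean()) for i in range(8))

_, counts = np.unique(codes, return_counts=True)
pk = counts / counts.sum()
rate_true = -(pk * np.log2(pk)).sum()

print(f"bit-wise model: {rate_bitwise:.3f} b/sample, "
      f"true entropy: {rate_true:.3f} b/sample")
```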
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s, and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate the definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal-entropy path distance than the distance ‖b−a‖₂. We compute minimal-entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal-entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
Canadian crop calendars in support of the early warning project
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Hodges, T. (Principal Investigator)
1980-01-01
The Canadian crop calendars for LACIE are presented. Long term monthly averages of daily maximum and daily minimum temperatures for subregions of provinces were used to simulate normal daily maximum and minimum temperatures. The Robertson (1968) spring wheat and Williams (1974) spring barley phenology models were run using the simulated daily temperatures and daylengths for appropriate latitudes. Simulated daily temperatures and phenology model outputs for spring wheat and spring barley are given.
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed by adapting to continuous variables a formalism based on single-subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Dirac dispersion generates unusually large Nernst effect in Weyl semimetals
NASA Astrophysics Data System (ADS)
Watzman, Sarah J.; McCormick, Timothy M.; Shekhar, Chandra; Wu, Shu-Chun; Sun, Yan; Prakash, Arati; Felser, Claudia; Trivedi, Nandini; Heremans, Joseph P.
2018-04-01
Weyl semimetals contain linearly dispersing electronic states, offering interesting transport features yet to be thoroughly explored thermally. Here we show how the Nernst effect, combining entropy with charge transport, gives a unique signature for the presence of Dirac bands and offers a diagnostic to determine whether trivial pockets play a role in this transport. The Nernst thermopower of NbP exceeds its conventional thermopower 100-fold, and the temperature dependence of the Nernst effect has a pronounced maximum. The charge-neutrality condition dictates that the Fermi level shifts with increasing temperature toward the energy that has the minimum density of states (DOS). In NbP, the agreement of the Nernst and Seebeck data with a model that assumes this minimum DOS resides at the Dirac points is taken as strong experimental evidence that the trivial (non-Dirac) bands play no role in high-temperature transport.
Fan, Shou-Zen; Abbod, Maysam F.
2018-01-01
Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised in order to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial value of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated a significant difference from the nearest levels. Furthermore, the Random Forest output was largely linear in relation to BIS, thus giving better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries. PMID:29844970
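For reference, a compact implementation of the standard SampEn definition (embedding dimension m, tolerance r as a fraction of the signal's standard deviation); the authors' exact parameter choices and preprocessing are not reproduced here.

```python
# Sample entropy (SampEn) with Chebyshev distance, self-matches excluded.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        templates = np.lib.stride_tricks.sliding_window_view(x, dim)
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)   # length-m and m+1 matches
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

eeg_epoch = np.random.randn(1000)     # stand-in for a filtered EEG epoch
print(sample_entropy(eeg_epoch))
```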
Power generator driven by Maxwell's demon
NASA Astrophysics Data System (ADS)
Chida, Kensaku; Desai, Samarth; Nishiguchi, Katsuhiko; Fujiwara, Akira
2017-05-01
Maxwell's demon is an imaginary entity that reduces the entropy of a system and generates free energy in the system. About 150 years after its proposal, theoretical studies have explained the physical validity of Maxwell's demon in the context of information thermodynamics, and there have been successful experimental demonstrations of energy generation by the demon. The demon's next task is to convert the generated free energy into work that acts on the surroundings. Here, we demonstrate that Maxwell's demon can generate and output electric current and power using individual, randomly moving electrons in small transistors. Real-time monitoring of electron motion shows that two transistors function as gates that control an electron's trajectory so that the electron moves directionally. A numerical calculation reveals that the power generation increases as the room in which the electrons are partitioned is miniaturized. These results suggest that evolving transistor-miniaturization technology can increase the demon's power output.
Information-theoretic measures of hydrogen-like ions in weakly coupled Debye plasmas
NASA Astrophysics Data System (ADS)
Zan, Li Rong; Jiao, Li Guang; Ma, Jia; Ho, Yew Kam
2017-12-01
Recent development of information theory provides researchers an alternative and useful tool to quantitatively investigate the variation of the electronic structure when atoms interact with the external environment. In this work, we make systematic studies on the information-theoretic measures for hydrogen-like ions immersed in weakly coupled plasmas modeled by Debye-Hückel potential. Shannon entropy, Fisher information, and Fisher-Shannon complexity in both position and momentum spaces are quantified in high accuracy for the hydrogen atom in a large number of stationary states. The plasma screening effect on embedded atoms can significantly affect the electronic density distributions, in both conjugate spaces, and it is quantified by the variation of information quantities. It is shown that the composite quantities (the Shannon entropy sum and the Fisher information product in combined spaces and Fisher-Shannon complexity in individual space) give a more comprehensive description of the atomic structure information than single ones. The nodes of wave functions play a significant role in the changes of composite information quantities caused by plasmas. With the continuously increasing screening strength, all composite quantities in circular states increase monotonously, while in higher-lying excited states where nodal structures exist, they first decrease to a minimum and then increase rapidly before the bound state approaches the continuum limit. The minimum represents the most reduction of uncertainty properties of the atom in plasmas. The lower bounds for the uncertainty product of the system based on composite information quantities are discussed. Our research presents a comprehensive survey in the investigation of information-theoretic measures for simple atoms embedded in Debye model plasmas.
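As a small numerical anchor for these quantities, the position-space Shannon entropy of the unscreened hydrogen 1s state (the infinite-Debye-length limit) has the closed form S_r = 3 + ln π ≈ 4.1447 in atomic units, which the following sketch reproduces by radial quadrature.

```python
# Position-space Shannon entropy of the hydrogen 1s state (atomic units).
import numpy as np
from scipy.integrate import quad

rho = lambda r: np.exp(-2.0 * r) / np.pi          # 1s density
integrand = lambda r: -4.0 * np.pi * r**2 * rho(r) * np.log(rho(r))
S_r, _ = quad(integrand, 0.0, 50.0)
print(S_r, 3.0 + np.log(np.pi))                   # both ~ 4.1447
```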
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey
2014-09-01
Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road network information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653), with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
Urban-rural migration: uncertainty and the effect of a change in the minimum wage.
Ingene, C A; Yu, E S
1989-01-01
"This paper extends the neoclassical, Harris-Todaro model of urban-rural migration to the case of production uncertainty in the agricultural sector. A unique feature of the Harris-Todaro model is an exogenously determined minimum wage in the urban sector that exceeds the rural wage. Migration occurs until the rural wage equals the expected urban wage ('expected' due to employment uncertainty). The effects of a change in the minimum wage upon regional outputs, resource allocation, factor rewards, expected profits, and expected national income are explored, and the influence of production uncertainty upon the obtained results are delineated." The geographical focus is on developing countries. excerpt
Parabolic replicator dynamics and the principle of minimum Tsallis information gain
2013-01-01
Background Non-linear, parabolic (sub-exponential) and hyperbolic (super-exponential) models of prebiological evolution of molecular replicators have been proposed and extensively studied. The parabolic models appear to be the most realistic approximations of real-life replicator systems, due primarily to product inhibition. Unlike the more traditional exponential models, the distribution of individual frequencies in an evolving parabolic population is not described by the Maximum Entropy (MaxEnt) Principle in its traditional form, whereby the distribution with the maximum Shannon entropy is chosen among all the distributions that are possible under the given constraints. We sought to identify a more general form of the MaxEnt principle that would be applicable to parabolic growth. Results We consider a model of a population that reproduces according to the parabolic growth law and show that the frequencies of individuals in the population minimize the Tsallis relative entropy (non-additive information gain) at each time moment. Next, we consider a model of a parabolically growing population that maintains a constant total size and provide an "implicit" solution for this system. We show that in this case, the frequencies of the individuals in the population also minimize the Tsallis information gain at each moment of the "internal time" of the population. Conclusions The results of this analysis show that the general MaxEnt principle is the underlying law for the evolution of a broad class of replicator systems, including not only exponential but also parabolic and hyperbolic systems. The choice of the appropriate entropy (information) function depends on the growth dynamics of a particular class of systems. The Tsallis entropy is non-additive for independent subsystems, i.e. the information on the subsystems is insufficient to describe the system as a whole. In the context of prebiotic evolution, this "non-reductionist" nature of parabolic replicator systems might reflect the importance of group selection and competition between ensembles of cooperating replicators. Reviewers This article was reviewed by Viswanadham Sridhara (nominated by Claus Wilke), Purushottam Dixit (nominated by Sergei Maslov), and Nick Grishin. For the complete reviews, see the Reviewers' Reports section. PMID:23937956
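For concreteness, the two standard definitions the discussion relies on, written in our own notation (u is the reference distribution and q the Tsallis index; the paper relates the appropriate index to the exponent of the growth law, which we do not restate here):

```latex
% Parabolic replicator growth and the Tsallis relative entropy
% (non-additive information gain); notation is ours.
\begin{align}
  \dot{x}_i &= a_i\, x_i^{\,k}, \qquad 0 < k < 1
    && \text{(parabolic growth law)} \\
  D_q(p \,\|\, u) &= \frac{1}{q-1}\Bigl(\sum_i p_i^{\,q}\, u_i^{\,1-q} - 1\Bigr),
    \qquad q \neq 1
    && \text{(Tsallis information gain)}
\end{align}
```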
Inertial effects on mechanically braked Wingate power calculations.
Reiser, R F; Broker, J P; Peterson, M L
2000-09-01
The standard procedure for determining subject power output from a 30-s Wingate test on a mechanically braked (friction-loaded) ergometer includes only the braking resistance and flywheel velocity in the computations. However, the inertial effects associated with accelerating and decelerating the crank and flywheel also require energy and, therefore, represent a component of the subject's power output. The present study was designed to determine the effects of drive-system inertia on power output calculations. Twenty-eight male recreational cyclists completed Wingate tests on a Monark 324E mechanically braked ergometer (resistance: 8.5% body mass (BM), starting cadence: 60 rpm). Power outputs were then compared using both standard (without inertial contribution) and corrected methods (with inertial contribution) of calculating power output. Relative 5-s peak power and 30-s average power for the corrected method (14.8 +/- 1.2 W x kg(-1) BM; 9.9 +/- 0.7 W x kg(-1) BM) were 20.3% and 3.1% greater than that of the standard method (12.3 +/- 0.7 W x kg(-1) BM; 9.6 +/- 0.7 W x kg(-1) BM), respectively. Relative 5-s minimum power for the corrected method (6.8 +/- 0.7 W x kg(-1) BM) was 6.8% less than that of the standard method (7.3 +/- 0.8 W x kg(-1) BM). The combined differences in the peak power and minimum power produced a fatigue index for the corrected method (54 +/- 5%) that was 31.7% greater than that of the standard method (41 +/- 6%). All parameter differences were significant (P < 0.01). The inertial contribution to power output was dominated by the flywheel; however, the contribution from the crank was evident. These results indicate that the inertial components of the ergometer drive system influence the power output characteristics, requiring care when computing, interpreting, and comparing Wingate results, particularly among different ergometer designs and test protocols.
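The correction described above amounts to adding the flywheel's inertial power, I·ω·dω/dt, to the friction power; the sketch below uses assumed inertia, radius, and sampling values, not Monark 324E specifications.

```python
# Inertia-corrected Wingate power from a flywheel speed trace (sketch).
import numpy as np

I_flywheel = 0.91             # kg*m^2 (assumed)
r_flywheel = 0.26             # m, friction-band radius (assumed)
F_brake = 0.085 * 80 * 9.81   # N: 8.5% of an 80-kg subject's body mass
dt = 0.01                     # s, sampling interval (assumed)

omega = np.linspace(8.0, 16.0, 500)   # stand-in flywheel speed trace, rad/s
alpha = np.gradient(omega, dt)        # angular acceleration, rad/s^2

p_standard = F_brake * r_flywheel * omega              # friction only
p_corrected = p_standard + I_flywheel * omega * alpha  # + inertial term

print(p_standard.mean(), p_corrected.mean())
```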
Dish Stirling solar receiver program
NASA Technical Reports Server (NTRS)
Haglund, R. A.
1980-01-01
A technology demonstration of a Dish Stirling solar thermal electric system can be accomplished earlier and at a much lower cost than previous planning had indicated by employing technical solutions that allow already existing hardware, with minimum modifications, to be integrated into a total system with a minimum of development. The DSSR operates with a modified United Stirling P-40 engine/alternator and the JPL Test Bed Concentrator as a completely integrated solar thermal electric system having a design output of 25 kWe. The system is augmented by fossil fuel combustion, which ensures a continuous electrical output under all environmental conditions. Technical and economic studies by government and industry in the United States and abroad identify the Dish Stirling solar electric system as the most appropriate, efficient and economical method for conversion of solar energy to electricity in applications where the electrical demand is 10 MWe or less.
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083
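A minimal sketch of one plausible control step in this spirit: a PI update of the exposure time driven by the saturated-pixel fraction, with the image entropy available as a second monitored statistic. Gains, setpoints, and bounds are assumptions, not the authors' values.

```python
# PI exposure control from image statistics (illustrative sketch).
import numpy as np

def image_stats(img, sat_level=250):
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    nz = hist[hist > 0]
    entropy = -np.sum(nz * np.log2(nz))       # monitored statistic
    saturated = hist[sat_level:].sum()        # fraction of saturated pixels
    return entropy, saturated

def pi_step(t_exp, acc_err, img, target_sat=0.01, kp=4.0, ki=1.0):
    """One PI update: drive the saturated fraction to its setpoint."""
    entropy, sat = image_stats(img)
    err = target_sat - sat                    # positive -> image too dark
    acc_err += err
    t_new = np.clip(t_exp * (1.0 + kp * err + ki * acc_err), 1e-5, 3e-2)
    return float(t_new), acc_err, entropy

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
t_exp, acc = 1e-3, 0.0
t_exp, acc, H = pi_step(t_exp, acc, frame)
print(t_exp, H)
```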
Nature of phase transitions in crystalline and amorphous GeTe-Sb2Te3 phase change materials.
Kalkan, B; Sen, S; Clark, S M
2011-09-28
The thermodynamic nature of phase stabilities and transformations is investigated in crystalline and amorphous Ge1Sb2Te4 (GST124) phase change materials as a function of pressure and temperature using high-resolution synchrotron x-ray diffraction in a diamond anvil cell. The phase transformation sequences upon compression for the cubic and hexagonal GST124 phases are found to be: cubic → amorphous → orthorhombic → bcc, and hexagonal → orthorhombic → bcc. The Clapeyron slopes for melting of the hexagonal and bcc phases are negative and positive, respectively, resulting in a pressure-dependent minimum in the liquidus. Taken together, the phase equilibria relations are consistent with the presence of polyamorphism in this system, with the as-deposited amorphous GST phase being the low-entropy, low-density amorphous phase and the laser melt-quenched and high-pressure amorphized GST being the high-entropy, high-density amorphous phase. The metastable phase boundary between these two polyamorphic phases is expected to have a negative Clapeyron slope. © 2011 American Institute of Physics
Simulations of dissociation constants in low pressure supercritical water
NASA Astrophysics Data System (ADS)
Halstead, S. J.; An, P.; Zhang, S.
2014-09-01
This article reports molecular dynamics simulations of the dissociation of hydrochloric acid and sodium hydroxide in water from ambient to supercritical temperatures at a fixed pressure of 250 atm. Corrosion of reaction vessels is known to be a serious problem of supercritical water, and acid/base dissociation can be a significant contributing factor to this. The SPC/e model was used in conjunction with solute models determined from density functional calculations and OPLSAA Lennard-Jones parameters. Radial distribution functions were calculated, and these show a significant increase in solute-solvent ordering upon forming the product ions at all temperatures. For both dissociations, rapidly decreasing entropy of reaction was found to be the controlling thermodynamic factor, and this is thought to arise due to the ions produced from dissociation maintaining a relatively high density and ordered solvation shell compared to the reactants. The change in entropy of reaction reaches a minimum at the critical temperature. The values of pKa and pKb were calculated and both increased with temperature, in qualitative agreement with other work, until a maximum value at 748 K, after which there was a slight decrease.
Mei, Wenjuan; Zeng, Xianping; Yang, Chenglin; Zhou, Xiuyun
2017-01-01
The insulated gate bipolar transistor (IGBT) is a kind of excellent-performance switching device used widely in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs' RUL; there is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure, based on the optimally pruned extreme learning machine (OPELM) and Volterra series, is proposed to track the IGBT's degradation trace and estimate its RUL; we refer to this model as the Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and Volterra series to reconstruct the phase space for the IGBTs' ageing samples, and a new weight update algorithm, which can effectively reduce the influence of outliers and noise, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor (KNN) method and the least squares estimation (LSE) method is used to calculate the output weights of the OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches. PMID:29099811
A reduction of the saddle vertical force triggers the sit-stand transition in cycling.
Costes, Antony; Turpin, Nicolas A; Villeger, David; Moretto, Pierre; Watier, Bruno
2015-09-18
The purpose of the study was to establish the link between the saddle vertical force and its determinants in order to establish the strategies that could trigger the sit-stand transition. We hypothesized that the minimum saddle vertical force would be a critical parameter influencing the sit-stand transition during cycling. Twenty-five non-cyclists were asked to pedal at six different power outputs from 20% (1.6 ± 0.3 W kg(-1)) to 120% (9.6 ± 1.6 W kg(-1)) of their spontaneous sit-stand transition power obtained at 90 rpm. Five 6-component sensors (saddle tube, pedals and handlebars) and a full-body kinematic reconstruction were used to provide the saddle vertical force and other force components (trunk inertial force, hips and shoulders reaction forces, and trunk weight) linked to the saddle vertical force. Minimum saddle vertical force linearly decreased with power output by 87% from a static position on the bicycle (5.30 ± 0.50 N kg(-1)) to power output=120% of the sit-stand transition power (0.68 ± 0.49 N kg(-1)). This decrease was mainly explained by the increase in instantaneous pedal forces from 2.84 ± 0.58 N kg(-1) to 6.57 ± 1.02 N kg(-1) from 20% to 120% of the power output corresponding to the sit-stand transition, causing an increase in hip vertical forces from -0.17 N kg(-1) to 3.29 N kg(-1). The emergence of strategies aiming at counteracting the elevation of the trunk (handlebars and pedals pulling) coincided with the spontaneous sit-stand transition power. The present data suggest that the large decrease in minimum saddle vertical force observed at high pedal reaction forces might trigger the sit-stand transition in cycling. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for simultaneous failure diagnosis of final drives, comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. The feature extraction module adopts the wavelet packet transform and fuzzy entropy to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on the paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with the grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717
Bifacial PV cell with reflector for stand-alone mast for sensor powering purposes
NASA Astrophysics Data System (ADS)
Jakobsen, Michael L.; Thorsteinsson, Sune; Poulsen, Peter B.; Riedel, N.; Rødder, Peter M.; Rødder, Kristin
2017-09-01
Reflectors for bifacial PV cells are simulated and prototyped in this work. The aim is to optimize the reflector for specific latitudes, particularly northern latitudes. Specifically, using a minimum of semiconductor area, the reflector must be able to deliver the required electrical power under the conditions of minimum solar travel above the horizon, worst-case weather, etc. We test a bifacial PV module with a retroreflector and compare the output with simulations combined with local solar data.
Test and evaluation of the Navy half-watt RTG. [Radioisotope Thermoelectric Generator
NASA Technical Reports Server (NTRS)
Rosell, F. E., Jr.; Lane, S. D.; Eggers, P. E.; Gawthrop, W. E.; Rouklove, P. G.; Truscello, V. C.
1976-01-01
The radioisotope thermoelectric generator (RTG) considered is to provide a continuous minimum power output of 0.5 watt at 6.0 to 8.5 volts for a minimum period of 15 years. The mechanical-electrical evaluation phase discussed involved the conduction of shock and vibration tests. The thermochemical-physical evaluation phase consisted of an analysis of the materials and the development of a thermal model. The thermoelectric evaluation phase included the accelerated testing of the thermoelectric modules.
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. They effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The time-varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman filtering) theory.
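One way to realize such a minimum-covariance deconvolution is to model the tube as a first-order lag and carry the unknown input pressure as a random-walk state in a Kalman filter; the sketch below is generic, with an assumed time constant and noise levels, not the tech brief's actual algorithms.

```python
# Kalman deconvolution of a first-order pneumatic-tube lag (sketch).
import numpy as np

tau, dt = 0.05, 0.001                 # assumed tube time constant, sample step
a = np.exp(-dt / tau)
F = np.array([[a, 1 - a],             # p_out lags p_in
              [0.0, 1.0]])            # p_in modeled as a random walk
H = np.array([[1.0, 0.0]])            # only downstream pressure is measured
Q = np.diag([1e-8, 1e-2])             # process noise (input can move fast)
R = np.array([[1e-4]])                # measurement noise

def kalman_deconvolve(y):
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for yk in y:
        x, P = F @ x, F @ P @ F.T + Q             # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + (K @ (yk - H @ x)).ravel()        # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[1])                    # estimated input pressure
    return np.array(estimates)

t = np.arange(0.0, 1.0, dt)
p_in = np.sin(2 * np.pi * 10 * t)                 # synthetic upstream signal
p_out = np.zeros_like(p_in)                       # simulate the tube lag
for k in range(1, len(t)):
    p_out[k] = a * p_out[k - 1] + (1 - a) * p_in[k - 1]
y = p_out + 0.01 * np.random.randn(len(t))
print(np.abs(kalman_deconvolve(y) - p_in).mean())
```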
1992-09-01
ease with which a model is employed may depend on several factors, among them the users' past experience in modeling, preferences for menu driven...partially on our knowledge of important logistics factors, partially on the past work of Diener (12), and partially on the assumption that comparison of...flexibility in output report selection. The minimum output was used in each instance to conserve computer storage and to minimize the consumption of paper
NASA Astrophysics Data System (ADS)
Ulyanov, Sergey S.; Tuchin, Valery V.
1993-06-01
The sex differences in cardiovascular system responses to a mild noise stress are established using physiological methods and the methods of dynamic systems theory. Lower levels of basal systolic arterial pressure, and higher rates of its dropping and normalization under the influence of the stress and after its cessation, are typical for women. There are no hypertensive responses to stresses in women, in contrast to men. The normalized entropy of the ECG signal, describing the physiological variability, increases in women and decreases in men. The advantages of the female cardiovascular system's response to mild stresses are discussed.
NASA Astrophysics Data System (ADS)
Nalewajski, Roman F.
Information theory (IT) probe of the molecular electronic structure, within the communication theory of chemical bonds (CTCB), uses the standard entropy/information descriptors of the Shannon theory of communication to characterize the scattering of the electronic probabilities, and of their information content, throughout the system of chemical bonds generated by the occupied molecular orbitals (MO). These "communications" between the basis-set orbitals are determined by the two-orbital conditional probabilities, one- and two-electron in character. They define the molecular information system, in which the electron-allocation "signals" are transmitted between various orbital "inputs" and "outputs". It is argued, using the quantum mechanical superposition principle, that the one-electron conditional probabilities are proportional to the squares of the corresponding elements of the charge and bond-order (CBO) matrix of the standard LCAO MO theory. Therefore, the probability of the interorbital connections in the molecular communication system is directly related to Wiberg's quadratic covalency indices of chemical bonds. The conditional-entropy (communication "noise") and mutual-information (information capacity) descriptors of these molecular channels generate the IT-covalent and IT-ionic bond components, respectively. The former reflects the electron delocalization (indeterminacy) due to the orbital mixing throughout all chemical bonds in the system under consideration. The latter characterizes the localization (determinacy) in the probability scattering in the molecule. These two IT indices indicate, respectively, the fraction of the input information lost in the channel output due to the communication noise, and its surviving part due to the deterministic elements in the probability scattering in the molecular network. Together, these two components generate the system's overall bond index. By a straightforward output reduction (condensation) of the molecular channel, the IT indices of molecular fragments, for example localized bonds, functional groups, and the forward and back donations accompanying bond formation, can be extracted. The flow of information in such molecular communication networks is investigated in several prototype molecules. These illustrative (model) applications of the orbital communication theory of chemical bonds (CTCB) deal with several classical issues in electronic structure theory: atom hybridization/promotion, single and multiple chemical bonds, bond conjugation, and so on. The localized bonds in hydrides and the delocalized π-bonds in simple hydrocarbons, as well as the multiple bonds in CO and CO2, are diagnosed using the entropy/information descriptors of CTCB. The atom promotion in hydrides and the bond conjugation in π-electron systems are investigated in more detail. A major drawback of the previous two-electron approach to molecular channels, namely too weak a bond differentiation in aromatic systems, has been shown to be remedied in the one-electron approach.
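The two descriptors can be made concrete with a toy joint-probability matrix: the conditional entropy of the output given the input plays the role of the IT-covalent (noise) component, the mutual information that of the IT-ionic component, and the two sum to the output entropy. The 2×2 matrix below is an arbitrary illustration, not a specific molecule.

```python
# Conditional entropy and mutual information of a toy orbital channel.
import numpy as np

P = np.array([[0.40, 0.10],
              [0.10, 0.40]])        # joint input/output probabilities

pa = P.sum(axis=1)                  # input distribution
pb = P.sum(axis=0)                  # output distribution

def h(p):                           # Shannon entropy in bits
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_cond = h(P.ravel()) - h(pa)       # H(B|A): "communication noise"
I_mut = h(pb) - H_cond              # I(A;B): information capacity part
print(f"covalent ~ {H_cond:.3f} bits, ionic ~ {I_mut:.3f} bits, "
      f"total ~ {H_cond + I_mut:.3f} bits")   # total = H(output)
```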
NASA Technical Reports Server (NTRS)
Johnson, P. R.; Bardusch, R. E.
1974-01-01
A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on the minimum integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state space model of the fixed elements in the system.
Kerns, Q.A.; Anderson, O.A.
1960-05-01
An electronic control circuit is described in which a first signal frequency is held in synchronization with a second, varying reference signal. The circuit receives the first and second signals as inputs and produces an output signal having an amplitude dependent upon the rate of phase change between the two signals and a polarity dependent on the direction of the phase change. The output may thus serve as a correction signal for maintaining the desired synchronization. The response of the system is not dependent on the relative phase angle between the two compared signals. Because the circuit has practically no capacitance, there is minimal delay between the occurrence of a phase shift and the response in the output signal, and therefore very fast synchronization is effected.
ENVIRONMENTAL ECONOMICS FOR WATERSHED RESTORATION
This book overviews non-market valuation, input-output analysis, and cost-benefit analysis, and presents case studies from the Mid Atlantic Highland region, with all but the bare minimum of econometrics, statistics, and math excluded or relegated to an appendix. It is a non-market valu...
Computer program optimizes design of nuclear radiation shields
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.
True randomness from an incoherent source
NASA Astrophysics Data System (ADS)
Qi, Bing
2017-11-01
Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
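The Toeplitz-hashing step described in this abstract is easy to sketch in code. The following minimal Python example is illustrative only: the block sizes and seed handling in the actual experiment are set by the measured min-entropy of the source, which the abstract does not detail.

import numpy as np
from scipy.linalg import toeplitz

def toeplitz_extract(raw_bits, m, seed_bits):
    """Hash n raw bits down to m nearly uniform bits using a random
    binary Toeplitz matrix; the product is taken over GF(2)."""
    n = len(raw_bits)
    assert len(seed_bits) == m + n - 1   # first column (m) plus first row (n), shared corner
    T = toeplitz(seed_bits[:m], seed_bits[m - 1:])
    return T.dot(np.asarray(raw_bits, dtype=np.uint8)) % 2

In an arrangement like the one reported, each Gaussian-distributed raw sample would first be digitized to a fixed number of bits, and m would be chosen so the output length stays below the estimated min-entropy of the block; the reported 5.12 bits per sample plays that role.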
NASA Astrophysics Data System (ADS)
Chun, Paul W.
2005-01-01
Applying the Planck-Benzinger methodology to biological systems, we have established that the negative Gibbs free energy minimum at a well-defined stable temperature, ⟨T_S⟩, where the bound unavailable energy TΔS° = 0, has its origin in the sequence-specific hydrophobic interactions. Each such system we have examined confirms the existence of a thermodynamic molecular switch wherein a change of sign in [ΔCp°]_reaction leads to a true negative minimum in the Gibbs free energy change of reaction, and hence a maximum in the related equilibrium constant, Keq. At this temperature, ⟨T_S⟩, where ΔH°(T_S)(-) = ΔG°(T_S)(-)min, the maximum work can be accomplished in transpiration, digestion, reproduction or locomotion. In the human body, this temperature is 37°C. The ⟨T_S⟩ values may vary from one living organism to another, but the fact that the value of TΔS°(T) = 0 will not. There is a lower cutoff point, ⟨T_h⟩, where enthalpy is unfavorable but entropy is favorable, i.e. ΔH°(T_h)(+) = TΔS°(T_h)(+), and an upper limit, ⟨T_m⟩, above which enthalpy is favorable but entropy is unfavorable, i.e. ΔH°(T_m)(-) = TΔS°(T_m)(-). Only between these two temperature limits, at which ΔG°(T) = 0, is the net chemical driving force favorable for such biological processes as protein folding, protein-protein, protein-nucleic acid or protein-membrane interactions, and protein self-assembly. All interacting biological systems examined using the Planck-Benzinger methodology have shown such a thermodynamic switch at the molecular level, suggesting that its existence may be universal.
Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation
NASA Astrophysics Data System (ADS)
Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro
2018-02-01
We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.
Atmospheric Circulations of Rocky Planets as Heat Engines
NASA Astrophysics Data System (ADS)
Koll, D. D. B.
2017-12-01
Rocky planets are extremely common in the galaxy and include Earth, Mars, Venus, and hundreds of exoplanets. To understand and compare the climates of these planets, we need theories that are general enough to accommodate drastically different atmospheric and planetary properties. Unfortunately, few such theories currently exist. For Earth, there is a well-known principle that its atmosphere resembles a heat engine - the atmosphere absorbs heat near the surface, at a hot temperature, and emits heat to space in the upper troposphere, at a cold temperature, which allows it to perform work and balance dissipative processes such as friction. However, previous studies also showed that Earth's hydrological cycle uses up a large fraction of the heat engine's work output, which makes it difficult to view other atmospheres as heat engines. In this work I extend the heat engine principle from Earth towards other rocky planets. I explore both dry and moist atmospheres in an idealized general circulation model (GCM), and quantify their work output using entropy budgets. First, I show that convection and turbulent heat diffusion are important entropy sources in dry atmospheres. I develop a scaling that accounts for their effects, which allows me to predict the strength of frictional dissipation in dry atmospheres. There are strong parallels between my scaling and so-called potential intensity theory, which is a seminal theory for understanding tropical cyclones on Earth. Second, I address how moisture affects atmospheric heat engines. Moisture both modifies the thermodynamic properties of air and releases latent heat when water vapor condenses. I explore the impact of both effects, and use numerical simulations to explore the difference between dry and moist atmospheric circulations across a wide range of climates.
The Haitian Economy and the HOPE Act
2010-03-05
questioned, its economic effects were concrete and devastating. Haiti was already experiencing a decline in output, employment, and income, but the...over increasing the minimum wage produced protests and political conflagration, an outrage that attests to the importance that the Haitian people place
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno
2007-10-15
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
Automated mango fruit assessment using fuzzy logic approach
NASA Astrophysics Data System (ADS)
Hasan, Suzanawati Abu; Kin, Teoh Yeong; Sauddin@Sa'duddin, Suraiya; Aziz, Azlan Abdul; Othman, Mahmod; Mansor, Ab Razak; Parnabas, Vincent
2014-06-01
In terms of value and volume of production, mango is the third most important fruit product after pineapple and banana. Accurate size assessment of mango fruits during harvesting is vital to ensure that they are classified into the appropriate grade. However, the current practice in the mango industry is to grade the fruit manually using human graders, a method that is inconsistent, inefficient and labor intensive. In this project, a new method of automated mango size and grade assessment is developed using an RGB fiber-optic sensor and a fuzzy logic approach. Maximum, minimum and mean values are calculated from the RGB fiber-optic sensor readings, and a decision-making scheme based on a minimum entropy formulation analyses the data and classifies the fruit. The proposed method is capable of differentiating three grades of mango fruit automatically, with 77.78% overall accuracy compared to sorting by human graders. This method was found to be helpful for application in the current agricultural industry.
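As a rough illustration of the fuzzy grading idea (the actual membership functions, grade boundaries, and sensor scaling are not given in the abstract, so every number below is invented), a classifier of this kind reduces to evaluating memberships of the aggregated sensor reading and picking the strongest grade:

def tri_mf(x, a, b, c):
    # triangular membership function peaking at b
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def grade_mango(reading):
    """Grade from a normalized (0-100) size reading derived from the
    RGB fiber-optic sensor; the boundaries here are purely hypothetical."""
    memberships = {
        "C": tri_mf(reading, -1, 0, 40),
        "B": tri_mf(reading, 30, 55, 80),
        "A": tri_mf(reading, 60, 100, 101),
    }
    return max(memberships, key=memberships.get)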
Application of modern control theory to the design of optimum aircraft controllers
NASA Technical Reports Server (NTRS)
Power, L. J.
1973-01-01
The synthesis procedure presented is based on the solution of the output regulator problem of linear optimal control theory for time-invariant systems. By this technique, solution of the matrix Riccati equation leads to a constant linear feedback control law for an output regulator which will maintain a plant in a particular equilibrium condition in the presence of impulse disturbances. Two simple algorithms are presented that can be used in an automatic synthesis procedure for the design of maneuverable output regulators requiring only selected state variables for feedback. The first algorithm is for the construction of optimal feedforward control laws that can be superimposed upon a Kalman output regulator and that will drive the output of a plant to a desired constant value on command. The second algorithm is for the construction of optimal Luenberger observers that can be used to obtain feedback control laws for the output regulator requiring measurement of only part of the state vector. This algorithm constructs observers which have minimum response time under the constraint that the magnitude of the gains in the observer filter be less than some arbitrary limit.
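The Riccati-based gain at the heart of this procedure can be reproduced with modern tools; a minimal sketch follows (the plant matrices are placeholders, and the feedforward and Luenberger-observer constructions of the two algorithms are omitted):

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Constant state-feedback gain K (u = -K x) minimizing the
    integral of x'Qx + u'Ru for a time-invariant plant."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# toy second-order plant with placeholder values
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))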
Minimum energy information fusion in sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapline, G
1999-05-11
In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition we show that for networks consisting of a large number of identical sensors Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimal description length explanations.
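A toy version of the Kohonen step is easy to write down for scalar sensor readings (the hyperparameters below are assumptions, not values from the paper); each reading is then "explained" by the index of its best-matching unit, a far shorter description than the raw value:

import numpy as np

def train_som(readings, n_units=16, epochs=30, lr0=0.5, sigma0=4.0, seed=0):
    """One-dimensional Kohonen self-organizing map over scalar data."""
    rng = np.random.default_rng(seed)
    w = rng.choice(readings, size=n_units)          # prototype init from data
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(readings):
            bmu = int(np.argmin(np.abs(w - x)))     # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h * (x - w)                   # pull neighbourhood toward x
    return w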
Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.
Das, Biman; Drake, Eli; Jack, John
2004-02-01
Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution under the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where, under heavy gain saturation, the total output approaches a constant intensity, although the intensity of any single mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of the trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.
IOS: PDP 11/45 formatted input/output task stacker and processor. [In MACRO-11]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koschik, J.
1974-07-08
IOS allows the programmer to perform formatted Input/Output at assembly language level to/from any peripheral device. It runs under DOS versions V8-O8 or V9-19, reading and writing DOS-compatible files. Additionally, IOS will run, with total transparency, in an environment with memory management enabled. Minimum hardware required is a 16K PDP 11/45, Keyboard Device, DISK (DK, DF, or DC), and Line Frequency Clock. The source language is MACRO-11 (3.3K Decimal Words).
NASA Astrophysics Data System (ADS)
Kalli, K.; Brady, G. P.; Webb, D. J.; Jackson, D. A.; Zhang, L.; Bennion, I.
1995-12-01
We present a new method for the interrogation of large arrays of Bragg grating sensors. Eight gratings operating between the wavelengths of 1533 and 1555 nm have been demultiplexed. An unbalanced Mach-Zehnder interferometer illuminated by a single low-coherence source provides a high-phase-resolution output for each sensor, the outputs of which are sequentially selected in wavelength by a tunable Fabry-Perot interferometer. The minimum detectable strain measured was 90 nε/√Hz at 7 Hz for a wavelength of 1535 nm.
Use of Regional Climate Model Output for Hydrologic Simulations
NASA Astrophysics Data System (ADS)
Hay, L. E.; Clark, M. P.; Wilby, R. L.; Gutowski, W. J.; Leavesley, G. H.; Pan, Z.; Arritt, R. W.; Takle, E. S.
2001-12-01
Daily precipitation and maximum and minimum temperature time series from a Regional Climate Model (RegCM2) were used as input to a distributed hydrologic model for a rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily data sets of precipitation and maximum and minimum temperature were developed from measured data. These data sets included precipitation and temperature data for all stations located within the area of the RegCM2 model output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and the station data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and station-based simulations of runoff show little skill on a daily basis (Nash-Sutcliffe (NS) values ranging from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for station data). When the precipitation and temperature biases are corrected in the RegCM2 output and station data sets (Bias-RegCM2 and Bias-station, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins. In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas runoff simulated from the Bias-station data improves (NS value improved from -0.08 to 0.72). These results indicate that the resolution of the RegCM2 output is appropriate for basin-scale modeling, but the RegCM2 output does not contain the day-to-day variability needed for basin-scale modeling in rainfall-dominated basins. Future work is warranted to identify the causes of systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
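The Nash-Sutcliffe (NS) skill score quoted throughout this comparison is straightforward to compute; a small reference implementation of the standard formula (not code from the study):

import numpy as np

def nash_sutcliffe(simulated, observed):
    """NS efficiency: 1 is a perfect fit, 0 matches the skill of
    predicting the observed mean, and negative values are worse."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)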
Updated Model of the Solar Energetic Proton Environment in Space
NASA Astrophysics Data System (ADS)
Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami
2018-05-01
The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).
The evaluation of alternate methodologies for land cover classification in an urbanizing area
NASA Technical Reports Server (NTRS)
Smekofski, R. M.
1981-01-01
The usefulness of LANDSAT in classifying land cover and in identifying and classifying land use change was investigated using an urbanizing area as the study area. The primary focus of the study was the question of which technique is best for classification. The many computer-assisted techniques available for analyzing LANDSAT data were evaluated. Techniques of statistical training (polygons from CRT, unsupervised clustering, polygons from digitizer, and binary masks) were tested with minimum-distance-to-the-mean, maximum-likelihood, and canonical-analysis-with-minimum-distance-to-the-mean classifiers. The twelve output images were compared to photointerpreted samples, ground-verified samples, and a current land use data base. Results indicate that for a reconnaissance inventory, unsupervised training with the canonical analysis/minimum-distance classifier is the most efficient. If more detailed ground truth and ground verification are available, training with polygons from the digitizer combined with the canonical analysis/minimum-distance classifier is more accurate.
Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
The Snow Data System at NASA JPL
NASA Astrophysics Data System (ADS)
Horn, J.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Joyce, M.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.
2017-12-01
The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.
The Snow Data System at NASA JPL
NASA Astrophysics Data System (ADS)
Joyce, M.; Laidlaw, R.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.
2016-12-01
The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.
Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J
2016-01-28
Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and that this cross-modal reorganization limits the clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether a cross-modal plasticity of the auditory cortex similar to that seen in early or congenital deafness exists for acquired monaural deafness. To address this issue, we constructed directional brain functional networks based on the entropy connectivity of resting-state functional MRI and investigated changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the study, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes in entropy connectivity than the right USNHL group. No significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, the cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation on the left or right side generates different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
A Low-Complexity Circuit for On-Sensor Concurrent A/D Conversion and Compression
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
A low-complexity circuit for on-sensor compression is presented. The proposed circuit achieves complexity savings by combining a single-slope analog-to-digital converter with a Golomb-Rice entropy encoder and by implementing a low-complexity adaptation rule. The adaptation rule monitors the output codewords and minimizes their length by incrementing or decrementing the value of the Golomb-Rice coding parameter k. Its hardware complexity is one order of magnitude lower than that of existing adaptive algorithms. The compression circuit has been fabricated using a 0.35 μm CMOS technology and occupies an area of 0.0918 mm². Test measurements confirm the validity of the design.
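The abstract describes the adaptation rule only at a high level, so the sketch below is a plausible software analogue rather than the fabricated circuit's exact logic: a standard Golomb-Rice encoder plus a hypothetical increment/decrement rule on k driven by the length of each emitted codeword.

def golomb_rice_encode(value, k):
    """Encode a non-negative integer as a unary quotient, a '0'
    terminator, and a k-bit binary remainder."""
    q = value >> k
    bits = "1" * q + "0"
    if k:
        bits += format(value & ((1 << k) - 1), f"0{k}b")
    return bits

def adapt_k(k, codeword_len, k_max=7):
    """Hypothetical low-complexity update: long codewords mean the
    quotient is too large (raise k); a zero quotient suggests k is
    already too large (lower k). Thresholds are assumptions."""
    if codeword_len > 2 * (k + 1) and k < k_max:
        return k + 1
    if codeword_len == k + 1 and k > 0:   # quotient was zero
        return k - 1
    return k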
Schob, Stefan; Beeskow, Anne; Dieckow, Julia; Meyer, Hans-Jonas; Krause, Matthias; Frydrychowicz, Clara; Hirsch, Franz-Wolfgang; Surov, Alexey
2018-05-31
Medulloblastomas are the most common central nervous system tumors in childhood. Treatment and prognosis strongly depend on histology and transcriptomic profiling. However, the proliferative potential also has prognostic value. Our study aimed to investigate correlations between histogram profiling of diffusion-weighted images and further microarchitectural features. Seven patients (median age 14.6 years, minimum 2 years, maximum 20 years; 5 male, 2 female) were included in this retrospective study. Using a Matlab-based analysis tool, histogram analysis of whole apparent diffusion coefficient (ADC) volumes was performed. ADC entropy revealed a strong inverse correlation with the expression of the proliferation marker Ki67 (r = -0.962, p = 0.009) and with total nuclear area (r = -0.888, p = 0.044). Furthermore, ADC percentiles, most of all ADCp90, showed significant correlations with Ki67 expression (r = 0.902, p = 0.036). Diffusion histogram profiling of medulloblastomas provides valuable in vivo information which can potentially be used for risk stratification and prognostication. Above all, entropy proved to be the most promising imaging biomarker. However, further studies are warranted.
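The first-order side of such histogram profiling reduces to simple statistics over the voxel values of the whole-lesion ADC volume; a minimal sketch (the bin count and percentile choices are illustrative, not the study's settings):

import numpy as np

def adc_histogram_features(adc_voxels, bins=128):
    """First-order entropy plus selected percentiles (e.g. ADCp90)
    of a whole-lesion ADC volume."""
    counts, _ = np.histogram(adc_voxels, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = float(-np.sum(p * np.log2(p)))
    percentiles = {f"ADCp{q}": float(np.percentile(adc_voxels, q))
                   for q in (10, 25, 50, 75, 90)}
    return entropy, percentiles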
NASA Astrophysics Data System (ADS)
White, Ronald; Lipson, Jane
Free volume has a storied history in polymer physics. To introduce our own results, we consider how free volume has been defined in the past, e.g. in the works of Fox and Flory, Doolittle, and the equation of Williams, Landel, and Ferry. We contrast these perspectives with our own analysis using our Locally Correlated Lattice (LCL) model, where we have found a striking connection between polymer free volume (analyzed using PVT data) and the polymer's corresponding glass transition temperature, Tg. The pattern, covering over 50 different polymers, is robust enough to be reasonably predictive based on melt properties alone; when a melt hits this T-dependent boundary of critical minimum free volume, it becomes glassy. We will present a broad selection of results from our thermodynamic analysis, and make connections with historical treatments. We will discuss patterns that have emerged across the polymers in the energy and entropy when quantified "per LCL theoretical segment". Finally we will relate the latter trend to the point of view popularized in the theory of Adam and Gibbs. The authors gratefully acknowledge support from NSF DMR-1403757.
NASA Astrophysics Data System (ADS)
O'Brien, Paul
2017-01-01
Max Planck did not quantize temperature. I will show that the Planck temperature violates the Planck scale. Planck stated that the Planck scale was Nature's scale and independent of human construct, stating further that even aliens would derive the same values. He made a huge mistake, because temperature is based on the Kelvin scale, which is man-made just like the meter and kilogram. He did not discover Nature's scale for the quantization of temperature. His formula is flawed, and his value is incorrect. Planck's calculation is T_P = M_P c²/k_B. The general form of this equation is T = E/k_B. Why is this wrong? The temperature for a fixed amount of energy depends on the volume it occupies. Using the correct formula involves specifying the radius of the volume in the form of the product RE. This leads to an inequality and a limit that is equivalent to the Bekenstein bound, but using temperature instead of entropy. Rewriting this equation as a limit defines both the maximum temperature and Boltzmann's constant. This will saturate any space-time boundary with maximum temperature and information density, and also the minimum radius and entropy. The general form of the equation then becomes a limit in BH thermodynamics: T ≤ RE/(λk_B).
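For reference, the standard expression the author disputes evaluates as follows (a few lines of Python using CODATA constants from SciPy):

from scipy.constants import c, hbar, G, k   # k is Boltzmann's constant

m_planck = (hbar * c / G) ** 0.5            # Planck mass, ~2.18e-8 kg
T_planck = m_planck * c ** 2 / k            # ~1.42e32 K
print(f"Planck temperature: {T_planck:.3e} K")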
Hot-start Giant Planets Form with Radiative Interiors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berardo, David; Cumming, Andrew, E-mail: david.berardo@mcgill.ca, E-mail: andrew.cumming@mcgill.ca
In the hot-start core accretion formation model for gas giants, the interior of a planet is usually assumed to be fully convective. By calculating the detailed internal evolution of a planet assuming hot-start outer boundary conditions, we show that such a planet will in fact form with a radially increasing internal entropy profile, so that its interior will be radiative instead of convective. For a hot outer boundary, there is a minimum value for the entropy of the internal adiabat, S_min, below which the accreting envelope does not match smoothly onto the interior, but instead deposits high entropy material onto the growing interior. One implication of this would be to at least temporarily halt the mixing of heavy elements within the planet, which are deposited by planetesimals accreted during formation. The compositional gradient this would impose could subsequently disrupt convection during post-accretion cooling, which would alter the observed cooling curve of the planet. However, even with a homogeneous composition, for which convection develops as the planet cools, the difference in cooling timescale will change the inferred mass of directly imaged gas giants.
Multicellular regulation of entropy, spatial order, and information
NASA Astrophysics Data System (ADS)
Youk, Hyun
Many multicellular systems, such as tissues and microbial biofilms, consist of cells that secrete and sense signalling molecules. Understanding how collective behaviours of such secrete-and-sense cells arise is an important challenge. We combined experimental and theoretical approaches to understand multicellular coordination of gene expression and spatial pattern formation among secrete-and-sense cells. We engineered secrete-and-sense yeast cells to show that cells can collectively and permanently remember a past event by reminding each other with their secreted signalling molecule. If one cell "forgets", then another cell can remind it. Cell-cell communication ensures a long-term (permanent) memory by overcoming common limitations of intracellular memory. We also established a new theoretical framework, inspired by statistical mechanics, to understand how fields of secrete-and-sense cells form spatial patterns. We introduce new metrics - cellular entropy, cellular Hamiltonian, and spatial order index - for the dynamics of cellular automata that form spatial patterns. Our theory predicts how fast any spatial pattern forms and how ordered it is, and establishes a cellular Hamiltonian that, like energy for non-living systems, monotonically decreases towards a minimum over time. ERC Starting Grant (MultiCellSysBio), NWO VIDI, NWO NanoFront.
Black-box Brain Experiments, Causal Mathematical Logic, and the Thermodynamics of Intelligence
NASA Astrophysics Data System (ADS)
Pissanetzky, Sergio; Lanzalaco, Felix
2013-12-01
Awareness of the possible existence of a yet-unknown principle of Physics that explains cognition and intelligence does exist in several projects of emulation, simulation, and replication of the human brain currently under way. Brain simulation projects define their success partly in terms of the emergence of non-explicitly programmed biophysical signals such as self-oscillation and spreading cortical waves. We propose that a recently discovered theory of Physics known as Causal Mathematical Logic (CML), which links intelligence with causality and entropy and explains intelligent behavior from first principles, is the missing link. We further propose the theory as a roadway to understanding more complex biophysical signals, and to explaining the set of intelligence principles. The new theory applies to information considered as an entity by itself. The theory proposes that any device that processes information and exhibits intelligence must satisfy certain theoretical conditions irrespective of the substrate where the information is being processed. The substrate can be the human brain, a part of it, a worm's brain, a motor protein that self-locomotes in response to its environment, or a computer. Here, we propose to extend the causal theory to systems in Neuroscience, because of its ability to model complex systems without heuristic approximations and to predict emerging signals of intelligence directly from the models. The theory predicts the existence of a large number of observables (or "signals"), all of which emerge and can be directly and mathematically calculated from non-explicitly programmed detailed causal models. This approach aims at a universal and predictive language for Neuroscience and AGI based on causality and entropy, detailed enough to describe the finest structures and signals of the brain, yet general enough to accommodate the versatility and wholeness of intelligence. Experiments are focused on a black box, a device of the kind described above whose input and output are precisely known but whose internal implementation is not. The same input is separately supplied to a causal virtual machine, and the calculated output is compared with the measured output. The virtual machine, described in a previous paper, is a computer implementation of CML, fixed for all experiments and unrelated to the device in the black box. If the two outputs are equivalent, then the experiment has quantitatively succeeded and conclusions can be drawn regarding details of the internal implementation of the device. Several small black-box experiments were successfully performed and demonstrated the emergence of non-explicitly programmed cognitive function in each case.
Breeding Energy Cane Cultivars as a Biomass Feedstock for Coal Replacement
USDA-ARS?s Scientific Manuscript database
Research and advanced breeding have demonstrated that energy cane possesses all of the attributes desirable in a biofuel feedstock: extremely good biomass yield in a small farming footprint; negative/neutral carbon footprint; maximum outputs from minimum inputs; well-established growing model for fa...
40 CFR 63.11563 - What are my monitoring requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... and the following requirements: (1) Locate the temperature sensor in a position that provides a representative temperature. (2) For a noncryogenic temperature range, use a temperature sensor with a minimum... procedures in the manufacturer's documentation; or (ii) By comparing the sensor output to redundant sensor...
40 CFR 63.11563 - What are my monitoring requirements?
Code of Federal Regulations, 2011 CFR
2011-07-01
... and the following requirements: (1) Locate the temperature sensor in a position that provides a representative temperature. (2) For a noncryogenic temperature range, use a temperature sensor with a minimum... procedures in the manufacturer's documentation; or (ii) By comparing the sensor output to redundant sensor...
40 CFR 63.11563 - What are my monitoring requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... and the following requirements: (1) Locate the temperature sensor in a position that provides a representative temperature. (2) For a noncryogenic temperature range, use a temperature sensor with a minimum... procedures in the manufacturer's documentation; or (ii) By comparing the sensor output to redundant sensor...
40 CFR 63.11563 - What are my monitoring requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... and the following requirements: (1) Locate the temperature sensor in a position that provides a representative temperature. (2) For a noncryogenic temperature range, use a temperature sensor with a minimum... procedures in the manufacturer's documentation; or (ii) By comparing the sensor output to redundant sensor...
40 CFR 63.11563 - What are my monitoring requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
... and the following requirements: (1) Locate the temperature sensor in a position that provides a representative temperature. (2) For a noncryogenic temperature range, use a temperature sensor with a minimum... procedures in the manufacturer's documentation; or (ii) By comparing the sensor output to redundant sensor...
NASA Astrophysics Data System (ADS)
Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun
2014-07-01
A VLSI architecture for entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve the decoding performance to satisfy the real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice level pipeline, MB (Macro-Block) level pipeline, MB level parallel, etc., are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, therefore effectively reducing the implementation overhead. Simulation shows that decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frame per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.
Facial expression recognition under partial occlusion based on fusion of global and local features
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji
2018-04-01
Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, the Principal Component Analysis (PCA) method is adopted to reconstruct the occluded region of the image. After that, a replacement strategy is applied, rebuilding the image by substituting the occluded region with the corresponding region of the best-matched image in the training set; a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. At last, the outputs of the SVM are fitted to the probabilities of the target class by using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, and each block is weighted adaptively by information entropy; Chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion class. Finally, fusion at the decision level is employed for the global and local features based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
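The entropy weighting and Chi-square matching used on the local path admit a compact sketch (the block size, bin count, and normalization are assumptions, not the paper's exact settings):

import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy of a block's intensity histogram, used as an
    adaptive weight: occluded, low-texture blocks score low."""
    counts, _ = np.histogram(block, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two feature histograms."""
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))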
FPGA Implementation of Metastability-Based True Random Number Generator
NASA Astrophysics Data System (ADS)
Hata, Hisashi; Ichikawa, Shuichi
True random number generators (TRNGs) are important as a basis for computer security. Though some TRNGs are composed of analog circuits, the use of digital circuits is desired for applying TRNGs to logic LSIs. Some digital TRNGs utilize jitter in free-running ring oscillators as a source of entropy, which consumes large power. Another type of TRNG exploits the metastability of a latch to generate entropy. Although this kind of TRNG has mostly been implemented with full-custom LSI technology, this study presents an implementation based on common FPGA technology. Our TRNG is comprised of logic gates only, and can be integrated in any kind of logic LSI. The RS latch in our TRNG is implemented as a hard macro to guarantee the quality of randomness by minimizing the signal skew and load imbalance of internal nodes. To improve the quality and throughput, the outputs of 64-256 latches are XORed. The derived design was verified on a Xilinx Virtex-4 FPGA (XC4VFX20), and passed the NIST statistical test suite without post-processing. Our TRNG with 256 latches occupies 580 slices, while achieving 12.5 Mbps throughput.
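The XOR post-processing stage is a one-liner; by the piling-up lemma, XOR-ing N independent bits that each carry bias eps leaves a residual bias of order 2**(N-1) * eps**N, which is why widening from 64 to 256 latches improves quality:

import numpy as np

def xor_fold(latch_bits):
    """Combine one sample from each metastable latch into one output bit."""
    return int(np.bitwise_xor.reduce(np.asarray(latch_bits, dtype=np.uint8)))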
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebert, R. W.; Dayeh, M. A.; Desai, M. I.
2013-05-10
We examined solar wind plasma and interplanetary magnetic field (IMF) observations from Ulysses' first and third orbits to study hemispheric differences in the properties of the solar wind and IMF originating from the Sun's large polar coronal holes (PCHs) during the declining and minimum phase of solar cycles 22 and 23. We identified hemispheric asymmetries in several parameters, most notably ≈15%-30% south-to-north differences in averages for the solar wind density, mass flux, dynamic pressure, and energy flux and the radial and total IMF magnitudes. These differences were driven by relatively larger, more variable solar wind density and radial IMF between ≈36°S and 60°S during the declining phase of solar cycles 22 and 23. These observations indicate either a hemispheric asymmetry in the PCH output during the declining and minimum phase of solar cycles 22 and 23, with the southern hemisphere being more active than its northern counterpart, or a solar cycle effect where the PCH output in both hemispheres is enhanced during periods of higher solar activity. We also report a strong linear correlation between these solar wind and IMF parameters, including the periods of enhanced PCH output, that highlights the connection between the solar wind mass and energy output and the Sun's magnetic field. That these enhancements were not matched by similar-sized variations in solar wind speed points to the mass and energy responsible for these increases being added to the solar wind while its flow was subsonic.
Reduction of dissipation in a thermal engine by means of periodic changes of external constraints
NASA Astrophysics Data System (ADS)
Escher, Claus; Ross, John
1985-03-01
We consider a thermal engine driven by chemical reactions, which take place in a continuous-flow stirred-tank reactor fitted with a movable piston. Work can be produced by means of a heat engine coupled to the products and to an external heat bath, and by the piston. Two modes of operation are compared, each with a fixed input rate of chemicals: one with periodic variation of an external constraint [mode (b)], in which we vary the external pressure, and one without such variation [mode (a)]. We derive equations for the total power output in each of the two modes. The power output in mode (b) can be larger than that of mode (a) for the same chemical throughput and for the same average value of the external pressure. For a particularly simple case it is shown that the total power output in mode (b) is larger than that in (a) if work is done by the piston. At the same time the entropy production is decreased and the efficiency is increased. The possibility of an increased power output is due to proper control of the relative phase of the externally varied constraint and its conjugate variable, the external pressure and the volume. This control is achieved by the coupling of nonlinear kinetics to the externally varied constraint. Details of specific mechanisms and the occurrence of resonance phenomena are presented in the following article.
Practice and Age-Related Loss of Adaptability in Sensorimotor Performance
Sosnoff, Jacob J.; Voudrie, Stefani J.
2009-01-01
The purpose of the present investigation was to examine whether the ability to adapt to task constraints is influenced by short-term practice in older adults. Young (18–29 years old) and old (65–75 years old) adults produced force output to a constant force target and a 1-Hz sinusoidal force target by way of the index finger flexion. Participants completed each task 5 times per session for 5 concurrent sessions. The amount and structure of force variability was calculated using linear and nonlinear analyses. As expected, there was a decrease in the magnitude of variability (coefficient of variation) in both tasks and task-related change in the structure of force variability (approximate entropy) with training across groups. The authors found older adults to have a greater amount of variability than their younger counterparts in both tasks. Older adults also demonstrated an increase in the structure of force output in the constant task but a decrease in structure in the sinusoidal task. Age differences in the adaptability to task constraints persisted throughout practice. The authors propose that older adults' ability to adapt sensorimotor output to task demands is not a result of lack of familiarity with the task but that it is, instead, characteristic of the aging process. PMID:19201684
This fact sheet summarizes how buildings connected to a CHP-equipped district energy system can earn more LEED® points than they could otherwise earn. It presents guidance for meeting the LEED® Minimum Energy Performance prerequisite and calculating point
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, the intersection/algebraic product triangular norm, the maximum/drastic sum triangular conorm, the Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
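A heavily simplified numerical sketch of such a controller is given below. It keeps the two-sets-per-input structure (L-type and Gamma-type memberships), Mamdani minimum inference, and a center-of-sums-style weighted average, but replaces the trapezoidal output sets with singletons at their centers; all breakpoints are invented for illustration.

import numpy as np

def gamma_mf(x, a, b):   # Gamma-type: 0 below a, ramping to 1 at b
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def l_mf(x, a, b):       # L-type: 1 below a, ramping to 0 at b
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def fuzzy_pi_increment(e, de, span=1.0, out=1.0):
    """Incremental fuzzy PI output for error e and error change de."""
    en, ep = l_mf(e, -span, span), gamma_mf(e, -span, span)
    dn, dp = l_mf(de, -span, span), gamma_mf(de, -span, span)
    w = np.array([min(en, dn),                      # output Negative
                  max(min(en, dp), min(ep, dn)),    # output Zero
                  min(ep, dp)])                     # output Positive
    centers = np.array([-out, 0.0, out])
    return float(w @ centers / (w.sum() + 1e-12))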
High-power linearly polarized diode-side-pumped a-cut Nd:GdVO4 rod laser
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Qian, Jianqiang; Zhang, Baitao
2017-03-01
An efficient high-power diode-side-pumped Nd:GdVO4 rod laser system was successfully demonstrated, operating in continuous-wave (CW) and acousto-optically (AO) Q-switched regimes. With a 65 mm-long a-cut Nd:GdVO4 crystal, a maximum linearly polarized CW output power of 60 W at 1063.2 nm was obtained under an absorbed pump power of 180 W, corresponding to a slope efficiency of 50.6%. The output laser beam was linearly polarized with a degree of polarization of 98%. In AO Q-switched operation, the highest output power, minimum pulse width, and highest peak power were 42 W, 36 ns, and 58 kW, respectively, at a pulse repetition frequency of 20 kHz.
Guan, Yue; Li, Weifeng; Jiang, Zhuoran; Chen, Ying; Liu, Song; He, Jian; Zhou, Zhengyang; Ge, Yun
2016-12-01
This study aimed to develop whole-lesion apparent diffusion coefficient (ADC)-based entropy-related parameters of cervical cancer to preliminarily assess intratumoral heterogeneity of this lesion in comparison to adjacent normal cervical tissues. A total of 51 women (mean age, 49 years) with cervical cancers confirmed by biopsy underwent 3-T pelvic diffusion-weighted magnetic resonance imaging with b values of 0 and 800 s/mm² prospectively. ADC-based entropy-related parameters including first-order entropy and second-order entropies were derived from the whole tumor volume as well as adjacent normal cervical tissues. Intraclass correlation coefficient, Wilcoxon test with Bonferroni correction, Kruskal-Wallis test, and receiver operating characteristic curve were used for statistical analysis. All the parameters showed excellent interobserver agreement (all intraclass correlation coefficients > 0.900). Entropy, entropy(H)_0, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean were significantly higher, whereas entropy(H)_range and entropy(H)_std were significantly lower, in cervical cancers compared to adjacent normal cervical tissues (all P < .0001). The Kruskal-Wallis test showed that there were no significant differences among the values of the various second-order entropies, including entropy(H)_0, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean. All second-order entropies had a larger area under the receiver operating characteristic curve than first-order entropy in differentiating cervical cancers from adjacent normal cervical tissues. Further, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean had the same largest area under the receiver operating characteristic curve of 0.867. Whole-lesion ADC-based entropy-related parameters of cervical cancers were developed successfully, showing initial potential for characterizing intratumoral heterogeneity in comparison to adjacent normal cervical tissues. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
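The second-order entropies reported here are co-occurrence (GLCM) entropies evaluated along the 0°, 45°, 90° and 135° directions; a plain-NumPy sketch for one slice (the gray-level count and displacement conventions are assumptions):

import numpy as np

def glcm_entropy(img, dx, dy, levels=32):
    """Co-occurrence entropy of a 2-D slice for one displacement,
    e.g. (1, 0) for 0 degrees or (1, -1) for 45 degrees."""
    q = np.minimum((img / (img.max() + 1e-12) * levels).astype(int), levels - 1)
    h, w = q.shape
    glcm = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm[glcm > 0] / glcm.sum()
    return float(-np.sum(p * np.log2(p)))

A mean second-order entropy of the kind denoted entropy(H)_mean would then follow by averaging the four directional values over the slices of the lesion volume.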
Oil and the world economy: some possible futures.
Kumhof, Michael; Muir, Dirk
2014-01-13
This paper, using a six-region dynamic stochastic general equilibrium model of the world economy, assesses the output and current account implications of permanent oil supply shocks hitting the world economy. For modest-sized shocks and conventional production technologies, the effects are modest. But for larger shocks, for elasticities of substitution that decline as oil usage is reduced to a minimum, and for production functions in which oil acts as a critical enabler of technologies, output growth could drop significantly. Also, oil prices could become so high that smooth adjustment, as assumed in the model, may become very difficult.
High Energy, Single-Mode, All-Solid-State Nd:YAG Laser
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.; Singh, Upendra N.; Hovis, Floyd
2006-01-01
In this paper, recent progress made in the design and development of an all-solid-state, single-longitudinal-mode, conductively cooled Nd:YAG laser operating at a 1064 nm wavelength for UV lidar ozone-sensing applications is presented. Currently, this pump laser provides an output pulse energy of greater than 1.1 J/pulse at 50 Hz PRF and a pulsewidth of 22 ns. The spatial profile of the output beam is a rectangular super-Gaussian. An electrical-to-optical system efficiency of greater than 7% and a minimum M² value of less than 2 have been achieved.
Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing
Cremers, D.A.; Keller, R.A.
1982-06-08
The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave-operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to the intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10⁻⁵ cm⁻¹ has been demonstrated using this technique.
Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing
Cremers, D.A.; Keller, R.A.
1985-10-01
The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave-operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to the intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10⁻⁵ cm⁻¹ has been demonstrated using this technique. 6 figs.
Ku-band field-effect power transistors
NASA Technical Reports Server (NTRS)
Taylor, G. C.; Huang, H. C.
1979-01-01
A single-stage amplifier was developed using an 8-gate, 1200-μm-width device to give a gain of 3.3 ± 0.1 dB over the 14.4 to 15.4 GHz band with an output power of 0.48 W and 15% minimum efficiency with 0.255 W of input power. With two 8-gate devices combined and matched on the device carrier, using a lumped-element format, a gain of 3 dB was attained over the 14.5 to 15.5 GHz band with a maximum efficiency of 9.9% for an output power of 0.8 W.
Apparatus and method for measurement of weak optical absorptions by thermally induced laser pulsing
Cremers, David A.; Keller, Richard A.
1985-01-01
The thermal lensing phenomenon is used as the basis for measurement of weak optical absorptions when a cell containing the sample to be investigated is inserted into a normally continuous-wave-operation laser-pumped dye laser cavity for which the output coupler is deliberately tilted relative to the intracavity circulating laser light, and pulsed laser output ensues, the pulsewidth of which can be related to the sample absorptivity by a simple algorithm or calibration curve. A minimum detection limit of less than 10⁻⁵ cm⁻¹ has been demonstrated using this technique.
SACR ADVance 3-D Cartesian Cloud Cover (SACR-ADV-3D3C) product
Meng Wang, Tami Toto, Eugene Clothiaux, Katia Lamer, Mariko Oue
2017-03-08
SACR-ADV-3D3C remaps the outputs of SACRCORR for cross-wind range-height indicator (CW-RHI) scans to a Cartesian grid and reports a reflectivity CFAD and a best-estimate domain-averaged cloud fraction. The final output is a single NetCDF file containing all aforementioned corrected radar moments remapped onto a 3-D Cartesian grid, the SACR reflectivity CFAD, a profile of best-estimate cloud fraction, a profile of maximum observable x-domain size (xmax), a profile of time-to-horizontal-distance estimates, and a profile of minimum observable reflectivity (dBZmin).
Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis
NASA Astrophysics Data System (ADS)
Lee, Sung-Ho; Kim, Minsung
2017-12-01
This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence is well suited to photovoltaic power applications. However, it exhibits non-minimum-phase behaviour because its transfer function from control duty cycle to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from the time-varying grid voltage disturbance. Thus, conventional control schemes result in inaccurate output tracking. To overcome these problems, an ILC is first developed and applied to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take the nonlinear averaged model into account and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noises, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC-module prototype are carried out to demonstrate its practical feasibility.
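For readers unfamiliar with iterative learning control, the sketch below shows the core idea on a toy first-order plant rather than the paper's flyback inverter model: the control sequence for a repetitive task is refined trial by trial using the previous trial's tracking error. The plant constants, learning gain, and reference are invented for illustration, and only a plain P-type update is shown; the paper's controller additionally uses predictive and current learning terms that are not reproduced here.

```python
import numpy as np

# Minimal P-type iterative learning control on a toy discrete-time plant
# y[t+1] = a*y[t] + b*u[t].  Illustrative only; the paper's plant is an
# interleaved flyback inverter with an RHP zero, which is not modeled here.
a, b = 0.9, 0.5
T = 100                              # samples per trial
t = np.arange(T)
r = np.sin(2 * np.pi * t / T)        # reference trajectory (one grid period)

u = np.zeros(T)                      # control input, refined across trials
gamma = 0.8                          # learning gain (assumed; needs |1 - gamma*b| < 1)

for k in range(30):                  # learning iterations
    y = np.zeros(T)
    for i in range(T - 1):           # run one trial from zero initial state
        y[i + 1] = a * y[i] + b * u[i]
    e = r - y                        # tracking error of this trial
    u[:-1] += gamma * e[1:]          # P-type update: u_{k+1}(t) = u_k(t) + g*e_k(t+1)
    if k == 0 or (k + 1) % 10 == 0:
        print(f"trial {k + 1:2d}: max |error| = {np.abs(e).max():.4f}")
```

With the chosen gain the contraction factor is |1 - gamma*b| = 0.6, so the printed tracking error shrinks geometrically from trial to trial.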
On quantum Rényi entropies: A new generalization and some properties
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco
2013-12-01
The Rényi entropies constitute a family of information measures that generalizes the well-known Shannon entropy, inheriting many of its properties. They appear in the form of unconditional and conditional entropies, relative entropies, or mutual information, and have found many applications in information theory and beyond. Various generalizations of Rényi entropies to the quantum setting have been proposed, most prominently Petz's quasi-entropies and Renner's conditional min-, max-, and collision entropy. However, these quantum extensions are incompatible and thus unsatisfactory. We propose a new quantum generalization of the family of Rényi entropies that contains the von Neumann entropy, min-entropy, collision entropy, and the max-entropy as special cases, thus encompassing most quantum entropies in use today. We show several natural properties for this definition, including data-processing inequalities, a duality relation, and an entropic uncertainty relation.
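As a numerical illustration of how a single one-parameter family interpolates the named special cases: for the entropy of a single state (as opposed to relative entropies), the quantum generalizations agree and reduce to S_α(ρ) = log Tr[ρ^α] / (1 − α). The sketch below evaluates this on an arbitrary example qubit state; the state and the α values are chosen purely for illustration.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def renyi_entropy(rho, alpha):
    """Quantum Renyi entropy S_alpha(rho) = log(Tr[rho^alpha]) / (1 - alpha)."""
    if np.isclose(alpha, 1.0):                # alpha -> 1 limit: von Neumann entropy
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log(evals)))
    tr = np.trace(fractional_matrix_power(rho, alpha)).real
    return float(np.log(tr) / (1.0 - alpha))

# An example qubit density matrix (trace 1, positive definite)
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])
for alpha, name in [(0.5, "max-entropy"), (1.0, "von Neumann"),
                    (2.0, "collision"), (100.0, "~min-entropy")]:
    print(f"alpha={alpha:>5}: S = {renyi_entropy(rho, alpha):.4f}  ({name})")
```

Large α approaches the min-entropy −log λ_max, while α → 1 recovers the von Neumann entropy, mirroring the special cases listed in the abstract.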
Upper entropy axioms and lower entropy axioms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jin-Li, E-mail: phd5816@163.com; Suo, Qi
2015-04-15
The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by the axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of the Shannon–Khinchin axioms and the Tsallis axioms, while being stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.
EEG entropy measures in anesthesia
Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli
2015-01-01
Highlights: (1) Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression. (2) Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states. (3) Approximate entropy and sample entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthesia drugs' effect is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP) in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State Entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets, from sevoflurane-induced and isoflurane-induced anesthesia respectively, were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. Multifractal detrended fluctuation analysis (MDFA), a non-entropy measure, was compared as well. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. The three PE measures outperformed the other entropy indices, with less baseline variability and higher coefficient of determination (R²) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, these entropy measures showed an advantage in computation efficiency compared with MDFA. Conclusion: Each entropy index has its advantages and disadvantages in estimating DoA. Overall, it is suggested that the RPE index is a superior measure. Investigating the advantages and disadvantages of these entropy indices could help improve current clinical indices for monitoring DoA. PMID:25741277
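Of the twelve indices, the permutation entropy family is compact enough to sketch. Below is a minimal Shannon permutation entropy, normalized by log(order!) so that white noise approaches 1; replacing the Shannon sum with a Rényi or Tsallis sum yields RPE and TPE. The embedding order and delay are common defaults, not necessarily those used in the study.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized Shannon permutation entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        pattern = tuple(np.argsort(window))      # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=2000)))        # noise -> close to 1
print(permutation_entropy(np.sin(np.arange(2000) / 10))) # regular -> much lower
```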
On Use of Multi-Chambered Fission Detectors for In-Core, Neutron Spectroscopy
NASA Astrophysics Data System (ADS)
Roberts, Jeremy A.
2018-01-01
Presented is a short, computational study on the potential use of multichambered fission detectors for in-core, neutron spectroscopy. Motivated by the development of very small fission chambers at CEA in France and at Kansas State University in the U.S., it was assumed in this preliminary analysis that devices can be made small enough to avoid flux perturbations and that uncertainties related to measurements can be ignored. It was hypothesized that a sufficient number of chambers with unique reactants can act as a real-time, foilactivation experiment. An unfolding scheme based on maximizing (Shannon) entropy was used to produce a flux spectrum from detector signals that requires no prior information. To test the method, integral, detector responses were generated for singleisotope detectors of various Th, U, Np, Pu, Am, and Cs isotopes using a simplified, pressurized-water reactor spectrum and fluxweighted, microscopic, fission cross sections, in the WIMS-69 multigroup format. An unfolded spectrum was found from subsets of these responses that had a maximum entropy while reproducing the responses considered and summing to one (that is, they were normalized). Several nuclide subsets were studied, and, as expected, the results indicate inclusion of more nuclides leads to better spectra but with diminishing improvements, with the best-case spectrum having an average, relative, group-wise error of approximately 51%. Furthermore, spectra found from minimum-norm and Tihkonov-regularization inversion were of lower quality than the maximum entropy solutions. Finally, the addition of thermal-neutron filters (here, Cd and Gd) provided substantial improvement over unshielded responses alone. The results, as a whole, suggest that in-core, neutron spectroscopy is at least marginally feasible.
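The unfolding step can be sketched as a small constrained optimization: maximize the Shannon entropy of a normalized group flux subject to reproducing the integral detector responses. The cross sections, group structure, and "true" flux below are invented toy numbers, not the WIMS-69 data of the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy maximum-entropy spectrum unfolding: recover a normalized group flux phi
# (here 8 energy groups) from a few integral responses R_i = sum_g sigma[i,g]*phi[g].
rng = np.random.default_rng(1)
G, D = 8, 4                                   # groups, detectors
sigma = rng.uniform(0.1, 2.0, size=(D, G))    # hypothetical group cross sections
phi_true = rng.dirichlet(np.ones(G))          # hypothetical "true" flux
R = sigma @ phi_true                          # noise-free detector responses

neg_entropy = lambda phi: np.sum(phi * np.log(np.maximum(phi, 1e-12)))
cons = [{"type": "eq", "fun": lambda phi: sigma @ phi - R},   # reproduce responses
        {"type": "eq", "fun": lambda phi: phi.sum() - 1.0}]   # normalization
res = minimize(neg_entropy, np.full(G, 1.0 / G), method="SLSQP",
               bounds=[(0, 1)] * G, constraints=cons)
print("unfolded:", np.round(res.x, 3))
print("true:    ", np.round(phi_true, 3))
```

With fewer responses than groups the problem is underdetermined, and maximizing entropy selects the least-informative flux consistent with the measurements, which is exactly the "no prior information" property the abstract describes.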
Communication: Introducing prescribed biases in out-of-equilibrium Markov models
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-03-01
Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, their predictions often do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary-state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth-factor-based signaling. We also show how to find the closest detailed-balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
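The mathematical core of such minimum-KL updates is exponential tilting: the posterior closest to a prior in KL divergence, subject to a mean constraint, reweights the prior by e^{γs} with the Lagrange multiplier γ fixed by the constraint. The sketch below shows that step for a discrete distribution with made-up numbers; the paper applies the analogous tilting to trajectory weights of a Markov model, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

# Minimum-KL (maximum relative entropy) update of a prior q so the posterior p
# matches a measured mean <s>: p_i ~ q_i * exp(gamma * s_i).
q = np.array([0.4, 0.3, 0.2, 0.1])      # prior (e.g., stationary probabilities)
s = np.array([0.0, 1.0, 2.0, 3.0])      # observable value in each state
s_target = 1.5                           # constraint from experiment (assumed)

def tilted_mean_gap(gamma):
    w = q * np.exp(gamma * s)
    return (w * s).sum() / w.sum() - s_target

gamma = brentq(tilted_mean_gap, -50, 50)   # solve for the Lagrange multiplier
p = q * np.exp(gamma * s)
p /= p.sum()
print("posterior:", np.round(p, 4), " mean:", round(float((p * s).sum()), 4))
```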
Entropy, pumped-storage and energy system finance
NASA Astrophysics Data System (ADS)
Karakatsanis, Georgios
2015-04-01
Pumped-storage holds a key role in integrating renewable energy units with non-renewable fuel plants into large-scale energy systems of electricity output. An emerging issue is the development of financial engineering models with a physical basis that systematically fund energy system efficiency improvements across the system's operation. A fundamental physically based economic concept is the scarcity rent, which concerns the pricing of a natural resource's scarcity. Specifically, the scarcity rent comprises a fraction of a depleting resource's full price and accumulates to fund its more efficient future use. In an integrated energy system, scarcity rents derive from various resources and can be deposited into a pooled fund to finance the energy system's overall efficiency increase, allowing it to benefit from economies of scale. With pumped-storage incorporated into the system, water is upgraded to a hub resource in which the scarcity rents of all connected energy sources are denominated. However, as available water for electricity generation or storage is also limited, a scarcity rent is imposed upon it as well. It is suggested that scarcity rent generation is reducible to three main factors, incorporating uncertainty: (1) water's natural renewability, (2) the energy system's intermittent components, and (3) base-load prediction deviations from actual loads. For that purpose, the concept of entropy is used to measure the energy system's overall uncertainty, and hence pumped-storage intensity requirements and generated water scarcity rents. Keywords: pumped-storage, integration, energy systems, financial engineering, physical basis, Scarcity Rent, pooled fund, economies of scale, hub resource, uncertainty, entropy. Acknowledgement: This research was funded by the Greek General Secretariat for Research and Technology through the research project Combined REnewable Systems for Sustainable ENergy DevelOpment (CRESSENDO; grant number 5145)
Arcus: Exploring the formation and evolution of clusters, galaxies, and stars
NASA Astrophysics Data System (ADS)
Smith, Randall K.
2017-08-01
Arcus, a proposed soft X-ray grating spectrometer Explorer, leverages recent advances in critical-angle transmission (CAT) gratings and silicon pore optics (SPOs), using CCDs with strong Suzaku heritage and electronics based on the Swift mission; both the spacecraft and mission operations reuse highly successful designs. To be launched in 2023, Arcus will be the only observatory capable of studying, in detail, the hot galactic and intergalactic gas that is the dominant baryonic component of the present-day Universe and ultimate reservoir of entropy, metals and the output from cosmic feedback. Its superior soft (12-50Å) X-ray sensitivity will complement forthcoming calorimeters, which will have comparably high spectral resolution above 2 keV.
Ideal photon number amplifier and duplicator
NASA Technical Reports Server (NTRS)
Dariano, G. M.
1992-01-01
The photon number-amplification and number-duplication mechanisms are analyzed in the ideal case. The search for unitary evolutions leads one to consider also a number-deamplification mechanism, the symmetry between amplification and deamplification being broken by the integer-valued nature of the number operator. Both transformations, amplification and duplication, need an auxiliary field which, in the case of amplification, turns out to be amplified in the inverse way. Input-output energy conservation is accounted for by using a classical pump or through frequency conversion of the fields. Ignoring one of the fields is equivalent to considering the amplifier as an open system involving entropy production. The Hamiltonians of the ideal devices are given and compared with those of realistic systems.
SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Zhang, G
2014-06-15
Purpose: Quantitative imaging is a fast-evolving discipline in which a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected. Methods: Three levels of Gaussian noise were added to eight lung cancer patients' PET images acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM texture (2ndO; from grey-level co-occurrence matrices) and 11 RLM texture (2ndO; from run-length matrices) features were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable, while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
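To make the second-order features concrete, the sketch below computes the GLCM entropy of a 2-D patch for a single pixel offset; the study's version uses 256 grey levels, 3-D volumes, and 13 directions. The patch data are synthetic, chosen only to contrast a noise-like region with a homogeneous one.

```python
import numpy as np

def glcm_entropy(img, levels=8, offset=(0, 1)):
    """GLCM (2nd-order) entropy of an integer-valued image for one offset."""
    glcm = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1   # count grey-level pairs
    p = glcm / glcm.sum()                               # co-occurrence probabilities
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(2)
noisy = rng.integers(0, 8, size=(64, 64))       # noise-like patch
flat = np.full((64, 64), 3)                     # homogeneous patch
print(glcm_entropy(noisy), glcm_entropy(flat))  # high entropy vs. zero entropy
```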
Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes
NASA Technical Reports Server (NTRS)
Heiser, P. C.; Ricks, L. O.
1968-01-01
Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.
40 CFR 1065.510 - Engine mapping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the warm-up until the engine coolant, block, or head absolute temperature is within ± 2% of its mean... demand to minimum, use the dynamometer or other loading device to target a torque of zero on the engine's...-speed governor, operate the engine at warm idle speed and zero torque on the engine's primary output...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-10
... revise the minimum Emergency Diesel Generator (EDG) output voltage acceptance criterion in Surveillance... ensures the timely transfer of plant safety system loads to the Emergency Diesel Generators in the event a... from the emergency diesel generators in a timely manner. This change is needed to bring Fermi 2 into...
Cost Efficiency in Public Higher Education.
ERIC Educational Resources Information Center
Robst, John
This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…
Refined two-index entropy and multiscale analysis for complex system
NASA Astrophysics Data System (ADS)
Bian, Songhan; Shang, Pengjian
2016-10-01
As a fundamental concept in describing complex systems, entropy measures have been proposed in various forms, such as Boltzmann-Gibbs (BG) entropy, one-index entropy, two-index entropy, sample entropy, permutation entropy, etc. This paper proposes a new two-index entropy S_{q,δ}, and we find that the new two-index entropy is applicable to measuring the complexity of a wide range of systems in terms of randomness and fluctuation range. For more complex systems, the value of the two-index entropy is smaller and the correlation between the parameter δ and the entropy S_{q,δ} is weaker. By combining the refined two-index entropy S_{q,δ} with the scaling exponent h(δ), this paper analyzes the complexities of simulated series and effectively classifies several financial markets in various regions of the world.
Flight Demonstration of a Shock Location Sensor Using Constant Voltage Hot-Film Anemometry
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Sarma, Garimella R.; Mangalam, Siva M.
1997-01-01
Flight tests have demonstrated the effectiveness of an array of hot-film sensors using constant voltage anemometry to determine shock position on a wing or aircraft surface at transonic speeds. Flights were conducted at the NASA Dryden Flight Research Center using the F-15B aircraft and Flight Test Fixture (FTF). A modified NACA 0021 airfoil was attached to the side of the FTF, and its upper surface was instrumented to correlate shock position with pressure and hot-film sensors. In the vicinity of the shock-induced pressure rise, test results consistently showed the presence of a minimum voltage in the hot-film anemometer outputs. Comparing these results with previous investigations indicates that hot-film anemometry can identify the location of shock-induced boundary layer separation. The flow separation occurred slightly forward of the shock-induced pressure rise for a laminar boundary layer and slightly aft of the start of the pressure rise when the boundary layer was tripped near the airfoil leading edge. Both minimum mean output and phase reversal analyses were used to identify the shock location.
Entropy criteria applied to pattern selection in systems with free boundaries
NASA Astrophysics Data System (ADS)
Kirkaldy, J. S.
1985-10-01
The steady state differential or integral equations which describe patterned dissipative structures, typically to be identified with first order phase transformation morphologies like isothermal pearlites, are invariably degenerate in one or more order parameters (the lamellar spacing in the pearlite case). It is often observed that a different pattern is attained at the steady state for each initial condition (the hysteresis or metastable case). Alternatively, boundary perturbations and internal fluctuations during transition up to, or at the steady state, destroy the path coherence. In this case a statistical ensemble of imperfect patterns often emerges which represents a fluctuating but recognizably patterned and unique average steady state. It is cases like cellular, lamellar pearlite, involving an assembly of individual cell patterns which are regularly perturbed by local fluctuation and growth processes, which concern us here. Such weakly fluctuating nonlinear steady state ensembles can be arranged in a thought experiment so as to evolve as subsystems linking two very large mass-energy reservoirs in isolation. Operating on this discontinuous thermodynamic ideal, Onsager’s principle of maximum path probability for isolated systems, which we interpret as a minimal time correlation function connecting subsystem and baths, identifies the stable steady state at a parametric minimum or maximum (or both) in the dissipation rate. This nonlinear principle is independent of the Principle of Minimum Dissipation which is applicable in the linear regime of irreversible thermodynamics. The statistical argument is equivalent to the weak requirement that the isolated system entropy as a function of time be differentiable to the second order despite the macroscopic pattern fluctuations which occur in the subsystem. This differentiability condition is taken for granted in classical stability theory based on the 2nd Law. The optimal principle as applied to isothermal and forced velocity pearlites (in this case maximal) possesses a Le Chatelier (perturbation) Principle which can be formulated exactly via Langer’s conjecture that “each lamella must grow in a direction which is perpendicular to the solidification front”. This is the first example of such an equivalence to be experimentally and theoretically recognized in nonlinear irreversible thermodynamics. A further application to binary solidification cells is reviewed. In this case the optimum in the dissipation is a minimum and the closure between theory and experiment is excellent. Other applications in thermal-hydraulics, biology, and solid state physics are briefly described.
Aging and cardiovascular complexity: effect of the length of RR tachograms
Nagaraj, Nithin
2016-01-01
As we age, our hearts undergo changes that result in a reduction in the complexity of physiological interactions between different control mechanisms. This results in a potential risk of cardiovascular diseases, which are the number one cause of death globally. Since cardiac signals are nonstationary and nonlinear in nature, complexity measures are better suited to handle such data. In this study, three complexity measures are used, namely Lempel–Ziv complexity (LZ), Sample Entropy (SampEn) and Effort-To-Compress (ETC). We determined the minimum length of RR tachogram required for characterizing the complexity of healthy young and healthy old hearts. All three measures indicated significantly lower complexity values for older subjects than for younger ones. However, the minimum length of heart-beat interval data needed differs for the three measures, with LZ and ETC needing as few as 10 samples, whereas SampEn requires at least 80 samples. Our study indicates that complexity measures such as LZ and ETC are good candidates for the analysis of cardiovascular dynamics since they are able to work with very short RR tachograms. PMID:27957395
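A rough sketch of the compression-based measure: below, a synthetic RR tachogram is binarized about its median and its Lempel–Ziv complexity is estimated by an LZ78-style phrase count. This is a common variant chosen for clarity; the exact LZ76 parsing often used in physiology papers differs in detail, and the RR values here are simulated, not patient data.

```python
import numpy as np

def lz_complexity(seq):
    """LZ78-style phrase count: greedily split the sequence into the
    shortest phrases not seen before (a common LZ-complexity variant)."""
    phrases, phrase = set(), ""
    for ch in map(str, seq):
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

rng = np.random.default_rng(3)
rr = rng.normal(0.8, 0.05, size=100)            # synthetic RR intervals (seconds)
binary = (rr > np.median(rr)).astype(int)       # binarize about the median
print(lz_complexity(binary))                    # higher count -> more complex rhythm
```

Because the phrase count is defined for arbitrarily short strings, measures of this type remain usable on the very short (10-sample) tachograms highlighted in the abstract.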
Comment on "Inference with minimal Gibbs free energy in information field theory".
Iatsenko, D; Stefanovska, A; McClintock, P V E
2012-03-01
Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.
Dang, C; Xu, L
2001-03-01
In this paper, a globally convergent Lagrange and barrier-function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has the desirable property that the nonnegativity constraints are always satisfied automatically if the step length is between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
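The role of an entropy-type barrier can be seen in a stripped-down convex analogue: minimizing c·x + μ Σ_i x_i ln x_i over the probability simplex has the closed-form softmax solution x_i ∝ exp(−c_i/μ), so the nonnegativity constraints hold automatically and the solution sharpens toward a vertex as the barrier parameter μ descends. The costs below are arbitrary; the actual algorithm embeds this barrier in a Lagrange scheme for the TSP equality constraints, which is not reproduced here.

```python
import numpy as np

# Entropy-barrier minimizer of min c.x + mu * sum(x ln x) subject to sum(x) = 1:
# the softmax x_i ~ exp(-c_i / mu), evaluated for descending barrier parameters.
c = np.array([3.0, 1.0, 2.0])                  # toy assignment costs
for mu in [5.0, 1.0, 0.2, 0.05]:               # descending barrier parameter
    z = np.exp(-c / mu)
    x = z / z.sum()
    print(f"mu={mu:>4}: x = {np.round(x, 4)}")
# As mu -> 0, x concentrates on the minimum-cost choice (index 1),
# while every iterate stays strictly inside the nonnegative simplex.
```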
Wilkes, Donald F.; Purvis, James W.; Miller, A. Keith
1997-01-01
An infinitely variable transmission is capable of operating between a maximum speed in one direction and a minimum speed in an opposite direction, including a zero output angular velocity, while being supplied with energy at a constant angular velocity. Input energy is divided between a first power path carrying an orbital set of elements and a second path that includes a variable speed adjustment mechanism. The second power path also connects with the orbital set of elements in such a way as to vary the rate of angular rotation thereof. The combined effects of power from the first and second power paths are combined and delivered to an output element by the orbital element set. The transmission can be designed to operate over a preselected ratio of forward to reverse output speeds.
HYSEP: A Computer Program for Streamflow Hydrograph Separation and Analysis
Sloto, Ronald A.; Crouse, Michele Y.
1996-01-01
HYSEP is a computer program that can be used to separate a streamflow hydrograph into base-flow and surface-runoff components. The base-flow component has traditionally been associated with ground-water discharge, and the surface-runoff component with precipitation that enters the stream as overland runoff. HYSEP includes three methods of hydrograph separation that are referred to in the literature as the fixed-interval, sliding-interval, and local-minimum methods. The program also describes the frequency and duration of measured streamflow and computed base flow and surface runoff. Daily mean stream discharge is used as input to the program in either American Standard Code for Information Interchange (ASCII) or binary format. Output from the program includes tables, graphs, and data files. Graphical output may be plotted on the computer screen or sent to a printer, plotter, or metafile.
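A minimal sketch of the local-minimum method, one of the three separation techniques named above: a day is a base-flow turning point if it has the lowest discharge within a centered window, and base flow between turning points is linearly interpolated (capped at streamflow). The window half-width and the daily discharges below are assumed for illustration; HYSEP itself derives the interval width 2N* from drainage area.

```python
import numpy as np

def local_minimum_baseflow(q, half_window=3):
    """Base flow by the local-minimum idea: interpolate between days that
    are the minimum discharge within a centered window."""
    q = np.asarray(q, dtype=float)
    n = len(q)
    idx = [i for i in range(n)
           if q[i] == q[max(0, i - half_window): i + half_window + 1].min()]
    base = np.interp(np.arange(n), idx, q[idx])   # connect the local minima
    return np.minimum(base, q)                    # base flow cannot exceed streamflow

q = np.array([5, 4, 8, 20, 14, 9, 6, 5, 4.5, 4, 7, 5, 4, 3.8, 4])  # daily discharge
base = local_minimum_baseflow(q)
runoff = q - base                                 # surface-runoff component
print(np.round(base, 2))
```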
Microcanonical entropy for classical systems
NASA Astrophysics Data System (ADS)
Franzosi, Roberto
2018-03-01
The definition of entropy in the microcanonical ensemble is revisited. We propose a novel definition of the microcanonical entropy that resolves the debate over the correct definition of the microcanonical entropy. In particular, we show that this entropy definition fixes the problem inherent in the exact extensivity of the caloric equation. Furthermore, this entropy reproduces results in agreement with those predicted by the standard Boltzmann entropy when applied to macroscopic systems. On the contrary, the predictions obtained with the standard Boltzmann entropy and with the entropy we propose differ for small system sizes. Thus, we conclude that the Boltzmann entropy provides a correct description for macroscopic systems, whereas extremely small systems are better described by the entropy proposed here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, G.D.
The purpose of the dissertation is to examine the impact of rate-of-return regulation on the cost of transporting natural gas in interstate commerce. Of particular interest is the effect of the regulation on the input choice of a firm: does regulation induce a regulated firm to produce its selected level of output at greater than minimum cost? The theoretical model is based on the work of Rolf Faere and James Logan, who investigate the duality relationship between the cost and production functions of a rate-of-return regulated firm. Faere and Logan derive the cost function for a regulated firm as the minimum cost of producing the firm's selected level of output, subject to the regulatory constraint. The regulated cost function is used to recover the unregulated cost function. A firm's unregulated cost function is the minimum cost of producing its selected level of output. Characteristics of the production technology are obtained from duality between the production and unregulated cost functions. Using data on 20 pipeline companies from 1977 to 1987, the author estimates a random-effects model that consists of a regulated cost function and its associated input share equations. The model is estimated as a set of seemingly unrelated regressions. The empirical results are used to test the Faere and Logan theory and the traditional Averch-Johnson hypothesis of overcapitalization. Parameter estimates are used to recover the unregulated cost function and to calculate the amount by which transportation costs are increased by regulation of the industry. Empirical results show that a firm's transportation cost decreases as the allowed rate of return increases and the regulatory constraint becomes less tight. Elimination of the regulatory constraint would lead to a reduction in costs of 5.278% on average. There is evidence that firms overcapitalize on pipeline capital. The evidence on whether firms overcapitalize on compressor station capital is inconclusive.
WIDOWAC (Wing Design Optimization With Aeroelastic Constraints): Program manual
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Starnes, J. H., Jr.
1974-01-01
User and programmer documentation for the WIDOWAC programs is given. WIDOWAC may be used for the design of minimum-mass wing structures subjected to flutter, strength, and minimum-gage constraints. The wing structure is modeled by finite elements, flutter conditions may be both subsonic and supersonic, and mathematical programming methods are used for the optimization procedure. The user documentation gives general directions on how the programs may be used and describes their limitations; in addition, program input and output are described, and example problems are presented. A discussion of computational algorithms and flow charts of the WIDOWAC programs and major subroutines is also given.
Fermionic entanglement in superconducting systems
NASA Astrophysics Data System (ADS)
Di Tullio, M.; Gigena, N.; Rossignoli, R.
2018-06-01
We examine distinct measures of fermionic entanglement in the exact ground state of a finite superconducting system. It is first shown that global measures such as the one-body entanglement entropy, which represents the minimum relative entropy between the exact ground state and the set of fermionic Gaussian states, exhibit a close correlation with the BCS gap, saturating in the strong superconducting regime. The same behavior is displayed by the bipartite entanglement between the set of all single-particle states k of positive quasimomenta and their time-reversed partners k̄. In contrast, the entanglement associated with the reduced density matrix of four single-particle modes k, k̄, k′, k̄′, which can be measured through a properly defined fermionic concurrence, exhibits a different behavior, showing a peak in the vicinity of the superconducting transition for states k, k′ close to the Fermi level and becoming small in the strong coupling regime. In the latter, such a reduced state exhibits, instead, a finite mutual information and quantum discord. While the first measures can be correctly estimated with the BCS approximation, the previous four-level concurrence lies strictly beyond the latter, requiring at least a particle-number-projected BCS treatment for its description. Formal properties of all the previous entanglement measures are discussed as well.
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Analytical design of intelligent machines
NASA Technical Reports Server (NTRS)
Saridis, George N.; Valavanis, Kimon P.
1987-01-01
The problem of designing 'intelligent machines' to operate in uncertain environments with minimum supervision or interaction with a human operator is examined. The structure of an 'intelligent machine' is defined to be that of a Hierarchically Intelligent Control System, composed of three levels ordered according to the principle of 'increasing precision with decreasing intelligence': the organizational level, performing general information processing tasks in association with a long-term memory; the coordination level, dealing with specific information processing tasks with a short-term memory; and the control level, which executes various tasks through hardware using feedback control methods. The behavior of such a machine may be managed by controls with special considerations, and its 'intelligence' is directly related to the derivation of a compatible measure that associates the intelligence of the higher levels with the concept of entropy. Entropy is a sufficient analytic measure that unifies the treatment of all the levels of an 'intelligent machine' as the mathematical problem of finding the right sequence of internal decisions and controls for a system structured in the order of intelligence and inverse order of precision, such that the total entropy is minimized. A case study on the automatic maintenance of a nuclear plant illustrates the proposed approach.
Einstein-Podolsky-Rosen paradox implies a minimum achievable temperature
NASA Astrophysics Data System (ADS)
Rogers, David M.
2017-01-01
This work examines the thermodynamic consequences of the repeated partial projection model for coupling a quantum system to an arbitrary series of environments under feedback control. This paper provides observational definitions of heat and work that can be realized in current laboratory setups. In contrast to other definitions, it uses only properties of the environment and the measurement outcomes, avoiding references to the "measurement" of the central system's state in any basis. These definitions are consistent with the usual laws of thermodynamics at all temperatures, while never requiring complete projective measurement of the entire system. It is shown that the back action of measurement must be counted as work rather than heat to satisfy the second law. Comparisons are made to quantum jump (unravelling) and transition-probability based definitions, many of which appear as particular limits of the present model. These limits show that our total entropy production is a lower bound on traditional definitions of heat that trace out the measurement device. Examining the master equation approximation to the process at finite measurement rates, we show that most interactions with the environment make the system unable to reach absolute zero. We give an explicit formula for the minimum temperature achievable in repeatedly measured quantum systems. The phenomenon of minimum temperature offers an explanation of recent experiments aimed at testing fluctuation theorems in the quantum realm and places a fundamental purity limit on quantum computers.
Tunable narrow band difference frequency THz wave generation in DAST via dual seed PPLN OPG.
Dolasinski, Brian; Powers, Peter E; Haus, Joseph W; Cooney, Adam
2015-02-09
We report a widely tunable narrowband terahertz (THz) source based on difference frequency generation (DFG). The narrowband THz source uses the output of dual-seeded periodically poled lithium niobate (PPLN) optical parametric generators (OPGs) combined in the nonlinear crystal 4-dimethylamino-N-methyl-4-stilbazolium tosylate (DAST). We demonstrate a seamlessly tunable THz output from 1.5 THz to 27 THz with a minimum bandwidth of 3.1 GHz. The effects of dispersive phase matching, two-photon absorption, and polarization were examined and compared to a power emission model based on the currently accepted parameters of DAST.
Soft context clustering for F0 modeling in HMM-based speech synthesis
NASA Astrophysics Data System (ADS)
Khorram, Soheil; Sameti, Hossein; King, Simon
2015-12-01
This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
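A minimal sketch of the soft-decision idea for a scalar input: each internal node routes the input to both children with sigmoid membership degrees, each leaf holds a constant prediction, and the output is the membership-weighted sum over leaves, so every context contributes to several overlapping leaves. The depth, gate parameters, and leaf values below are placeholders, not trained F0-model quantities, and the maximum-entropy leaf distributions of the paper are reduced here to constants.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Depth-2 soft decision tree for regression on a scalar input x.
# (slope, bias) pairs parameterize the sigmoid gate at each internal node.
w = {"root": (1.5, -0.2), "left": (2.0, 0.5), "right": (-1.0, 0.3)}
leaf = np.array([0.0, 1.0, 2.0, 3.0])    # constant prediction at each leaf

def predict(x):
    g0 = sigmoid(w["root"][0] * x + w["root"][1])      # P(route right at root)
    gl = sigmoid(w["left"][0] * x + w["left"][1])      # gate of left child
    gr = sigmoid(w["right"][0] * x + w["right"][1])    # gate of right child
    m = np.array([(1 - g0) * (1 - gl), (1 - g0) * gl,  # leaf memberships,
                  g0 * (1 - gr), g0 * gr])             # which sum to 1 by design
    return float(m @ leaf)                             # membership-weighted output

for x in [-2.0, 0.0, 2.0]:
    print(x, round(predict(x), 3))
```

Because the memberships vary smoothly with the input, the predictor is a smooth function approximator, which is the generalization advantage over hard leaf assignment described in the abstract.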
NASA Astrophysics Data System (ADS)
Nalewajski, Roman F.
The flow of information in molecular communication networks in the (condensed) atomic orbital (AO) resolution is investigated, and the plane-wave (momentum-space) interpretation of the average Fisher information in the molecular information system is given. It is argued using the quantum-mechanical superposition principle that, in LCAO MO theory, the squares of the corresponding elements of the Charge and Bond-Order (CBO) matrix determine the conditional probabilities between AOs, which generate the molecular communication system of the Orbital Communication Theory (OCT) of the chemical bond. The conditional-entropy ("noise," information-theoretic "covalency") and mutual-information (information flow, information-theoretic "ionicity") descriptors of these molecular channels are related to Wiberg's covalency indices of chemical bonds. An illustrative application of OCT to the three-orbital model of the chemical bond X-Y, which is capable of describing the forward- and back-donations as well as the atom promotion accompanying bond formation, is reported. It is demonstrated that the entropy/information characteristics of these separate bond effects can be extracted by an appropriate reduction of the output of the molecular information channel, carried out by combining several exits into a single (condensed) one. The molecular channels in both the AO and hybrid orbital representations are examined for both molecular and representative promolecular input probabilities.
Strength training improves the tri-digit finger-pinch force control of older adults.
Keogh, Justin W; Morrison, Steve; Barrett, Rod
2007-08-01
Objective: To investigate the effect of unilateral upper-limb strength training on the finger-pinch force control of older men. Design: Pretest and post-test 6-week intervention study. Setting: Exercise science research laboratory. Participants: Eleven neurologically fit older men (age range, 70-80 y). Intervention: The strength training group (n=7) trained twice a week for 6 weeks, performing dumbbell bicep curls, wrist flexions, and wrist extensions, while the control group subjects (n=4) maintained their normal activities. Main outcome measures: Changes in force variability, targeting error, peak power frequency, proportional power, sample entropy, digit force sharing, and coupling relations were assessed during a series of finger-pinch tasks. These tasks involved maintaining a constant or sinusoidal force output at 20% and 40% of each subject's maximum voluntary contraction. All participants performed the finger-pinch tasks with both the preferred and nonpreferred limbs. Results: Analysis of covariance for between-group change scores indicated that the strength training group (trained limb) experienced significantly greater reductions in finger-pinch force variability and targeting error, as well as significantly greater increases in finger-pinch force, sample entropy, bicep curl, and wrist flexion strength, than did the control group. Conclusions: A nonspecific upper-limb strength-training program may improve the finger-pinch force control of older men.
NASA Astrophysics Data System (ADS)
Li, Guanchen; von Spakovsky, Michael R.; Shen, Fengyu; Lu, Kathy
2018-01-01
Oxygen reduction in a solid oxide fuel cell cathode involves a nonequilibrium process of coupled mass and heat diffusion and electrochemical and chemical reactions. These phenomena occur at multiple temporal and spatial scales, making the modeling, especially in the transient regime, very difficult. Nonetheless, multiscale models are needed to improve the understanding of oxygen reduction and guide cathode design. Of particular importance for long-term operation are microstructure degradation and chromium oxide poisoning, both of which degrade cathode performance. Existing methods are phenomenological or empirical in nature, and their application is limited to the continuum realm, with quantum effects not captured. In contrast, steepest-entropy-ascent quantum thermodynamics can be used to model nonequilibrium processes (even those far from equilibrium) at all scales. The nonequilibrium relaxation is characterized by entropy generation, which unifies the coupled phenomena into one framework for modeling transient and steady behavior. The results reveal the effects on performance of the different timescales of the varied phenomena involved and of their coupling. Results are included here for the effects of chromium oxide concentration on cathode output, as is a parametric study of the effects of interconnect-three-phase-boundary length, oxygen mean free path, and adsorption site effectiveness. A qualitative comparison with experimental results is made.
Pan, Guangbo; Xu, Youpeng; Yu, Zhihui; Song, Song; Zhang, Yuan
2015-05-01
Maintaining the health of a river ecosystem is an essential ecological and environmental guarantee for regional sustainable development and one of the basic objectives in water resource management. With rapid urbanization, river health is deteriorating, especially in urban areas. River health evaluation is a complex process that involves various natural and social components. Eight eco-hydrological indicators were selected to establish an evaluation system, and the variation of river health status under urbanization was explored based on an entropy-weight and matter-element model. The comprehensive correlative degrees of urban river health for Huzhou City in 2001, 2006 and 2010 were then calculated. The results indicated that the river health status of the study area was trending toward a pathological state, and that limiting factors (such as Shannon's diversity index and agroforestry output growth rate) played an important role in river health. The variation of the maximum correlative degree could be classified into stationary status, deterioration status, deterioration-to-improvement status, and improvement-to-deterioration status. River health deteriorated severely under urbanization. Copyright © 2015 Elsevier Inc. All rights reserved.
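The entropy-weight step used in such evaluations is short enough to sketch: indicators whose values are more dispersed across samples carry more information (lower entropy) and therefore receive larger weights. The 4×3 indicator matrix below is invented for illustration; the study itself uses eight eco-hydrological indicators across three survey years.

```python
import numpy as np

# Entropy-weight method: weight each indicator (column) by how informative
# its dispersion across samples (rows) is.
X = np.array([[0.2, 30.0, 5.0],
              [0.4, 25.0, 9.0],
              [0.3, 28.0, 7.0],
              [0.1, 35.0, 3.0]])                # 4 samples x 3 indicators (toy data)
P = X / X.sum(axis=0)                           # column-normalize to proportions
k = 1.0 / np.log(X.shape[0])
E = -k * np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=0)  # entropy per indicator
w = (1 - E) / (1 - E).sum()                     # entropy weights (sum to 1)
print(np.round(w, 4))
```

The weights then multiply the correlative degrees of the matter-element model; that combination step is specific to the study and is not reproduced here.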
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game-theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with the maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game-theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
The influence of lower leg configurations on muscle force variability.
Ofori, Edward; Shim, Jaeho; Sosnoff, Jacob J
2018-04-11
The maintenance of steady contractions is required in many daily tasks. However, there is little understanding of how various lower-limb configurations influence the ability to maintain force. The purpose of the current investigation was to examine the influence of joint angle on various lower-limb constant-force contractions. Nineteen adults performed knee extension, knee flexion, and ankle plantarflexion isometric force contractions to 11 target forces, ranging from 2 to 95% of maximal voluntary contraction (MVC), at 2 angles. Force variability was quantified with the mean force, standard deviation, and coefficient of variation of force output. Non-linearities in force output were quantified with approximate entropy. Curve-fitting analyses were performed on each individual's data across contractions to further examine whether joint angle interacts with global functions of lower-limb force variability. Joint angle had significant effects on the model parameters used to describe the force-variability function for each muscle contraction (p < 0.05). Regularities in force output were better explained by force level in the smaller-angle conditions than in the larger-angle conditions (p < 0.05). The findings support the notion that limb configuration influences the magnitude of, and regularities in, force production. Biomechanical factors, such as joint angle, along with neurophysiological factors should be considered together in the discussion of the dynamics of constant force production. Copyright © 2018 Elsevier Ltd. All rights reserved.
Complexity measures of the central respiratory networks during wakefulness and sleep
NASA Astrophysics Data System (ADS)
Dragomir, Andrei; Akay, Yasemin; Curran, Aidan K.; Akay, Metin
2008-06-01
Since sleep is known to influence respiratory activity, we studied whether the sleep state would affect the complexity of the respiratory network output. Specifically, we tested the hypothesis that the complexity values of the diaphragm EMG (EMGdia) activity would be lower during REM than during NREM. Furthermore, since REM is primarily generated by a homogeneous population of neurons in the medulla, the possibility that REM-related respiratory output would be less complex than that of the awake state was also considered. Additionally, in order to examine the influence of neuron vulnerabilities within the rostral ventral medulla (RVM) on the complexity of the respiratory network output, we inhibited respiratory neurons in the RVM by microdialysis of the GABAA receptor agonist muscimol. Diaphragm EMG, nuchal EMG, EEG, EOG as well as other physiological signals (tracheal pressure, blood pressure and respiratory volume) were recorded from five unanesthetized, chronically instrumented, intact piglets (3-10 days old). Complexity of the EMGdia signal during wakefulness, NREM and REM was evaluated using the approximate entropy method (ApEn). ApEn values of the EMGdia during NREM and REM sleep were found to be significantly (p < 0.05 and p < 0.001, respectively) lower than those of the awake EMGdia after muscimol inhibition. In the absence of muscimol, only the difference between REM and wakefulness ApEn values was found to be significant.
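A compact version of the approximate entropy method used above, following Pincus's definition ApEn = Φ^m(r) − Φ^{m+1}(r) with Chebyshev distances and self-matches included. The embedding dimension and tolerance below are common defaults, not necessarily the settings used for the EMGdia analysis, and the test signals are synthetic.

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series, r = r_frac * std."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])          # embedded vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev
        C = (d <= r).mean(axis=1)        # fraction of templates within tolerance
        return np.mean(np.log(C))        # self-matches keep C > 0, so log is safe
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
print(apen(rng.normal(size=500)))        # irregular signal -> larger ApEn
print(apen(np.sin(np.arange(500) / 5)))  # regular signal  -> smaller ApEn
```

Lower ApEn for the regular sine illustrates why reduced values during REM are read as reduced complexity of the respiratory output.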
Quantile based Tsallis entropy in residual lifetime
NASA Astrophysics Data System (ADS)
Khammar, A. H.; Jahanshahi, S. M. A.
2018-02-01
Tsallis entropy is a one-parameter (order α) generalization of the Shannon entropy and, unlike the Shannon entropy, is nonadditive. The Shannon entropy (in its differential form) may be negative for some distributions, but the Tsallis entropy can always be made nonnegative by choosing an appropriate value of α. In this paper, we derive the quantile form of this nonadditive entropy function in the residual lifetime, namely the residual quantile Tsallis entropy (RQTE), and obtain bounds for it in terms of the Renyi residual quantile entropy. We also obtain a relationship between the RQTE and the proportional hazards model in the quantile setup. Based on the new measure, we propose a stochastic order and aging classes, and study their properties. Finally, we prove characterization theorems for some well-known lifetime distributions. It is shown that, unlike the residual Tsallis entropy, the RQTE uniquely determines the parent distribution.
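For reference, the standard order-α Tsallis entropy that the paper generalizes, its Shannon limit, and the nonadditivity property alluded to above; the RQTE itself, a quantile-based residual form of this functional, is not reproduced here.

```latex
% Tsallis entropy of order \alpha and its Shannon limit
S_\alpha(X) = \frac{1}{\alpha - 1}\left(1 - \sum_i p_i^{\alpha}\right),
\qquad
\lim_{\alpha \to 1} S_\alpha(X) = -\sum_i p_i \log p_i .

% Nonadditivity for independent X and Y (additivity is recovered as \alpha -> 1)
S_\alpha(X, Y) = S_\alpha(X) + S_\alpha(Y) + (1 - \alpha)\, S_\alpha(X)\, S_\alpha(Y).
```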
Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker-Jarvis, James
This paper is a study of entropy and its evolution in the time and frequency domains upon application of electromagnetic fields to materials. An understanding of entropy and its evolution in electromagnetic interactions bridges the boundary between electromagnetism and thermodynamics. The approach used here is a Liouville-based statistical-mechanical theory. I show that the microscopic entropy is reversible and that the macroscopic entropy satisfies an H theorem. The spectral entropy development can be very useful for studying the frequency response of materials. Using a projection-operator-based nonequilibrium entropy, different equations are derived for the entropy and entropy production and are applied to the polarization, magnetization, and macroscopic fields. I begin by proving an exact H theorem for the entropy, progress to the application of time-dependent entropy in electromagnetics, and then apply the theory to relevant electromagnetic applications. The paper concludes with a discussion of the relationship of the frequency-domain form of the entropy to the permittivity, permeability, and impedance.
Entropy flow and entropy production in the human body in basal conditions.
Aoki, I
1989-11-08
Entropy inflow and outflow for the naked human body in basal conditions in the respiration calorimeter due to infrared radiation, convection, evaporation of water and mass flow are calculated using the energetic data obtained by Hardy & Du Bois. The change of entropy content in the body is also estimated. The entropy production in the human body is obtained as the change of entropy content minus the net entropy flow into the body; the entropy production thus calculated is positive. The magnitude of entropy production per effective radiating surface area does not show any significant variation among subjects. The entropy production is nearly constant at calorimeter temperatures of 26–32 °C; the average in this temperature range is 0.172 J m⁻² s⁻¹ K⁻¹. Forced air currents around the human body and also clothing have almost no effect on the entropy production. Thus, the entropy production of the naked human body in basal conditions does not depend on these environmental factors.
USDA-ARS?s Scientific Manuscript database
Research and advanced breeding have demonstrated that energy cane possesses all of the attributes desirable in a biofuel feedstock: extremely good biomass yield in a small farming footprint; negative/neutral carbon footprint; maximum outputs from minimum inputs; well-established growing model for fa...
New design for a microwave discharge lamp.
Glangetas, A
1980-03-01
A simple discharge lamp with a microwave cavity fitting inside provides an intense source of VUV resonance radiation for photochemical work inside a vacuum chamber. Good coupling and minimum reabsorption result in better efficiency (≳1%) and more intense output power (up to 2.5×10¹⁶ quanta s⁻¹) than have been achieved previously.
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes' maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems is degenerate: H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we refer to as S_EXT for the extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes; for sample-space-reducing (SSR) processes, which are simple history-dependent processes associated with power-law statistics; and for multinomial mixture processes.
Information dynamics in carcinogenesis and tumor growth.
Gatenby, Robert A; Frieden, B Roy
2004-12-21
The storage and transmission of information is vital to the function of normal and transformed cells. We use methods from information theory and Monte Carlo theory to analyze the role of information in carcinogenesis. Our analysis demonstrates that, during somatic evolution of the malignant phenotype, the accumulation of genomic mutations degrades intracellular information. However, the degradation is constrained by the Darwinian somatic ecology in which mutant clones proliferate only when the mutation confers a selective growth advantage. In that environment, genes that normally decrease cellular proliferation, such as tumor suppressor or differentiation genes, suffer maximum information degradation. Conversely, those that increase proliferation, such as oncogenes, are conserved or exhibit only gain-of-function mutations. These constraints shield most cellular populations from catastrophic mutator-induced loss of the transmembrane entropy gradient and, therefore, cell death. The dynamics of constrained information degradation during carcinogenesis cause the tumor genome to asymptotically approach a minimum information state that is manifested clinically as dedifferentiation and unconstrained proliferation. Extreme physical information (EPI) theory demonstrates that altered information flow from cancer cells to their environment will manifest in vivo as power law tumor growth with an exponent of 1.62. This prediction is based only on the assumption that tumor cells are at an absolute information minimum and are capable of "free field" growth, that is, they are unconstrained by external biological parameters. The prediction agrees remarkably well with several studies demonstrating power law growth in small human breast cancers with an exponent of 1.72 ± 0.24. This successful derivation of an analytic expression for cancer growth from EPI alone supports the conceptual model that carcinogenesis is a process of constrained information degradation and that malignant cells are minimum information systems. EPI theory also predicts that the estimated age of a clinically observed tumor is subject to a root-mean-square error of about 30%. This is due to information loss and tissue disorganization and probably manifests as a randomly variable lag phase in the growth pattern that has been observed experimentally. This difference between tumor size and age may impose a fundamental limit on the efficacy of screening based on early detection of small tumors. Independent of the EPI analysis, Monte Carlo methods are applied to predict statistical tumor growth due to perturbed information flow from the environment into transformed cells. A "simplest" Monte Carlo model is suggested by the findings in the EPI approach that tumor growth arises out of a minimally complex mechanism. The outputs of large numbers of simulations show that (a) about 40% of the populations do not survive the first two generations due to mutations in critical gene segments; but (b) those that do survive will experience power law growth identical to the predicted rate obtained from the independent EPI approach. The agreement between these two very different approaches to the problem strongly supports the idea that tumor cells regress to a state of minimum information during carcinogenesis, and that information dynamics are integrally related to tumor development and growth.
Relationship between fluid bed aerosol generator operation and the aerosol produced
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, R.L.; Yerkes, K.
1980-12-01
The relationships between bed operation in a fluid bed aerosol generator and aerosol output were studied. A two-inch diameter fluid bed aerosol generator (FBG) was constructed using stainless steel powder as a fluidizing medium. Fly ash from coal combustion was aerosolized and the influence of FBG operating parameters on aerosol mass median aerodynamic diameter (MMAD), geometric standard deviation (sigma/sub g/) and concentration was examined. In an effort to extend observations on large fluid beds to small beds using fine bed particles, minimum fluidizing velocities and elutriation constant were computed. Although FBG minimum fluidizing velocity agreed well with calculations, FBG elutriation constant did not. The results of this study show that the properties of aerosols produced by a FBG depend on fluid bed height and air flow through the bed after the minimum fluidizing velocity is exceeded.
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
Liu, Zhigang; Han, Zhiwei; Zhang, Yang; Zhang, Qiaoge
2014-11-01
Multiwavelets can combine properties that traditional scalar wavelets cannot, such as orthogonality, symmetry, and compact support, and multiwavelet packet transformation retains more high-frequency information. Spectral entropy can be applied as an analysis index to the complexity or uncertainty of a signal. This paper defines four multiwavelet packet entropies to extract the features of different transmission line faults, and uses a radial basis function (RBF) neural network to recognize and classify 10 fault types of power transmission lines. First, the preprocessing and postprocessing problems of multiwavelets are presented. Shannon entropy and Tsallis entropy are introduced, and their difference is discussed. Second, multiwavelet packet energy entropy, time entropy, Shannon singular entropy, and Tsallis singular entropy are defined as the feature extraction methods of transmission line fault signals. Third, the plan of transmission line fault recognition using multiwavelet packet entropies and an RBF neural network is proposed. Finally, the experimental results show that the plan with the four multiwavelet packet entropies defined in this paper achieves better performance in fault recognition. The performance with SA4 (symmetric antisymmetric) multiwavelet packet Tsallis singular entropy is the best among the combinations of different multiwavelet packets and the four multiwavelet packet entropies.
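The general recipe behind such features can be sketched as follows: decompose the signal into packets, form the energy distribution across the packets at one level, and take its entropy. The sketch below is illustrative only; it substitutes an ordinary scalar wavelet packet ('db4' via PyWavelets) for the multiwavelet packets (e.g. SA4) actually used in the paper, and the toy fault signal is made up.

```python
# Illustrative packet energy entropy: scalar wavelet packets stand in for the
# paper's multiwavelet packets, which PyWavelets does not provide.
import numpy as np
import pywt

def packet_energy_entropy(x, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data**2)
                         for node in wp.get_level(level, "natural")])
    p = energies / energies.sum()      # energy distribution over packets
    p = p[p > 0]
    return -np.sum(p * np.log(p))      # Shannon form; Tsallis: (1 - sum p**q)/(q - 1)

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 50 * t)
faulty = clean + 0.8 * np.sin(2 * np.pi * 300 * t) * (t > 0.5)  # toy "fault"
print(packet_energy_entropy(clean), packet_energy_entropy(faulty))
```

The fault injects energy into high-frequency packets, so the energy distribution spreads and the entropy rises, which is the kind of separation the classifier then exploits.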
Uniqueness and characterization theorems for generalized entropies
NASA Astrophysics Data System (ADS)
Enciso, Alberto; Tempesta, Piergiulio
2017-12-01
The requirement that an entropy function be composable is key: it means that the entropy of a compound system can be calculated in terms of the entropy of its independent components. We prove that, under mild regularity assumptions, the only composable generalized entropy in trace form is the Tsallis one-parameter family (which contains Boltzmann-Gibbs as a particular case). This result leads to the use of generalized entropies that are not of trace form, such as Rényi’s entropy, in the study of complex systems. In this direction, we also present a characterization theorem for a large class of composable non-trace-form entropy functions with features akin to those of Rényi’s entropy.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
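The convergence criterion quoted above can be stated operationally. A hedged sketch (the synthetic error trace, smoothing window, and function names are mine, not the paper's): the convergence time is the first symbol index after which the smoothed output error stays within 5% of the steady-state error, the latter taken as the mean error after 50,000 symbols.

```python
# Sketch of the 5%-margin convergence-time criterion. All names and the
# synthetic exponential error trace are illustrative assumptions.
import numpy as np

def convergence_time(err, steady_start=50_000, margin=0.05, window=500):
    smoothed = np.convolve(err, np.ones(window) / window, mode="valid")
    steady = np.mean(err[steady_start:])        # steady-state error level
    within = np.abs(smoothed - steady) <= margin * abs(steady)
    bad = np.flatnonzero(~within)               # indices still outside the margin
    return int(bad[-1] + 1) if bad.size else 0  # start of the final in-margin run

rng = np.random.default_rng(1)
n = np.arange(60_000)
err = np.exp(-n / 5_000) + 0.1 + 0.01 * rng.standard_normal(n.size)
print(convergence_time(err))
```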
Nonlinear Modeling and Control of a Propellant Mixer
NASA Technical Reports Server (NTRS)
Barbieri, Enrique; Richter, Hanz; Figueroa, Fernando
2003-01-01
A mixing chamber used in rocket engine combustion testing at NASA Stennis Space Center is modeled by a second order nonlinear MIMO system. The mixer is used to condition the thermodynamic properties of cryogenic liquid propellant by controlled injection of the same substance in the gaseous phase. The three inputs of the mixer are the positions of the valves regulating the liquid and gas flows at the inlets, and the position of the exit valve regulating the flow of conditioned propellant. The outputs to be tracked and/or regulated are mixer internal pressure, exit mass flow, and exit temperature. The outputs must conform to test specifications dictated by the type of rocket engine or component being tested downstream of the mixer. Feedback linearization is used to achieve tracking and regulation of the outputs. It is shown that the system is minimum-phase provided certain conditions on the parameters are satisfied. The conditions are shown to have physical interpretation.
NASA Astrophysics Data System (ADS)
Schliesser, Jacob M.; Huang, Baiyu; Sahu, Sulata K.; Asplund, Megan; Navrotsky, Alexandra; Woodfield, Brian F.
2018-03-01
We have measured the heat capacities of several well-characterized bulk and nanophase Fe3O4-Co3O4 and Fe3O4-Mn3O4 spinel solid solution samples from which magnetic properties of transitions and third-law entropies have been determined. The magnetic transitions show several features common to effects of particle and magnetic domain sizes. From the standard molar entropies, excess entropies of mixing have been generated for these solid solutions and compared with configurational entropies determined previously by assuming appropriate cation and valence distributions. The vibrational and magnetic excess entropies for bulk materials are comparable in magnitude to the respective configurational entropies indicating that excess entropies of mixing must be included when analyzing entropies of mixing. The excess entropies for nanophase materials are even larger than the configurational entropies. Changes in valence, cation distribution, bonding and microstructure between the mixing ions are the likely sources of the positive excess entropies of mixing.
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
One-Shot Decoupling and Page Curves from a Dynamical Model for Black Hole Evaporation.
Brádler, Kamil; Adami, Christoph
2016-03-11
One-shot decoupling is a powerful primitive in quantum information theory and was hypothesized to play a role in the black hole information paradox. We study black hole dynamics modeled by a trilinear Hamiltonian whose semiclassical limit gives rise to Hawking radiation. An explicit numerical calculation of the discretized path integral of the S matrix shows that decoupling is exact in the continuous limit, implying that quantum information is perfectly transferred from the black hole to radiation. A striking consequence of decoupling is the emergence of an output radiation entropy profile that follows Page's prediction. We argue that information transfer and the emergence of Page curves is a robust feature of any multilinear interaction Hamiltonian with a bounded spectrum.
Computer programs for thermodynamic and transport properties of hydrogen (tabcode-II)
NASA Technical Reports Server (NTRS)
Roder, H. M.; Mccarty, R. D.; Hall, W. J.
1972-01-01
The thermodynamic and transport properties of para and equilibrium hydrogen have been programmed into a series of computer routines. Input variable pairs are pressure-temperature and pressure-enthalpy. The programs cover the range from 1 to 5000 psia with temperatures from the triple point to 6000 R or enthalpies from -130 BTU/lb to 25,000 BTU/lb. Output variables are enthalpy or temperature, density, entropy, thermal conductivity, viscosity, specific heat at constant volume, the heat capacity ratio, and a heat transfer parameter. Property values on the liquid and vapor boundaries are conveniently obtained through two small routines. The programs achieve high speed by using linear interpolation in a grid of precomputed points which define the surface of the property returned.
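The speed trick mentioned in the abstract, linear interpolation in a grid of precomputed points, amounts to a bilinear table lookup. A minimal sketch (the toy grid below is a fake surface, not real hydrogen data, and the routine assumes the query point lies strictly inside the table):

```python
# Bilinear lookup in a precomputed property grid; illustrative values only.
import numpy as np

def bilinear(grid, xs, ys, x, y):
    """Linearly interpolate grid[ix, iy], defined on axes xs and ys, at (x, y)."""
    ix = np.searchsorted(xs, x) - 1        # cell containing x (interior assumed)
    iy = np.searchsorted(ys, y) - 1        # cell containing y (interior assumed)
    tx = (x - xs[ix]) / (xs[ix + 1] - xs[ix])
    ty = (y - ys[iy]) / (ys[iy + 1] - ys[iy])
    return ((1 - tx) * (1 - ty) * grid[ix, iy] + tx * (1 - ty) * grid[ix + 1, iy]
            + (1 - tx) * ty * grid[ix, iy + 1] + tx * ty * grid[ix + 1, iy + 1])

pressures = np.linspace(1.0, 5000.0, 50)                  # psia
temps = np.linspace(25.0, 6000.0, 60)                     # deg R (toy axis)
enthalpy = np.add.outer(np.log(pressures), 0.01 * temps)  # fake property surface
print(bilinear(enthalpy, pressures, temps, 120.0, 540.0))
```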
Visual communication - Information and fidelity. [of images
NASA Technical Reports Server (NTRS)
Huck, Freidrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.
1993-01-01
This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.
On the entropy variation in the scenario of entropic gravity
NASA Astrophysics Data System (ADS)
Xiao, Yong; Bai, Shi-Yang
2018-05-01
In the scenario of entropic gravity, entropy varies as a function of the location of the matter, while the tendency to increase entropy appears as gravity. We concentrate on studying the entropy variation of a typical gravitational system with different relative positions between the mass and the gravitational source. The result is that the entropy of the system doesn't increase when the mass is displaced closer to the gravitational source. In this way it disproves the proposal of entropic gravity from thermodynamic entropy. It doesn't exclude the possibility that gravity originates from non-thermodynamic entropy like entanglement entropy.
Entropy and climate. I - ERBE observations of the entropy production of the earth
NASA Technical Reports Server (NTRS)
Stephens, G. L.; O'Brien, D. M.
1993-01-01
An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
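The order of magnitude of the quoted mean production can be checked with a back-of-the-envelope estimate (my assumptions, not the paper's: blackbody entropy flux 4F/3T, mean outgoing long-wave flux F ≈ 240 W m^-2, emission temperature T ≈ 255 K, Earth surface area A ≈ 5.1 × 10^14 m^2):

```latex
\[
  \dot{S} \;\approx\; \frac{4}{3}\,\frac{F}{T}\,A
  \;\approx\; \frac{4}{3}\cdot\frac{240\ \mathrm{W\,m^{-2}}}{255\ \mathrm{K}}
  \cdot 5.1\times10^{14}\ \mathrm{m^{2}}
  \;\approx\; 0.64\times10^{15}\ \mathrm{W\,K^{-1}}
\]
```

This lands within about 6% of the reported 0.68 × 10^15 W/K.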
Logarithmic black hole entropy corrections and holographic Rényi entropy
NASA Astrophysics Data System (ADS)
Mahapatra, Subhash
2018-01-01
The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.
Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie
2017-04-27
Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches; in this work the two are combined via joint entropy for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N.'s symbolic entropy and of base-scale entropy, and two local dynamic ones, the symbolizations of permutation entropy and of differential entropy, constitute four double symbolic joint entropies that detect complexity accurately in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of the healthy young have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by double symbolic joint entropy, with the combination of base-scale and differential symbolizations giving the best complexity analysis. Test results show that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
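A hedged sketch of the construction, under my own simplifications: a global static symbolization (a median split standing in for a base-scale-type partition) is paired with a local dynamic permutation symbolization, and the joint Shannon entropy of the paired symbol streams is computed. Parameter choices are illustrative.

```python
# Double symbolic joint entropy, simplified: one global static and one local
# dynamic symbolization combined through their joint Shannon entropy.
import numpy as np
from collections import Counter

def static_symbols(x):
    return (x > np.median(x)).astype(int)            # global static: 2 symbols

def permutation_symbols(x, order=3):
    # local dynamic: ordinal pattern of each embedded vector
    return [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]

def joint_entropy(a, b):
    n = min(len(a), len(b))
    counts = Counter(zip(a[:n], b[:n]))
    p = np.array(list(counts.values()), dtype=float) / n
    return -np.sum(p * np.log(p))

x = np.random.default_rng(2).standard_normal(5000)
print(joint_entropy(list(static_symbols(x)), permutation_symbols(x)))
```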
Effect of entropy on anomalous transport in ITG-modes of magneto-plasma
NASA Astrophysics Data System (ADS)
Yaqub Khan, M.; Qaiser Manzoor, M.; Haq, A. ul; Iqbal, J.
2017-04-01
The ideal gas equation and S = c_v log(P/ρ) (where S is entropy, P is pressure and ρ is the mass density) define the interconnection of entropy with the temperature and density of plasma. Therefore, different phenomena relating plasma and entropy need to be investigated. By employing the Braginskii transport equations for a nonuniform electron-ion magnetoplasma, two new parameters, the entropy distribution function and the entropy gradient drift, are defined, a new dispersion relation is obtained, and the dependence of anomalous transport on entropy is proved. Some results, like monotonicity, the entropy principle and the second law of thermodynamics, are proved with a new definition of entropy. This work will open new horizons in fusion processes, not only by controlling entropy in tokamak plasmas (particularly in the pedestal regions of the H-mode) and in space plasmas, but also in the engineering sciences.
Quantifying and minimizing entropy generation in AMTEC cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, T.J.; Huang, C.
1997-12-31
Entropy generation in an AMTEC cell represents inherent power loss to the AMTEC cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is ongoing at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum power cell designs. In many cases, various sources of entropy generation are interrelated such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.
Particle tracking by using single coefficient of Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Widjaja, J.; Dawprateep, S.; Chuamchaitrakool, P.; Meemon, P.
2016-11-01
A new method for extracting information from particle holograms using a single coefficient of the Wigner-Ville distribution (WVD) is proposed to obviate the drawbacks of conventional numerical reconstructions. Our previous study found that analyzing the holograms with the WVD gives output coefficients that are mainly confined along a diagonal direction passing through the origin of the WVD plane. The slope of this diagonal direction is inversely proportional to the particle position. One of these coefficients always has minimum amplitude, regardless of the particle position. By detecting the position of the coefficient with minimum amplitude in the WVD plane, the particle position can be accurately measured. The proposed method is verified through computer simulations.
Dependence of injection locking of a TEA CO2 laser on intensity of injected radiation
NASA Technical Reports Server (NTRS)
Oppenheim, U. P.; Menzies, R. T.; Kavaya, M. J.
1982-01-01
The results of an experimental study to determine the minimum required injected power to control the output frequency of a TEA CO2 laser are reported. A CW CO2 waveguide laser was used as the injection oscillator. Both the power and the frequency of the injected radiation were varied, while the TEA resonator cavity length was adjusted to match the frequency of the injected signal. Single-longitudinal mode (SLM) TEA laser radiation was produced for injected power levels which are several orders of magnitude below those previously reported. The ratio of SLM output power to injection power exceeded 10^12 at the lowest levels of injected intensity.
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
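The report's own program is not reproduced here, but the kind of experiment it automates can be sketched: quantize the arithmetic of a digital filter to a chosen mantissa width and measure the rms error injected at the output. The filter, mantissa width, and input below are illustrative assumptions.

```python
# Simulate floating-point quantization in a first-order IIR filter and
# measure the rms output error it injects. Illustrative parameters only.
import numpy as np

def quantize(x, mantissa_bits):
    """Round x to a float carrying the given number of mantissa bits."""
    m, e = np.frexp(x)                       # x = m * 2**e with 0.5 <= |m| < 1
    m = np.round(m * 2**mantissa_bits) / 2**mantissa_bits
    return np.ldexp(m, e)

rng = np.random.default_rng(3)
u = rng.standard_normal(10_000)
y_exact, y_quant, errs = 0.0, 0.0, []
for sample in u:                             # filter: y[n] = 0.9 y[n-1] + u[n]
    y_exact = 0.9 * y_exact + sample
    y_quant = quantize(0.9 * y_quant + sample, mantissa_bits=8)
    errs.append(y_quant - y_exact)
print("rms output error:", np.sqrt(np.mean(np.square(errs))))
```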
End-pumped continuous-wave intracavity yellow Raman laser at 590 nm with SrWO4 Raman crystal
NASA Astrophysics Data System (ADS)
Yang, F. G.; You, Z. Y.; Zhu, Z. J.; Wang, Y.; Li, J. F.; Tu, C. Y.
2010-01-01
We present an end-pumped continuous-wave intra-cavity yellow Raman laser at 590 nm with a 60 mm long pure SrWO4 crystal and an intra-cavity LiB3O5 frequency-doubling crystal. The highest output power of the yellow laser at 590 nm was 230 mW, and both the output power and the threshold were found to depend strongly on the polarization direction of the pure single-crystal SrWO4. Along different directions, the minimum and maximum thresholds of the yellow Raman laser at 590 nm were measured to be 2.8 W and 14.3 W of 808 nm LD pump power, respectively.
Thermodynamic and Differential Entropy under a Change of Variables
Hnizdo, Vladimir; Gilson, Michael K.
2013-01-01
The differential Shannon entropy of information theory can change under a change of variables (coordinates), but the thermodynamic entropy of a physical system must be invariant under such a change. This difference is puzzling, because the Shannon and Gibbs entropies have the same functional form. We show that a canonical change of variables can, indeed, alter the spatial component of the thermodynamic entropy just as it alters the differential Shannon entropy. However, there is also a momentum part of the entropy, which turns out to undergo an equal and opposite change when the coordinates are transformed, so that the total thermodynamic entropy remains invariant. We furthermore show how one may correctly write the change in total entropy for an isothermal physical process in any set of spatial coordinates. PMID:24436633
Entropy for Mechanically Vibrating Systems
NASA Astrophysics Data System (ADS)
Tufano, Dante
The research contained within this thesis deals with the subject of entropy as defined for and applied to mechanically vibrating systems. This work begins with an overview of entropy as it is understood in the fields of classical thermodynamics, information theory, statistical mechanics, and statistical vibroacoustics. Khinchin's definition of entropy, which is the primary definition used for the work contained in this thesis, is introduced in the context of vibroacoustic systems. The main goal of this research is to establish a mathematical framework for the application of Khinchin's entropy in the field of statistical vibroacoustics by examining the entropy of mechanically vibrating systems. The introduction of this thesis provides an overview of statistical energy analysis (SEA), a modeling approach to vibroacoustics that motivates this work on entropy. The objective of this thesis is given, followed by a discussion of the intellectual merit of this work as well as a literature review of relevant material. Following the introduction, an entropy analysis of systems of coupled oscillators is performed utilizing Khinchin's definition of entropy. This analysis builds upon the mathematical theory relating to mixing entropy, which is generated by the coupling of vibroacoustic systems. The mixing entropy is shown to provide insight into the qualitative behavior of such systems. Additionally, it is shown that the entropy inequality property of Khinchin's entropy can be reduced to an equality using the mixing entropy concept. This equality can be interpreted as a facet of the second law of thermodynamics for vibroacoustic systems. Following this analysis, an investigation of continuous systems is performed using Khinchin's entropy. It is shown that entropy analyses using Khinchin's entropy are valid for continuous systems that can be decomposed into a finite number of modes. The results are shown to be analogous to those obtained for simple oscillators, which demonstrates the applicability of entropy-based approaches to real-world systems. Three systems are considered to demonstrate these findings: 1) a rod end-coupled to a simple oscillator, 2) two end-coupled rods, and 3) two end-coupled beams. The aforementioned work utilizes the weak coupling assumption to determine the entropy of composite systems. Following this discussion, a direct method of finding entropy is developed which does not rely on this limiting assumption. The resulting entropy provides a useful benchmark for evaluating the accuracy of the weak coupling approach, and is validated using systems of coupled oscillators. The later chapters of this work discuss Khinchin's entropy as applied to nonlinear and nonconservative systems, respectively. The discussion of entropy for nonlinear systems is motivated by the desire to expand the applicability of SEA techniques beyond the linear regime. The discussion of nonconservative systems is also crucial, since real-world systems interact with their environment, and it is necessary to confirm the validity of an entropy approach for systems that are relevant in the context of SEA. Having developed a mathematical framework for determining entropy under a number of previously unexplored cases, the relationship between thermodynamics and statistical vibroacoustics can be better understood. Specifically, vibroacoustic temperatures can be obtained for systems that are not necessarily linear or weakly coupled.
In this way, entropy provides insight into how the power flow proportionality of SEA can be applied to a broader class of vibroacoustic systems. As such, entropy is a useful tool for both justifying and expanding the foundational results of SEA.
Entropy is more resistant to artifacts than bispectral index in brain-dead organ donors.
Wennervirta, Johanna; Salmi, Tapani; Hynynen, Markku; Yli-Hankala, Arvi; Koivusalo, Anna-Maria; Van Gils, Mark; Pöyhiä, Reino; Vakkuri, Anne
2007-01-01
To evaluate the usefulness of entropy and the bispectral index (BIS) in brain-dead subjects. A prospective, open, nonselective, observational study in the university hospital. 16 brain-dead organ donors. Time-domain electroencephalography (EEG), spectral entropy of the EEG, and BIS were recorded during solid organ harvest. State entropy differed significantly from 0 (isoelectric EEG) for 28% of the total recorded time, response entropy for 29%, and BIS for 68%. The median values during the operation were state entropy 0.0, response entropy 0.0, and BIS 3.0. In four of the 16 organ donors studied the EEG was not isoelectric, and nonreactive rhythmic activity was noted in the time-domain EEG. After excluding the results from subjects with persistent residual EEG activity, state entropy, response entropy, and BIS values differed from zero for 17%, 18%, and 62% of the recorded time, respectively. Median values were 0.0, 0.0, and 2.0 for state entropy, response entropy, and BIS, respectively. The highest index values in entropy and BIS monitoring were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, 50-Hz artifact, handling of the donor, ballistocardiography, electromyography, and electrocardiography. Both entropy and BIS showed nonzero values due to artifacts after brain death diagnosis. BIS was more liable to artifacts than entropy. Neither of these indices is a diagnostic tool, and care should be taken when interpreting EEG and EEG-derived indices in the evaluation of brain death.
Use of regional climate model output for hydrologic simulations
Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.
2002-01-01
Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2), configured with the continental United States as its domain and run at approximately 52-km spatial resolution, were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes of systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
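Two quantities doing the work in this study are the Nash-Sutcliffe (NS) skill score and a bias (magnitude) correction. A minimal sketch, assuming a simple per-calendar-month rescaling of means; the study's actual correction procedure may differ in detail, and the synthetic data are made up.

```python
# Nash-Sutcliffe skill score and a toy monthly-mean bias correction.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias_correct(sim, obs, months):
    """Scale each calendar month of `sim` so its mean matches `obs`."""
    sim, out = np.asarray(sim, float), np.array(sim, float)
    for m in range(1, 13):
        sel = months == m
        if sel.any() and sim[sel].mean() != 0:
            out[sel] *= obs[sel].mean() / sim[sel].mean()
    return out

rng = np.random.default_rng(4)
months = np.tile(np.arange(1, 13), 30)
obs = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
sim = 1.3 * obs + 10 + rng.normal(0, 5, months.size)   # biased model output
print(nash_sutcliffe(obs, sim),
      nash_sutcliffe(obs, bias_correct(sim, obs, months)))
```

As in the study, the correction fixes the magnitude of the simulated series but cannot manufacture day-to-day variability the model output never contained.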
NASA Astrophysics Data System (ADS)
Jeon, Wonju; Lee, Sang-Hee
2012-12-01
In our previous study, we defined the branch length similarity (BLS) entropy for a simple network consisting of a single node and numerous branches. As a first application of this entropy to characterizing shapes, the BLS entropy profiles of 20 battle tank shapes were calculated from simple networks created by connecting pixels on the boundary of each shape, and the profiles successfully characterized the tank shapes through a comparison of their BLS entropy profiles. Following that application, the entropy was used to characterize human emotional facial expressions, such as happiness and sadness, and to measure the degree of complexity of termite tunnel networks. These applications indirectly indicate that the BLS entropy profile can be a useful tool for characterizing networks and shapes. However, the ability of the BLS entropy to characterize shapes depends on the image resolution, because the entropy is determined by the number of nodes on the boundary of a shape: higher resolution means more nodes. If the entropy is to be widely used in the scientific community, the effect of resolution on the entropy profile should be understood. In the present study, we mathematically investigated the BLS entropy profile of a shape with infinite resolution and numerically investigated the variation in the pattern of the entropy profile caused by changes in resolution in the finite-resolution case.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
Entropy generation of nanofluid flow in a microchannel heat sink
NASA Astrophysics Data System (ADS)
Manay, Eyuphan; Akyürek, Eda Feyza; Sahin, Bayram
2018-06-01
The present study aims to investigate the effects of the presence of nano-sized TiO2 particles in the base fluid on the entropy generation rate in a microchannel heat sink. Pure water was chosen as the base fluid, and TiO2 particles were suspended in it at five different particle volume fractions of 0.25%, 0.5%, 1.0%, 1.5% and 2.0%. Under laminar, steady state flow and constant heat flux boundary conditions, the thermal, frictional and total entropy generation rates and the entropy generation number ratios of the nanofluids were experimentally analyzed in microchannel flow for channel heights of 200 μm, 300 μm, 400 μm and 500 μm. It was observed that the frictional and total entropy generation rates increased, while the thermal entropy generation rate decreased, with increasing particle volume fraction. In microchannel flows, thermal entropy generation could be neglected because its rate, smaller than 1.10e-07, is a negligible share of the total entropy generation. Larger channel heights caused higher thermal entropy generation rates, with increasing channel height yielding an increase of 30% to 52% in thermal entropy generation. When the channel height decreased, an increase of 66%-98% in frictional entropy generation was obtained. Adding TiO2 nanoparticles to the base fluid caused thermal entropy generation to decrease by about 1.8%-32.4% and frictional entropy generation to increase by about 3.3%-21.6%.
NASA Astrophysics Data System (ADS)
Guo, Ran
2018-04-01
In this paper, we investigate the definition of the entropy in the Fokker–Planck equation under the generalized fluctuation–dissipation relation (FDR), which describes a Brownian particle moving in a complex medium with friction and multiplicative noise. The friction and the noise are related by the generalized FDR. The entropy for such a system is defined first. According to the definition of the entropy, we calculate the entropy production and the entropy flux. Lastly, we make a numerical calculation to display the results in figures.
Single water entropy: hydrophobic crossover and application to drug binding.
Sasikala, Wilbee D; Mukherjee, Arnab
2014-09-11
Entropy of water plays an important role in both chemical and biological processes, e.g., the hydrophobic effect and molecular recognition. Here we use a new approach to calculate the translational and rotational entropy of individual water molecules around different hydrophobic and charged solutes. We show that for small hydrophobic solutes, the translational and rotational entropies of each water molecule increase as a function of its distance from the solute, finally reaching a constant bulk value. As the size of the solute increases (0.746 nm), the behavior of the translational entropy is opposite; water molecules closest to the solute have higher entropy, which decreases with distance from the solute. This indicates that there is a crossover in the translational entropy of water molecules around hydrophobic solutes from negative to positive values as the size of the solute is increased. Rotational entropy of water molecules around hydrophobic solutes of all sizes increases with distance from the solute, indicating the absence of a crossover in rotational entropy. This makes the crossover in the total entropy (translation + rotation) of a water molecule happen at a much larger size (>1.5 nm) for hydrophobic solutes. The translational entropy of a single water molecule scales logarithmically, S_tr(QH) = C + k_B ln V, with the volume V obtained from the ellipsoid of inertia. We further discuss the origin of the higher entropy of water around water and show the possibility of recovering the entropy loss of some hypothetical solutes. The results obtained are helpful for understanding water entropy behavior around various hydrophobic and charged environments within biomolecules. Finally, we show how our approach can be used to calculate the entropy of individual water molecules in a protein cavity that may be replaced during ligand binding.
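A minimal sketch of the quoted scaling S_tr(QH) = C + k_B ln V: estimate the ellipsoid volume explored by a single water molecule from its positional covariance and take k_B ln V, leaving the constant C symbolic. The synthetic trajectory and the covariance-to-ellipsoid step are my assumptions, not the paper's exact procedure.

```python
# k_B ln V from the ellipsoid spanned by a molecule's positional fluctuations.
import numpy as np

K_B = 1.380649e-23  # J/K

def translational_entropy_minus_C(positions):
    """k_B ln V, with V the ellipsoid volume from the position covariance."""
    cov = np.cov(np.asarray(positions, float).T)   # 3x3 covariance (nm^2)
    axes = np.sqrt(np.linalg.eigvalsh(cov))        # semi-axes ~ std deviations
    volume = 4.0 / 3.0 * np.pi * np.prod(axes)     # ellipsoid volume (nm^3)
    return K_B * np.log(volume)

traj = np.random.default_rng(5).normal(scale=[0.05, 0.07, 0.10],
                                       size=(10_000, 3))  # synthetic wobble
print(translational_entropy_minus_C(traj), "J/K (up to the constant C)")
```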
RNA Thermodynamic Structural Entropy
Garcia-Martin, Juan Antonio; Clote, Peter
2015-01-01
Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner’99 and Turner’04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http://bioinformatics.bc.edu/clotelab/RNAentropy, including source code and ancillary programs. PMID:26555444
Relating different quantum generalizations of the conditional Rényi entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomamichel, Marco; Berta, Mario
2014-08-15
Recently a new quantum generalization of the Rényi divergence and the corresponding conditional Rényi entropies was proposed. Here, we report on a surprising relation between conditional Rényi entropies based on this new generalization and conditional Rényi entropies based on the quantum relative Rényi entropy that was used in previous literature. Our result generalizes the well-known duality relation H(A|B) + H(A|C) = 0 of the conditional von Neumann entropy for tripartite pure states to Rényi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Rényi entropies and derive a new entropic uncertainty relation.
Exact analytical thermodynamic expressions for a Brownian heat engine.
Taye, Mesfin Asfaw
2015-09-01
The nonequilibrium thermodynamics feature of a Brownian motor operating between two different heat baths is explored as a function of time t. Using the Gibbs entropy and Schnakenberg microscopic stochastic approach, we find exact closed form expressions for the free energy, the rate of entropy production, and the rate of entropy flow from the system to the outside. We show that when the system is out of equilibrium, it constantly produces entropy and at the same time extracts entropy out of the system. Its entropy production and extraction rates decrease in time and saturate to a constant value. In the long time limit, the rate of entropy production balances the rate of entropy extraction, and at equilibrium both entropy production and extraction rates become zero. Furthermore, via the present model, many thermodynamic theories can be checked.
Modeling the Overalternating Bias with an Asymmetric Entropy Measure
Gronchi, Giorgio; Raglianti, Marco; Noventa, Stefano; Lazzeri, Alessandro; Guazzini, Andrea
2016-01-01
Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criteria of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that can be ideal to account for such bias and to quantify subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized form of uncertainty measure comprising many different kinds of entropies) to experimental data found in the literature with the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy compared to Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We concluded that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness that can be useful in psychological research about randomness perception. PMID:27458418
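Marcellin's asymmetric entropy is not reproduced here, so the sketch below instead fits the other family mentioned in the abstract, the Rényi entropy of a binary source, H_a(p) = log(p^a + (1-p)^a)/(1-a), to synthetic ratings peaking near an alternation rate of 0.6 (the overalternating bias), using the same Differential Evolution machinery. The data, bounds, and scale parameter are made up; the point is the fitting procedure, not the numbers.

```python
# Fit a Renyi-entropy curve to synthetic "perceived randomness" ratings with
# Differential Evolution. Entirely illustrative data and parameter bounds.
import numpy as np
from scipy.optimize import differential_evolution

def renyi_binary(p, a):
    return np.log(p**a + (1 - p) ** a) / (1 - a)

# synthetic ratings peaking at alternation rate ~0.6 (overalternating bias)
p_alt = np.linspace(0.05, 0.95, 19)
ratings = np.exp(-((p_alt - 0.6) ** 2) / 0.05)

def sse(params):
    a, scale = params
    return np.sum((ratings - scale * renyi_binary(p_alt, a)) ** 2)

result = differential_evolution(sse, bounds=[(0.1, 0.99), (0.1, 5.0)], seed=6)
print(result.x, result.fun)
```

Because H_a is symmetric about p = 0.5 for any order a, it cannot reproduce a peak at 0.6, which mirrors the paper's finding that the symmetric Rényi family fits the bias worse than an asymmetric measure.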
Entropy for the Complexity of Physiological Signal Dynamics.
Zhang, Xiaohua Douglas
2017-01-01
Recently, the rapid development of large data storage technologies, mobile network technology, and portable medical devices has made it possible to measure, record, store, and analyze biological dynamics. Portable noninvasive medical devices are crucial to capture individual characteristics of biological dynamics. Wearable noninvasive medical devices and the analysis/management of the related digital medical data will revolutionize the management and treatment of diseases, subsequently resulting in the establishment of a new healthcare system. One of the key features that can be extracted from the data obtained by wearable noninvasive medical devices is the complexity of physiological signals, which can be represented by the entropy of the biological dynamics contained in the physiological signals measured by these continuous monitoring medical devices. Thus, in this chapter I present the major concepts of entropy that are commonly used to measure the complexity of biological dynamics. The concepts include Shannon entropy, Kolmogorov entropy, Renyi entropy, approximate entropy, sample entropy, and multiscale entropy. I also demonstrate an example of using entropy for the complexity of glucose dynamics.
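Of the concepts listed, sample entropy is among the most widely used for device data. A minimal sketch, assuming the standard definition with common parameter choices (m = 2, tolerance r = 0.2 × standard deviation); it is generic, not tied to any particular device or to this chapter's exact formulation:

```python
# Sample entropy: -log of the ratio of (m+1)-length to m-length template
# matches under a Chebyshev tolerance r. Standard parameter choices assumed.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    r = r_factor * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(7)
print(sample_entropy(np.sin(np.linspace(0, 40 * np.pi, 2000))))  # regular: low
print(sample_entropy(rng.standard_normal(2000)))                 # noise: high
```

Regular dynamics yield many repeated templates and hence low sample entropy; irregular dynamics yield few and hence high values, which is what makes the measure a complexity index for physiological signals.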
Information Entropy Analysis of the H1N1 Genetic Code
NASA Astrophysics Data System (ADS)
Martwick, Andy
2010-03-01
During the current H1N1 pandemic, viral samples are being obtained from large numbers of infected people world-wide and are being sequenced on the NCBI Influenza Virus Resource Database. The information entropy of the sequences was computed from the probability of occurrence of each nucleotide base at every position of each set of sequences using Shannon's definition of information entropy, H = ∑_b p_b log2(1/p_b), where H is the observed information entropy at each nucleotide position and p_b is the probability of each of the nucleotide bases A, C, G, U. The information entropy of the current H1N1 pandemic is compared to reference human and swine H1N1 entropy. As expected, the current H1N1 sequences are in a low-entropy state and have a very large mutation potential. Using the entropy method in mature genes we can identify low-entropy regions of nucleotides that generally correspond to critical protein function.
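A minimal sketch of the per-position calculation described above, applied to a toy alignment (real input would be aligned sequences from the NCBI Influenza Virus Resource; the helper name is mine):

```python
# Per-position Shannon entropy of an alignment: H_j = sum_b p_b log2(1/p_b)
# over the bases observed at column j. Toy sequences for illustration.
from collections import Counter
from math import log2

def positional_entropy(sequences):
    length = min(len(s) for s in sequences)
    entropies = []
    for j in range(length):
        counts = Counter(s[j] for s in sequences)     # base frequencies at column j
        n = sum(counts.values())
        entropies.append(sum((c / n) * log2(n / c) for c in counts.values()))
    return entropies

aligned = ["AUGGCA", "AUGGCA", "AUGACA", "AUGGUA"]
print([round(h, 3) for h in positional_entropy(aligned)])
```

Fully conserved columns score 0 bits, while variable columns score up to 2 bits for four equiprobable bases, so low-entropy stretches flag conserved, functionally critical regions.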
Generalized Entanglement Entropy and Holography
NASA Astrophysics Data System (ADS)
Obregón, O.
2018-04-01
A nonextensive statistical mechanics entropy that depends only on the probability distribution is proposed in the framework of superstatistics. It is based on a Γ(χ²) distribution that depends on β and also on p_l. The corresponding modified von Neumann entropy is constructed; it is shown that it can also be obtained from a generalized Replica trick. We address the question whether the generalized entanglement entropy can play a role in the gauge/gravity duality. We pay attention to 2d CFTs and their gravity duals. The correction terms to the von Neumann entropy turn out to be more relevant than the usual UV (for c = 1) ones and also than those due to the area-dependent AdS_3 entropy, which turn out to be comparable to the UV ones. The correction terms due to the new entropy would therefore modify the Ryu-Takayanagi identification between the CFT entanglement entropy and the AdS entropy in a different manner than the UV ones or than the corrections to the area-dependent AdS_3 entropy.
Quench action and Rényi entropies in integrable systems
NASA Astrophysics Data System (ADS)
Alba, Vincenzo; Calabrese, Pasquale
2017-09-01
Entropy is a fundamental concept in equilibrium statistical mechanics, yet its origin in the nonequilibrium dynamics of isolated quantum systems is not fully understood. A strong consensus is emerging around the idea that the stationary thermodynamic entropy is the von Neumann entanglement entropy of a large subsystem embedded in an infinite system. Also motivated by cold-atom experiments, here we consider the generalization to Rényi entropies. We develop a new technique to calculate the diagonal Rényi entropy in the quench action formalism. In the spirit of the replica treatment for the entanglement entropy, the diagonal Rényi entropies are generalized free energies evaluated over a thermodynamic macrostate which depends on the Rényi index and, in particular, is not the same state describing von Neumann entropy. The technical reason for this perhaps surprising result is that the evaluation of the moments of the diagonal density matrix shifts the saddle point of the quench action. An interesting consequence is that different Rényi entropies encode information about different regions of the spectrum of the postquench Hamiltonian. Our approach provides a very simple proof of the long-standing issue that, for integrable systems, the diagonal entropy is half of the thermodynamic one and it allows us to generalize this result to the case of arbitrary Rényi entropy.
NASA Astrophysics Data System (ADS)
Sadeghi, Pegah; Safavinejad, Ali
2017-11-01
Radiative entropy generation through a gray absorbing, emitting, and scattering planar medium at radiative equilibrium with diffuse-gray walls is investigated. The radiative transfer equation and the radiative entropy generation equations are solved using the discrete ordinates method. Components of the radiative entropy generation are considered for two different boundary conditions: both walls at a prescribed temperature, and mixed boundary conditions, in which one wall is at a prescribed temperature and the other at a prescribed heat flux. The effects of the wall emissivities, optical thickness, single-scattering albedo, and anisotropic-scattering factor on the entropy generation are investigated in detail. The results reveal that entropy generation in the system mainly arises from irreversible radiative transfer at the wall with the lower temperature. The total entropy generation rate for the system with prescribed wall temperatures increases remarkably as wall emissivity increases; conversely, for the system with mixed boundary conditions, the total entropy generation rate decreases slightly. Furthermore, as the optical thickness increases, the total entropy generation rate decreases remarkably for the system with prescribed wall temperatures; for the system with mixed boundary conditions, it increases. Variation of the single-scattering albedo does not considerably affect the total entropy generation rate. This parametric analysis demonstrates that the optical thickness and the wall emissivities have a significant effect on the entropy generation in a system at radiative equilibrium. Attention to the parameters that significantly affect radiative entropy generation provides an opportunity to optimize the design, or to increase the overall performance and efficiency, of systems at radiative equilibrium by applying entropy minimization techniques.
NASA Astrophysics Data System (ADS)
Madami, Marco; Gubbiotti, Gianluca; Tacchi, Silvia; Carlotti, Giovanni
2017-11-01
Single- or multi-layered planar magnetic dots, with lateral dimensions ranging from tens to hundreds of nanometers, are used as elemental switches in current and forthcoming devices for information and communication technology (ICT), including magnetic memories, spin-torque oscillators and nano-magnetic logic gates. In this review article, we will first discuss energy dissipation during irreversible switching protocols of dots of different dimensions, ranging from a few tens of nanometers to the micrometric range. Then we will focus on the fundamental energy limits of adiabatic (slow) erasure and reversal of a magnetic nanodot, showing that dissipationless operation is achievable, provided that both dynamic reversibility (arbitrarily slow application of external fields) and entropic reversibility (no free entropy increase) are insured. However, recent theoretical and experimental tests of magnetic-dot erasure reveal that intrinsic defects related to materials imperfections such as roughness or polycrystallinity, may cause an excess of dissipation if compared to the minimum theoretical limit. We will conclude providing an outlook on the most promising strategies to achieve a new generation of power-saving nanomagnetic logic devices based on clusters of interacting dots and on straintronics.
Quantum tomography for collider physics: illustrations with lepton-pair production
NASA Astrophysics Data System (ADS)
Martens, John C.; Ralston, John P.; Takaki, J. D. Tapia
2018-01-01
Quantum tomography is a method to experimentally extract all that is observable about a quantum mechanical system. We introduce quantum tomography to collider physics with the illustration of the angular distribution of lepton pairs. The tomographic method bypasses much of the field-theoretic formalism to concentrate on what can be observed with experimental data. We provide a practical, experimentally driven guide to model-independent analysis using density matrices at every step. Comparison with traditional methods of analyzing angular correlations of inclusive reactions finds many advantages in the tomographic method, which include manifest Lorentz covariance, direct incorporation of positivity constraints, exhaustively complete polarization information, and new invariants free from frame conventions. For example, experimental data can determine the entanglement entropy of the production process. We give reproducible numerical examples and provide a supplemental standalone computer code that implements the procedure. We also highlight a property of complex positivity that guarantees in a least-squares type fit that a local minimum of a χ² statistic will be a global minimum: There are no isolated local minima. This property with an automated implementation of positivity promises to mitigate issues relating to multiple minima and convention dependence that have been problematic in previous work on angular distributions.
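The entanglement entropy mentioned above can be computed once a state has been reconstructed; the sketch below (an illustration, not the paper's supplemental code) obtains the entanglement entropy of a bipartite pure state from its Schmidt (singular) values:

```python
# Entanglement entropy S = -Tr(rho_A ln rho_A) of a bipartite pure state,
# given as a matrix of amplitudes psi[i, j] (an assumed input format).
import numpy as np

def entanglement_entropy(psi):
    s = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
    p = s ** 2                                 # reduced-state eigenvalues
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)
print(entanglement_entropy(bell))   # ln 2 ~ 0.693 for a maximally entangled pair
```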
The Nature of Grand Minima and Maxima from Fully Nonlinear Flux Transport Dynamos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inceoglu, Fadil; Arlt, Rainer; Rempel, Matthias, E-mail: finceoglu@aip.de
We aim to investigate the nature and occurrence characteristics of grand solar minimum and maximum periods, which are observed in solar proxy records such as ¹⁰Be and ¹⁴C, using a fully nonlinear Babcock–Leighton type flux transport dynamo including momentum and entropy equations. The differential rotation and meridional circulation are generated from the effect of turbulent Reynolds stress and are subjected to back-reaction from the magnetic field. To generate grand minimum- and maximum-like periods in our simulations, we used random fluctuations in the angular momentum transport process, namely the Λ-mechanism, and in the Babcock–Leighton mechanism. To characterize the nature and occurrences of the identified grand minima and maxima in our simulations, we used waiting time distribution analyses, which reflect whether the underlying distribution arises from a random or a memory-bearing process. The results show that, in the majority of the cases, the distributions of grand minima and maxima reveal that the nature of these events originates from memoryless processes. We also found that in our simulations the meridional circulation speed tends to be smaller during grand maximum periods, while it is faster during grand minimum periods. The radial differential rotation tends to be larger during grand maxima, while it is smaller during grand minima. The latitudinal differential rotation, on the other hand, is found to be larger during grand minima.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
Using entropy measures to characterize human locomotion.
Leverick, Graham; Szturm, Tony; Wu, Christine Q
2014-12-01
Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
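As a reference point for the entropy measures compared above, here is a minimal sketch of sample entropy in a common simplified variant; the parameters m and r and the test signals are illustrative choices, and this is not the authors' implementation:

```python
# Sample entropy (SampEn): -ln(A/B), where B counts template matches of
# length m and A counts matches of length m+1 (Chebyshev distance <= r,
# self-matches excluded). Common choice: r ~ 0.2 * std of the series.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
            count += np.sum(d <= r) - 1     # exclude the self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 20, 500))))   # low: regular signal
print(sample_entropy(rng.standard_normal(500)))          # higher: noise
```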
Giant onsite electronic entropy enhances the performance of ceria for water splitting.
Naghavi, S Shahab; Emery, Antoine A; Hansen, Heine A; Zhou, Fei; Ozolins, Vidvuds; Wolverton, Chris
2017-08-18
Previous studies have shown that a large solid-state entropy of reduction increases the thermodynamic efficiency of metal oxides, such as ceria, for two-step thermochemical water splitting cycles. In this context, the configurational entropy arising from oxygen off-stoichiometry in the oxide has been the focus of most previous work. Here we report a different source of entropy, the onsite electronic configurational entropy, arising from coupling between orbital and spin angular momenta in lanthanide f orbitals. We find that the onsite electronic configurational entropy is sizable in all lanthanides, and reaches a maximum value of ≈4.7 k_B per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions.
Crowd macro state detection using entropy model
NASA Astrophysics Data System (ADS)
Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao
2015-08-01
In the crowd security research area, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and to supervise crowd behaviors. In physics, entropy is used to describe the macro state of a self-organizing system; a change in entropy indicates a change of the system's macro state. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction). An entropy model is then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change of the crowd behavior macro state leads to a change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing entropy mutations, it is possible to detect the crowd behavior macro state automatically using cameras. The results will provide data support for crowd emergency prevention and manual emergency intervention.
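A minimal sketch of the velocity-binning idea described above; the bin counts, ranges, and test data are illustrative assumptions, not the paper's parameters:

```python
# Shannon entropy of a crowd's velocity-occupancy distribution:
# bin individual velocities by speed and heading, then compute the entropy.
import numpy as np

def crowd_entropy(vx, vy, n_speed=8, n_angle=16, v_max=3.0):
    speed = np.hypot(vx, vy)
    angle = np.arctan2(vy, vx)
    hist, _, _ = np.histogram2d(
        np.clip(speed, 0, v_max), angle,
        bins=[n_speed, n_angle],
        range=[[0, v_max], [-np.pi, np.pi]],
    )
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))   # theoretical max: ln(n_speed*n_angle)

rng = np.random.default_rng(1)
disordered = crowd_entropy(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
ordered = crowd_entropy(np.full(500, 1.0), rng.normal(0, 0.05, 500))
print(disordered, ordered)   # the disordered state approaches the maximum
```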
Sensor trustworthiness in uncertain time varying stochastic environments
NASA Astrophysics Data System (ADS)
Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan
2011-06-01
Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulus of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor or sensor network cannot be readily assumed, since sensors may become disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theory-based metric for determining sensor trustworthiness from the sensor data in an uncertain and time-varying stochastic environment. We show an information-theoretic determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks the sensor performance for the time-varying physical feature and provides a baseline against which the observed sensor output is compared and analyzed. We present an approach in which relative entropy is used for reference model adaptation and for determining the divergence of the sensor signal from the estimated reference baseline. We show that KL-divergence is a useful metric that can be successfully used in the determination of sensor failures or sensor malice of various types.
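A minimal sketch of the relative-entropy comparison described above; the bins, the reference distribution, and the alarm threshold are illustrative assumptions:

```python
# KL-divergence D_KL(P || Q) between the observed sensor output distribution
# and an adaptive reference model's prediction over the same bins.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

reference = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # baseline model prediction
healthy   = np.array([0.12, 0.18, 0.41, 0.19, 0.10])
faulty    = np.array([0.40, 0.30, 0.15, 0.10, 0.05])

for obs in (healthy, faulty):
    d = kl_divergence(obs, reference)
    print(d, "suspect" if d > 0.1 else "ok")   # large divergence flags the sensor
```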
Wood texture classification by fuzzy neural networks
NASA Astrophysics Data System (ADS)
Gonzaga, Adilson; de Franca, Celso A.; Frere, Annie F.
1999-03-01
The majority of scientific papers focusing on wood classification for pencil manufacturing take into account defects and visual appearance. Traditional methodologies are based on texture analysis by co-occurrence matrix, by image modeling, or by tonal measures over the plate surface. In this work, we propose to classify plates of wood without biological defects like insect holes, nodes, and cracks by analyzing their texture. In this methodology, we divide the plate image into several rectangular windows or local areas and reduce the number of gray levels. From each local area, we compute the histogram of differences and extract texture features, giving them as input to a Local Neuro-Fuzzy Network (LNN). These features are computed from the histogram of differences instead of from the image pixels because of their better performance and illumination independence. Among several features, such as mean, contrast, second moment, entropy, and IDN, the last three have shown the best results for network training. Each LNN output is taken as input to a Partial Neuro-Fuzzy Network (PNFN) classifying a pencil region on the plate. Finally, the outputs from the PNFN are taken as input to a Global Fuzzy Logic stage performing the plate classification. Each pencil classification within the plate is done taking into account each quality index.
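A minimal sketch of the histogram-of-differences features described above; the offset, the number of gray levels, and the exact feature definitions (which follow common texture-analysis conventions rather than the paper's exact formulas) are assumptions:

```python
# Texture features from the histogram of gray-level differences of a window:
# entropy, angular second moment, and an inverse-difference (IDN-style) term.
import numpy as np

def difference_histogram_features(window, offset=(0, 1), levels=32):
    w = (window.astype(float) / window.max() * (levels - 1)).astype(int)
    dy, dx = offset
    a = w[max(0, -dy):w.shape[0] - max(0, dy), max(0, -dx):w.shape[1] - max(0, dx)]
    b = w[max(0, dy):, max(0, dx):]
    d = np.abs(a - b).ravel()                      # gray-level differences
    p = np.bincount(d, minlength=levels) / d.size  # histogram of differences

    nz = p[p > 0]
    entropy = -np.sum(nz * np.log(nz))
    second_moment = np.sum(p ** 2)
    idn = np.sum(p / (1.0 + np.arange(levels)))
    return entropy, second_moment, idn

rng = np.random.default_rng(2)
print(difference_histogram_features(rng.integers(0, 256, (32, 32))))
```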
Wang, Junmei; Hou, Tingjun
2012-01-01
It is of great interest in modern drug design to accurately calculate the free energies of protein-ligand or nucleic acid-ligand binding. MM-PBSA (Molecular Mechanics-Poisson Boltzmann Surface Area) and MM-GBSA (Molecular Mechanics-Generalized Born Surface Area) have gained popularity in this field. For both methods, the conformational entropy, which is usually calculated through normal mode analysis (NMA), is needed to calculate absolute binding free energies. Unfortunately, NMA is computationally demanding and becomes a bottleneck of the MM-PB/GBSA-NMA methods. In this work, we have developed a fast approach to estimate the conformational entropy based upon solvent accessible surface area calculations. In our approach, the conformational entropy of a molecule, S, can be obtained by summing up the contributions of all atoms, whether they are buried or exposed. Each atom has two types of surface areas, solvent accessible surface area (SAS) and buried SAS (BSAS). The two types of surface areas are weighted to estimate the contribution of an atom to S. Atoms having the same atom type share the same weight, and a general parameter k is applied to balance the contributions of the two types of surface areas. This entropy model was parameterized using a large set of small molecules for which the conformational entropies were calculated at the B3LYP/6-31G* level, taking the solvent effect into account. The weighted solvent accessible surface area (WSAS) model was extensively evaluated in three tests. For convenience, TS, the product of temperature T and conformational entropy S, was calculated in those tests; T was always set to 298.15 K throughout this work. First, good correlations were achieved between WSAS TS and NMA TS for 44 protein or nucleic acid systems sampled with molecular dynamics simulations (10 snapshots were collected for the post-processing entropy calculations): the mean squared correlation coefficient (R²) was 0.56. For the 20 complexes in this set, the TS changes upon binding, TΔS, were also calculated, and the mean R² between NMA and WSAS was 0.67. In the second test, TS was calculated for 12 protein decoy sets (each set has 31 conformations) generated by the Rosetta software package. Again, good correlations were achieved for all decoy sets: the mean, maximum, and minimum R² were 0.73, 0.89, and 0.55, respectively. Finally, binding free energies were calculated for 6 protein systems (the numbers of inhibitors range from 4 to 18) using four scoring functions. Compared to the measured binding free energies, the mean R² values over the six protein systems were 0.51, 0.47, 0.40, and 0.43 for MM-GBSA-WSAS, MM-GBSA-NMA, MM-PBSA-WSAS, and MM-PBSA-NMA, respectively. The mean RMS errors of prediction were 1.19, 1.24, 1.41, and 1.29 kcal/mol for the four scoring functions, correspondingly. Therefore, the two scoring functions employing WSAS achieved prediction performance comparable to that of the scoring functions using NMA. It should be emphasized that no minimization was performed prior to the WSAS calculation in the last test. Although WSAS is not as rigorous as physical models such as quasi-harmonic analysis and thermodynamic integration (TI), it is computationally very efficient, as only a surface area calculation is involved and no structural minimization is required. Moreover, WSAS has achieved performance comparable to that of normal mode analysis. We expect that this model could find applications in fields like high-throughput screening (HTS), molecular docking, and rational protein design.
In those fields, efficiency is crucial since there are a large number of compounds, docking poses or protein models to be evaluated. A list of acronyms and abbreviations used in this work is provided for quick reference. PMID:22497310
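A minimal sketch of the weighted-surface-area idea in the abstract above; the atom records, weights, and balancing parameter k below are hypothetical placeholders, not the fitted WSAS parameters from the paper:

```python
# WSAS-style conformational entropy estimate: sum per-atom contributions
# from solvent-accessible (SAS) and buried (BSAS) surface areas, with one
# weight per atom type and a global parameter k balancing the two terms.
def wsas_entropy(atoms, weights, k):
    """atoms: iterable of (atom_type, sas, bsas); weights: {atom_type: w}.
    Returns a TS-like score in the units implied by the fitted weights."""
    ts = 0.0
    for atom_type, sas, bsas in atoms:
        ts += weights[atom_type] * (sas + k * bsas)
    return ts

# Hypothetical atom records and weights, for illustration only:
atoms = [("C.ar", 12.5, 8.0), ("N.am", 3.1, 10.2), ("O.2", 15.0, 2.4)]
weights = {"C.ar": 0.021, "N.am": 0.034, "O.2": 0.028}
print(wsas_entropy(atoms, weights, k=0.46))
```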
Evolution of cyclic mixmaster universes with noncomoving radiation
NASA Astrophysics Data System (ADS)
Ganguly, Chandrima; Barrow, John D.
2017-12-01
We study a model of a cyclic, spatially homogeneous, anisotropic, "mixmaster" universe of Bianchi type IX, containing a radiation field with noncomoving ("tilted" with respect to the tetrad frame of reference) velocities and vorticity. We employ a combination of numerical and approximate analytic methods to investigate the consequences of the second law of thermodynamics on the evolution. We model a smooth cycle-to-cycle evolution of the mixmaster universe, bouncing at a finite minimum, by the device of adding a comoving "ghost" field with negative energy density. In the absence of a cosmological constant, an increase in entropy, injected at the start of each cycle, causes an increase in the volume maxima, increasing approach to flatness, falling velocities and vorticities, and growing anisotropy at the expansion maxima of successive cycles. We find that the velocities oscillate rapidly as they evolve and change logarithmically in time relative to the expansion volume. When the conservation of momentum and angular momentum constraints are imposed, the spatial components of these velocities fall to smaller values when the entropy density increases, and vice versa. Isotropization is found to occur when a positive cosmological constant is added because the sequence of oscillations ends and the dynamics expand forever, evolving towards a quasi-de Sitter asymptote with constant velocity amplitudes. The case of a single cycle of evolution with a negative cosmological constant added is also studied.
YamiPred: A Novel Evolutionary Method for Predicting Pre-miRNAs and Selecting Relevant Features.
Kleftogiannis, Dimitrios; Theofilatos, Konstantinos; Likothanassis, Spiros; Mavroudi, Seferina
2015-01-01
MicroRNAs (miRNAs) are small non-coding RNAs which play a significant role in gene regulation. Predicting miRNA genes is a challenging bioinformatics problem, and existing experimental and computational methods fail to deal with it effectively. We developed YamiPred, an embedded classification method that combines the efficiency and robustness of support vector machines (SVM) with genetic algorithms (GA) for feature selection and parameter optimization. YamiPred was tested on a new and realistic human dataset and was compared with state-of-the-art computational intelligence approaches and the prevalent SVM-based tools for miRNA prediction. Experimental results indicate that YamiPred outperforms existing approaches in terms of accuracy and of the geometric mean of sensitivity and specificity. The embedded feature selection component selects a compact feature subset that contributes to the performance optimization. Further experimentation with this minimal feature subset achieved very high classification performance and revealed the minimum number of samples required for developing a robust predictor. YamiPred also confirmed the important role of commonly used features such as entropy and enthalpy, and uncovered the significance of newly introduced features, such as %A-U aggregate nucleotide frequency and positional entropy. The best model trained on human data has successfully predicted pre-miRNAs in other organisms, including viruses.
An unbalanced spectra classification method based on entropy
NASA Astrophysics Data System (ADS)
Liu, Zhong-bao; Zhao, Wen-juan
2017-05-01
How to distinguish the minority spectra from the majority of the spectra is an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is thus quite important for USCM. It can be shown that the dual form of USCM is equivalent to a minimum enclosing ball (MEB) problem; therefore, the core vector machine (CVM) is introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra, and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbors) and SVM (support vector machine) in dealing with the problem of rare spectra mining on the small- and medium-scale datasets and on the large-scale datasets, respectively.
Bilayer graphene phonovoltaic-FET: In situ phonon recycling
NASA Astrophysics Data System (ADS)
Melnick, Corey; Kaviany, Massoud
2017-11-01
A new heat harvester, the phonovoltaic (pV) cell, was recently proposed. The device converts optical phonons into power before they become heat. Due to the low entropy of a typical hot optical phonon population, the phonovoltaic can operate at high fractions of the Carnot limit and harvest heat more efficiently than conventional heat harvesting technologies such as the thermoelectric generator. Previously, the optical phonon source was presumed to produce optical phonons with a single polarization and momentum. Here, we examine a realistic optical phonon source in a potential pV application and the effects this has on pV operation. Supplementing this work is our investigation of bilayer graphene as a new pV material. Our ab initio calculations show that bilayer graphene has a figure of merit exceeding 0.9, well above previously investigated materials. This allows a room-temperature pV to recycle 65% of a highly nonequilibrium, minimum entropy population of phonons. However, full-band Monte Carlo simulations of the electron and phonon dynamics in a bilayer graphene field-effect transistor (FET) show that the optical phonons emitted by field-accelerated electrons can only be recycled in situ with an efficiency of 50%, and this efficiency falls as the field strength grows. Still, an appropriately designed FET-pV can recycle the phonons produced therein in situ with a much higher efficiency than a thermoelectric generator can harvest heat produced by a FET ex situ.
NASA Technical Reports Server (NTRS)
Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard
2013-01-01
Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
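Bias correction in statistical downscaling is often implemented by quantile mapping; the sketch below shows a generic empirical quantile-mapping step on invented data, and the SDBC method used in the study has its own specifics, so treat this as an illustration only:

```python
# Empirical quantile mapping: map model values through the
# historical-model -> observation quantile relation.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    q = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, q)   # model's historical quantiles
    oq = np.quantile(obs_hist, q)     # observed quantiles
    # each future value is located on the model's quantile curve, then
    # replaced by the corresponding observed value
    return np.interp(model_future, mq, oq)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, 5000)       # "observed" daily values
model = obs * 1.3 + 0.5               # biased model output
future = model + 1.0                  # model's future projection
corrected = quantile_map(model, obs, future)
print(f"obs mean={obs.mean():.2f}, raw future mean={future.mean():.2f}, "
      f"corrected mean={corrected.mean():.2f}")
```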
Changes of the Oceanic Long-term and seasonal variation in a Global-warming Climate
NASA Astrophysics Data System (ADS)
Xia, Q.; He, Y.; Dong, C.
2015-12-01
Gridded absolute dynamic topography (ADT) from AVISO and sea surface height above geoid outputs from a series of climate models run for CMIP5 are used to analyze global sea level variation. The variance has been calculated to determine the magnitude of change in sea level variation over two decades. An increasing trend in the variance of ADT suggests enhanced fluctuation, as well as enhanced geostrophic shear, of the global ocean. To further determine on what scale the increasing fluctuation dominates, the global ADT has been separated into two distinct parts: the global five-year mean sea surface (MSS) and the residual absolute dynamic topography (RADT). The increased variance of the MSS can be ascribed to the nonuniform rise of global sea level and an enhancement of ocean gyres in the Pacific Ocean. The trend in the variance of the RADT, in contrast, is found to be close to zero, which suggests unchanged ocean mesoscale variability. The Gaussian-like distribution of global ADT is used to study the change in extreme sea levels. Information entropy has also been adopted in our study. An increasing trend of information entropy, which measures the degree of dispersion of a probability distribution, suggests more frequent occurrence of extreme sea levels. Extreme high sea levels are increasing at a higher rate than the mean sea level rise.
Defect-detection algorithm for noncontact acoustic inspection using spectrum entropy
NASA Astrophysics Data System (ADS)
Sugimoto, Kazuko; Akamatsu, Ryo; Sugimoto, Tsuneyoshi; Utagawa, Noriyuki; Kuroda, Chitose; Katakura, Kageyoshi
2015-07-01
In recent years, the detachment of concrete from bridges or tunnels and the degradation of concrete structures have become serious social problems. The importance of inspection, repair, and updating is recognized in measures against degradation. We have so far studied a noncontact acoustic inspection method using airborne sound and a laser Doppler vibrometer. In this method, depending on the surface state (reflectance, dirt, etc.), the quantity of returning laser light decreases, and optical noise arises from leakage in the light reception. Influencing factors include the stability of the laser Doppler vibrometer output, the low reflectance of the measurement surface, its diffuse reflection characteristics, the measurement distance, and the laser irradiation angle. Because the frequency characteristic of the optical noise resembles that of white noise, defect detection that relies on the vibration energy ratio alone may mistake optical noise caused by light-reception leakage for a defective part. Therefore, in this work, the combination of the vibration energy ratio and spectrum entropy is used to judge whether a measured point is healthy, defective, or an abnormal measurement point. An algorithm that enables clearer detection of defective parts is proposed. When our technique was applied in an experiment with real concrete structures, the defective parts could be extracted more clearly, and the validity of our proposed algorithm was confirmed.
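A minimal sketch of spectrum entropy as a spectral-flatness indicator, the property the algorithm above exploits: white-like optical noise spreads power across the spectrum and yields entropy near the maximum, while a resonant (defect) vibration concentrates power in few bins and yields low entropy. The sampling rate and test signals are illustrative:

```python
# Normalized spectral entropy of a signal's power spectrum.
import numpy as np

def spectrum_entropy(signal):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    h = -np.sum(p * np.log2(p))
    return h / np.log2(len(spectrum))      # normalized to [0, 1]

t = np.arange(0, 1, 1e-3)                  # 1 s at 1 kHz
rng = np.random.default_rng(4)
print(spectrum_entropy(np.sin(2 * np.pi * 50 * t)))     # low: resonance peak
print(spectrum_entropy(rng.standard_normal(len(t))))    # near 1: white noise
```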
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
Chirikjian, Gregory S.
2011-01-01
Proteins fold from a highly disordered state into a highly ordered one. Traditionally, the folding problem has been stated as one of predicting ‘the’ tertiary structure from sequential information. However, new evidence suggests that the ensemble of unfolded forms may not be as disordered as once believed, and that the native form of many proteins may not be described by a single conformation, but rather an ensemble of its own. Quantifying the relative disorder in the folded and unfolded ensembles as an entropy difference may therefore shed light on the folding process. One issue that clouds discussions of ‘entropy’ is that many different kinds of entropy can be defined: entropy associated with overall translational and rotational Brownian motion, configurational entropy, vibrational entropy, conformational entropy computed in internal or Cartesian coordinates (which can even be different from each other), conformational entropy computed on a lattice; each of the above with different solvation and solvent models; thermodynamic entropy measured experimentally, etc. The focus of this work is the conformational entropy of coil/loop regions in proteins. New mathematical modeling tools for the approximation of changes in conformational entropy during transition from unfolded to folded ensembles are introduced. In particular, models for computing lower and upper bounds on entropy for polymer models of polypeptide coils both with and without end constraints are presented. The methods reviewed here include kinematics (the mathematics of rigid-body motions), classical statistical mechanics and information theory. PMID:21187223
Use of mutual information to decrease entropy: Implications for the second law of thermodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, S.
1989-05-15
Several theorems on the mechanics of gathering information are proved, and the possibility of violating the second law of thermodynamics by obtaining information is discussed in light of these theorems. Maxwell's demon can lower the entropy of his surroundings by an amount equal to the difference between the maximum entropy of his recording device and its initial entropy, without generating a compensating entropy increase. A demon with human-scale recording devices can reduce the entropy of a gas by a negligible amount only, but the proof of the demon's impracticability leaves open the possibility that systems highly correlated with their environment can reduce the environment's entropy by a substantial amount without increasing entropy elsewhere. In the event that a boundary condition for the universe requires it to be in a state of low entropy when small, the correlations induced between different particle modes during the expansion phase allow the modes to behave like Maxwell's demons during the contracting phase, reducing the entropy of the universe to a low value.
High-Speed Optical Wide-Area Data-Communication Network
NASA Technical Reports Server (NTRS)
Monacos, Steve P.
1994-01-01
Proposed fiber-optic wide-area network (WAN) for digital communication balances input and output flows of data with its internal capacity by routing traffic via dynamically interconnected routing planes. Data transmitted optically through network by wavelength-division multiplexing in synchronous or asynchronous packets. WAN implemented with currently available technology. Network is multiple-ring cyclic shuffle exchange network ensuring traffic reaches its destination with minimum number of hops.
Boundaries of the Realizability Region of Membrane Separation Processes
NASA Astrophysics Data System (ADS)
Tsirlin, A. M.; Akhrenemkov, A. A.
2018-01-01
The region of realizability of membrane separation systems having a constant total membrane area has been determined for a given output of the final product at a given composition of the mixture flow. The law of variation of the pressure in the mixture that corresponds to the minimum energy required for separation was made explicit for media whose properties are close to those of ideal gases and ideal solutions.
Dagade, Dilip H; Shetake, Poonam K; Patil, Kesharsingh J
2007-07-05
The density and osmotic coefficient data for solutions of 15-crown-5 (15C5) in water and in CCl4 solvent systems at 298.15 K have been reported, using the techniques of densitometry and vapor pressure osmometry, over the concentration range 0.01-2 mol kg⁻¹. The data are used to obtain apparent molar and partial molar volumes and activity coefficients of the components as functions of 15C5 concentration. Using literature heat-of-dilution data for the aqueous system, it has become possible to calculate the entropy of mixing (ΔS_mix), the excess entropy of solution (ΔS^E), and the partial molar entropies of the components at different concentrations. The results are compared to those obtained for aqueous 18-crown-6 solutions reported earlier. It has been observed that the partial molar volume of 15C5 goes through a minimum and that of water goes through a maximum at approximately 1.2 mol kg⁻¹ in aqueous solution, whereas the opposite is true in CCl4 medium, at approximately 0.5 mol kg⁻¹. The osmotic and activity coefficients of 15C5 and the excess free energy change of solution exhibit distinct differences in the two solvent systems studied. These results have been explained in terms of hydrophobic hydration and interactions in aqueous solution, while weak solvophobic association of 15C5 molecules is proposed in CCl4 solutions. The data are further analyzed by applying the McMillan-Mayer and Kirkwood-Buff theories of solutions. The analysis shows that the osmotic second virial coefficient value for 15C5 is marginally less than that of 18C6, indicating that the reduction in ring flexibility does not much affect the energetics of the interactions in aqueous solution, while the energetics are strongly influenced in the nonpolar solvent CCl4.
Tendency towards maximum complexity in a nonequilibrium isolated system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calbet, Xavier; Lopez-Ruiz, Ricardo
2001-06-01
The time evolution equations of a simplified isolated ideal gas, the "tetrahedral" gas, are derived. The dynamical behavior of the Lopez-Ruiz-Mancini-Calbet complexity [R. Lopez-Ruiz, H. L. Mancini, and X. Calbet, Phys. Lett. A 209, 321 (1995)] is studied in this system. In general, it is shown that the complexity remains within the bounds of minimum and maximum complexity. We find that there are certain restrictions when the isolated "tetrahedral" gas evolves towards equilibrium. In addition to the well-known increase in entropy, the quantity called disequilibrium decreases monotonically with time. Furthermore, the trajectories of the system in phase space approach the maximum complexity path as it evolves toward equilibrium.
Alpizar-Reyes, E; Castaño, J; Carrillo-Navas, H; Alvarez-Ramírez, J; Gallardo-Rivera, R; Pérez-Alonso, C; Guadarrama-Lezama, A Y
2018-03-01
Freeze-dried faba bean (Vicia faba L.) protein adsorption isotherms were determined at 25, 35 and 40 °C and fitted with the Guggenheim-Anderson-de Boer model. The pore radii of the protein were in the range of 0.87-6.44 nm, so the pores were classified as micropores and mesopores. The minimum integral entropy, found at moisture contents between 4.33 and 4.44 kg H2O/100 kg d.s., was regarded as the point of maximum stability. The glass transition temperature of the protein equilibrated at the different storage conditions was determined, showing that the protein remained in a glassy state in all cases. The protein showed compact and rigid structures, as evidenced by microscopy analysis.
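For reference, the sketch below fits the Guggenheim-Anderson-de Boer (GAB) model named above to an invented isotherm dataset; the data points and initial guesses are illustrative only:

```python
# GAB isotherm: M(aw) = M0*C*K*aw / ((1 - K*aw) * (1 - K*aw + C*K*aw)),
# fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, m0, c, k):
    return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

aw = np.array([0.11, 0.23, 0.33, 0.44, 0.58, 0.69, 0.75])   # water activity
m = np.array([3.1, 4.6, 5.5, 6.6, 8.4, 10.9, 12.8])         # kg H2O/100 kg d.s.
(m0, c, k), _ = curve_fit(gab, aw, m, p0=[5.0, 10.0, 0.8])
print(f"M0={m0:.2f}, C={c:.2f}, K={k:.3f}")
```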
Renyi entropy measures of heart rate Gaussianity.
Lake, Douglas E
2006-01-01
Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggests that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
Valence bond and von Neumann entanglement entropy in Heisenberg ladders.
Kallin, Ann B; González, Iván; Hastings, Matthew B; Melko, Roger G
2009-09-11
We present a direct comparison of the recently proposed valence bond entanglement entropy and the von Neumann entanglement entropy on spin-1/2 Heisenberg systems using quantum Monte Carlo and density-matrix renormalization group simulations. For one-dimensional chains we show that the valence bond entropy can be either less or greater than the von Neumann entropy; hence, it cannot provide a bound on the latter. On ladder geometries, simulations with up to seven legs are sufficient to indicate that the von Neumann entropy in two dimensions obeys an area law, even though the valence bond entanglement entropy has a multiplicative logarithmic correction.
Quantum thermodynamics of general quantum processes.
Binder, Felix; Vinjanampathy, Sai; Modi, Kavan; Goold, John
2015-03-01
Accurately describing work extraction from a quantum system is a central objective for the extension of thermodynamics to individual quantum systems. The concepts of work and heat are surprisingly subtle when generalizations are made to arbitrary quantum states. We formulate an operational thermodynamics suitable for application to an open quantum system undergoing quantum evolution under a general quantum process by which we mean a completely positive and trace-preserving map. We derive an operational first law of thermodynamics for such processes and show consistency with the second law. We show that heat, from the first law, is positive when the input state of the map majorizes the output state. Moreover, the change in entropy is also positive for the same majorization condition. This makes a strong connection between the two operational laws of thermodynamics.
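The majorization condition stated above is easy to check numerically for the classical spectra of the input and output states; a minimal sketch, with illustrative spectra:

```python
# p majorizes q iff every partial sum of p's eigenvalues, sorted in
# decreasing order, dominates the corresponding partial sum for q.
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if distribution p majorizes q (both sum to 1)."""
    p = np.sort(np.asarray(p, float))[::-1]
    q = np.sort(np.asarray(q, float))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

pure = [1.0, 0.0]          # pure input spectrum
mixed = [0.7, 0.3]         # more mixed output spectrum
print(majorizes(pure, mixed))    # True: the condition for positive heat holds
print(majorizes(mixed, pure))    # False
```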
Endoreversible quantum heat engines in the linear response regime.
Wang, Honghui; He, Jizhou; Wang, Jianhui
2017-07-01
We analyze general models of quantum heat engines operating a cycle of two adiabatic and two isothermal processes. We use the quantum master equation for a system to describe heat transfer current during a thermodynamic process in contact with a heat reservoir, with no use of phenomenological thermal conduction. We apply the endoreversibility description to such engine models working in the linear response regime and derive expressions of the efficiency and the power. By analyzing the entropy production rate along a single cycle, we identify the thermodynamic flux and force that a linear relation connects. From maximizing the power output, we find that such heat engines satisfy the tight-coupling condition and the efficiency at maximum power agrees with the Curzon-Ahlborn efficiency known as the upper bound in the linear response regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feitelson, J.; Mauzerall, D.C.
1993-08-12
Wide-band, time-resolved, pulsed photoacoustics has been employed to study the electron-transfer reaction between a triplet magnesium porphyrin and various quinones in polar and nonpolar solvents. The reaction rate constants are near encounter limited. The yield of triplet state is 70% in both solvents. The yield of ions is 85% in the former and zero in the latter, in agreement with spin dephasing time and escape times from the Coulomb wells in the two solvents. In methanol the plot of measured heat output versus quinone redox potential is linear. This implies that the entropy of electron transfer is constant through the series, but it may not be negligible. 16 refs., 2 figs., 1 tab.
Time-delay signature of chaos in 1550 nm VCSELs with variable-polarization FBG feedback.
Li, Yan; Wu, Zheng-Mao; Zhong, Zhu-Qiang; Yang, Xian-Jie; Mao, Song; Xia, Guang-Qiong
2014-08-11
Based on the framework of the spin-flip model (SFM), the output characteristics of a 1550 nm vertical-cavity surface-emitting laser (VCSEL) subject to variable-polarization fiber Bragg grating (FBG) feedback (VPFBGF) have been investigated. With the aid of the self-correlation function (SF) and the permutation entropy (PE) function, the time-delay signature (TDS) of chaos in the VPFBGF-VCSEL is evaluated, and the influences of the operation parameters on the TDS of chaos are analyzed. The results show that the TDS of chaos can be suppressed efficiently by selecting a suitable coupling coefficient and feedback rate of the FBG, and it is weaker than the TDS of chaos generated by traditional variable-polarization mirror feedback VCSELs (VPMF-VCSELs) or polarization-preserved FBG feedback VCSELs (PPFBGF-VCSELs).
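A minimal sketch of the permutation-entropy estimator used for TDS identification; the embedding dimension, delay scan, and the toy delayed-feedback series are illustrative assumptions, not the paper's laser model:

```python
# Permutation entropy (Bandt-Pompe): count ordinal patterns of d samples
# spaced by delay tau; a dip in PE versus tau betrays the feedback delay.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, d=4, tau=1):
    patterns = Counter(
        tuple(np.argsort(x[i:i + d * tau:tau])) for i in range(len(x) - d * tau)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(d))   # normalized to [0, 1]

rng = np.random.default_rng(5)
x = rng.standard_normal(20000)
x[100:] += 0.8 * x[:-100]                    # toy "feedback" with delay 100
taus = np.arange(80, 121, 5)
pe = [permutation_entropy(x, tau=int(t)) for t in taus]
print(taus[int(np.argmin(pe))])              # recovers the injected 100-sample delay
```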
Image sensor system with bio-inspired efficient coding and adaptation.
Okuno, Hirotsugu; Yagi, Tetsuya
2012-08-01
We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
NASA Astrophysics Data System (ADS)
Amaral, Barbara; Cabello, Adán; Cunha, Marcelo Terra; Aolita, Leandro
2018-03-01
Contextuality is a fundamental feature of quantum theory necessary for certain models of quantum computation and communication. Serious steps have therefore been taken towards a formal framework for contextuality as an operational resource. However, the main ingredient of a resource theory—a concrete, explicit form of free operations of contextuality—was still missing. Here we provide such a component by introducing noncontextual wirings: a class of contextuality-free operations with a clear operational interpretation and a friendly parametrization. We characterize them completely for general black-box measurement devices with arbitrarily many inputs and outputs. As applications, we show that the relative entropy of contextuality is a contextuality monotone and that maximally contextual boxes that serve as contextuality bits exist for a broad class of scenarios. Our results complete a unified resource-theoretic framework for contextuality and Bell nonlocality.
Amaral, Barbara; Cabello, Adán; Cunha, Marcelo Terra; Aolita, Leandro
2018-03-30
Contextuality is a fundamental feature of quantum theory necessary for certain models of quantum computation and communication. Serious steps have therefore been taken towards a formal framework for contextuality as an operational resource. However, the main ingredient of a resource theory-a concrete, explicit form of free operations of contextuality-was still missing. Here we provide such a component by introducing noncontextual wirings: a class of contextuality-free operations with a clear operational interpretation and a friendly parametrization. We characterize them completely for general black-box measurement devices with arbitrarily many inputs and outputs. As applications, we show that the relative entropy of contextuality is a contextuality monotone and that maximally contextual boxes that serve as contextuality bits exist for a broad class of scenarios. Our results complete a unified resource-theoretic framework for contextuality and Bell nonlocality.
Restricted numerical range: A versatile tool in the theory of quantum information
NASA Astrophysics Data System (ADS)
Gawron, Piotr; Puchała, Zbigniew; Miszczak, Jarosław Adam; Skowronek, Łukasz; Życzkowski, Karol
2010-10-01
The numerical range of a Hermitian operator X is defined as the set of all possible expectation values of this observable in a normalized quantum state. We analyze a modification of this definition in which the expectation value is taken over a certain subset of the set of all quantum states. One considers, for instance, the set of real states, the set of product states, separable states, or the set of maximally entangled states. We show exemplary applications of these algebraic tools in the theory of quantum information: the analysis of k-positive maps and entanglement witnesses, as well as the study of the minimal output entropy of a quantum channel. The product numerical range of a unitary operator is used to solve the problem of local distinguishability of a family of two unitary gates.
Entropy in molecular recognition by proteins
Caro, José A.; Harpole, Kyle W.; Kasinath, Vignesh; Lim, Jackwee; Granja, Jeffrey; Valentine, Kathleen G.; Sharp, Kim A.
2017-01-01
Molecular recognition by proteins is fundamental to molecular biology. Dissection of the thermodynamic energy terms governing protein–ligand interactions has proven difficult, with determination of entropic contributions being particularly elusive. NMR relaxation measurements have suggested that changes in protein conformational entropy can be quantitatively obtained through a dynamical proxy, but the generality of this relationship has not been shown. Twenty-eight protein–ligand complexes are used to show a quantitative relationship between measures of fast side-chain motion and the underlying conformational entropy. We find that the contribution of conformational entropy can range from favorable to unfavorable, which demonstrates the potential of this thermodynamic variable to modulate protein–ligand interactions. For about one-quarter of these complexes, the absence of conformational entropy would render the resulting affinity biologically meaningless. The dynamical proxy for conformational entropy or “entropy meter” also allows for refinement of the contributions of solvent entropy and the loss in rotational-translational entropy accompanying formation of high-affinity complexes. Furthermore, structure-based application of the approach can also provide insight into long-lived specific water–protein interactions that escape the generic treatments of solvent entropy based simply on changes in accessible surface area. These results provide a comprehensive and unified view of the general role of entropy in high-affinity molecular recognition by proteins. PMID:28584100
Statistical mechanical theory of liquid entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, D.C.
The multiparticle correlation expansion for the entropy of a classical monatomic liquid is presented. This entropy expresses the physical picture in which there is no free particle motion, but rather, each atom moves within a cage formed by its neighbors. The liquid expansion, including only pair correlations, gives an excellent account of the experimental entropy of most liquid metals, of liquid argon, and the hard sphere liquid. The pair correlation entropy is well approximated by a universal function of temperature. Higher order correlation entropy, due to n-particle irreducible correlations for n ≥ 3, is significant in only a few liquid metals, and its occurrence suggests the presence of n-body forces. When the liquid theory is applied to the study of melting, the author discovers the important classification of normal and anomalous melting, according to whether there is not or is a significant change in the electronic structure upon melting, and he discovers the universal disordering entropy for melting of a monatomic crystal. Interesting directions for future research are: extension to include orientational correlations of molecules, theoretical calculation of the entropy of water, application to the entropy of the amorphous state, and correlational entropy of compressed argon. The author clarifies the relation among different entropy expansions in the recent literature.
Characterization of time series via Rényi complexity-entropy curves
NASA Astrophysics Data System (ADS)
Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.
2018-05-01
One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
Beyond the Shannon–Khinchin formulation: The composability axiom and the universal-group entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es
2016-02-15
The notion of entropy is ubiquitous both in natural and social sciences. In the last two decades, a considerable effort has been devoted to the study of new entropic forms, which generalize the standard Boltzmann–Gibbs (BG) entropy and could be applicable in thermodynamics, quantum mechanics and information theory. In Khinchin (1957), extending previous ideas of Shannon (1948) and Shannon and Weaver (1949), a characterization of the BG entropy was proposed, based on four requirements nowadays known as the Shannon–Khinchin (SK) axioms. The purpose of this paper is twofold. First, we show that there exists an intrinsic group-theoretical structure behind the notion of entropy. It comes from the requirement of composability of an entropy with respect to the union of two statistically independent systems, which we propose in an axiomatic formulation. Second, we show that there exists a simple universal family of trace-form entropies. This class contains many well known examples of entropies and infinitely many new ones, a priori multi-parametric. Due to its specific relation with Lazard's universal formal group of algebraic topology, the new general entropy introduced in this work will be called the universal-group entropy. A new example of multi-parametric entropy is explicitly constructed.
2016-01-01
We shall prove that the celebrated Rényi entropy is the first example of a new family of infinitely many multi-parametric entropies. We shall call them the Z-entropies. Each of them, under suitable hypotheses, generalizes the celebrated entropies of Boltzmann and Rényi. A crucial aspect is that every Z-entropy is composable (Tempesta 2016 Ann. Phys. 365, 180–197. (doi:10.1016/j.aop.2015.08.013)). This property means that the entropy of a system which is composed of two or more independent systems depends, in all the associated probability space, on the choice of the two systems only. Further properties are also required to describe the composition process in terms of a group law. The composability axiom, introduced as a generalization of the fourth Shannon–Khinchin axiom (postulating additivity), is a highly non-trivial requirement. Indeed, in the trace-form class, the Boltzmann entropy and Tsallis entropy are the only known composable cases. However, in the non-trace form class, the Z-entropies arise as new entropic functions possessing the mathematical properties necessary for information-theoretical applications, in both classical and quantum contexts. From a mathematical point of view, composability is intimately related to formal group theory of algebraic topology. The underlying group-theoretical structure determines crucially the statistical properties of the corresponding entropies. PMID:27956871
Entropy in molecular recognition by proteins.
Caro, José A; Harpole, Kyle W; Kasinath, Vignesh; Lim, Jackwee; Granja, Jeffrey; Valentine, Kathleen G; Sharp, Kim A; Wand, A Joshua
2017-06-20
Molecular recognition by proteins is fundamental to molecular biology. Dissection of the thermodynamic energy terms governing protein-ligand interactions has proven difficult, with determination of entropic contributions being particularly elusive. NMR relaxation measurements have suggested that changes in protein conformational entropy can be quantitatively obtained through a dynamical proxy, but the generality of this relationship has not been shown. Twenty-eight protein-ligand complexes are used to show a quantitative relationship between measures of fast side-chain motion and the underlying conformational entropy. We find that the contribution of conformational entropy can range from favorable to unfavorable, which demonstrates the potential of this thermodynamic variable to modulate protein-ligand interactions. For about one-quarter of these complexes, the absence of conformational entropy would render the resulting affinity biologically meaningless. The dynamical proxy for conformational entropy or "entropy meter" also allows for refinement of the contributions of solvent entropy and the loss in rotational-translational entropy accompanying formation of high-affinity complexes. Furthermore, structure-based application of the approach can also provide insight into long-lived specific water-protein interactions that escape the generic treatments of solvent entropy based simply on changes in accessible surface area. These results provide a comprehensive and unified view of the general role of entropy in high-affinity molecular recognition by proteins.
NASA Astrophysics Data System (ADS)
Preda, Vasile; Dedu, Silvia; Gheorghe, Carmen
2015-10-01
In this paper, by using the entropy maximization principle with Tsallis entropy, new distribution families for modeling the income distribution are derived. Also, new classes of Lorenz curves are obtained by applying the entropy maximization principle with Tsallis entropy, under mean and Gini index equality and inequality constraints.
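For illustration, here is a minimal numerical sketch of the underlying principle: maximize the Tsallis entropy S_q = (1 - Σ p_i^q)/(q - 1) over a discrete grid subject to constraints. The grid, the single mean constraint (standing in for the paper's mean and Gini-index constraints), and q = 1.5 are hypothetical choices; the known analytic maximizers are q-exponential (generalized-Pareto-type) densities.

    import numpy as np
    from scipy.optimize import minimize

    def tsallis_entropy(p, q):
        """S_q = (1 - sum p_i^q) / (q - 1); recovers the Shannon entropy as q -> 1."""
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def max_tsallis(x, mean_target, q):
        """Numerically maximize S_q over distributions on grid x with a fixed mean.
        The maximizer is known to be a q-exponential; this just verifies the shape."""
        n = len(x)
        cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "eq", "fun": lambda p: p @ x - mean_target}]
        res = minimize(lambda p: -tsallis_entropy(p, q), np.full(n, 1.0 / n),
                       bounds=[(1e-12, 1.0)] * n, constraints=cons)
        return res.x

    x = np.linspace(0.0, 10.0, 50)        # hypothetical income grid
    p_q = max_tsallis(x, mean_target=3.0, q=1.5)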
NASA Astrophysics Data System (ADS)
Kerner, H. R.; Bell, J. F., III; Ben Amor, H.
2017-12-01
The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
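A minimal sketch of the labeling signal described here, assuming grayscale images and a 64-bin joint histogram (both illustrative choices); how the entropy value is thresholded into acceptable/re-transmit labels is tuned on examples in the paper and is not reproduced here.

    import numpy as np

    def joint_entropy(img_a, img_b, bins=64):
        """Shannon joint entropy (bits) of two equally sized grayscale images,
        estimated from their joint intensity histogram."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Hypothetical labeling rule: compute the joint entropy between each
    # compressed image and its lossless counterpart, then map values to
    # quality labels via a threshold tuned on example pairs.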
Entropy change of biological dynamics in COPD.
Jin, Yu; Chen, Chang; Cao, Zhixin; Sun, Baoqing; Lo, Iek Long; Liu, Tzu-Ming; Zheng, Jun; Sun, Shixue; Shi, Yan; Zhang, Xiaohua Douglas
2017-01-01
In this century, the rapid development of large-scale data storage technologies, mobile network technology, and portable medical devices has made it possible to measure, record, store, and analyze large amounts of data from human physiological signals. Entropy is a key metric for quantifying the irregularity contained in physiological signals. In this review, we focus on how entropy changes in various physiological signals in COPD. Our review concludes that the entropy change depends on the type of physiological signal under investigation. For major physiological signals related to respiratory diseases, such as airflow, heart rate variability, and gait variability, the entropy of a patient with COPD is lower than that of a healthy person. However, in the case of hormone secretion and respiratory sound, the entropy of a patient is higher than that of a healthy person. For the mechanomyogram signal, the entropy increases with the severity of COPD. These results should give valuable guidance for the use of entropy for physiological signals measured by wearable medical devices, as well as for further research on entropy in COPD.
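As a concrete example of the kind of irregularity metric surveyed here, the following is a minimal sample-entropy sketch (one of several entropies commonly applied to airflow or heart-rate-variability series); the template length m = 2 and tolerance r = 0.2·SD are conventional defaults, and guards against zero match counts are omitted.

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """Sample entropy -ln(A/B), where B counts pairs of m-point templates
        within tolerance r (Chebyshev distance) and A counts the same for
        (m+1)-point templates. Lower values indicate a more regular signal."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)
        def matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - m)])  # N-m templates
            c = 0
            for i in range(len(t)):
                d = np.max(np.abs(t - t[i]), axis=1)
                c += np.sum(d <= r) - 1   # exclude the self-match
            return c
        return -np.log(matches(m + 1) / matches(m))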
NASA Astrophysics Data System (ADS)
Lechner, Joseph H.
1999-10-01
This report describes two classroom activities that help students visualize the abstract concept of entropy and apply the second law of thermodynamics to real situations. (i) A sealed "rainbow tube" contains six smaller vessels, each filled with a different brightly colored solution (low entropy). When the tube is inverted, the solutions mix together and react to form an amorphous precipitate (high entropy). The change from low entropy to high entropy is irreversible as long as the tube remains sealed. (ii) When U.S. currency is withdrawn from circulation, intact bills (low entropy) are shredded into small fragments (high entropy). Shredding is quick and easy; the reverse process is clearly nonspontaneous. It is theoretically possible, but time-consuming and energy-intensive, to reassemble one bill from a pile that contains fragments of hundreds of bills. We calculate the probability P of drawing pieces of only one specific bill from a mixture containing one pound of bills, each shredded into n fragments. This result can be related to Boltzmann's entropy formula S = k ln W.
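A back-of-the-envelope version of the currency calculation, under the assumption that a bill weighs about 1 g (so roughly 454 bills per pound) and an arbitrary choice of n = 16 fragments per bill:

    from math import comb, log

    # Hypothetical numbers: ~454 bills per pound, 16 fragments per bill.
    B, n = 454, 16

    # Drawing n fragments without replacement, the chance that they are
    # exactly the n pieces of one specific bill:
    P = 1 / comb(B * n, n)

    # Boltzmann's formula S = k ln W, with the number of equally likely
    # draws W = C(B*n, n) playing the role of the microstate count:
    k = 1.380649e-23          # Boltzmann constant, J/K
    S = k * log(comb(B * n, n))
    print(P, S)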
Ito, Sosuke
2016-01-01
The transfer entropy is a well-established measure of information flow, which quantifies directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that the backward transfer entropy quantifies how far the dynamics are from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in two quite different settings: thermodynamics of information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of some benefit, whereas the conventional transfer entropy characterizes a possible benefit. Our result implies a deep connection between thermodynamics and gambling in the presence of information flow, and suggests that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, the biochemical sciences, economics, and statistics. PMID:27833120
NASA Astrophysics Data System (ADS)
Ito, Sosuke
2016-11-01
The transfer entropy is a well-established measure of information flow, which quantifies directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward time series, called the backward transfer entropy, and show that the backward transfer entropy quantifies how far the dynamics are from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in two quite different settings: thermodynamics of information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of some benefit, whereas the conventional transfer entropy characterizes a possible benefit. Our result implies a deep connection between thermodynamics and gambling in the presence of information flow, and suggests that the backward transfer entropy would be useful as a novel measure of information flow in nonequilibrium thermodynamics, the biochemical sciences, economics, and statistics.
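A minimal plug-in estimator illustrating both quantities for discrete-valued series; the single-step lag convention is an assumption, and the backward quantity is computed simply as the transfer entropy of the time-reversed series, per the definition quoted above.

    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y):
        """Plug-in estimate of T_{Y->X} for discrete-valued series:
        sum over p(x1, x0, y0) * log[ p(x1|x0,y0) / p(x1|x0) ]."""
        triples = Counter(zip(x[1:], x[:-1], y[:-1]))
        pairs_xx = Counter(zip(x[1:], x[:-1]))
        pairs_xy = Counter(zip(x[:-1], y[:-1]))
        singles = Counter(x[:-1])
        n = len(x) - 1
        te = 0.0
        for (x1, x0, y0), c in triples.items():
            p_joint = c / n
            p_cond_full = c / pairs_xy[(x0, y0)]          # p(x1 | x0, y0)
            p_cond_x = pairs_xx[(x1, x0)] / singles[x0]   # p(x1 | x0)
            te += p_joint * np.log(p_cond_full / p_cond_x)
        return te

    def backward_transfer_entropy(x, y):
        """Transfer entropy evaluated on the time-reversed series."""
        return transfer_entropy(x[::-1], y[::-1])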
A new Method for Determining the Interplanetary Current-Sheet Local Orientation
NASA Astrophysics Data System (ADS)
Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.
2003-03-01
In this work we have developed a new method for determining the local parameters of the interplanetary current sheet. The method, called `HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. It has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model's performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, although in many cases the two behave similarly, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than MVM.
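For reference, the classic MVM baseline against which HYTARO is compared amounts to an eigen-decomposition of the field covariance matrix; the sketch below shows that baseline (not the HYTARO fit itself), with the usual intermediate-to-minimum eigenvalue ratio as a reliability check.

    import numpy as np

    def mvm_normal(B):
        """Classic Minimum Variance Method: the eigenvector of the magnetic-field
        covariance matrix with the smallest eigenvalue estimates the current-sheet
        normal. B is an (N, 3) array of field samples across the crossing."""
        M = np.cov(B, rowvar=False)            # 3x3 covariance of Bx, By, Bz
        eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
        normal = eigvecs[:, 0]                 # minimum-variance direction
        quality = eigvals[1] / eigvals[0]      # intermediate/minimum ratio;
        return normal, quality                 # small ratios flag unreliable fits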
Multilayer perceptron, fuzzy sets, and classification
NASA Technical Reports Server (NTRS)
Pal, Sankar K.; Mitra, Sushmita
1992-01-01
A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.
Reverse matrix converter control method for PMSM drives using DPC
NASA Astrophysics Data System (ADS)
Bak, Yeongsu; Lee, Kyo-Beum
2018-05-01
This paper proposes a control method for a reverse matrix converter (RMC) that drives a three-phase permanent magnet synchronous motor (PMSM). In the proposed method, direct power control (DPC) is used to control the voltage source rectifier of the RMC. The RMC is an indirect matrix converter operating in boost mode, in which the power-flow directions of the input and output are switched; it has a minimum voltage transfer ratio of 1/0.866 in the linear-modulation region. A control method employing DPC is proposed in order to control the RMC driving a PMSM at the output stage. Simulation and experimental results verify the effectiveness of the proposed control method.
Wavelet entropy characterization of elevated intracranial pressure.
Xu, Peng; Scalzo, Fabien; Bergsneider, Marvin; Vespa, Paul; Chad, Miller; Hu, Xiao
2008-01-01
Intracranial hypertension (ICH) often occurs in patients with traumatic brain injury (TBI), stroke, or tumors. The pathology of ICH is still controversial. In this work, we used wavelet entropy and relative wavelet entropy to study, for the first time, the difference between the normal and hypertensive states of intracranial pressure (ICP). The wavelet entropy revealed findings similar to those of approximate entropy: entropy during the ICH state is smaller than in the normal state. Moreover, wavelet entropy shows that the ICH state has more focused energy in the low wavelet frequency band (0-3.1 Hz) than the normal state. The relative wavelet entropy shows that the energy distribution across the wavelet bands actually differs between the two states. Based on these results, we suggest that ICH may be formed by the re-allocation of oscillation energy within the brain.
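A minimal sketch of the two measures used here, assuming a discrete wavelet decomposition with PyWavelets; the db4 wavelet and 6 decomposition levels are illustrative choices, not necessarily those of the study.

    import numpy as np
    import pywt

    def wavelet_entropy(signal, wavelet="db4", level=6):
        """Wavelet entropy: Shannon entropy of the relative energy carried by
        each wavelet decomposition level. Low values mean energy is focused
        in few bands, as reported here for the hypertensive ICP state."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        p = energies / energies.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def relative_wavelet_entropy(sig_a, sig_b, wavelet="db4", level=6):
        """Kullback-Leibler-style divergence between two band-energy distributions."""
        def band_probs(s):
            coeffs = pywt.wavedec(s, wavelet, level=level)
            e = np.array([np.sum(c ** 2) for c in coeffs])
            return e / e.sum()
        p, q = band_probs(sig_a), band_probs(sig_b)
        mask = (p > 0) & (q > 0)
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))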
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, Pete; Lionello, Roberto; Linker, Jon A., E-mail: pete@predsci.com, E-mail: lionel@predsci.com, E-mail: linkerj@predsci.com
Observations of the Sun's corona during the space era have led to a picture of relatively constant, but cyclically varying, solar output and structure. Longer-term, more indirect measurements, such as from {sup 10}Be, coupled with other, albeit less reliable, contemporaneous reports, however, suggest periods of significant departure from this standard. The Maunder Minimum was one such epoch, during which: (1) sunspots effectively disappeared for long intervals of a 70 yr period; (2) eclipse observations suggested the distinct lack of a visible K-corona but the possible appearance of the F-corona; (3) reports of aurorae were notably reduced; and (4) cosmic ray intensities at Earth were inferred to be substantially higher. Using a global thermodynamic MHD model, we have constructed a range of possible coronal configurations for the Maunder Minimum period and compared their predictions with these limited observational constraints. We conclude that the most likely state of the corona during at least the later portion of the Maunder Minimum was not merely that of the 2008/2009 solar minimum, as has been suggested recently, but rather a state devoid of any large-scale structure, driven by a photospheric field composed of only ephemeral regions, and likely substantially reduced in strength. Moreover, we suggest that the Sun evolved from a 2008/2009-like configuration at the start of the Maunder Minimum toward an ephemeral-only configuration by the end of it, supporting a prediction that we may be on the cusp of a new grand solar minimum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario
Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better delivery quality than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion-chamber-based devices over three years for fifteen linear accelerators of varying make and model, were reviewed. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model, or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of the output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2% to 3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g., linear trends) that may be better described as true out-of-control states (i.e., non-stochastic deviations from otherwise expected behaviour) that could indicate service requirements. The results also point to an optimal setpoint for accelerators such that machine output is maintained within set tolerances and interventions are required less frequently.
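The analysis described can be approximated by fitting an annual sinusoid plus a linear drift to the daily output record; the sketch below does this on synthetic stand-in data, since the in-house measurements are not available.

    import numpy as np
    from scipy.optimize import curve_fit

    def seasonal_model(day, amp, phase, drift, offset):
        """Annual sinusoid plus an optional linear drift in daily output (%)."""
        return amp * np.sin(2 * np.pi * day / 365.25 + phase) + drift * day + offset

    # Hypothetical arrays standing in for three years of daily 6 MV output:
    days = np.arange(3 * 365)
    rng = np.random.default_rng(1)
    output = 100 + 1.0 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 0.3, days.size)

    params, _ = curve_fit(seasonal_model, days, output, p0=[1.0, 0.0, 0.0, 100.0])
    amp, phase, drift, offset = params   # amp recovers the ~1% seasonal amplitude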
The second law of thermodynamics under unitary evolution and external operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikeda, Tatsuhiko N., E-mail: ikeda@cat.phys.s.u-tokyo.ac.jp; Physics Department, Boston University, Boston, MA 02215; Sakumichi, Naoyuki
The von Neumann entropy cannot represent the thermodynamic entropy of equilibrium pure states in isolated quantum systems. The diagonal entropy, which is the Shannon entropy in the energy eigenbasis at each instant of time, is a natural generalization of the von Neumann entropy and applicable to equilibrium pure states. We show that the diagonal entropy is consistent with the second law of thermodynamics upon arbitrary external unitary operations. In terms of the diagonal entropy, thermodynamic irreversibility follows from the facts that quantum trajectories under unitary evolution are restricted by the Hamiltonian dynamics and that the external operation is performed without reference to the microscopic state of the system.
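In finite dimensions the diagonal entropy is straightforward to evaluate; a minimal sketch follows (the qubit example at the end is illustrative).

    import numpy as np

    def diagonal_entropy(rho, hamiltonian):
        """Diagonal entropy: Shannon entropy of the state's populations in the
        energy eigenbasis, S_d = -sum_n p_n ln p_n with p_n = <n|rho|n>.
        Unlike the von Neumann entropy, it is nonzero for generic pure states."""
        _, eigvecs = np.linalg.eigh(hamiltonian)
        p = np.real(np.einsum("in,ij,jn->n", eigvecs.conj(), rho, eigvecs))
        p = p[p > 1e-15]
        return -np.sum(p * np.log(p))

    # Example: an equal superposition of two energy eigenstates is pure
    # (zero von Neumann entropy) yet has diagonal entropy ln 2.
    H = np.diag([0.0, 1.0])
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    print(diagonal_entropy(rho, H))   # ~0.693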
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that the inferred probability distributions satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, fail this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
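The non-additivity at issue is easy to exhibit numerically: for independent systems the Shannon entropy of the product distribution is exactly the sum of the parts, while the Tsallis entropy picks up a cross term, S_q(AB) = S_q(A) + S_q(B) + (1-q)S_q(A)S_q(B). A small self-checking sketch:

    import numpy as np

    def shannon(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def tsallis(p, q):
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    # Two independent systems and their joint (product) distribution:
    pa = np.array([0.7, 0.3])
    pb = np.array([0.6, 0.4])
    pab = np.outer(pa, pb).ravel()

    # Shannon entropy is additive over independent systems...
    assert np.isclose(shannon(pab), shannon(pa) + shannon(pb))

    # ...while the Tsallis entropy (q != 1) is not: the joint value obeys
    # the pseudo-additive rule with a nonzero cross term.
    q = 2.0
    lhs = tsallis(pab, q)
    rhs = tsallis(pa, q) + tsallis(pb, q) + (1 - q) * tsallis(pa, q) * tsallis(pb, q)
    assert np.isclose(lhs, rhs)
    assert not np.isclose(lhs, tsallis(pa, q) + tsallis(pb, q))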
On determining absolute entropy without quantum theory or the third law of thermodynamics
NASA Astrophysics Data System (ADS)
Steane, Andrew M.
2016-04-01
We employ classical thermodynamics to gain information about absolute entropy, without recourse to statistical methods, quantum mechanics or the third law of thermodynamics. The Gibbs-Duhem equation yields various simple methods to determine the absolute entropy of a fluid. We also study the entropy of an ideal gas and the ionization of a plasma in thermal equilibrium. A single measurement of the degree of ionization can be used to determine an unknown constant in the entropy equation, and thus determine the absolute entropy of a gas. It follows from all these examples that the value of entropy at absolute zero temperature does not need to be assigned by postulate, but can be deduced empirically.
Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints
NASA Astrophysics Data System (ADS)
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2017-04-01
Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under constraints on the generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to the generalized Pareto family.
Exact-Output Tracking Theory for Systems with Parameter Jumps
NASA Technical Reports Server (NTRS)
Devasia, Santosh; Paden, Brad; Rossi, Carlo
1996-01-01
In this paper we consider the exact output tracking problem for systems with parameter jumps. Necessary and sufficient conditions are derived for the elimination of switching-induced output transients. Previous works have studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches). Such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is applicable to nonminimum-phase systems and obtains bounded but possibly non-causal solutions. If the reference trajectories are generated by an exo-system, then we develop an exact-tracking controller in feedback form. As in standard regulator theory, we obtain a linear map from the states of the exo-system to the desired system state, defined via a matrix differential equation. The constant solution of this differential equation provides asymptotic tracking and coincides with the feedback law used in standard regulator theory. The obtained results are applied to a simple flexible manipulator with jumps in the payload mass.
NASA Technical Reports Server (NTRS)
Bayard, David S. (Inventor)
1996-01-01
Periodic gain adjustment in plants of irreducible order n, or equalization of communications channels, is effected in such a way that the plant (system) appears to be minimum phase, by choosing a horizon time N greater than n for liftings in periodic input and output windows Pu and Py, respectively, where N is an integer chosen to define the extent (length) of each of the windows Pu and Py, and n is the order of an irreducible input/output plant. The plant may be an electrical, mechanical, or chemical system, in which case output tracking (OT) is carried out for feedback control; or a communication channel, in which case input tracking (IT) is carried out. The conditions for OT are distinct from those for IT in terms of zero annihilation; the OT conditions are intended for gain adjustments in the control system, and the IT conditions for equalization of communication channels.
Two time scale output feedback regulation for ill-conditioned systems
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1986-01-01
Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.
Diffusive mixing and Tsallis entropy
O'Malley, Daniel; Vesselinov, Velimir V.; Cushman, John H.
2015-04-29
Brownian motion, the classical diffusive process, maximizes the Boltzmann-Gibbs entropy. The Tsallis q-entropy, which is non-additive, was developed as an alternative to the classical entropy for systems which are non-ergodic. A generalization of Brownian motion is provided that maximizes the Tsallis entropy rather than the Boltzmann-Gibbs entropy. This process is driven by a Brownian measure with a random diffusion coefficient. In addition, the distribution of this coefficient is derived as a function of q for 1 < q < 3. Applications to transport in porous media are considered.
Holographic charged Rényi entropies
NASA Astrophysics Data System (ADS)
Belin, Alexandre; Hung, Ling-Yan; Maloney, Alexander; Matsuura, Shunji; Myers, Robert C.; Sierens, Todd
2013-12-01
We construct a new class of entanglement measures by extending the usual definition of Rényi entropy to include a chemical potential. These charged Rényi entropies measure the degree of entanglement in different charge sectors of the theory and are given by Euclidean path integrals with the insertion of a Wilson line encircling the entangling surface. We compute these entropies for a spherical entangling surface in CFTs with holographic duals, where they are related to the entropies of charged black holes with hyperbolic horizons. We also compute charged Rényi entropies in free field theories.
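As a toy finite-dimensional analogue (not the path-integral or holographic computation), one can weight a density matrix by exp(μQ), renormalize, and take the ordinary Rényi entropy of integer order n; the qubit example and charge operator below are illustrative assumptions.

    import numpy as np
    from scipy.linalg import expm

    def charged_renyi(rho, Q, mu, n):
        """Toy finite-dimensional analogue of a charged Rényi entropy for
        integer order n: weight the state by exp(mu*Q), renormalize, and
        evaluate S_n = ln Tr(sigma^n) / (1 - n). Setting mu = 0 recovers
        the ordinary Rényi entropy of rho."""
        w = rho @ expm(mu * Q)
        sigma = w / np.trace(w)
        return np.log(np.trace(np.linalg.matrix_power(sigma, n)).real) / (1 - n)

    # Example: a qubit mixed state with charge operator Q = diag(0, 1)
    rho = np.diag([0.75, 0.25])
    Q = np.diag([0.0, 1.0])
    print(charged_renyi(rho, Q, mu=0.0, n=2))   # ordinary second Rényi entropy
    print(charged_renyi(rho, Q, mu=1.0, n=2))   # entropy in a shifted charge weighting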
Wavelet entropy of BOLD time series: An application to Rolandic epilepsy.
Gupta, Lalit; Jansen, Jacobus F A; Hofman, Paul A M; Besseling, René M H; de Louw, Anton J A; Aldenkamp, Albert P; Backes, Walter H
2017-12-01
To assess the wavelet entropy for the characterization of intrinsic aberrant temporal irregularities in the time series of resting-state blood-oxygen-level-dependent (BOLD) signal fluctuations, and to evaluate the temporal irregularities (disorder/order) on a voxel-by-voxel basis in the brains of children with Rolandic epilepsy. The BOLD time series was decomposed using the discrete wavelet transform and the wavelet entropy was calculated. Using a model time series consisting of multiple harmonics and nonstationary components, the wavelet entropy was compared with Shannon and spectral (Fourier-based) entropy. As an application, the wavelet entropy in 22 children with Rolandic epilepsy was compared to that in 22 age-matched healthy controls. The images were obtained by performing resting-state functional magnetic resonance imaging (fMRI) using a 3T system, an 8-element receive-only head coil, and a T2*-weighted echo planar imaging pulse sequence. The wavelet entropy was also compared to spectral entropy, regional homogeneity, and Shannon entropy. Wavelet entropy was found to identify the nonstationary components of the model time series. In Rolandic epilepsy patients, a significantly elevated wavelet entropy was observed relative to controls for the whole cerebrum (P = 0.03). Spectral entropy (P = 0.41), regional homogeneity (P = 0.52), and Shannon entropy (P = 0.32) did not reveal significant differences. The wavelet entropy measure appeared more sensitive than the more conventional measures to abnormalities in cerebral fluctuations represented by nonstationary effects in the BOLD time series. This effect was observed in the model time series as well as in Rolandic epilepsy. These observations suggest that the brains of children with Rolandic epilepsy exhibit stronger nonstationary temporal signal fluctuations than those of controls. J. Magn. Reson. Imaging 2017;46:1728-1737.